AI Can Do What Tests Never Could — Show What People Actually Know
The future of learning isn’t about recall, it’s about reasoning
TODAY’S THOUGHTS ☠️
Hey there 👋,
We’re back again, and the sun has appeared - finally!
I can’t help but get excited about Spring.
New beginnings everywhere. I’m particularly stoked about the blueberry bush in my garden finally (it’s been a painful experience) yielding some blueberries in its third year.
This kinda ties in to today’s chat.
The new beginning, not the blueberries.
We’re exploring how AI has killed tests and quizzes in courses as a way to ‘measure’ skills and expertise.
It’s about to get controversial (probably).
Get your tea or beverage of choice ready, 🍵.
We've got lots to discuss!
P.S. Your app might clip this edition due to its size. If so, read the full edition in all its glory in your browser.

IN THIS DROP 📔
Let’s move from recall to reasoning for skill assessment
How AI coaches are working with real humans to improve problem solving
The simple system I use to never forget what I learn from long-form content

TOGETHER WITH 360 LEARNING
How mature is your L&D function?
L&D leaders—the framework you’ve always wanted but never had is here: introducing the L&D Maturity Model by David James.
Complete with KPIs, an interactive self-assessment, and upskilling resources, the L&D Maturity Model will help you:
Identify where your L&D function stands
Discover how it needs to evolve
Learn ways to take it to the next level
Complete your self-assessment today.
🙋♀️ Want to reach over 4,500 L&D pros? Become a Newsletter sponsor in 2025
THE BIG THOUGHT 👀
AI Can Do What Tests Never Could — Show What People Actually Know

Run away!
I got an out-of-the-blue message this week.
“I just saw this video where an AI agent completed a test for an employee, are you worried about this?”
There was no “Hey, Ross”, “How are you, Ross?”, “Hope life is good, Ross?” - just the question. I also find it incredibly unnerving to write about myself in the third person. This is how you know I write all of this and not AI - AI is way too polished for that kind of madness.
The sender's apparent lack of care for me as a human aside, I have an answer.
I imagine it’s not the one you’d expect.
My take will be a little bit ranty, but nearly 20 years in the field has made me somewhat numb to dumb education and learning practices.
What was shown in the video (FYI, I can't find the video now, must have rage-deleted the message) doesn't worry me as much as the fact that the education system is broken.
Yes, that sounds like a lame statement designed to court attention, but I'm not chasing an algorithm here and I'll go down with this ship.
OK, hear me out…
If we confirm 'skills' and 'understanding' through quizzes, then it's all just a game of who has the best memory, not who understands how to do 'x'.
I know countless institutions and corporate learning experiences worship at the altar of the almighty multiple-choice exam as the measuring stick for human intelligence, skills and expertise.
Doesn’t mean it’s right.
Agentic AI thrives here because ticking boxes on a multiple-choice quiz is a pretty easy process. There's nothing inherently hard about it. It might be one of the easiest use cases of AI agents to date.
That is not the problem.
The actual problem is how we’ve shaped the process of measuring intelligence, skills and expertise.
What we should be doing is assessing critical thinking and how students/people approach and solve problems.
This is where AI tutors will play a big role. Instead of taking some stupid quiz or exam, you'll talk with a tutor: explaining concepts, providing analogies and breaking down your human chain of thought.
That tests your problem-solving capabilities and helps you identify blind spots.
The latest AI tech finally enables us to do something about this.
But let’s not stay in the rant too long…
So, instead of me shaking my tea cup at the world, let me share what's possible and how you can experiment with this in your learning experiences.
Why AI Coaches beat any test or quiz

I think I’m in love 😍
A few weeks back, I finished a Machine Learning Cert with Google.
And I really enjoyed it.
Yes, you read that right, someone actually enjoyed learning something. But what triggered this emotion? Simple: they treated me like an adult throughout the entire process, not just as the stereotypical 'student'.
The most impactful way they did this was by removing all those worthless and kinda insulting tests and quizzes that pollute too many education and workplace learning products.
Instead, I was assessed based on what I understood from the course by talking with an AI assistant, which acted as a semi-assessor and coach.
No multiple-choice questions for me to get ChatGPT to answer here.
At the end of each module, I had to face what we could call the final boss: Google's AI coach.
After hours spent learning about the wonderful components of machine learning, I'd talk with the AI coach to share what I learnt, answer its probing questions and explain how I would convey these learnings to others through examples and analogies.
It was unique compared to the industry standard, and that made it very rewarding.
Even as I write these words, my recollection is so positive, and it helped me cement my understanding of many of the core concepts in a way I know a multiple-choice test couldn't have.
Now, you could read this and think, “This guy is crazy, isn’t a test better?” — NO!
The problem we've created as a society is this odd 'test your memory' game that we use to assess an individual's skills and expertise.
Completing a course on ‘x’ doesn’t mean you know how to do ‘x’ for real.
That goes for this very course I’ve shared with you.
I took the cert as a way to expand my understanding and capabilities in the ML realm, but don’t expect me to be building algorithms anytime soon.
Knowing how to do ‘x’ requires me to demonstrate that I can do it and explain the critical thinking behind it.
We need to spend more time prioritising how we solve problems, thinking critically and nurturing our human chain of thought, not being the top memory champion.
And I think AI can help us shape this.
How to rewire courses with AI to measure real impact
Even before the Google cert, I'd already built AI coaches into two of my online courses.
One in my performance consulting course and the other for my AI prompting masterclass.
Both are vital, imo, in helping students take action on what they've learnt and in letting me validate whether they actually understand what they've done.
They’ve been transformative for my students.
Both coaches are used daily by new and returning students. They extend the value of the courses, make sure students get real-time support and keep the content applicable for years to come.
This is the power AI can bring to education and learning if you go beyond content creation.

Why do this instead of quizzes, tests, etc.?
Easy:
I think they’re utterly pointless, but I assume you figured that out already.
Because I want to develop each student’s Human Chain of Thought (HCOT).
Human Chain of Thought needs a bit of explaining.
Early iterations of Large Language Models (LLMs) from all the big AI names you know today weren’t great at thinking through problems or explaining how they got to an answer.
That ability to break a problem down and display the thinking behind it is called the Chain of Thought technique.
This was comically exposed with any maths problem you’d throw at these early-stage LLMs.
They would struggle with even the most basic requests.
It's a little different today, as we have reasoning models. These have been trained specifically to show how they solve your problems and present that information in a step-by-step fashion.
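If you haven't played with this yourself, here's a rough sketch of what a Chain of Thought style prompt looks like in practice. I'm using the OpenAI Python SDK purely as an example; the model name and prompts are my own illustration, not something from the Google cert.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# A recall-style question: a memory test an AI agent (or a crammed student) can ace.
recall_prompt = (
    "Which of these is a loss function? "
    "A) ReLU  B) Cross-entropy  C) Dropout  D) Adam"
)

# A Chain of Thought style prompt: the reasoning has to be shown, not just the answer.
reasoning_prompt = (
    "Explain, step by step, how you would pick a loss function for an image "
    "classification model, and justify each choice in plain language."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model; reasoning models show their steps natively
    messages=[{"role": "user", "content": reasoning_prompt}],
)
print(response.choices[0].message.content)
```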
We now expect all the big conversational AI tools to do this, so why don’t we value the same in humans?
Providing AI coaches that help my students contextualise, apply and truly understand what they’ve learnt amplifies this Human Chain of Thought.
So next time you design a learning experience, maybe ditch the tests and quizzes for a personalised AI coach/assessor (I'll think of a better name in the future).
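If you want to tinker, here's a minimal sketch of how you could prototype that kind of coach with an off-the-shelf LLM API. To be clear, this isn't Google's implementation or the exact setup in my own courses; it assumes the OpenAI Python SDK, and the model name and coaching instructions are placeholders for you to swap out.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Placeholder coach instructions: adapt these to your own course content.
COACH_SYSTEM_PROMPT = """You are an AI coach and assessor for a course on performance consulting.
Never give the learner the answer. Instead:
1. Ask them to explain the concept in their own words.
2. Ask for a real example or analogy from their own work.
3. Probe their reasoning step by step and point out gaps or blind spots.
Finish each exchange with one short piece of feedback and one follow-up question."""

def coach_turn(history: list[dict], learner_message: str) -> str:
    """Send the learner's latest message to the coach and return its reply."""
    history.append({"role": "user", "content": learner_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": COACH_SYSTEM_PROMPT}, *history],
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Example: a learner kicking off the end-of-module conversation.
history: list[dict] = []
print(coach_turn(history, "I've just finished the module on diagnosing performance problems."))
```

From there, you'd wrap it in whatever interface your course platform supports and feed the coach your module content as extra context.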
I plan to cover more on how to build these types of solutions in my next AI for solving real business problems boot camp. I only run these once or twice a year; you can join the waitlist to be first in line for the next one.

Final thoughts
I get that tests and quizzes are an easy way to measure "learning", but that doesn’t make them useful.
AI now lets us reshape this.
You can create a similar AI coach to the one I had with Google, or the ones I use across my own courses.
Saying goodbye to completions as the measure of success is the new reality. Now, you can actually see whether people truly ‘get it’, not just if they finished something and passed the test.
→ If you’ve found this helpful, please consider sharing it wherever you hang out online, tag me in and share your thoughts.
Till next time, you stay classy, learning friend!
PS… If you’re enjoying the newsletter, will you take 4 seconds to forward this edition to a friend? It goes a long way in helping me grow the newsletter (and cut through our industry BS with actionable insights).
And one more thing, I’d love your input on how to make the newsletter even more useful for you!
So please leave a comment with:
Ideas you’d like covered in future editions
Your biggest takeaway from this edition
I read & reply to every single one of them!

👀 ICYMI (In case you missed it!)
NotebookLM, one of my fav research tools, just got a neat feature drop that allows you to search for external sources from any notebook.
AI company Anthropic releases Claude for Education, giving students and educators the latest AI tools. Apparently, it develops critical thinking through Socratic questioning.
Does AI help or harm skill-building? That’s what I’m sharing at my next online keynote with the team at Datacamp. Join us on April 9th at 11 am ET.
OpenAI launched its very own Academy to help you make sense of AI with ChatGPT.
This post from IBM’s VP of AI stopped me on my doom-scrolling adventures ↓. What do you think? Hit ‘reply’ to share your thoughts.

VIDEO THOUGHTS 💾
I Built A Simple System To Never Forget What You Learn From a Long Video Again
Long videos are packed with valuable ideas, but how many of those ideas do you actually remember?
In this video, I share my workflow for using AI tools to extract key ideas, insights, and takeaways from long-form content like webinars, interviews, and lectures.
Whether the video is 30 minutes or 4 hours, I’ll show you how to break it down efficiently without losing depth or meaning.
You'll learn how I combine AI tools like NotebookLM, smart questioning, and human review to move from passive watching to active insight generation.
🙋♀️ Want to reach over 4,500 L&D pros? Become a Newsletter sponsor in 2025
P.S. Wanna build your L&D advantage?
Here’s a few ways I can help:
Build your confidence and skills with the only AI course designed for L&D pros.
Become a better L&D partner with the Art of Performance Consulting.
Get a backstage pass to exclusive industry insights, events and a secret monthly newsletter with a premium membership.