4 Habits That’ll Help You Think Better with AI
Are you thinking with AI, or is AI shaping your thinking?
📨 Subscribe | 🤖 AI Training | 🚀 Courses
TODAY’S THOUGHTS ☠️
Hey there 👋,
While I hate to admit my doom-scrolling addiction, it does help with keeping track of the behaviours of my fellow humans.
A note I continue to make is, when did everyone become their own LLM?
I feel like, in the last 12 months, people’s content has started looking and sounding more like an LLM’s. This leads me to believe my prior fear, that people are being shaped by AI rather than us shaping AI, is now a reality.
That makes me sad.
I use AI every day, and it’s incredibly helpful in some areas. Yet what I’m seeing is not an AI problem, but a human one. Too many of us are choosing the path of least resistance and relegating our brains to the background.
And now we’re all starting to sound like LLMs, even those of us who didn’t use one, because their formulaic patterns have infiltrated our brain wiring. That’s why you hear and see people using words and sentence structures that were rarely used pre-LLM.
I know you won’t like that, and you might be thinking, “How do I think better with AI and not be shaped by it?”
That’s what we’re exploring today.
I’m unpacking 4 habits that’ll help you think better with AI.
Get your tea or beverage of choice ready 🍵.
We've got lots to discuss!
P.S. Your app might clip this edition due to size. If so, read the full edition in all its glory in your browser.

IN THIS DROP 📔
What happens to ‘human thinking’ if we over-rely on AI?
How to enhance your human chain of thought
Why this is (probably) how we’ll all work with AI now

TOGETHER WITH GO1
Learning doesn't start in the LMS anymore
Employees are developing skills every day, but often outside formal learning systems.
Go1 research of 1,000 professionals found 83% expect development to happen naturally in their workflow, yet few turn to a learning platform when they need help.
Discover what this shift means for HR and L&D leaders in Go1’s latest research report and what to consider next in your learning strategy.
🙋♀️ Want to reach over 5,000 L&D pros? Become a Newsletter sponsor in 2026
THE BIG THOUGHT 👀
4 Habits That’ll Help You Think Better with AI

Think deeply about that
While I’m not entirely on the doomsday train of “AI will destroy all human thinking”, I can’t ignore the level of stupidity that some humans exhibit when working with AI.
I shouldn’t be surprised, really, as the path of least resistance is paved with instant gratification, which is a dopamine daydream for the digitally addicted.
Still…
What happens to human thinking when so many outsource it to an artificial construct?
I’m saying this as much to myself as I am to you.
This is turning into a strange “dear diary” entry, but stick with me.
This is the end…or is it?
We both see the polarising views plastered across social feeds.
Logic seems to be lost in most conversations.
Most posts are either “AI will destroy your brain” or “outsource all your thinking to AI.” I don’t know about you, but I’m not cool with either of those options.
It doesn’t help when the majority blindly believe every headline that’s emotionally tweaked to grab attention. Taking time to look beneath the surface usually paints a different picture. Yes, I’m looking at you, MIT study.
Moving away from all the noise, only one question is worth asking right now:
Are you thinking with AI, or is AI shaping your thinking?
Maybe it’s doing both, and maybe you’re aware of that.
Saying all that, here are a few points I think are worth exploring.
What happens to ‘human thinking’ if we over-rely on AI?
This is a real grey area for me.
I’ve seen countless examples where too much AI support leads to less flexing of human skills (most notably common sense and deep thinking), and I’ve seen examples where human skills have improved.
In my own practice, my critical thinking skills have improved with weekly AI use over the last two years. It’s what I class as an unexpected but welcome benefit.
This doesn’t happen for all, though.
It depends on the person and their intent, of course.
Research and experiences seem to affirm my thoughts that the default will be to over-rely.
I mean, why wouldn’t you?
This is why any AI skills program you’re building must focus on behaviours and mindset, not just ‘using a tool.’
You can only make smart decisions if you know when, why, and how to work with AI.
One unignorable insight I’ve uncovered from collecting research over the last few years (and psychoanalysing it together with AI) is the importance of confidence in your own capabilities to enable you to think critically with AI.

Where are you playing?
This is the battleground of most social spaces today.
High-performing organisations and teams will be those that think critically with AI, not outsource their thinking to it.
Being a “Balanced Evaluator” is the gold standard. So, we could say that thinking about thinking is the new premium skill (more on that later).
The combination of high AI literacy (skills, understanding how AI works, limitations) with high trust (knowing the right tool for the job and a willingness to use it effectively) is not straightforward.
That’s where you come in as the local L&D squad.
To be here, you must critically engage with AI by asking when, how, and why to trust its output. This requires questioning, verifying, and a dose of scepticism, steps too many skip and then sorely regret when it backfires.
Also, don’t interpret “AI Trust” as blind faith. This is built through experimenting and learning how the best tools work.
What does meaningful learning with AI look like?
I (probably like some of you) have beef with traditional testing in educational systems.
It’s a memory game, rather than “Do you know how to think about and break down x problem to find the right answer?” We celebrate memory, not thinking (bizarre world).
My beef aside, research shows partnering intelligently with AI could change this.
This article, a collaboration between The Atlantic and Google, which focuses on how AI is playing a central role in reshaping how we learn through metacognition, gives me hope.
The TL;DR (too long; didn’t read) of the article is that using AI tools can enhance metacognition, aka thinking about thinking, at a deeper level.
The idea is, as Ben Kornell, managing partner of the Common Sense Growth Fund, puts it, “In a world where AI can generate content at the push of a button, the real value lies in understanding how to direct that process, how to critically evaluate the output, and how to refine one’s own thinking based on those interactions.”
In other words, AI could shift us to prize ‘thinking’ over ‘building alone.’
And that’s going to be an important thing in a land of ‘do it for me.’
Side note: I covered my view on the future of learning, ditching recall and focusing on human reasoning, in a previous post. You’ll find a bunch of examples showing this in action there.
To learn, you must do
The Atlantic article shared two learning-focused experiments by Google.
In the first, pharmacy students interacted with an AI-powered simulation of a distressed patient demanding answers about their medication.
The simulation is designed to help students hone communication skills for challenging patient interactions.
The key is not the simulation itself, but the metacognitive reflection that follows.
Students are encouraged to analyse their approach: what worked, what could have been done differently, and how their communication style affected the patient’s response.
The second example asks students to create a chatbot.
Coincidentally, I used the same exercise in one of my “AI for Business Bootcamps” last year (fyi, join my next workshop in April 👀).
It’s never been easier for the everyday human to create AI-powered tools with no-code platforms.
Yet, you and I both know that easy doesn’t mean simple.
I’m sure you’ve seen the mountain of dumb headlines with someone saying we don’t need marketers/sales/learning designers because we can do it all in ‘x’ tool.
Ha ha ha ha is what I say to them.
Clicking a button that says ‘create’ with one sentence doesn’t mean anything.
To demonstrate this to my students, we spent 3 hours in an “AI Assistant Hackathon.” This involved the design, build, and delivery of a working assistant.
What they didn’t know is that I wasn’t expecting them to build a product that worked.
Not well, anyway.
I spent the first 20 minutes explaining that creating a ‘good’ assistant has nothing to do with what tool you build it in and everything to do with how you design it, ya know, the A-Z user experience.
Social media will try to convince you that all it takes is 10 minutes to build a high-performing chatbot.
While that’s true from a tech perspective, the product and its performance will suck.
You need to think deeply about it
When the students completed the hackathon, one thing became clear.
It’s neither simple nor easy to create a high-quality product, and you’re certainly not going to do it in minutes.
But, like I said, the activity’s goal was not to actually build an assistant, but rather, to understand how to think deeply about ‘what it takes’ to build a meaningful product.
I’m talking about:
Understanding the problem you’re solving
Why it matters to the user
Why the solution needs to be AI-powered
How the product will work (this covers the user experience and interface)
Most students didn’t complete the assistant/chatbot build, and that’s perfect.
It’s perfect because they learned, through real practice, that it takes time and a lot of deep thinking to build a meaningful product.
“It’s not about whether AI helped write an essay, but about how students directed the AI, how they explained their thought process, and how they refined their approach based on AI feedback. These metacognitive skills are becoming the new metrics of learning.”
AI is only as good as the human using it
Perhaps the greatest ‘mistake’ made in all this AI excitement is forgetting the key ingredient for real success.
And that’s you and me, friend.
Like any tool, it only works in the hands of a competent and informed user.
I learned this fairly young when a power drill was thrust into my hands for a DIY mission. Always read the instructions, folks (another story for another time).
Anyway, all my research and real-life experience with building AI skills have shown me one clear lesson.
You need human skills to unlock AI’s capabilities.
You won’t go far without a strong sense (and clarity) of thinking, and the analytical judgment to review outputs.
Embrace your Human Chain of Thought
Yes, I made up this phrase (sort of).
Let me give you some context…
Early iterations of Large Language Models (LLMs) from all the big AI names you know today weren’t great at thinking through problems or explaining how they got to an answer.
That ability to break down problems and display the intermediate reasoning is called the Chain of Thought technique.
This was comically exposed with any maths problem you’d throw at these early-stage LLMs.
They would struggle with even the most basic requests.
It’s a little different today, as we have reasoning models. These have been trained specifically to show how they solve your problem and to present that thinking in a step-by-step fashion.
We now expect all the big conversational AI tools to do this, so why don’t we value the same in humans?
Those who nurture this will have greater command of their career.
So don’t ignore your Human Chain of Thought.
Focusing your energy on the ability to explain your reasoning is far more useful in a world littered with tech products that can recall info on command.
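To make that contrast concrete, here’s a tiny sketch (entirely my own illustration, not tied to any specific model or API). The first prompt just asks for an answer; the second asks for the reasoning, which is the Chain of Thought idea. The little helper function applies the same principle to humans: a conclusion plus the steps behind it beats the conclusion alone.

```python
# Two ways of asking the same question. Early LLMs answered the first
# style poorly; asking for step-by-step reasoning (Chain of Thought)
# reliably improved results, and reasoning models now do this by default.

direct_prompt = "What is 17 * 24?"

cot_prompt = (
    "What is 17 * 24? "
    "Show your reasoning step by step before giving the final answer."
)

# The same principle, applied to humans: state the reasoning that led
# to your conclusion, not just the conclusion itself.
def human_chain_of_thought(conclusion: str, reasoning_steps: list[str]) -> str:
    """Format a conclusion together with the numbered steps that led to it."""
    steps = "\n".join(f"{i}. {step}" for i, step in enumerate(reasoning_steps, 1))
    return f"Reasoning:\n{steps}\nConclusion: {conclusion}"

print(human_chain_of_thought(
    "We should pilot the tool with one team first",
    [
        "The vendor's claims are untested in our context",
        "A small pilot limits cost and risk",
        "Pilot feedback gives us evidence for a wider rollout",
    ],
))
```

Nothing clever going on here; the point is the shape of the output, reasoning first, conclusion last, which is exactly what reasoning models now do for us and what we should keep doing for ourselves.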

Brace yourselves
4 ways to enhance, not erode, your thinking with AI
A couple of useful tools and frameworks to get you firing those neurons from the most powerful tool at your disposal (fyi, it’s your brain).
1/ Good prompting is just clear thinking
Full disclosure: There’s no such thing as a perfect prompt.
They’re often messy, don’t always work the same way every time, and need continuous iteration.
Saying that, you can do a lot (and I mean a lot!) to set yourself up for success.
Here’s a (sorta) framework I use to help think critically before, during, and after working with AI.

Step 1: Assess
Can AI even help with your task? (It’s not magic, so yes, you need to ask that)
Step 2: Before the prompt
What does the LLM need to know to successfully support you?
What does ‘good’ look like?
Do you have examples?
And, most importantly, don’t prompt and ghost.
Step 3: Analyse the output
Does this sound correct?
Is it factual?
What’s missing?
Step 4: Challenge & question
I’m not talking about a police investigation here.
Just ask:
Based on my desired outcome, have we missed anything?
From what you know about me, is there anything else I should know about ‘x’? (works best with ChatGPT custom instructions and memory)
What could be a contrarian take on this?
Step 5: Flip the script
Now we turn the tables by asking ChatGPT to ask you questions:
Using the data/provided context or content (delete as needed), you will ask me clarifying questions to help shape my understanding of the material. They should be critical and encourage me to think deeply about the topics and outcomes we’ve covered so far. Let’s start with one question at a time, and build on this.
This is a powerful way to develop your critical skills and how you collaborate with AI.
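If it helps to see the whole loop in one place, here’s a minimal sketch (my own illustration, not a tool that ships with this newsletter) that turns the five steps into a reusable checklist. The questions are the ones from the framework above; the structure is just one way to make yourself pause at each step.

```python
# The five-step framework as a checklist runner. Each key is a step,
# each value is the list of questions to answer before moving on.
CHECKLIST = {
    "1. Assess": ["Can AI actually help with this task?"],
    "2. Before the prompt": [
        "What does the LLM need to know to support me?",
        "What does 'good' look like?",
        "Do I have examples to share?",
    ],
    "3. Analyse the output": [
        "Does this sound correct?",
        "Is it factual?",
        "What's missing?",
    ],
    "4. Challenge & question": [
        "Based on my desired outcome, have we missed anything?",
        "What could be a contrarian take on this?",
    ],
    "5. Flip the script": [
        "Have I asked the AI to question *me* about the material?",
    ],
}

def review(answers: dict[str, bool]) -> list[str]:
    """Return the questions not yet ticked off, in framework order."""
    return [q for qs in CHECKLIST.values() for q in qs if not answers.get(q, False)]
```

Calling `review({})` before you start gives you the full list; as you honestly tick questions off, the list shrinks. The code is trivial on purpose, the discipline of stopping at each step is the actual technique.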
P.S. Get more non-obvious insights and guidance on AI prompting in my short course designed specifically for busy people like you.
2/ Unpack the problem
Before you start building that next ‘thing’, check out this little framework, which has helped me to do my best work over the last decade.
3/ Partner with AI, don’t use it like a one-click delivery service
If I had a dollar for every time I said this, I’d be a billionaire by next year.
Often, it’s the small and simple actions that can bring the most valued results.
That’s not to say it’s easy to do.
In this video, I share how you can use AI to improve your critical thinking as a thought partner.
4/ Fuse your human insight with AI
Why be average with AI, when you could be a frigging superstar with it?
The ability to distil and humanise information makes you invaluable, and now, with AI, this skill becomes even more powerful. If you're smart about it.
Final thoughts
There’s much more to say about this, friend.
But we’ll pause here for now.
Thinking is cool, and thinking about thinking is even cooler.
Let your brain dwell on that for a bit. AI can be an extension of your thinking, but never let it shape it.
Keep being smart, curious and inquisitive as I know you are.
→ If you’ve found this helpful, please consider sharing it wherever you hang out online, tag me in and share your thoughts.

👀 ICYMI (In case you missed it!)
Sana launches across Workday platforms. Time to celebrate if you’re a Workday customer, as the long-awaited Sana integration has arrived. Long-time readers will know that Sana has been a partner of the newsletter for many years now, and I highly rate the team and the tech. I’m excited to see how they supercharge the Workday platform.
I’m hosting a virtual workshop for L&D pros to become AI Fluent. Mark your calendars for April 23rd, amigos. I’m welcoming 10 L&D pros to work with me for one afternoon in a hands-on session where you’ll get a complete operating system for working with AI in your L&D role. Plus…you’ll get a playbook, access to a private community and office hours session. I might never do this again, so join me while you can.
The most hotly contested 9 box grid in L&D tech is here. The Fosway Group publishes this view of the learning systems market every year.
We have a lot of vendors in our industry, yet all that really matters is who is right for you.

VIDEO THOUGHTS 💾
This Is Probably How We’ll All Work with AI Now
Claude, Claude, Claude is literally all I see anyone talk about of late.
For those that care, I use Claude. It's a useful tool in my work, particularly with its more recently launched Cowork feature.
Do I think it's the second coming? No...does it replace a team doing blah blah? Also no. Just so we're clear.
If you're reading this and thinking: "WTF is Claude and cowork?"
I think I can help with that.
This video is a quick tour of Claude, its features, and an exploration of the Cowork feature.
Enjoy 😊.
Till next time, you stay classy, learning friend!
PS… If you’re enjoying the newsletter, will you take 4 seconds to forward this edition to a friend? It goes a long way in helping me grow the newsletter (and cut through our industry BS with actionable insights).
And one more thing, I’d love your input on how to make the newsletter even more useful for you!
So please leave a comment with:
Ideas you’d like covered in future editions
Your biggest takeaway from this edition
I read & reply to every single one of them!
P.S. Wanna build your L&D advantage?
Here are four ways I can help:
Build your confidence and skills with the only AI course designed for L&D pros.
Partner with me on AI Enablement for your L&D team. I’ve worked with thousands of L&D pros to build AI Fluency, giving teams the practical AI skills you need in your L&D role and to enable your organisation.
Get a backstage pass to exclusive industry insights, events and a secret monthly newsletter with a premium community membership.
Book a 1:1 consulting session with me for support with your product or L&D tech challenges

