We’ve officially entered the era where “Can you prompt?” is quickly dethroning “Can you code?” as the supreme gateway to getting things done. And while it’s simply thrilling that English is now the interface, it ushers in a quietly dangerous trend: great prompts are replacing great thinking.
Apparently, many of your teams now assume that if they can string together a clever request to an LLM, they’re miraculously off the hook for understanding the why, how, or, heaven forbid, what comes next. Bless their hearts.
The Prompting Paradox: Polished Nonsense, Freshly Generated
Generative AI is truly amazing, isn’t it? Your teams can whip up strategy slides in 30 seconds, rewrite code with a plain-English request, and even spin up 50 versions of ad copy without their fingers ever grazing a keyboard. It’s almost like magic, if magic involved a highly sophisticated autocomplete function.
But here’s the catch: prompting without critical thinking is just polished nonsense. If your people don’t actually know what they’re asking, how to evaluate the output, or why the output matters, they’re merely producing better-formatted confusion. Congratulations, you’ve just automated the art of looking busy.
Real-World Symptoms of the Illusion of Progress (You’ve Definitely Seen These)
Welcome to the digital age’s greatest hits, likely playing on a screen near you:
- A perfectly summarized strategy document… with zero actual insight. It’s like a beautiful empty box, just waiting to collect dust.
- Code refactored via Copilot… that introduces a silent logic bug. Because who needs working software when you have fast software? Progress, people!
- “Generated policies” that sound smart… but brilliantly contradict your existing governance framework. Efficiency!
- A marketing campaign “AI-ed into existence” that completely misunderstands your Ideal Customer Profile (ICP). Turns out, even AI can be a little too enthusiastic about irrelevance.
This, my friends, is the illusion of progress—where output volume heroically disguises cognitive atrophy. Truly a sight to behold, especially in the QBRs.
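To make that second symptom concrete, here is a minimal, hypothetical Python sketch (the function and fee amount are invented for illustration) of how a plausible-looking refactor can slip a silent logic bug past a quick review:

```python
# Hypothetical example: the original function charges an overdraft
# fee only when the balance is negative.
def overdraft_fee(balance: float) -> float:
    if balance < 0:
        return 35.0
    return 0.0

# A "cleaner" AI refactor collapses the comparison into a truthiness
# check. It reads fine and passes a quick glance...
def overdraft_fee_refactored(balance: float) -> float:
    return 35.0 if balance else 0.0

# ...but the behavior has silently changed: any nonzero balance,
# including a healthy positive one, now gets charged.
print(overdraft_fee(100.0))             # 0.0  (correct)
print(overdraft_fee_refactored(100.0))  # 35.0 (silent logic bug)
```

A test suite that only exercises negative balances would happily pass both versions, which is exactly why fast output still needs a human who knows what "correct" looks like.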
English Is the Interface, Not the Thinking Engine (Surprise!)
Prompting is a tool. A superpowered one, sure. But it’s only as good as the squishy, organic brain behind it. Yes, those things your employees have between their ears.
So, do yourself a favor and ask:
- Does your team actually know what “good” looks like, or are they just hoping AI will magically define it for them?
- Can they troubleshoot AI outputs, or are they blindly accepting them like gospel?
- Are they asking better questions, or just… asking?
If you’re treating AI like a substitute for strategy, judgment, or domain context, you’re not automating productivity. No, you’re outsourcing your brain. And let’s be honest, that’s rarely a good look for anyone—especially not your bottom line.
The Elephant in the Room: Critical Thinking at Risk
A recent study from Microsoft Research highlights a concerning trend: among knowledge workers, higher confidence in generative AI (GenAI) is associated with less critical thinking, while higher self-confidence is linked to more critical thinking. While GenAI tools reduce the perceived effort of critical-thinking tasks, they may also encourage over-reliance on AI, subtly diminishing independent problem-solving. As workers shift from direct task execution to AI oversight, they trade hands-on engagement for the surprisingly demanding job of verifying and editing AI outputs. Source: Lee et al. (2025) – The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers.
Key Takeaways from the Study:
- Confidence Paradox: More trust in AI, less personal critical thought.
- Effort vs. Engagement: AI reduces perceived effort, but at the cost of hands-on problem-solving.
- The New Oversight: The challenge isn’t doing the work, it’s meticulously checking AI’s work.
How to Keep Your Thinking Muscle Sharp (Before It Atrophies Completely)
Here’s what smart CIOs and leaders—the ones who still value thinking over mindless button-pushing—are doing:
- Teach Prompt Evaluation, Not Just Prompt Creation: The real skill isn’t making AI talk; it’s knowing if what the AI gave you is (a) wrong, (b) biased, or (c) legally dangerous. Because “AI told me to” won’t fly in court. Or in your annual review.
- Promote an “Answer Your Own Prompt First” Culture: Before anyone rushes to GPT for a solution, make them sketch their own rough idea. It’s amazing how much independent thought can reveal the gaps AI simply can’t fill. Plus, it’s just good exercise for those dormant neurons.
- Mix AI Work With Classic Problem Solving: Remember whiteboards? Use them. Force teams to break down logic and systems before asking an LLM to generate the flowchart. Shocking, I know, but sometimes the old ways are still the best.
- Reward Intellectual Rigor—Not Just Output Speed: Yes, the AI version was done faster. But was it useful? Accurate? Thoughtful? If not, you’re just paying to move faster in the wrong direction. And frankly, that’s just poor management, no matter how shiny the AI tool.
English may be the new programming language, but thinking is still the core operating system.
TL;DR: Use AI to Work Smarter—Not to Think Less
Prompt engineering is not real engineering, and for the love of all that is rational, don’t let shiny outputs dull your team’s critical edge. AI won’t replace your smartest people. But it will absolutely, unequivocally expose the ones who stopped thinking.
So, CIO, are your people still thinking, or just prompting?