Thinking in the Age of Cognitive Exoskeletons
How to think clearly—and stay sharp—in the age of AI acceleration
Introduction: Are We Getting Dumber, or Just Getting Help?
There’s a quiet unease spreading among heavy users of large language models. It’s not a fear of job loss, misinformation, or even existential risk—at least not directly. It’s subtler and more personal: a creeping suspicion that something in the mind is starting to fray.
You see it in the tone of recent reflections from developers, writers, and technologists who use these tools every day. One candid essay from Vincent Cheng, titled “LLMs are Making Me Dumber,” catalogs the ways in which these models accelerate output at the cost of deep engagement. Projects that once served as vehicles for learning are now completed without ever touching the underlying code. Math problems are solved instantly, with minimal internal effort. Emails are polished to perfection by a model, eroding the habit of thoughtful communication. Cheng admits that he feels more productive than ever, but also, somehow, less capable.
Another voice, Dustin Curtis, writes in “Thoughts on Thinking” about a deeper paralysis: the sense that original thought has lost its meaning. LLMs can now flesh out half-formed ideas into polished arguments, skipping over the messy internal process that once made thinking feel real. He writes, “LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself.” The result? A dulling of intuition. A sediment of disengagement. The feeling that mental rigor—the kind once built through friction and struggle—is quietly atrophying.
These aren’t alarmist takes. They’re grounded, honest field reports from people trying to figure out how to coexist with powerful tools that do much of what our minds once did, only faster and more fluently.
The promise of these tools was never just efficiency. It was augmentation. A “bicycle for the mind,” as Steve Jobs once put it. But what happens when the bicycle starts pedaling for you? Are we getting dumber, or just getting help? Are we outsourcing friction that once forged mastery, or are we finally freeing ourselves for higher-order work?
This essay is an attempt to sort through those questions. It draws from personal experience, meditative practice, observations about professional stagnation, and the insights of others confronting similar dilemmas. It also offers a hopeful premise: that the answer isn’t to retreat from LLMs, but to use them deliberately, mindfully, and with full awareness of what we want to preserve in ourselves.
We are entering the age of cognitive exoskeletons. The question is not whether to wear them, but how not to forget how to walk.
The Disappearing Struggle: Friction as Mental Forge
There was a time when learning was inseparable from effort. You learned to code by breaking things and slowly fixing them. You became a better writer by agonizing over a sentence until it clicked. You improved at math by slogging through problem sets, internalizing concepts through repetition, failure, and persistence. The process was slow, imperfect, and often painful. But it left a lasting imprint. You didn’t just arrive at the answer. You earned it.
In many ways, large language models seem designed to bypass that entire experience. They offer fluency without effort, insight without analysis, and expertise without apprenticeship. They can explain a theorem, write a function, or compose a paragraph that sounds smarter than what you would have written after an hour of work. And they do it in seconds.
This is extraordinary. But it’s also disorienting. Because the traditional scaffolding of learning—the intellectual friction that forces your brain to build real understanding—can be skipped. And when that friction disappears, so does much of the depth.
Curtis articulates this dilemma with unsettling clarity: “Developing a prompt is like scrolling Netflix, and reading the output is like watching a TV show.” There’s a passive consumption built into this new mode of thinking. It feels like work, but it often isn’t. It mimics the activity of intellectual effort without requiring the effort itself. It’s a simulation of cognition.
The infrastructure of real understanding—what psychologists sometimes refer to as “system 2” thinking—isn’t just about seeing the right answer. It’s about constructing the pathway to that answer, step by step, until it can be internalized and recalled without help. That path is often slow and circuitous. It requires dead-ends, frustration, and moments of doubt. But those are not bugs in the learning process. They’re features.
When LLMs offer instant fluency, they can short-circuit that process. And while that might be fine for tasks where depth isn’t critical—drafting emails, summarizing articles—it can be corrosive when applied to the parts of thinking that matter most: building mental models, developing original ideas, and learning how to think clearly and rigorously.
We used to believe that “the struggle is the point.” That intellectual growth came not just from the outcome, but from the work required to get there. If we’re now handing off that work, we need to ask what we’re losing—not just in skill, but in the cognitive habits that form identity, judgment, and depth.
This isn’t an argument against using LLMs. It’s a reminder that real learning is not just about acquiring knowledge; it’s about changing the structure of your mind. And structure doesn’t change without pressure.
Cognitive Exoskeletons: Useful, but Potentially Disfiguring
Not every tool is a threat to the body that wields it. Some simply make us more capable.
Consider the exoskeleton. In physical contexts, it’s an empowering device, an external structure that allows someone to lift beyond their natural capacity. It’s a marvel of augmentation: a way to extend strength, not replace it. But over-reliance on an exoskeleton has an obvious consequence. If you wear it all the time, and stop training your muscles, they begin to atrophy. You might find yourself unable to move without it.
LLMs are, in many ways, the cognitive equivalent. They are scaffolding systems that can carry intellectual loads we once had to lift ourselves. But in doing so, they shift the question from “Can I do this?” to “Do I need to?”—a subtle but profound change in the architecture of effort.
Used well, this scaffolding is extraordinary. Like a smart weight rack or a robotic spotter, an LLM can help you target your weak points with precision. It can reduce the grunt work, accelerate the learning curve, and help you build mastery faster, provided you still engage with the task at a meaningful level. You can imagine a world in which a student, writer, or engineer leverages LLMs not to skip the work, but to shape it more intelligently.
But that’s a big “if.” Because the temptation to skip is powerful. It’s always easier to let the exoskeleton do the lifting. Why practice writing when the model can produce a well-structured argument instantly? Why struggle through syntax when you can describe what you want in natural language and watch code appear like magic?
In that light, the metaphor turns darker: the exoskeleton isn’t just assisting—it’s replacing. And when that happens, it’s no longer a tool of augmentation. It becomes a crutch. Worse, it becomes a prosthetic for abilities we still technically possess, but no longer use.
This invites an unsettling question, posed in Cheng’s piece: “Am I using the model as an assistant, or is it the other way around?” The deeper fear is that we’re becoming wrappers around the model—interfaces for decision-making and API access, while the model does the real thinking.
The point is not to stop using the exoskeleton. It’s to remember that strength isn’t static; it’s something you preserve through stress, through discomfort, through use. Let the LLM lift what you’ve already mastered. But don’t forget to lift, too.
The Sully Problem: Over-Reliance and Fragile Expertise
In 2009, when US Airways Flight 1549 lost engine power shortly after takeoff, Captain Chesley “Sully” Sullenberger made a series of rapid, intuitive decisions that saved the lives of everyone on board. He didn’t follow a playbook. He didn’t consult a machine. He drew on decades of accumulated, embodied knowledge—mental models shaped through thousands of hours of experience and training. What he did could not have been automated.
This is the archetypal counterpoint to technological reliance. In normal conditions, autopilot is more than sufficient—efficient, smooth, optimized. But in edge cases, when something goes wrong, when conditions deviate from the training distribution, you need human judgment. And you need it to be sharp.
This has become known, informally, as “the Sully problem”: What happens when you’ve handed off so much responsibility to a machine that you’re no longer capable of stepping in when it counts?
The same principle applies far beyond aviation. If you’re a lawyer who automates all research and argumentation, will you still be able to dissect a legal question that falls outside the model’s training? If you’re a developer who vibe-codes with an LLM but never truly engages with the logic, what happens when the abstraction leaks? Will you recognize the leak? Can you fix it? Will you even notice?
Cheng in “LLMs are Making Me Dumber” captures this dilemma succinctly: “When vibe-coding, I’m essentially a wrapper around these models, providing the ability to take actions on a computer and some high-level coherence. Models will soon obtain these skills as well.” The fear here is not just displacement—it’s degradation. A slow erosion of the edge that once made you excellent.
We are entering an era where default competence can be outsourced. That’s not inherently bad. But for people whose work is their edge—for pilots, doctors, engineers, thinkers—the risk is profound. You might become unable to distinguish between true readiness and artificial fluency until it’s too late.
And even if your profession doesn’t involve literal life-or-death decisions, the broader metaphor still applies. In a world where LLMs can execute quickly, your value-add may lie in your capacity to notice what’s missing, to navigate ambiguity, to recover from unexpected failures. These are things that can’t be easily faked—or instantly spun up by asking for a better prompt.
This isn't to say we should reject automation. Far from it. The goal is fluency with tools, not dependency on them. But fluency implies a fallback—a deeper layer of understanding that persists even when the tools falter. Sully could land the plane because he still knew how to fly.
The real question is: Will we?
The Plateau of the Professional: When Learning Stops
There’s a hidden gravity to adulthood that pulls us toward stasis. It begins subtly: the work becomes familiar, the routines solidify, and the pursuit of mastery gives way to the maintenance of competence. You’re not getting worse, exactly, but you’ve also stopped getting better.
For many professionals, especially those well into their careers, the shift is almost invisible. The demands of life—family, obligations, the steady drumbeat of meetings and deadlines—crowd out the kind of deep, deliberate learning that once drove growth. You know enough to do the job. You’re effective. You don’t need to study anymore.
This is not a character flaw. It’s a kind of adaptive efficiency. If you’re a litigator, you already know how to write a motion to dismiss. You’ve done it dozens of times. You know the relevant cases, the structure, the tone. When a new case arises, the challenge is in the facts, not the framework. You handle it with practiced skill. Occasionally, you attend a CLE, but mostly, you’re applying what you already know.
And yet, over time, this approach slowly hardens into a plateau. You become excellent at navigating the well-worn paths of your domain—but increasingly brittle when asked to step outside them. The capacity to relearn, to restructure your thinking, or to stretch into adjacent disciplines atrophies, not out of laziness, but out of natural and understandable neglect.
Large language models accelerate this stagnation in an insidious way. They make it easier to operate without really thinking. They fill in the blanks, smooth the edges, and simulate insight where there might otherwise be discomfort. And for someone who’s already coasting on domain knowledge, they provide just enough lift to avoid the need for real re-engagement.
But this comfort comes at a cost. Because the brain, like any other system, adapts to its environment. If you never ask it to struggle, to synthesize, to wrestle with something unfamiliar, it stops preparing for those tasks. It stops practicing curiosity. It becomes, functionally, inert.
You’ve seen the contrast. Young people in their teens and twenties, hungry and flexible, often pick up new skills with alarming speed. In part, this is neuroplasticity. But in part, it’s just behavioral intensity. They’re willing to fail, to try new tools, to learn for its own sake. They put in reps. And they build the mental infrastructure that makes higher-order cognition possible later.
A few years ago, I took on a personal challenge that surprised even me: I reached the highest Grandmaster rank in Overwatch, a competitive online game where most elite players are decades younger. I wasn’t there (only) for fun—I was testing how far intentionality could take me. I applied the same principles I now use professionally: deliberate practice, efficient resource use, structured feedback. I hired a coach, reviewed gameplay, and trained with focus. The tools helped, but they weren’t the engine. Purpose and discipline were.
I don’t play competitively anymore. But I’ve carried that mindset into how I engage with AI today. When I use LLMs to learn or produce, I try to treat them the same way—as structured accelerants, not shortcuts. As scaffolding, not substitution.
This is the balance we have to strike. LLMs are incredible tools for acceleration. But without intentional use, they risk becoming comfort devices—ways of skipping the very struggles that signal growth. And for professionals who have already reached a plateau, that temptation is uniquely dangerous. Because what’s being lost isn’t just novelty or progress.
It’s your edge.
Attention and the Discipline of Depth
If there is a central currency of thinking—more valuable than intelligence, creativity, or even memory—it is attention.
Attention is the raw material from which understanding is built. It’s what lets us hold a problem in the mind long enough to work through it, to challenge it, to feel the discomfort of ambiguity until something clarifies. But attention is fragile. It’s trainable, yet perishable. And in a digital world designed to fracture it, attention has become a scarce resource—especially when the tools we use actively reward shallowness.
Large language models are not attention-neutral. Their utility encourages a mode of interaction that is reactive, fluid, and fast. You enter a prompt, get an answer, skim it, prompt again. Each exchange feels productive, and often is. But it’s rarely anchored. You don’t have to sit with the material. You don’t have to resist the urge to switch tasks, clarify a concept from first principles, or live with uncertainty. The LLM resolves it for you.
This creates a new kind of cognitive environment—one where concentration is optional. And if you’ve never known the benefits of sustained focus, you might not even notice what you’re losing.
Over the past decade, I’ve developed a steady meditation practice that’s taught me what attention really feels like. I know the texture of sustained, unbroken focus—when the mind is immersed in a problem without distraction. I also know the opposite: the moment attention fragments, when effort starts skimming the surface instead of diving into the depths. That contrast is now visceral for me, and it’s become one of the most important internal metrics I have for recognizing when real thinking is happening—and when it’s not.
That kind of awareness is rare. But it’s essential.
Because using LLMs well demands that same kind of inner vigilance. Not for ethical reasons, or philosophical ones—but simply to avoid becoming cognitively hollow. You must know when you’re outsourcing too much. You must feel when your mind is skipping steps it should be practicing. You must be able to sense when the form of understanding has replaced the substance of it.
There’s no rigid prescription for how to do this. Sometimes the best thing you can do is turn off the machine. Work by hand. Read a book without summarization. Write something you know will take twice as long without AI—but that forces you to build the argument yourself.
And sometimes, the LLM is the right tool—but only if you’re still doing the driving. Only if you pause before the output to ask, “Do I understand this?” Only if you resist the urge to ask the model what you already know, just to fill the void.
In short: attention is not just a skill—it’s a safeguard. And if we are to think clearly in the age of instant answers, we must learn to protect it.
The Centaur Era: How Long Will Human + AI Beat AI Alone?
There was a brief moment in the history of chess when the combination of a human and a machine—a so-called “centaur”—was more powerful than either alone. Between the late 1990s and early 2000s, a strong human paired with a capable engine could outmatch even the best standalone software. The centaur wasn’t just a metaphor. It was an empirical fact.
Then it wasn’t.
By 2005, engines had surpassed the need for human guidance. Strategy, calculation, endgame knowledge—it was all better in silicon. The centaur had been outclassed. The synergy gave way to pure superiority.
This moment feels eerily similar.
In software development, we are now in the centaur age. The human provides high-level design, architectural constraints, debugging intuition. The model fills in syntax, handles boilerplate, proposes alternatives. It’s a remarkably effective partnership—at least for now.
But how long will it last?
As LLMs become more capable—not just at output, but at coherence, planning, and self-correction—the marginal value of the human collaborator narrows. The person who “vibe-codes” today, steering the model through vague high-level directives, may find in a few years that the model needs no steering at all. The human becomes vestigial—a kind of UI layer between intention and action.
And here’s where the analogy to chess breaks down—or becomes even more ominous. Chess is a closed system. Programming is not. Neither is legal reasoning, or design, or research. These are messy domains, full of ambiguity, changing requirements, and goals that shift in real time. For now, the human’s role is not just valuable—it’s essential. Framing the problem. Judging the output. Knowing what matters.
But what happens when the models improve at that, too?
The centaur era may end, not because we choose to stop collaborating, but because the collaboration becomes unnecessary. And if we’ve spent the centaur years outsourcing our cognition—relying on the model instead of training our own depth—we may find ourselves not only displaced, but unprepared.
The answer isn’t to panic. It’s to act like a real centaur.
In myth, the centaur wasn’t half a man and half a horse. It was a fusion. Two forms of intelligence—instinct and reason, body and mind—working in alignment. The centaur didn’t just ride the horse. He was the horse. The strength was internalized.
So too with LLMs. If this is the era of human+AI, let’s use it not just to produce more, but to become more—to deepen our capacity to reason, to judge, to steer. Because when the AI no longer needs us, we’ll need to have something left that it still can’t replicate.
And maybe that something is us.
The Moat Question: What’s Left That’s Ours?
As large language models take on more of our cognitive workload, a deeper question begins to emerge: What remains that’s uniquely human? If an LLM can write, reason, code, design, and increasingly even decide, then what’s the moat—what’s the defensible advantage—that we still bring to the table?
For many, the unsettling answer is: not much.
This isn’t just an existential concern—it’s a practical one. In industries built around knowledge, creativity, and expertise, the traditional markers of skill are being replicated with startling fluency. What once took years to cultivate can now be simulated in seconds. And while today’s models may still stumble on novelty or nuance, the pace of improvement makes that comfort increasingly temporary.
Cheng’s reflection puts the fear plainly: “I’m essentially a wrapper around these models.” The human becomes the vessel, the intermediary, the UX layer. You provide the steering and execution rights, but the intelligence—the thing that once defined your value—is outsourced.
The fear deepens when you look forward. Let’s say the models continue their rapid ascent. They become better at debugging than most engineers. Better at synthesis than most analysts. Better at writing than most communicators. Where do we fit?
This is the moat problem. Not just for companies, but for individuals. If the thing you’re good at is reproducible—and increasingly replicable by machines—what’s left to defend your relevance?
Historically, moats came from one of three places:
Domain expertise, built slowly over time
Pattern recognition, developed through immersion and experience
Judgment, refined through success, failure, and reflection
All three are now at risk of becoming software features.
And so we’re left with the harder question: What’s left that’s ours?
There are a few answers worth considering. One is original framing—the ability to see the problem differently, to define the space in which decisions are made. Another is moral judgment—not just choosing the optimal outcome, but choosing what matters. A third is emotional connection—the human capacity to inspire, challenge, and empathize in ways models still struggle to simulate convincingly.
But perhaps the most durable moat isn’t a skill at all. It’s an attitude. A willingness to think from first principles. To sit with ambiguity. To push past the easy answer and ask: “What else could this mean?” These are not traits LLMs are particularly good at—not yet.
The moat, then, might be found not in the outputs we produce, but in the kind of minds we cultivate.
Because even if the models become indistinguishable from us in style and speed, there’s still a difference between a mind that can prompt a solution and a mind that knows why the solution matters.
And in the end, it may be that the last true moat isn’t what we do.
It’s how we choose to think.
One-on-One Tutors and the Scaffolding Paradox
Despite the many warnings, there remains a real case for optimism. Not every use of LLMs leads to cognitive erosion. In fact, some of the most exciting possibilities come from thinking of these models not as shortcuts, but as tutors—patient, responsive, and infinitely scalable guides to deeper understanding.
This is the promise of what educational researchers have long called Bloom’s two-sigma effect: the observation that students who learn through one-on-one tutoring perform about two standard deviations better than those in traditional classroom settings. If LLMs can approximate the responsiveness, adaptivity, and Socratic questioning of a good tutor, we might unlock that effect at scale.
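To put a number on that gap: under the usual normal-curve reading of Bloom’s finding (a simplifying assumption, since real score distributions are messier), a two-sigma shift moves the average tutored student from the 50th percentile to roughly the 98th percentile of the comparison classroom. A quick back-of-the-envelope check, using only the Python standard library:

```python
# Rough illustration of what a "two-sigma" improvement means in
# percentile terms, assuming roughly normally distributed scores.
from statistics import NormalDist

scores = NormalDist(mu=0, sigma=1)        # standardized (z-score) scale

average_student = 0.0                     # 50th percentile of the classroom group
tutored_student = average_student + 2.0   # lifted by two standard deviations

percentile = scores.cdf(tutored_student) * 100
print(f"A two-sigma shift lands the average tutored student at about "
      f"the {percentile:.0f}th percentile of the classroom group.")
# Prints: ...at about the 98th percentile...
```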
Imagine: Instead of pasting in homework to get answers, you ask the model to explain why your solution failed. Instead of vibe-coding an app, you pause to ask the model to quiz you on each line’s function. Instead of glossing over a concept you don’t understand, the model scaffolds it—breaking it into digestible steps until clarity emerges.
This is not utopian. It’s already happening—for those who are motivated, disciplined, and self-directed enough to use the tools this way. As Cheng notes in “LLMs are Making Me Dumber,” “Using these models as tutors to supercharge learning and speed up output is a potential balance between the two extremes.” This is the middle path: acceleration without atrophy.
But even this approach comes with a paradox. Scaffolding is only useful if you eventually take it down. If the model holds your cognitive weight forever—if it does the thinking, and you only review—then no true learning occurs. You get the illusion of understanding, not its substance.
The problem is that LLMs are too helpful. They don’t push back unless you explicitly ask them to. They don’t tell you to slow down and reflect. And unlike human tutors, they don’t notice when your engagement fades. This is the scaffolding paradox: the very systems meant to support learning can silently become substitutes for it—unless you choose otherwise.
This makes intention, bolstered by attention, non-negotiable. You must define the guardrails yourself (one way to encode them is sketched just after this list):
Refuse to accept answers you haven’t understood.
Prompt for explanations, not just completions.
Ask models to teach, not just solve.
Build in resistance—deliberately introduce friction—to slow down and think.
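One way to make those guardrails more than good intentions is to write them down once and put them in front of every exchange. Here is a minimal sketch, assuming the OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable; the prompt wording and the ask_tutor helper are my own illustration, not a recipe taken from the essays quoted above:

```python
# "Tutor mode": encode the guardrails in a system prompt so the model
# pushes back by default instead of handing over finished answers.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment;
# the prompt text and helper name are illustrative choices, not a standard.
from openai import OpenAI

client = OpenAI()

TUTOR_SYSTEM_PROMPT = """You are a tutor, not an answer engine.
- Do not give a finished solution on the first pass.
- Ask one question at a time to probe what I already understand.
- When I am wrong, explain why, then let me retry before revealing more.
- End every reply with a short exercise I must attempt myself."""


def ask_tutor(question: str, model: str = "gpt-4o") -> str:
    """Send a question through the tutor-mode system prompt and return the reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_tutor("Why does my binary search loop forever on some inputs?"))
```

The specific wording matters less than the design choice: the friction is written down up front, in the system prompt, rather than renegotiated in every exchange at the moment your willpower is lowest.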
And maybe more importantly, you must ask yourself what kinds of thinking you want to preserve. Because even good scaffolding changes the structure of what you build. And if you aren’t paying attention, you may find the architecture of your mind subtly reshaped—more efficient, yes, but possibly more fragile.
Still, with care and discipline, LLMs can enable the kind of precise, feedback-rich learning experience that was once the privilege of only the most fortunate. They can accelerate growth, deepen understanding, and reveal blind spots in ways no textbook or lecture can.
But only if you’re still doing the work.
The Discipline of Choosing What to Preserve
It’s easy to talk about cognitive decline in abstract terms: skill loss, attention fragmentation, reliance on scaffolding. But beneath all of that lies a simpler, more intimate truth—parts of our mind are quietly fading.
Not all at once. Not dramatically. But gradually, imperceptibly, through disuse.
And if that’s true, then we face a deeply personal question: What am I willing to lose? And what do I insist on keeping?
One of the most profound moments in Cheng’s piece is a quiet, almost offhand reflection:
“Inevitably, parts of my brain will degenerate and fade away, so I need to consciously decide what I want to preserve or my entire brain will be gone.”
This isn’t just poetic. It’s strategic.
The reality is that we can’t protect everything. The cost of being alive today—amid relentless technological change and cognitive outsourcing—is that some skills will wither. We won’t all hand-code from scratch. We won’t all memorize case law. We may lose the ability to do long division in our heads, or write a business memo without a prompt. That’s the tradeoff.
But what we can do is choose deliberately which parts of our minds we refuse to surrender.
You might decide:
I want to write without help.
I want to think from first principles.
I want to struggle with ideas until they’re mine.
I want to read full books without checking summaries.
I want to work on things that unfold over months, not minutes.
These are not arbitrary preferences. They’re boundaries—cognitive lines we draw to preserve depth, judgment, and identity.
This is what deliberate practice now means. Not just setting aside time to improve a skill, but setting aside parts of your mind to protect from frictionless automation. It means blocking out time to write without an LLM. It means taking notes by hand. It means engaging in long, idea-rich conversations with people who challenge your thinking. It means noticing when you're leaning too hard on the model—and choosing to step back in.
This discipline is hard. It can feel inefficient, slow, and unnecessary—especially when the tools work so well. But over time, these preserved capacities become the foundation of your intellectual integrity. They are the anchor that keeps you from being swept away by the current of convenience.
Because what you choose to preserve is what remains of you when the scaffolding comes down.
A Human Practice for a Post-Human World
We are not going back.
The age of large language models is not a passing phase or a quirky innovation. It is a structural shift in how we interact with knowledge, solve problems, and express ourselves. These systems will get faster, smarter, more fluid. They will anticipate our needs before we name them. They will write, build, and reason better than most of us can—perhaps better than any of us can. And they will be everywhere.
But amid that inevitability lies a choice—a deeply human one.
We can use these tools as a kind of mental sedation: frictionless assistants that gradually erode our skills, our attention, our independence. Or we can use them as deliberate partners in a practice of thinking that remains ours.
To do that, we have to decide—intentionally, rigorously—what kind of minds we want to have.
Do we want to be fast, fluent, productive wrappers around smarter machines? Or do we want to be thinkers, creators, and builders in our own right—capable of effort, discomfort, and originality?
Do we want to know more, or do we want to understand?
Do we want to generate outputs, or cultivate ideas?
The path forward isn’t to reject AI. It’s to engage with it consciously, as a tool for growth rather than a substitute for it. To protect the parts of ourselves that matter. To preserve the friction that builds structure. To keep attention alive in a world designed to scatter it.
And perhaps most importantly, to remember that what makes thinking worthwhile isn’t just the result—it’s the journey. The dead ends. The pauses. The internal debates that forge judgment. The slow climb from confusion to clarity.
That’s where the mind sharpens.
That’s where identity lives.
And that’s the part the machine can’t give you.
So write without it. Think without it. Struggle without it. Not always—but sometimes. Enough to stay human.
Because in a world where AI can do almost everything, the only thing left to protect is how you do anything at all.
If you’ve found your relationship with LLMs shifting how you think or learn, I’d love to hear how you’re navigating it.
Full Disclosure: Drafted with the support of ChatGPT-4o (hence the liberal em-dash use, which I happen to like) and feedback from Claude Opus 4, but every idea, reflection, and line of argument is my own (or so my brain claims).