Sunday, March 22, 2026

Adults Lose Skills to AI. Children Never Build Them

I've spent the past year or so writing about cognitive offloading, especially in children. I’ve argued that outsourcing to AI weakens critical thinking skills. I’ve read the recent research. I’m seeing the patterns play out in schools and universities.

But I may have missed something—not about the facts, but the framing. The real issue may not be that people are broadly becoming lazier or stupider (though that may still be the case). Instead, I argue that a sharp distinction needs to be drawn based on who is using the tool. What AI does to a 45-year-old is likely categorically different from what it does to a 14-year-old.

If I use AI to summarize a research paper, the argument in favor is efficiency. I’ve read hundreds of papers over the last 15 years. I (presumably) know what a good argument looks like, so I’m offloading a task I already know how to do. If I lost access to AI technology tomorrow, I could still read and summarize myself. It would take longer, but the capacity is still there.

That's atrophy—a muscle I stopped exercising. It's weakened, but it still exists. I can rebuild it if needed.

I mentioned Michael Gerlich's study on the negative correlation between AI offloading and critical thinking in a recent piece, "Why Kids Can’t Resist Cognitive Offloading," but I missed a critical distinction. Participants over 46 showed higher critical thinking scores alongside lower AI reliance. Participants aged 17 to 25 showed the inverse.

In my view, the most likely explanation is not generational preference but biological development. The older group probably offloaded tasks they already knew how to perform. The younger group offloaded tasks they had never learned to perform. The neural pathways for evaluating sources and constructing arguments were never formed. You can’t atrophy a muscle that was never built.

This is foreclosure—and foreclosure may not be reversible the way atrophy is.

The AI Audit Problem

When I ask AI to evaluate a claim, I can check the output against my own judgment. I notice when it oversimplifies. I catch when it omits a competing interpretation. I understand when the confidence of the language exceeds the strength of the evidence. This is auditing the output.

A child usually won't be able to do this—not because children are less intelligent, but because auditing requires the exact domain knowledge that the child is supposed to be developing. You cannot check an AI's analysis of heredity if you don't yet understand what heredity is. You cannot evaluate an AI's interpretation of the French Revolution if you've never read conflicting accounts of it yourself.

Adult AI interaction is generally (but not exclusively) delegation of automatable tasks. This allows adults to retain independent judgment.

But a young adult’s interaction is more likely to be substitution, where the AI makes the micro-judgments the child is supposed to be building. So, however AI affects adults, compound it for children.

Shen and Tamkin's 2026 preprint showed this with software developers (adults) learning a new coding library. Developers who fully delegated to AI produced working code but failed conceptual quizzes afterward. They couldn't debug what the AI had written for them. They had the output without the understanding.

Remember: these were adults with existing programming expertise, and they still performed 17 percent worse than the group without AI assistance.

Now consider a child encountering programming for the first time, with zero expertise to fall back on. There is no baseline against which to even compare the AI's output. The substitution becomes foreclosure.

Homogenization as Identity Formation

This is where my thinking over the last year has changed the most. I used to see homogenization (every student producing eerily similar essays, identical arguments, the same examples in the same sequence) as a cheating or assessment problem. But now I think it is a diagnostic signal of something far more consequential.

When every student in a class processes information through the same language model, they are learning to reason through the same system. This introduces a new threat vector for the developing mind.

The model's statistical biases become the student's default framing. The model's reasoning structure becomes the student's reasoning structure. LLMs homogenize not just language but also perspective and reasoning strategies. The convergence tracks toward Western, educated, mainstream norms because that's what dominates training data and gets reinforced through alignment (Sourati et al., 2026).

Adults using AI mostly just sound generic. But for a child who never formed independent reasoning, "generic" is a major identity problem. The model’s reasoning doesn’t compete with the child’s reasoning; it becomes the child’s reasoning. For children still building the cognitive skills needed to evaluate the world, the effect will not be temporary but foundational.

What I Missed in the AI-Cognitive Offloading Story

I spent a year treating cognitive offloading as a single phenomenon. I no longer think it is one. There are two fundamentally different events hiding behind the same behavior.

An adult choosing to offload a task they understand is making a tradeoff between decreasing effort and increasing efficiency. The capacity to do that task independently exists. The choice is deliberate. The atrophy is (probably) recoverable.

A child offloading a task they've never learned to perform is not making a choice. They are skipping a developmental step entirely. The capacity doesn't exist yet. The foreclosure may be permanent—and because they have no independent baseline, they cannot recognize what they're losing.

The downside of adult offloading is that people get less sharp. The downside of adolescents growing up delegating to AI is a generation that was never sharp to begin with. Protecting the space our children need to develop the foundational skills of thinking is now non-negotiable.

References

Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006

Shen, J. H., & Tamkin, A. (2026). How AI impacts skill formation. arXiv preprint, arXiv:2601.20245. https://doi.org/10.48550/arXiv.2601.20245

Sourati, Z., Ziabari, A. S., & Dehghani, M. (2026). The homogenizing effect of large language models on human expression and thought. Trends in Cognitive Sciences. Advance online publication. https://doi.org/10.1016/j.tics.2026.01.003

Timothy Cook