The Empty Brain
When humans outsource memory to the cloud, computation to machines, reasoning to AI — and attention to algorithms. A long essay on cognitive debt, agency decay, and what remains of being human.
Opening
Try answering these quickly. No Google. No thinking for more than three seconds.
What is your closest friend’s phone number?
Your parents’ numbers — not “call Mom” in Contacts, but the actual digits?
That pho restaurant you went to three times this month — could you find it without Google Maps?
Calculate 237 × 18 in your head. No paper. No calculator.
That slightly difficult email you need to send to a partner — do you dare draft it yourself first and then ask ChatGPT to check it, or do you open ChatGPT before you have written the first line?
This week, how many decisions have you made — what to eat tonight, which headphones to buy, how to respond to an employee, where to invest — without consulting some algorithmic layer? No AI, no Shopee reviews, no scrolling through 30 recommended videos, no checking feeds to see what others are doing?
When was the last time you sat with an uncomfortable emotion for five minutes without opening your phone?
If most of your answers are “yeah, true…” with a faint, hard-to-name discomfort — you don’t have a personal problem. You’re not lazy, not weak, not “disconnected from your roots.” You’re a textbook example of a global phenomenon that cognitive science has long named: cognitive offloading — the transfer of cognitive functions from the human brain to external tools.
The problem is that this transfer has gone far beyond phone numbers and directions. It has spread to computation, to writing, to reasoning, to the very way we know what we want. It has crept into corners no one notices: not remembering what you thought about this morning, not being able to tolerate thirty seconds of waiting, not knowing what you actually like without an algorithm suggesting it.
Our grandparents lived their entire lives knowing roughly 150–200 people. Our parents drove without GPS, cooked twenty dishes without recipes, memorized hundreds of lines of poetry and proverbs. They were not smarter than us. They simply used their brains more, because they had no other option.
We don’t. Over roughly fifteen years, the largest human experiment ever conducted has been running on the brains of five billion people — with almost no control group. Smartphones. Image-based social media. Short-form video. And now, generative AI that can write for you, think for you, even feel for you.
This essay is an attempt to draw the full picture of what is happening. Not to be pessimistic, not to rationalize — but to pose what I believe will become the central question of the coming decade:
When memory is in the cloud, computation is in machines, reasoning is in AI, emotion is soothed by content, identity is shaped by algorithms, decisions are suggested by chatbots — what is left to call “human”?
I don’t have a definitive answer. But I have six sections below, and I think after reading them, you’ll have your own version.
Part 1: The three-layer architecture of outsourced cognition
Imagine human cognition as a three-layer architecture, like a computer system:
Layer 1 — Memory (Storage): where knowledge, experience, and information are kept.
Layer 2 — Computation (Compute): where specific tasks are processed — from addition to writing emails.
Layer 3 — Reasoning: where judgments, evaluations, and decisions are made.
Over roughly fifteen years, each layer has, in turn, been outsourced to a different form of technology. The striking thing is: we hardly noticed it was happening.
Layer 1: Memory has moved to the cloud
Before smartphones, the average adult remembered 20–30 phone numbers. Today, that number is close to zero. But this is more than convenience — it has a scientific name: the Google Effect.
In 2011, Betsy Sparrow, Jenny Liu, and Daniel Wegner published a landmark study in Science. They found that when participants believed information would be saved on a computer, their memory of that information was significantly worse than when they believed it would be deleted. The brain actively “does not remember” what it knows is available elsewhere.
This is an evolutionarily sensible mechanism. The brain is expensive — it consumes about 20% of the body’s energy while making up only 2% of body weight. Outsourcing memory externally to save energy is a smart strategy. The problem is not the nature of offloading, but the scale and feedback loop of it.
In 2016, Benjamin Storm at UC Santa Cruz found something more worrying: the tendency to rely on the Internet as a memory aid grows with each use. Every time you Google instead of trying to recall, your brain learns that “next time, Google again.” This is a self-reinforcing loop — a snowball effect — running inside every smartphone user today.
Worse, the mere presence of a smartphone is harmful. The 2017 “Brain Drain” study by Adrian Ward and colleagues at UT Austin showed: simply having a smartphone in sight — even turned off, even face-down — reduces working memory and fluid intelligence. Those most dependent on their devices suffer most. The phone doesn’t need to be on. Its presence alone occupies part of your mind.
Layer 1, in other words, has been fully externalized.
Layer 2: Computation has moved to machines
This is the earliest and least controversial layer to be outsourced. Pocket calculators appeared in the 1970s. Excel in 1985. GPS went mainstream in the early 2000s. We’re so used to it that we no longer call it “outsourcing.”
But consider further. A driver in 2000 knew the city, understood its structure, had a mental map of their neighborhood. A driver in 2026 depends entirely on Google Maps for even familiar routes. Neuroscience suggests why this matters: studies link habitual GPS use with reduced engagement of the hippocampus — the brain region responsible for spatial navigation — and with poorer hippocampus-dependent spatial memory.
This is “use it or lose it” at the neural level. The brain is plastic — it grows stronger in regions that are used, and atrophies in those that aren’t. When you outsource a capacity, you don’t just stop training it — you actively let it degenerate.
Layer 3: Reasoning is moving to AI
This is the newest and most alarming layer. Unlike the first two, reasoning is the capacity that defines the modern human. Descartes said cogito ergo sum — I think, therefore I am. If thinking itself is outsourced, what remains of “I”?
In June 2025, a study from MIT Media Lab led by Nataliya Kosmyna shook the scientific community. Titled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task”, the team used EEG to measure brain activity in 54 students split into three groups writing SAT essays:
- Brain-only group (no tools)
- Google group (standard search)
- ChatGPT group
After four months of observation, the results were among the clearest findings yet about AI’s neural impact on humans:
- The brain-only group had the strongest and widest neural connectivity, especially in alpha, theta, and delta bands — linked to creativity, memory load, and semantic processing.
- The Google group showed moderate activity.
- The ChatGPT group showed the lowest activity — by one analysis, brain activity dropped up to 47% compared to the control.
More striking: the effect did not disappear when they stopped using AI. When the ChatGPT group was moved to tool-free writing in the final session, they could not reactivate the necessary neural networks. Their essays, judged by two English teachers, were largely “soulless” — similar to one another, using the same expressions, lacking original thought.
Kosmyna calls this “cognitive debt” — the cognitive equivalent of financial debt: every time you outsource reasoning, you take a short-term loan whose long-term interest is paid in thinking capacity.
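The debt metaphor can be made concrete with ordinary compound interest. The sketch below is my own illustration of the analogy, not anything from the study: the costs and the rate are invented numbers. Each outsourced task takes out a small "loan," and outstanding loans accrue interest.

```python
def cognitive_debt(deferred_costs, weekly_rate):
    """Compound a small deferred cost per outsourced task.

    Both the costs and the interest rate are illustrative numbers,
    not measurements from the MIT study.
    """
    debt = 0.0
    for cost in deferred_costs:
        # Existing debt accrues interest, then a new 'loan' is taken out.
        debt = debt * (1 + weekly_rate) + cost
    return debt

# 52 weeks of outsourcing one task per week, at a 5% weekly 'interest' rate:
total = cognitive_debt([1.0] * 52, 0.05)
```

With a zero rate, the debt is just the sum of the loans; any positive rate makes the earliest loans disproportionately expensive later on — which is the point of the metaphor.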
Part 2: The catalyst — when attention shatters
If the story stopped at three outsourced layers, one could argue humans still retain the capacity for coordination — deciding when to use the cloud, when to calculate, when to ask AI. That is the meta-cognitive capacity — cognition about cognition.
The problem: the raw material of that coordination — sustained attention — is being eroded by another force.
The dopamine slot machine
TikTok launched in China (as Douyin) in 2016 and internationally in 2017, and quickly became the most engaging app in social-media history — average users spend 95 minutes a day on it. With videos optimized at 21–34 seconds, a typical user consumes 167–271 videos per day.
Each swipe releases a small hit of dopamine. The reward is variable-ratio reinforcement — unpredictable. Sometimes you get a great video; sometimes a dud. This is the exact mechanism behind slot machines — and it is the most addictive form of reinforcement humans know.
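A variable-ratio schedule is easy to sketch in a few lines. The toy simulation below is purely illustrative (the hit rate is an invented number, not TikTok's): each swipe is an independent gamble, so individual outcomes are unpredictable even though the long-run payoff rate is perfectly steady.

```python
import random

def variable_ratio_feed(n_swipes, hit_rate=0.3, seed=42):
    """Simulate a feed where each swipe is rewarded unpredictably.

    hit_rate is the chance a single swipe yields a 'great' video;
    the value is invented for illustration.
    """
    rng = random.Random(seed)
    rewards = [rng.random() < hit_rate for _ in range(n_swipes)]
    # Track the longest streak of duds: droughts followed by sudden
    # hits are what make the schedule so resistant to extinction.
    longest_drought, run = 0, 0
    for hit in rewards:
        run = 0 if hit else run + 1
        longest_drought = max(longest_drought, run)
    return sum(rewards), longest_drought

hits, drought = variable_ratio_feed(1000)
```

Over a thousand swipes the average payoff is stable, yet long droughts still occur — and it is the unpredictability, not the average, that drives the next swipe.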
Pediatricians have started calling TikTok a “dopamine machine.” A 2024 meta-analysis of nearly 100,000 participants found that regular short-form video users scored lower on attention, inhibitory control, and working memory — the three core capacities for reading, learning, and problem-solving.
The phenomenon now has a name: “TikTok brain.” This is no longer metaphor. It describes a real brain state.
Part 3: The trap — the self-reinforcing loop
The four phenomena above — outsourcing memory, computation, reasoning, and fragmented attention — do not exist independently. They form a self-reinforcing loop:
Step 1: Outsource memory → no need to hold long context → reduced ability to maintain complex information mentally.
Step 2: Less mental context → harder to reason deeply (deep reasoning requires holding many variables at once) → depend on AI to reason.
Step 3: AI gives fast answers → the brain gets used to “instant answers” → reduced tolerance for slow, uncertain thinking.
Step 4: No tolerance for slow thinking → easily pulled into short-form content (immediate gratification).
Step 5: Short-form content trains the brain with dopamine loops → fragmented attention → even less able to reason deeply.
Step 6: Less able to reason → more dependent on AI → back to Step 1, but at a deeper level.
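The six steps can be read as a simple feedback system. The toy model below is entirely my own sketch, with invented parameters: dependence erodes capacity, and lost capacity breeds more dependence.

```python
def agency_decay(steps=10, capacity=1.0, dependence=0.1,
                 erosion=0.15, pull=0.2):
    """Toy feedback loop: dependence erodes capacity, and lost capacity
    increases dependence. All parameters are invented for illustration."""
    history = []
    for _ in range(steps):
        # Each cycle of outsourcing wears down cognitive capacity a little...
        capacity = max(0.0, capacity - erosion * dependence)
        # ...and the capacity gap pulls dependence further upward.
        dependence = min(1.0, dependence + pull * (1.0 - capacity))
        history.append((round(capacity, 3), round(dependence, 3)))
    return history

trace = agency_decay()
```

In this sketch capacity falls and dependence rises monotonically — a cartoon, but it captures why the loop, once entered, deepens on its own.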
With each pass through the loop, meta-cognitive capacity — the ability to coordinate one's own cognition — weakens a little further. This is not speculation. It is a documented pathway from learned dependence to learned helplessness.
Psychology Today (2025) called this “agency decay” — the degradation of the capacity to act as a subject. Not a dramatic overnight takeover by machines, but the quiet, gradual erosion of the human ability to think and act independently.
Part 4: The missing pieces
Three cognitive layers and attention aren’t the whole picture. Humans are outsourcing deeper things too — and these are spoken of less often.
4.1. Outsourced emotion
The Korean-German philosopher Byung-Chul Han, in The Burnout Society, describes contemporary society as one of “excess positivity” — where everyone must always be OK, always happy, always present, always connected.
In that context, negative emotions — sadness, loneliness, anxiety, boredom — become something to “handle” immediately. And the most common coping method now is: open TikTok, chat with an AI companion, message ten people, scroll Instagram.
This is outsourced emotion. Instead of sitting with an emotion — a capacity that Buddhist tradition, Stoicism, and modern psychology all see as foundational to emotional maturity — modern humans are losing this ability. AI companions like Replika and Character.AI are industrializing this trend.
4.2. Outsourced identity
This is the deepest layer. The algorithms of TikTok, Instagram, YouTube don’t just show you what you like — they shape what you like, what you believe, what you want to become.
Yanis Varoufakis, in Technofeudalism (2023), argues we no longer live in traditional capitalism. We live in a form of techno-feudalism, where big tech corporations are the lords and users are serfs paying rent in data. In this system, our preferences are no longer ours — they are produced by networks of machines.
In 1998, Andy Clark and David Chalmers argued in "The Extended Mind" that the mind extends into external tools. The more troubling question now is: if the mind extends into devices, and those devices are controlled by companies optimizing for profit — whose mind is actually running?
Part 5: What remains of the human?
If memory is in the cloud, computation in machines, reasoning in AI, emotion soothed by content, identity shaped by algorithms, decisions suggested by chatbots, the body forgotten, and real relationships replaced by parasocial ones — what is left to call “human”?
The pessimistic answer
Nothing. Or more precisely: what remains is a "human interface" — a performance layer running between external systems. The human is no longer a cognitive subject, but a node in a network, an I/O device for algorithms. Varoufakis calls them cloud serfs. Han calls them "achievement-subjects" exploiting themselves. Carr, in The Shallows, describes minds the web has made permanently shallow.
In this view, human civilization is not destroyed by AI. It hollows itself out from within, while still appearing “progressive” on the outside.
The optimistic answer
This is a cognitive evolutionary step — like when humans outsourced memory to writing 5,000 years ago. Plato worried writing would ruin memory, and he was right — but he was also wrong, because writing enabled far more complex thinking than memorization. AI can play a similar role: when we don’t need to remember, compute, or reason about obvious things, we can redirect cognitive energy to higher layers — ethical judgment, artistic creation, meaning-making, deep connection with others.
The catch
The optimistic answer requires one condition: the coordinating capacity must remain intact. But that is exactly the capacity being eroded by attention fragmentation and dopamine loops. This is the central paradox:
To outsource healthily, humans need strong meta-cognitive capacity. But the very technologies we outsource to are destroying that capacity.
Part 6: The way out
Identifying a problem is not the same as knowing how to solve it. But it’s the first step.
At the individual level: rebuild meta-cognition
- Separate shallow and deep time. Don’t mix them.
- Practice tolerating boredom. Boredom is where creativity is born.
- Digital Sabbath. One day a week without a smartphone.
- AI-free zones. Some tasks you do without AI — not for efficiency, but to keep the cognitive muscle.
- Meditation. Neuroscience shows mindfulness thickens cortical areas responsible for attention. Meditation is a gym for meta-cognition.
At the family level: protect childhood
Jonathan Haidt’s four norms:
- No smartphones before 14
- No social media before 16
- Phone-free schools
- More free, independent, responsible play in the real world
At the societal level: redesign technology
- Regulation. Like the EU’s Digital Services Act, or Australia’s 2024 ban on social media for under-16s.
- Agency-preserving AI. Let the user still do the reasoning.
- Alternative economics. Subscriptions over ads.
- Digital media education. Teach people how algorithms work — as we teach nutrition to resist industrial food.
Conclusion: the remaining question
I didn’t write this essay with a clear conclusion. No one has a clear conclusion. We are living in the largest experiment in human cognitive history, and the data is still being collected.
But the question worth asking is personal. Not “what will AI do to humanity” — but “what kind of person do I want to become in ten years?”
If ten years from now, every time I need to think I open ChatGPT, every time I’m sad I open TikTok, every time I need to decide I ask AI — am I still “me”? Or have I become a soft interface between systems?
The answer is not in rejecting technology — that is both impossible and undesirable. The answer is in keeping a part of cognition un-outsourced. A core. A place where “I” still actually think, actually choose, actually feel, actually take responsibility.
Perhaps that is the new definition of “human” in the age of AI: not the one who knows most, not the one who calculates best, not the one who reasons fastest — but the one who retains the capacity to choose when not to outsource.
That part, if preserved, is the most valuable.
This is an abridged English version of the Vietnamese original. Read the full essay in Vietnamese: Bộ Não Rỗng.