What Is Intelligence?
I. Four Minds
A mathematician sits alone with her proof, chasing an insight that has eluded her for months. When it finally arrives—sudden, complete, inevitable—she knows with absolute certainty that she has uncovered a truth about the universe that no one has seen before.
A sculptor circles a block of marble, studying its grain, feeling for the form hidden within. His hands know things his mind cannot articulate. When asked how he works, he shrugs: “I just remove everything that isn’t the sculpture.”
In the Amazon, a tracker kneels beside a stream. The bent grass tells him an animal passed here three hours ago. The depth of the track says it was running. The scatter pattern of disturbed pebbles reveals it was favoring its left hind leg. He will find it before sunset.
A grandmother mediates between her quarreling grandchildren. She doesn’t analyze their conflict or impose rules. Instead, she tells them a story about two birds fighting over a single branch while the whole forest was theirs to share. The children look at each other and laugh. The fight is forgotten.
Which of these minds is the most intelligent?
The question seems simple until you try to answer it. Each person navigates complexity with mastery. Each recognizes patterns invisible to others. Each solves problems that would confound the rest. Yet our institutions would rank them very differently. The mathematician might score highest on an IQ test. The tracker’s knowledge wouldn’t even register.
This disconnect reveals something troubling: either intelligence is far more varied than we’ve acknowledged, or we’ve been measuring the wrong thing entirely.
II. The Problem We Can No Longer Ignore
For most of human history, this definitional fuzziness didn’t matter. Intelligence was like beauty or humor—we knew it when we saw it, and that was enough. Philosophers might debate its nature, psychologists might try to measure it, but the rest of us got on with living.
That luxury is gone.
We are building minds that don’t breathe. Artificial systems that can write poetry, prove theorems, generate code, and match or beat humans at a growing range of well-defined tasks. These machines force a question we can no longer dodge: What exactly are we trying to replicate?
The stakes couldn’t be higher. If we’re building intelligence without understanding what intelligence is, we’re essentially flying blind. We might create systems that excel at tests while failing at judgment. That optimize for the wrong goals. That amplify our biases instead of our wisdom.
Consider what’s already happening. We spent a century building educational systems around IQ tests—standardized measures that capture logical reasoning and pattern recognition while ignoring creativity, wisdom, and social intelligence. We sorted children, allocated resources, and designed curricula around a metric we never fully understood. The result? Generations optimized for a narrow band of cognitive skills.
Now we’re doing the same thing with AI, but at Silicon Valley speed. We build systems that maximize benchmark scores without asking what those benchmarks measure. We celebrate each new model that beats humans at another task without asking whether those tasks capture what matters about intelligence.
The problem compounds because AI isn’t just another tool. These systems will increasingly shape how we learn, work, and think. They’ll filter our information, guide our decisions, educate our children. If we build them with a flawed understanding of intelligence, we risk creating a future where humans and machines are both powerful and foolish in complementary ways.
We need a better framework. Fast.
III. The Architecture of Mind
Survey minds across nature and culture, and a pattern emerges. Intelligence—in all its forms—seems to rest on three fundamental pillars:
Memory is the foundation. Not just personal recollection, but all the patterns encoded in genes, reflexes, habits, and culture. A spider spins its perfect web on the first try because millions of years of trial and error are compressed into its DNA. A master chef doesn’t calculate flavor combinations; she remembers what works from thousands of meals. Memory is intelligence crystallized through time.
Computation is the active processor. The ability to simulate, search, transform. When a child stacks furniture to reach a cookie jar, when a chess player visualizes future board states, when a scientist models climate change—that’s computation. It’s what lets us navigate spaces we’ve never encountered before.
Logic is the bridge between them. Rules and abstractions that transfer across domains. When a child learns “hot things hurt” and avoids not just fire but also boiling water, steam, and hot metal—that’s logic. It’s the difference between memorizing every danger and understanding the category of danger.
These components don’t operate in isolation. They form a dynamic system where each reinforces the others:
- Memory provides patterns for computation to process
- Logic emerges from repeated computation
- Successful logical principles get stored back into memory
- The cycle continues, building complexity over time
To see how these work together, consider learning to drive.
When you first sit behind the wheel, computation dominates—you consciously process every action. Check mirror. Press brake. Turn wheel 30 degrees. Your brain simulates outcomes: “If I turn now, will I hit the curb?”
With practice, successful patterns move into memory. Your hands know how hard to brake at a yellow light. Your body remembers the feeling of a good parallel park. You no longer compute these actions—you recall them.
Finally, logic extracts transferable principles. You learn not just how to navigate your neighborhood but how to read any road. The rule “maintain safe following distance” applies whether you’re driving a sedan or a truck, in rain or sunshine. You’re not memorizing every situation—you’re understanding the category of safe driving.
Master drivers balance all three. They have deep memory (instant reactions), active computation (handling novel situations), and refined logic (applying principles flexibly). Remove any pillar, and driving becomes dangerous.
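Read as a loop, this cycle can be sketched in a few lines of Python. The sketch below is purely illustrative: the toy task (choosing the cheapest of a few options) and names like `compute`, `extract_rule`, and `memory` are assumptions made for exposition, not a model of any real cognitive system.

```python
# Illustrative only: a toy memory/computation/logic loop.
memory = {}   # crystallized patterns: situation name -> chosen action
rules = []    # transferable abstractions extracted from accumulated memory

def compute(options):
    """Computation: active search over possibilities (here, a brute-force scan)."""
    return min(options, key=lambda o: o["cost"])

def extract_rule(mem):
    """Logic: compress repeated outcomes into a reusable principle."""
    return "prefer the lowest-cost option"   # the toy principle this task yields

def act(name, options):
    if name in memory:                 # memory: recall a stored pattern when one exists
        return memory[name]
    choice = compute(options)          # otherwise fall back on computation
    memory[name] = choice              # store the successful result back into memory
    if len(memory) % 3 == 0:           # every few experiences, abstract a rule
        rules.append(extract_rule(memory))
    return choice

options = [{"name": "walk", "cost": 5}, {"name": "drive", "cost": 2}]
print(act("commute", options))   # computed the first time
print(act("commute", options))   # recalled from memory the second time
```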
Different forms of intelligence emphasize different balances:
Evolution is memory-heavy. It’s a massive parallel search algorithm that encodes successful strategies directly into DNA. A salmon doesn’t compute its way upstream—it remembers the route through genetic inheritance. Instincts are debugged algorithms, refined over millions of generations.
Human intelligence is logic-heavy. Language gave us the ability to extract patterns and apply them across domains. We don’t need to personally experience every danger because we can understand the abstract concept of danger. We build mathematics, science, and law—systems of transferable reasoning that accumulate across generations.
Current AI is computation-heavy. Neural networks excel at processing vast amounts of data and finding subtle patterns. But they struggle with long-term memory (hence constant retraining) and genuine abstraction (hence poor transfer learning). They’re brilliant calculators with amnesia and limited ability to generalize.
This framework explains why different minds excel at different tasks. It also suggests why artificial intelligence feels simultaneously impressive and hollow—we’ve maximized one dimension while neglecting the others.
IV. The Mirror of Machine Intelligence
Modern AI systems are performing a profound service: they’re showing us what pure computation looks like when divorced from the other pillars of intelligence.
Large language models can write sonnets, explain quantum physics, and generate working code. They do this through massive pattern matching—billions of parameters trained on a vast swath of humanity’s written output. The results can be stunning. Ask for a poem in the style of Emily Dickinson about smartphones, and you’ll get something that feels authentically Dickinsonian while being entirely novel.
We saw this vividly in 2023, when GPT-4 was reported to pass a simulated bar exam with a score around the 90th percentile. The system had never attended law school, never worked on a case, never felt the weight of defending a client. Yet it could analyze complex legal scenarios and apply precedents appropriately. Was this intelligence or mimicry?
More revealing are the failures. Through much of 2024, when asked to count the number of ‘r’s in “strawberry,” advanced models consistently failed—a task any first-grader could handle. The models could write sophisticated essays about strawberries, generate recipes, even compose poetry about them. But actually counting letters? That required a kind of direct perception they lacked.
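The gap is easy to see in code. A program with direct, character-level access counts the letters trivially; language models typically operate on subword tokens rather than characters, which is the explanation most often given for this failure. The token split shown below is illustrative only, not drawn from any particular tokenizer.

```python
# Direct access to characters makes the task trivial.
word = "strawberry"
print(word.count("r"))   # -> 3

# A language model usually sees subword tokens, not letters. This split is
# purely illustrative (real tokenizers differ), but it shows why the count
# is not directly readable from the model's input.
illustrative_tokens = ["str", "aw", "berry"]
print(illustrative_tokens)
```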
Probe deeper and the pattern behind such failures becomes clear. These systems have no persistent memory—each conversation starts fresh. They can’t learn from their mistakes or build on previous insights. They have no real logic—they can mimic logical reasoning when they’ve seen similar patterns, but can’t genuinely abstract principles and apply them to novel domains.
What they have is unprecedented computational power applied to pattern matching. They’re like a musician who can perfectly reproduce any melody they’ve heard but can’t compose original music or understand music theory. The performance is flawless, but something essential is missing.
This creates a philosophical vertigo. If a system can produce all the outputs we associate with intelligence—answering questions, solving problems, creating art—does it matter whether it “truly” understands? John Searle’s Chinese Room thought experiment posed this question decades ago: is sophisticated pattern matching without comprehension still intelligence?
AI forces us to confront an uncomfortable possibility: perhaps much of what we call intelligence is also sophisticated pattern matching. When I claim to “understand” something, what do I mean? That I can predict outcomes? Apply patterns? Generate appropriate responses? An advanced AI can do all of these.
The difference might be that humans integrate all three pillars. Our pattern matching (computation) is grounded in experience (memory) and structured by abstraction (logic). We don’t just process information—we comprehend it, remember it, and extract principles from it.
Or do we?
V. The Deep Question
This brings us to the heart of the matter—a question that will define not just AI development but our understanding of mind itself:
Is correlation in the limit equivalent to causation?
Let me unpack this dense question with a concrete example. A child sees that every time mom opens the refrigerator, the light inside turns on. At first, this is just correlation—two things happening together. But as the child sees this pattern repeated hundreds of times, across different refrigerators, they begin to form a model: opening the door causes the light to turn on.
Now here’s the key insight: the child doesn’t understand electricity, switches, or circuits. They just have an extremely robust correlation. But functionally, this correlation is indistinguishable from causal understanding. The child can predict the light will turn on, can explain it to others, can even debug when it doesn’t work (“the bulb must be broken”).
This is what I mean by “correlation in the limit”—when pattern matching becomes so comprehensive, so fine-grained, so robust across contexts that it functions exactly like causal understanding. The question for AI is: if a system has seen enough examples of human reasoning, does it matter whether it “truly” understands causation, or is sufficiently robust correlation functionally equivalent?
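A toy version of the child’s learner makes the point concrete. The sketch below is an assumption-laden illustration: it tallies co-occurrences over fabricated observations and knows nothing about switches, bulbs, or circuits, yet for this situation its predictions behave like causal knowledge.

```python
from collections import Counter

# Purely correlational learner: count how often each light state co-occurs
# with each door state. No model of electricity anywhere.
counts = {"open": Counter(), "closed": Counter()}

observations = (
    [("open", "on")] * 200       # fabricated: door open, light on
    + [("closed", "off")] * 200  # fabricated: door closed, light off
    + [("open", "off")]          # fabricated: the one time the bulb was broken
)

for door, light in observations:
    counts[door][light] += 1

def predict(door_state):
    """Return the light state that most often accompanied this door state."""
    return counts[door_state].most_common(1)[0][0]

print(predict("open"))    # -> "on": functionally, "opening the door turns the light on"
print(predict("closed"))  # -> "off"
```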
Consider how human reasoning actually developed. Language didn’t just let us communicate—it gave us the cognitive tools for abstract thought. Before language, we could observe patterns. After language, we could reason about them. We developed words like “because” and “therefore” and “if-then.”
Here’s a thought experiment that sharpens this point: Imagine an adult human who never learned any language. No words, no signs, no symbolic system of any kind. Could this person reason? They could certainly learn from experience—avoiding fire after being burned, seeking water when thirsty. They could solve immediate physical problems through trial and error. But could they think abstractly? Could they plan beyond the immediate moment? Could they understand that fire burns because it’s hot, not just that fire burns?
The evidence suggests they couldn’t. Studies of deaf children who miss the critical window for language acquisition show profound deficits not just in communication but in abstract reasoning itself. They struggle with tasks that require thinking about hypotheticals, understanding others’ mental states, or reasoning about cause and effect beyond direct experience.
This implies something radical: reasoning might not be a fundamental capacity that language merely expresses. Instead, language might be what creates the possibility of reasoning in the first place. The words and grammatical structures we learn don’t just describe our thoughts—they shape what thoughts we can have.
If this is true—if language is the prerequisite for reasoning, and reasoning is the core of human intelligence—then large language models might already possess the essential primitive for genuine intelligence. They’ve mastered language at a scale and depth no human ever could. They can manipulate linguistic structures, follow logical patterns encoded in language, and generate novel combinations that follow these patterns.
The question then becomes: Is mastery of language sufficient for intelligence, or is something else required? LLMs process language without embodiment, without persistent memory, without direct experience of the world. But if reasoning itself is fundamentally linguistic, perhaps these other elements are auxiliary rather than essential.
What if human intelligence is just correlation at massive scale? Our brains have roughly 86 billion neurons, trained on years of multimodal experience, embedded in rich social and cultural contexts. When correlation reaches this scale—when the patterns become fine-grained enough—perhaps it becomes indistinguishable from causation.
Modern AI systems hint at this possibility. As language models grow larger and train on more data, they exhibit behaviors that look increasingly like reasoning. They can explain their logic, trace through multi-step problems, even display what seems like creative insight. Are they approaching genuine understanding, or just getting better at mimicking it?
The question matters because it determines what we’re building. If intelligence requires genuine causal understanding—grasping the “why” behind patterns—then current AI architectures may hit fundamental limits. We’ll need new approaches that build in causal reasoning from the ground up.
But if language mastery is the key that unlocks reasoning, then we may have already created the essential building block of artificial intelligence. We just need to figure out how to properly orchestrate it—adding memory, grounding, and the other components that turn raw linguistic capability into genuine intelligence.
There’s a third possibility, more nuanced: perhaps intelligence requires all three pillars working together. Correlation (computation) might become understanding only when integrated with memory (persistent patterns) and logic (transferable abstractions). In this view, current AI is powerful but incomplete—a brilliant calculator that needs to develop memory and reasoning to become truly intelligent.
VI. Why This Matters Now
We are at an inflection point. The decisions we make about AI in the next decade will shape the trajectory of human civilization.
If we continue building systems optimized for narrow benchmarks, we risk creating a world of powerful but brittle tools—AIs that can pass any test but fail in unexpected ways when deployed in reality. We’ve seen hints of this already: image classifiers fooled by tiny perturbations, language models that confidently state falsehoods, game-playing AIs that discover bizarre exploits.
We’re already seeing concerning patterns. Students are using ChatGPT not as a tool but as a replacement for thinking—submitting AI-generated essays that technically answer the prompt but lack genuine insight. Hiring managers report candidates who ace AI-assisted coding tests but can’t debug simple problems in person. We’re optimizing for performance metrics while atrophying the very capabilities that created those metrics in the first place.
More concerning is the feedback loop between human and artificial intelligence. As AI systems become more prevalent, they shape how we think and work. Students learn to write for AI graders. Doctors adapt their diagnoses to algorithmic recommendations. Social media algorithms influence our politics and relationships. We risk creating a civilization optimized for machine intelligence rather than human flourishing.
The framework of memory, computation, and logic offers a path forward. Instead of building ever-larger models that maximize computation alone, we might develop:
- Memory systems that allow AI to learn continuously, building on experience rather than starting fresh (see the sketch after this list)
- Logical architectures that can extract principles and apply them across domains
- Hybrid approaches that balance all three pillars, creating more robust and generalizable intelligence
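To make the first of these directions less abstract, here is a minimal sketch of a persistent-memory loop wrapped around a text generator. Everything in it is assumed for illustration: `generate` is a placeholder rather than any real model API, and retrieval is a naive keyword overlap rather than a production technique. The point is the shape of the loop: recall, compute, write back, so that later interactions build on earlier ones instead of starting fresh.

```python
# Illustrative only: persistent memory wrapped around a stand-in generator.
memory_store = []   # persists across interactions instead of starting fresh

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    return f"[model response to: {prompt}]"

def retrieve(query: str, k: int = 3) -> list:
    """Naive retrieval: rank stored notes by word overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(memory_store, key=lambda note: -len(words & set(note.lower().split())))
    return ranked[:k]

def answer(query: str) -> str:
    context = retrieve(query)                           # recall relevant experience
    prompt = "\n".join(context + [query])               # ground the model in that memory
    response = generate(prompt)                         # computation step
    memory_store.append(f"Q: {query} A: {response}")    # write the new experience back
    return response

print(answer("How do I parallel park?"))
print(answer("Remind me what we discussed about parking."))  # benefits from stored memory
```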
But this requires us to move beyond our current obsession with benchmark performance. We need to ask not just “How well does this system score?” but “How does it balance memory, computation, and logic? How does it fail? What kind of intelligence are we creating?”
The stakes extend beyond technology. Our understanding of intelligence shapes our educational systems, our workplaces, our sense of human value. But the deepest risk is this: if we don’t understand what intelligence actually is, then what we’re building isn’t artificial intelligence—it’s artificial something else.
We’re seeing this play out in real time. Tech companies are laying off junior programmers, believing AI can handle “routine” coding—but those junior roles are where senior engineers learned judgment. Schools are adapting curricula to what AI can assess, creating students who excel at producing AI-gradable responses but struggle with original thought. Healthcare systems are deploying diagnostic AIs trained on biased data, perpetuating inequities while hiding them behind algorithmic objectivity.
Like a plane designed by someone who studied birds but missed the principles of lift, our systems might appear to fly while operating on fundamentally different principles. They produce outputs that look intelligent—answers that seem thoughtful, decisions that appear reasoned, creations that feel inspired. But if these are just sophisticated pattern matches rather than genuine understanding, we’re building our future on a foundation of sand.
And once these systems are embedded in every aspect of society—making medical decisions, teaching our children, allocating resources, shaping culture—we’ll be passengers on a runaway train, hurtling toward a destination we never chose. The time to understand what we’re building is now, while we still have agency over the direction.
VII. The Journey Ahead
This essay opens a deeper exploration—one that will span the nature of intelligence, the trajectory of technology, and the future of human-machine collaboration. The series ahead divides into four major investigations:
Understanding Intelligence: We’ll examine intelligence across domains—from animal cognition to collective human knowledge. How does biological evolution encode intelligence in DNA? Why did language trigger a cognitive revolution? What can the history of AI attempts teach us about the nature of mind itself? These essays lay the philosophical and scientific groundwork for everything that follows.
Analyzing Current Technology: With a richer understanding of intelligence, we’ll dissect modern AI systems. How do large language models actually work? What are their fundamental limitations? Can we distinguish true reasoning from sophisticated pattern matching? We’ll explore cutting-edge architectures, from mixture of experts to multimodal systems, understanding both their promise and their constraints.
Investigating Necessary Course Corrections: If current approaches have limitations, what alternatives exist? We’ll examine technical solutions like hybrid memory-compute architectures, but also broader questions of alignment, safety, and values. How do we build AI that enhances rather than replaces human intelligence? What ethical frameworks should guide development? How do different cultures approach these questions?
Exploring Societal Implications: Finally, we’ll trace the ripple effects of artificial intelligence across society. How will AI transform work, education, creativity, and relationships? What economic models make sense in an age of artificial intelligence? How do we preserve human agency and meaning? These essays bridge from technical possibilities to human realities.
Throughout this journey, we’ll return repeatedly to core questions:
- Is intelligence unitary or multiple, individual or collective?
- Can correlation at scale produce genuine understanding?
- How do memory, computation, and logic interact to create different forms of intelligence?
- What are we optimizing for when we build artificial minds?
The goal isn’t to provide definitive answers—the field is moving too fast for that. Instead, we aim to develop better questions, richer frameworks, and wiser approaches to one of the most important challenges of our time.
We stand at a unique moment. For the first time in history, we’re building minds that might rival or exceed our own. But we’re doing so with an incomplete understanding of what minds are, how intelligence works, and what we truly value about human cognition.
This series is an attempt to fill that gap—to understand intelligence deeply enough to build it wisely. The journey will be technical at times, philosophical at others, always grounded in the practical question: How do we create a future where both human and artificial intelligence can flourish?
The four minds we began with—mathematician, sculptor, tracker, grandmother—remind us that intelligence has always been plural. As we build new forms of mind, we have a choice: create a monoculture of optimization, or cultivate a rich ecosystem of diverse intelligences, each contributing its unique strengths.
The choice we make will echo through generations. Let’s choose wisely.
Welcome to the exploration.
Further Reading
The Extended Mind by Andy Clark and David Chalmers (1998)
A seminal paper arguing that cognition isn’t confined to the brain but extends into tools, environments, and other people. Challenges the entire premise of individual intelligence.
Seeing Like a State by James C. Scott
Explores how “high modernist” schemes to improve human conditions have failed by ignoring local, practical knowledge (métis) in favor of abstract, simplified models. A cautionary tale for AI development.
The Enigma of Reason by Hugo Mercier and Dan Sperber
Argues that human reason didn’t evolve to help us think better as individuals, but to help us justify ourselves to others and evaluate their arguments. Reframes reasoning as fundamentally social rather than logical.
Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness by Peter Godfrey-Smith
Examines intelligence that evolved completely independently from ours. Octopuses have distributed brains, with neurons in their arms that can act autonomously. What might this tell us about alternative architectures for AI?
The Embodied Mind by Francisco Varela, Evan Thompson, and Eleanor Rosch
Challenges computational theories of mind, arguing that cognition arises from the history of embodied action. Intelligence isn’t information processing but “enaction”—the history of structural coupling between organism and environment.
Gödel, Escher, Bach by Douglas Hofstadter
While famous, it’s worth revisiting for its exploration of self-reference and strange loops as the key to consciousness. Suggests that intelligence might be less about processing power and more about recursive self-modeling.
The Origins of Language: A Slim Guide by James R. Hurford
Concise overview of how language evolved and why it might be the key differentiator of human intelligence. Essential for understanding the language-reasoning connection.
Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence by James Bridle
Recent exploration of non-human intelligence across multiple domains. Challenges anthropocentric views and suggests radically different ways of thinking about what intelligence might be.