You've had this moment. You're reading a book. Your eye hits a word you don't fully know. You pause for a fraction of a second—maybe reconstruct a vague meaning from context—and keep going.
The word is gone. So is the chance to grow from it.
I've done this thousands of times. In Serbian, in English, in the few pages of German I still wrestle with. Every time, I know I've just let something slip that I'd rather have kept. The friction of stopping, finding a dictionary, looking it up, coming back—it's small, but it's real. And small friction, repeated over a lifetime of reading, becomes a cognitive tax you didn't know you were paying.
This article is about that tax, and about the deeper question underneath it: what is a word doing in your head, and why does it matter more than almost anyone talks about?
The Debate You Didn't Know Was Happening
Right now, in two entirely different fields, the same argument is playing out.
In cognitive science, researchers are dismantling a claim most of us were raised on: that language is the primary carrier of thought. Noam Chomsky's position—language as the engine of reasoning—is now the minority view. The dominant position, championed by people like Evelina Fedorenko at MIT, is that the brain's language network and its reasoning network are dissociable. Patients with severe aphasia, who have lost nearly all language ability, can still play chess, do arithmetic, solve logic puzzles, navigate cities, compose music. If language were thought, none of that should be possible.
In AI, researchers are discovering the mirror image. Models that reason only through tokens—the chain-of-thought you've watched a chatbot type out—are hitting a ceiling. The new frontier is latent reasoning: letting the model do its thinking in its internal vector space, without forcing every step to pass through language. The early results are striking. Forcing everything through the token layer isn't the essence of intelligence. It's a bottleneck.
Two fields. One insight. Language is a serialization format, not the computation itself.
The Experiments You Can Run on Yourself
You don't need fMRI to feel this. You already know it.
The tip of your tongue knows the name. You can picture the person, recall their films, remember where you met them. But the word won't come. The concept is fully formed. Language is just failing to catch up.
You rotate a letter in your head to check if it's flipped or just turned sideways. You don't reason about it in sentences. You spin the picture.
You catch a ball. Your brain solves differential equations about trajectory, wind, and spin in milliseconds. No inner monologue narrates the parabola. You just move.
You grind on a problem verbally for an hour and get nowhere. Then you stand in the shower and the answer arrives. Thought clearly kept running while language took a break.
You start a sentence and notice—just before the words land—a pre-verbal shape of what you want to say. Writers feel this acutely when revising. The thought stays stable. The sentences change around it.
If thought required language, none of this would happen. It happens constantly.
So Why Does Vocabulary Matter at All?
Here's where it gets interesting, because the obvious conclusion—words don't matter, the real thinking is somewhere else—is wrong.
Having a word for a concept changes what you can do with it. Schadenfreude, hysteresis, saudade, supervenience, zugzwang. Each of these is a compression artifact that lets you grip a concept, hold it in working memory, combine it with others, recall it later. Lev Vygotsky called this the transformation of thought via inner speech. The word is a handle.
Russian speakers, whose language has separate words for light blue and dark blue, distinguish the two colors faster than English speakers do—and it shows up in perceptual tasks, not just verbal ones. Lisa Feldman Barrett's work on emotional granularity shows that people with richer emotion vocabularies regulate their emotions better. The word sharpens the distinction the brain notices.
And there's a mathematical point that doesn't get made often enough. A vocabulary of N concepts doesn't give you N thoughts. It gives you a combinatorial explosion of possible pairings and sequences, growth on the order of N! rather than N. Each new word doesn't add to the graph. It multiplies it.
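To make the combinatorial point concrete, here's a toy calculation (mine, not the article's): count the ordered sequences of up to k concepts you can form from a vocabulary of n. The function name and the cutoff k are arbitrary illustrative choices, but the shape of the growth is the point.

```python
from math import factorial

def thought_capacity(n: int, k: int = 3) -> int:
    """Number of ordered sequences of length 1..k drawn from n concepts.

    A crude proxy for 'distinct combinable thoughts': each term
    n!/(n-i)! counts the ordered i-tuples of concepts.
    """
    return sum(factorial(n) // factorial(n - i) for i in range(1, k + 1))

# Doubling the vocabulary far more than doubles the capacity.
for n in (10, 20, 40):
    print(n, thought_capacity(n))  # 10 -> 820, 20 -> 7240, 40 -> 60880
```

Going from 10 concepts to 20 multiplies the capacity by nearly nine, which is the whole argument for why one new word is worth more than one new node.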
So vocabulary and non-linguistic reasoning aren't in tension. They're vertically integrated.
“Language is not the engine. Language is the substrate the engine operates on. Every word is a node. Every node multiplies the pathways a thought can travel.”
The Second Substrate
But vocabulary alone isn't enough.
I know this because I've spent 25 years writing software, and writing software does something to the brain that words don't.
When you learn a new word, you're adding a labeled node to a mostly linguistic network. Definitions, associations, connotations. The graph grows horizontally through meaning.
When you write software, something else happens. You're forced to hold a non-linguistic structure in your head—a graph of states, dependencies, control flow, data transformations—and manipulate it without the comforting scaffolding of prose. A loop is not a sentence. A recursive function is not a paragraph. A state machine is not an argument. To reason about them, you have to build a mental model that isn't made of words, then mentally execute it step by step.
You're literally training the non-linguistic reasoning engine we've been discussing—the same substrate a chess grandmaster uses to see a position, the same one a sculptor uses to feel a figure emerging from stone.
This is why programmers describe insight moments as seeing the solution. Why a refactor can feel like rotating a 3D object in your head. Why the best engineers talk about the shape of the code. They've spent years training a cognitive substrate that operates below language—one that manipulates structure, causality, and transformation directly.
Vocabulary expands the representational space. Structural thinking expands the engine that operates over it. They're orthogonal investments. A person with a huge vocabulary and no structural practice becomes eloquent but imprecise. A person with strong structural thinking and impoverished vocabulary becomes precise but inarticulate.
The rare combination—rich lexicon feeding a well-trained structural engine—is what produces the great essayists, scientists, and systems thinkers.
The Third Substrate
And then, about a year ago, I realized there was a third one I'd been using my whole life without naming it.
It's called taste.
I wrote a piece about it called Affinity, Style, Taste. Go read it if you haven't. The short version: taste is a cognitive mode that lives upstream of argument. You look at something. You feel whether it's right. Not reason your way to a verdict—feel one, and only afterwards (if at all) find the words to justify it.
It's how Steven Tyler attached rubber bands inside his jeans so they wouldn't ride up over his boots. It's how Jony Ive decided what Apple products would feel like in a human hand. It's what Ira Glass was pointing at when he said all creative people start with taste and spend years catching up to it.
And it's what I've been using every day for the past six weeks while my team at Orange Hill has been testing Lupa—a new iOS app we built to help readers grow their vocabulary without friction. More on that in a moment. For now, the relevant part is what kept happening during testing.
The word selection worked. The OCR was accurate. The definitions came back in under a second. On paper, the feature was done.
It didn't feel right.
The haptic tick was half a beat late. The highlight color sat wrong against book paper. The moment between tapping a word and seeing its definition had a subtle dead zone that made the whole interaction feel like it was asking for permission instead of responding to you.
None of that is in the spec. None of it is verbal. You can't reason your way to it from first principles.
You feel it. You adjust something. You feel it again.
You're not coding. You're sculpting.
That's the third substrate. The judgment layer. The felt verdict that sits upstream of any argument you could make for it. It's the same cognitive machinery a jazz musician uses to know the next note before they've heard it, the same one a sommelier uses to catch what's off in a glass a layperson would happily drink.
So now we have three substrates, and they're not three versions of the same thing. They're distinct:
Vocabulary expands the representational space. The more words you have, the more nodes your thoughts can touch.
Structural thinking expands the reasoning engine. The part of you that manipulates abstract shape and causality without language.
Taste expands the judgment layer. The sense of whether what the engine produced is good.
You need all three. Vocabulary without structure makes you eloquent and shallow. Structure without taste makes you precise and cold. Taste without either makes you an opinionated amateur. The people who make things that matter tend all three gardens.
How We Built Lupa
Here's where this gets recursive.
The app my team at Orange Hill built to help you expand the first substrate was itself created by running all three in a tight loop—accelerated by AI to a speed that would have been impossible two years ago.
Weeks, not months.
I'd hold the structural model in my head—the camera pipeline, the OCR layer, the voice-to-word matching on a page, the way the vocabulary storage has to survive offline. Then I'd describe a piece of it to Claude Code in the compressed, architectural language that software thinking trains you to use. Claude would translate structural intent into working Swift. I'd run it on a real phone, with a real book—and then the third substrate took over.
It felt wrong. The tick was late. The color sat wrong. Adjust. Feel again.
That loop—structure down to language, language down to code, code up to feel, feel back to structure—is the entire shape of modern building. Andrej Karpathy called the first version of it vibe coding and then retired the phrase for agentic engineering because he realized the critical ingredient wasn't the vibe. It was the judgment doing the steering. The taste making the calls.
We didn't build Lupa by prompting an AI until something shipped. We built it by feeling our way into a product we wanted to use, with AI collapsing the translation cost between our structural thinking and the running code.
AI didn't replace the three substrates. It multiplied all of them. Which is exactly what Lupa is designed to do for you—for the first one.
What Lupa Does
The friction I described at the start of this article—the unfamiliar word, the slide past, the node you didn't add to your graph—is what we built Lupa to eliminate.
You point your phone's camera at the page. You tap a word with your thumb, or you say the word out loud—the app finds it on the page, highlights it in the actual sentence you're reading, and returns an LLM-generated definition that understands the word in context. Not a generic dictionary entry. The specific meaning it carries right there, on that page, in that sentence.
You get pronunciation—both written and spoken. You can save the word to favorites. Your personal vocabulary grows inside the app, a cognitive graph you can return to and review.
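The lookup flow described above can be sketched in a few lines. This is a hypothetical illustration, not Lupa's actual implementation: recover the sentence the tapped word sits in, then ask for the meaning in that exact context rather than a generic dictionary entry. The function names are mine.

```python
import re

def sentence_containing(page_text: str, word: str) -> str:
    """Return the first sentence on the OCR'd page that contains the tapped word."""
    sentences = re.split(r"(?<=[.!?])\s+", page_text)
    for s in sentences:
        if re.search(rf"\b{re.escape(word)}\b", s, re.IGNORECASE):
            return s.strip()
    return ""

def definition_prompt(page_text: str, word: str) -> str:
    """Assemble a context-aware request for an LLM-backed definition."""
    sentence = sentence_containing(page_text, word)
    return (
        f'Define "{word}" as it is used in this sentence, '
        f"not in general: {sentence}"
    )

page = "The committee reached an impasse. Nobody would yield."
print(definition_prompt(page, "impasse"))
```

The design choice the sketch encodes is the one the article describes: the unit of context is the sentence on the page, so the definition can disambiguate the specific sense the author meant.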
The design philosophy maps directly onto the thesis of this article. We're not trying to replace your reasoning engine. We're feeding it richer substrate. Every word you capture is another node, another set of pathways, another seed for the kind of non-linguistic insight that arrives in the shower three days later.
If you read—in any language—and you've ever felt the quiet loss of letting an unfamiliar word slip past you, this is for you. I want real feedback. What works. What doesn't. What feels late by half a beat. The third substrate is how we're going to finish tuning this thing, and 100 readers with their own taste and their own reading lives will move it further than another month of us testing it alone.
Three Substrates, One Mind
The thread running through all of this:
Vocabulary gives thought its nodes—the richer your lexicon, the more paths an idea can travel.
Structural thinking gives thought its engine—the non-linguistic reasoning that rotates, composes, and manipulates abstract shape without needing to pass through language.
Taste gives thought its verdict—the felt judgment that knows when the thing in front of you is right, wrong, or missing the crucial 10 percent.
AI is now discovering what human brains have always known. Models that once reasoned only through tokens are learning to reason in latent space—the same move our minds made a long time ago. And the humans working alongside them are discovering that the bottleneck was never execution. It was always the substrates underneath. What you know. How you think. What you care about.
We built Lupa to feed the first. We built it using the second and the third. And we're releasing it into a moment when all three are about to matter more than they ever have.
Come broaden your cognitive network with us—nodes, engine, and judgment all.