Ray Kurzweil's father died in 1970. Ray was 22. From that day forward, he started collecting everything. Letters. Photographs. Documents. Recordings. Anything that carried a trace of who his father was—how he thought, how he spoke, what he believed.

He did this for more than 50 years.

Not as a hobby. As a mission. Kurzweil believed that one day, artificial intelligence would be powerful enough to reconstruct a person from the data they left behind. That an AI could learn his father's patterns of thought, his voice, his way of seeing the world—and bring him back.

In 2024, promoting his new book The Singularity Is Nearer, Kurzweil revealed that he'd done it. He built a chatbot from his father's materials. He called it "the first step in bringing my father back."

He also said the AI avatar would resemble his father more than his father had resembled himself in his later years.

A son's grief. Fifty years of collecting fragments. And a technology that moved fast enough to make the impossible real within a single lifetime.

The question is: What kind of technological progress makes that possible?

The answer has been hiding in plain sight since 1939.

The Graph That Predicts Everything

There's a graph that Kurzweil has been showing at conferences for decades. Most people glance at it and move on. It's the most important chart you've never paid attention to.

It tracks the price-performance of computation—how many calculations you can buy for one constant dollar—plotted on a logarithmic scale from 1939 to today.

[Chart: Price-Performance of Computation — computations per second per constant dollar, log scale, 1939 to today. Headline figures: 75 quadrillion× more compute per dollar since 1939, an 85-year unbroken trend, five hardware eras, zero slowdowns.]

Seventy-five quadrillion times more compute per dollar. Across five fundamentally different hardware architectures—electromechanical machines, vacuum tubes, transistors, integrated circuits, and modern GPUs. No interruptions. No slowdowns. No inflection points.

This is not a prediction. It's a pattern. An 85-year pattern that has held through world wars, recessions, the dot-com crash, and a global pandemic.
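As a sanity check on those headline numbers, you can back out the implied doubling time yourself. A back-of-the-envelope sketch — the 75-quadrillion and 85-year figures come from the chart; the rest is arithmetic:

```python
import math

# Figures from the chart: roughly a 75-quadrillion-fold improvement
# in compute per constant dollar over 85 years (1939 to today).
improvement = 75e15
years = 85

# How many doublings that improvement represents, and the
# implied average time per doubling.
doublings = math.log2(improvement)
doubling_time = years / doublings

print(f"{doublings:.0f} doublings in {years} years")          # 56 doublings
print(f"one doubling every {doubling_time:.1f} years on average")  # ~1.5 years
```

That steady roughly-1.5-year doubling, holding across five unrelated hardware architectures, is the whole argument of the chart.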

Kurzweil saw this in the 1990s. He extrapolated forward. And he made a prediction that got him laughed out of rooms.


"I said 2029 in 1999. No reason to increase my estimate."

Ray Kurzweil, 1999

By 2029, he said, artificial intelligence would surpass human intelligence. By 2045, humans would merge with the AI they created—an event he called the Singularity.

At a Stanford AI conference in 2000, 80 percent of experts predicted AGI would arrive in 100 years. Only Kurzweil said 30.

They called him a dreamer. An optimist untethered from reality.

They're not calling him that anymore.

Why We Can't See It

Here's the thing about exponential growth. It's invisible—until it isn't.

Think back to early 2020. A new virus appeared somewhere distant. A week later, one case in your country. You barely noticed. Then seven cases. 50. 250. Then, in a matter of days, thousands and tens of thousands.

We were blindsided. Not because the data wasn't there—it was. We just couldn't process it. Our brains don't work that way.

A peer-reviewed study published in PNAS confirmed what most of us felt: Humans systematically perceive exponential growth in linear terms. The bias isn't limited to the mathematically challenged. The researchers found it was "remarkably robust, even among those with greater mathematical sophistication."

Here's the classic illustration. A penny that doubles every day for 30 days. After day 15, you have about $164. Feels small. After day 30, you have $5.4 million. The first half of the curve looks like nothing. The second half looks like madness.
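The arithmetic is easy to verify:

```python
# A penny that doubles every day: its value on day n is $0.01 * 2**(n-1).
def penny_value(day: int) -> float:
    return 0.01 * 2 ** (day - 1)

print(f"day 15: ${penny_value(15):,.2f}")   # day 15: $163.84
print(f"day 30: ${penny_value(30):,.2f}")   # day 30: $5,368,709.12
```

Half the schedule gone, and you're still under $200. That's what the flat part of an exponential looks like from the inside.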

We are making the exact same mistake with AI right now.

Everyday life hasn't changed that dramatically. You still go to work, sit at a desk, open the same apps. The transformation feels gradual—maybe even overhyped. But in specific domains, the acceleration is already staggering.

And the domain where it's most visible is the one I know best.

The Profession at Sunset

Software development is barely 80 years old.

In 1946, six women—Betty Jennings, Betty Snyder, Frances Spence, Kay McNulty, Marlyn Wescoff, and Ruth Lichterman—programmed ENIAC, the first general-purpose electronic digital computer. They received no public recognition. The press coverage of the machine's debut never mentioned their names.

From that beginning, a profession grew. It attracted millions of people worldwide. It became one of the most sought-after, well-compensated careers on the planet.

And now it's watching its own sunset.

The progression happened so fast that if you weren't paying attention, you missed the inflection point:

  1. Copy-pasting snippets from ChatGPT
  2. Intelligent line completion—Copilot, Codeium
  3. AI assistants that build entire blocks of code
  4. Autonomous agents that spawn each other and create full applications in a single run
  5. Applications that compile and run on the first attempt, nearly every time

The Developer Reckoning

  - 70%+ decline in US developer job listings, 2023–2025
  - 72% of tech leaders plan to cut entry-level developer hiring

Sources: Washington Post / BLS data; Stanford research

There's a word for what software developers are feeling right now: vesperance. David Shapiro, an AI researcher and YouTuber, popularized the term, which GPT-4 coined in collaboration with a Redditor.

That's it. That's the feeling. An evening that's also a dawn—and nobody knows what the morning looks like.

Even less tech-savvy developers now acknowledge the obvious: The software era as we knew it is over. Not software itself—software is more important than ever. But the era of humans writing it line by line, language by language, framework by framework? That chapter is closing.

AI Building AI

This is where the exponential curve stops being theoretical and becomes visceral.

OpenAI's Codex tool—the agent platform for writing software—writes more than 90 percent of its own code. The engineers who build Codex don't write code anymore in the traditional sense. They run four to eight parallel AI agents, functioning as what they call "agent managers." GPT-5.3-Codex, shipped in February 2025, earned the internal designation "the first model that helped create itself."

At Anthropic, the pattern is identical. CEO Dario Amodei confirmed in early 2026 that AI agents author more than 90 percent of the code for new Claude models and features autonomously.


"The vast majority—over 90%—of the code for new Claude models and features is now authored autonomously by AI agents."

Dario Amodei, CEO of Anthropic

But the real story isn't the percentage. It's the loop.

Anthropic practices what it internally calls "ant-feeding"—its version of dogfooding, with "ant" being short for Anthropic. Here's how it works: The Claude Code team builds a powerful coding agent. They release it internally. Between 70 and 80 percent of Anthropic's technical employees use it daily. The feedback channel gets a new post every five minutes. That feedback improves the next version of the model. The better model makes Claude Code more powerful. The more powerful Claude Code accelerates the delivery of the next generation of models.

Each cycle is faster than the last.
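The compounding is easy to sketch. Here's a toy model — the 90-day first cycle and 15 percent per-cycle speedup are illustrative assumptions, not Anthropic figures — showing what happens when each release cycle is a fixed fraction shorter than the last:

```python
# Toy model (illustrative numbers, not Anthropic's): each release cycle
# is 15% shorter than the previous one, because the tools built in one
# cycle accelerate the next. Cycle lengths form a geometric series.
cycle = 90.0      # hypothetical length of the first cycle, in days
speedup = 0.85    # each cycle takes 85% as long as the one before

elapsed = 0.0
for n in range(1, 9):
    elapsed += cycle
    print(f"cycle {n}: {cycle:5.1f} days (total {elapsed:6.1f} days)")
    cycle *= speedup

# Because the series is geometric, total elapsed time converges to
# first_cycle / (1 - speedup) = 90 / 0.15 = 600 days: an unbounded
# number of cycles fits inside a bounded calendar window.
```

That convergence is the unnerving property of the loop: under constant per-cycle speedup, the schedule doesn't just accelerate, it piles every remaining generation into a finite horizon.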

This is the recursive improvement loop that Kurzweil predicted—not in a lab, not in a thought experiment, but in production at two of the world's leading AI companies.

"If all you're doing is driving the machine, it's a relatively small step before the machine is driving itself."

David Shapiro, AI researcher

Shapiro frames his own work on post-labor economics as potentially his "final contribution"—the last meaningful scientific work before AI surpasses human intellectual output.

He might be right.

The $285 Billion Wake-Up Call

On February 3, 2026, Anthropic released a legal plugin for Claude. It automates contract review, NDA triage, and compliance workflows.

One plugin. One industry.

One Plugin, One Day

  - Thomson Reuters stock: −18%
  - LegalZoom stock: −20%
  - Goldman Sachs software basket: −6%
  - Total market value erased: $285B

Source: Bloomberg, February 3, 2026

$285 billion in market value evaporated in days. Not because of a recession. Not because of fraud. Because a single AI company released a single plugin that demonstrated what exponential AI capability means for one profession.

This was one plugin for one industry.

What happens when there are 50?

The exponential curve doesn't just transform technology. It transforms economies. And most of those economies aren't ready—because they're still thinking linearly.

The Three Tribes

As the curve steepens, people are sorting themselves into camps. Three ideological tribes are forming around a single question: What do we do about this?

The Accelerationists. The effective accelerationism movement—e/acc—says push faster. Intelligence makes everything better. Marc Andreessen's Techno-Optimist Manifesto is the de facto doctrine: Free markets plus AI equals unprecedented human flourishing. The risk of stopping, they argue, is greater than the risk of continuing. AI is already saving lives through drug discovery and medical diagnosis. Slowing down means real people die from diseases AI could have cured.

The Doomers. Eliezer Yudkowsky and the AI safety community occupy the opposite end. In 2025, Yudkowsky and MIRI executive director Nate Soares published If Anyone Builds It, Everyone Dies—arguing that any superintelligent AI built with current techniques would inevitably kill all humans. Their case: The alignment problem is unsolved. We cannot verify that a system smarter than us shares our values. One failure could be irreversible.


"If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques—then everyone, everywhere on Earth, will die."

Eliezer Yudkowsky

The Neo-Luddites. Named after the nineteenth-century workers who smashed steam-powered machines during the Industrial Revolution, today's neo-Luddites fight AI adoption through regulation, litigation, and quiet sabotage. The Hollywood strikes lasted 148 days and produced contracts with AI restrictions. Copyright lawsuits against AI art companies are multiplying. And in corporate settings, a subtler resistance is emerging—teams deliberately keeping AI confidence thresholds high to protect headcount. Brian Merchant's Blood in the Machine gave this movement its intellectual framework.

These tribes will grow. Right now, the debate is largely contained within tech and engineering circles. As AI penetrates more industries—law, finance, healthcare, education—these arguments will spread into the general population.

The honest truth? Nobody knows who's right.

The Case for Optimism

I'm not a doomer.

I've seen what happens when AI lifts the cognitive weight that drains people by 6 PM every day. I've lived it. The brain fog lifts. The creative energy returns. You go home and play with your kids instead of needing 30 minutes to come back to yourself.

That's not an abstract benefit. That's life getting better in a way you can feel.

Dario Amodei wrote in his 2024 essay Machines of Loving Grace that AI-enabled biology could compress 50 to 100 years of scientific progress into five to 10 years. Not because AI replaces scientists, but because it removes the bottlenecks that slow research—the tedious analysis, the literature reviews, the experimental grunt work that burns through careers.

I believe something similar is happening across all knowledge work. Not that AI replaces humans—but that it removes the parts of work that were never really human to begin with. The formatting. The repetitive analysis. The information hunting across 12 different tools. The context switching that leaves you cognitively depleted by mid-afternoon.

What remains is the work that needs a human mind. Creative problem-solving. Relationship building. Strategic thinking. The things people went into their careers to do, before the daily grind buried them.

Human creativity becomes the most valuable trait. Not because machines can't be creative—they can, in a mechanical sense—but because human creativity is rooted in lived experience, emotional depth, and the kind of judgment that only comes from decades of caring about your craft.

The future isn't human hours versus machine hours. It's human imagination plus machine execution. And that combination could produce something extraordinary.

A Son's Promise, Kept

Let me bring this back to where we started.

Ray Kurzweil lost his father at 22. He spent the next 50 years collecting fragments—letters, photos, documents—driven by a belief that technology would eventually catch up to his grief.

Most people called it fantasy. The math said otherwise.

The same exponential curve that powered the Zuse Z2 in 1939 powered the GPU that trained the model that now speaks in something like his father's voice. Seventy-five quadrillion times more compute per dollar. Five different hardware architectures. Eighty-five years of unbroken acceleration.

We are on that curve. All of us. And we cannot see it clearly—not because the evidence is hidden, but because our brains are wired to think in straight lines.

The evidence is in the graph Kurzweil has been showing for decades. It's in the 90 percent of code that AI now writes for itself. It's in the $285 billion that vanished from the stock market because of a single plugin. It's in the 70 percent decline in developer job listings. It's in the word vesperance—the evening that's also a dawn.

The curve doesn't care whether you believe in it. It doesn't wait for consensus. It doesn't slow down because you're not ready.

It just keeps climbing.

The question isn't whether the exponential curve is real. The evidence settled that decades ago.

The question is: What will you do now that you're standing on the steep part?