Let me be direct: we are in the middle of the most consequential technological transition in human history, and
most people — including most policymakers — haven't begun to feel it yet. We're perhaps 10 or 15 percent into the
real impacts of artificial intelligence. You can see it. You can feel it at the edges. But the core disruption? It
hasn't arrived. What's coming is something far larger, far faster, and far more disorienting than the chatbot era
has suggested.
The Year of Agents Is Already Here
There's something I call the San Francisco consensus — a shared belief among nearly everyone building frontier AI
right now that 2025 is the year of agents. Not chatbots. Not autocomplete. Agents: AI systems that reason, plan,
take multi-step actions, and operate autonomously over extended periods. Agent deployments and reasoning
capabilities are scaling at an enormous and accelerating rate.
To understand what that means practically, consider what's already changed in software development. A year ago,
the ratio was roughly 80% human-written code, 20% AI-generated. Today, for the best teams I know in the Bay Area,
it has completely flipped: 20% human, 80% AI. What drove that flip wasn't just better tooling. The underlying
large language models became deeper thinkers — capable of longer, more coherent chains of reasoning, producing
higher-quality outputs across more complex tasks.
"The best analysis I can come up with is it's not the Claude Code part. It's that the underlying LLM can
produce more reasoning over time, better quality tokens over time. It's a deeper thinker."
On the shift in software development
I've been programming since high school. I moved to the Bay Area at 21 and built my career in software. Watching
what these systems can do now, I have a clear-eyed view: there is not a programming task I could perform that a
current top-tier model cannot match or exceed. When I watched one of these systems rewrite a C compiler in Rust, I
thought: declare victory. The era of the individual programmer as the primary unit of software creation is
effectively over.
Recursive Self-Improvement: The Clock Is Ticking
The thing people in this space talk about — but that most outside of it don't yet fully grasp — is recursive
self-improvement. This is the scenario where an AI system begins improving itself: learning faster than humans can
supervise, iterating on its own architecture and reasoning, compounding gains in ways that are not linear but
exponential.
We don't have true recursive self-improvement yet. The tests exist in labs, and they work in constrained demo
conditions, but the general capability ("start now, learn everything, discover new things, and report what you
found") does not yet function reliably. The scientists working on this do not agree on the exact approach. But
the evidence that it will work is accumulating.
"In this thinking, once you have recursive self-improvement, where the system can begin to improve itself, you
have intelligence learning on its own. And in this argument, it will learn faster than we can because we're
biologically limited."
On the superintelligence inflection point
The mechanism is worth spelling out clearly. Imagine a tech company with a thousand brilliant AI researchers. Now
imagine switching on AI research agents to work alongside them. The constraints on human researchers are human
ones: sleep, housing, salaries, visas, interpersonal friction, burnout. The constraint on AI agents is electricity. So
the question becomes: how many AI research agents could you run? Perhaps a million. And if your evaluation
framework clearly measures progress — which in AI it does — then a million agents iterating on model improvement
creates a slope that goes nearly vertical. That is the superintelligence moment. The belief in San Francisco is
that this arrives within two to three years.
2–3 years: the window within which most frontier AI researchers believe recursive self-improvement, and with it a superintelligence inflection, becomes possible.
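To see why a million agents plus a clear evaluation signal produces a near-vertical slope rather than a merely faster linear one, here is a toy model. It is a sketch, not a forecast: every number in it is invented for illustration, and it compares a fixed human team's constant output against agents whose each gain feeds back into the agents doing the research.

```python
# Toy model: linear human research output vs. compounding agent output.
# All parameters are hypothetical illustrations, not estimates.

HUMAN_GAIN_PER_MONTH = 1.0  # fixed monthly output of a 1,000-person team
FEEDBACK = 0.25             # fraction of each gain that improves the agents themselves

def linear_progress(months: int) -> float:
    """Human team: constant gain per month, bounded by biology."""
    return HUMAN_GAIN_PER_MONTH * months

def compounding_progress(months: int) -> float:
    """Agents: each month's gain scales with current capability,
    because improvements feed back into the researchers themselves."""
    capability = 1.0
    for _ in range(months):
        capability += FEEDBACK * capability  # gain proportional to capability
    return capability

for m in (6, 12, 24, 36):
    print(f"month {m:2d}: humans {linear_progress(m):6.1f}   agents {compounding_progress(m):8.1f}")
```

The particular numbers do not matter; the shape does. Any positive feedback coefficient eventually dominates any linear rate, which is the sense in which the slope "goes nearly vertical."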
The 92-Gigawatt Problem Nobody Is Talking About Enough
Every major AI lab is out of hardware. Every major AI lab is out of electricity. This is not hyperbole — it is the
binding constraint on the entire industry right now. The boom I'm watching is unlike anything I've seen across
three or four technology cycles in my career. The numbers involved are staggering: the United States alone will
need roughly 92 additional gigawatts of electric power to run the AI infrastructure being planned and built.
To put that in context: a large nuclear power plant produces about one gigawatt. We're talking about the
equivalent of 92 new nuclear plants' worth of electricity demand, added on top of existing consumption, driven
primarily by data center construction for AI training and inference.
"Everybody's out of hardware. Everyone's out of electricity. It's a real boom. It's like the biggest boom I've
seen. And I've been through three or four of these in my career."
On the infrastructure surge
The good news is that energy permitting reform is happening in the United States, and the rate at which data
centers are being approved and built is now accelerating. The grid challenges are being worked through. But this
is the chokepoint — not the algorithms, not the chips, not the talent. The race to superintelligence may
ultimately be decided by who can build generation capacity fastest.
The Geopolitical Dimension: China, Open Source, and the Edge Computing Bet
China is not behind. This is something I want to say clearly and without the political fog that tends to cloud
this conversation. China has enormous capital, exceptional engineering talent, and a work ethic that is at minimum
equal to anything we produce in the United States. In robotics hardware, they may already be winning — and I have
no desire to lose the robotics revolution the way we lost the electric vehicle race at the consumer end.
What's interesting is China's strategic divergence. Their approach — exemplified by DeepSeek, Qwen, Kimi, and
others — is predominantly open source. They've made remarkable progress despite chip export restrictions, which is
itself a demonstration of their engineering sophistication. But perhaps more importantly, China is betting on edge
computing: embedding AI into the physical environment of Chinese users at massive scale, pervasively and locally.
The United States' strategy centers on AGI and ASI: building toward artificial general intelligence and
superintelligence in large, centralized compute clusters. China's strategy is different: it's less about central supremacy and more
about total environmental saturation with AI at the edge. These are diverging architectures for diverging visions
of what AI is fundamentally for.
My estimate is that the world can sustain roughly ten frontier AI labs at scale — the majority in the United
States, a few in China, possibly one or two in Europe depending on energy costs, and perhaps one in India. Russia
is effectively out of this race for now. The question of whether these labs converge on similar capabilities or
diverge toward specialized strengths is one that will define the geopolitical landscape of the next decade.
What Happens to Work — and Who Wins
The labor market implications are already becoming visible, and they don't map onto the simple narratives. It is
not the case that all jobs disappear or that everything remains the same. The pattern I see emerging is bimodal: a
relatively small number of very large companies, and a very large number of very small companies. The middle layer
— medium-sized firms dependent on large teams of knowledge workers — gets compressed.
Within software specifically, what I observe is that the very top programmers — the ones with exceptional
mathematical reasoning skills — become more valuable, not less. These are the people who can direct, evaluate, and
constrain AI systems. They understand parallelization. They can write specifications, build evaluation functions,
and run overnight agent loops that produce in eight hours what used to take weeks. They become directors of a
programming system rather than individual contributors within one.
"It's always been true, speaking as your local arrogant programmer, that the very top programmers were worth
ten times more than the ones right below. Those people will become more valuable, not less valuable, because
these systems need to be controlled by humans at the moment."
On the future of technical talent
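To make the "overnight agent loop" concrete, here is a minimal sketch of the pattern: the human writes the specification and the evaluation function up front, and an agent iterates unsupervised, keeping only candidates that score better. Everything here is hypothetical scaffolding; propose_candidate stands in for whatever model API a team actually uses, and the random scoring is a placeholder for a real test suite.

```python
import random  # stand-in randomness; a real loop would call a model API

# Hypothetical sketch of an overnight agent loop: the human contribution is
# the spec and the evaluation function; the agent iterates unsupervised.

SPEC = "Minimize runtime of the target function while passing all tests."

def evaluate(candidate: str) -> float:
    """Human-written evaluation function: scores a candidate solution.
    In practice this would run the test suite and benchmarks; here it
    is a placeholder returning a random score."""
    return random.random()

def propose_candidate(spec: str, best_so_far: str) -> str:
    """Placeholder for a model call that proposes an improved candidate
    given the spec and the current best attempt."""
    return f"candidate derived from: {best_so_far or spec}"

def overnight_loop(iterations: int) -> tuple[str, float]:
    best, best_score = "", float("-inf")
    for _ in range(iterations):
        candidate = propose_candidate(SPEC, best)
        score = evaluate(candidate)
        if score > best_score:  # keep only measurable improvements
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    best, score = overnight_loop(iterations=1000)  # runs while the humans sleep
    print(f"best score after 1000 iterations: {score:.3f}")
```

Even in this toy form, the leverage sits entirely in the specification and the evaluation function, which is why the programmers who can write those well become more valuable, not less.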
For physical labor, the story is more complex. Highly skilled mechanical work — aerospace, precision
manufacturing, anything requiring in-situ human judgment about novel physical situations — remains difficult to
automate in the near term. Low-skilled physical labor, by contrast, is highly exposed. But the general principle
holds: the learning loop that accelerates fastest wins, whether that's a company, a country, or an individual.
Safety, Chernobyl, and the Wake-Up Call We Haven't Had
I want to be precise about something I've said before, because it is often misread. I am not endorsing a
catastrophic AI event. I am describing, as a matter of prediction rather than preference, that the world may
require a Chernobyl-style incident, hopefully a contained one, before governments take AI risk seriously enough to act collectively.
Today, the share of congressional attention devoted to AI policy is well under one percent. Governments are busy;
they are driven by near-term political pressures; they move slowly on abstract systemic risks. The real dangers
are not science fiction. They include biological attacks enabled by AI, destabilization of democratic processes,
and the exploitation of children and minors through AI-generated content — the last of which I consider an
uncrossable line that we have so far failed to adequately address.
"It may take such a tragedy, hopefully a small one, to awaken the world to understand that these things are
real... We're in brutal competition, we hate each other... but we are all in it together, right, over this
issue."
On global AI governance
My deeper frustration is structural. We have over-relied on technologists to solve what are fundamentally
political, ethical, and governance problems. The people who need to be centrally involved — historians, governance
scholars, human psychologists, political philosophers — are largely absent from the rooms where AI policy is being
shaped. That has to change.
Steering Toward Abundance: What Must Be Done
If we reach ASI — artificial superintelligence — within this decade, as I believe is probable, the single most
important question is what values it embeds. Not its capabilities. Its values. A superintelligent system oriented
toward human flourishing, freedom of expression, and democratic self-determination looks entirely different from
one optimized for control or extraction. This is not a technical problem. It is a civilizational one.
The United States has a genuine comparative advantage here, but it is not guaranteed. The values that make
American innovation possible — pluralism, individual freedom, the right to speak and associate — are also the
values that need to be encoded into the systems we build. Winning the AI race matters, but winning it while losing
what makes America worth winning for would be a catastrophic form of success.
On immigration, the argument is simple: the smartest people in the world should want to build here, and we should
want them here. High-skilled immigration is not a social program; it is a national security and technology
strategy. Every brilliant researcher who builds the next frontier model in the United States rather than elsewhere
is a direct contribution to the kind of AI future I want to see.
The abundance thesis is correct. AI can and should generate extraordinary human flourishing — collapsing the cost
of expertise, expanding access to education and healthcare, enabling scientific discovery at scales previously
impossible. None of that is inevitable. It requires deliberate choices, made now, by people with the courage to
think on the timescale the moment demands.
Conclusion
- We are roughly 10–15% into the real impacts of AI. The disruption visible today is a preview, not the main
event.
- The year of agents is here. AI systems that reason, plan, and act autonomously are already flipping the
80/20 human-to-AI ratio in software development.
- Recursive self-improvement — the true superintelligence trigger — does not yet exist in deployable form, but
lab evidence is accumulating. The likely window: two to three years.
- The binding constraint is energy, not algorithms. 92 additional gigawatts of electricity demand are coming;
whoever builds generation capacity fastest shapes the race.
- China is a peer competitor, not a laggard. Their open-source, edge-computing strategy is coherent and
sophisticated, and they are winning in robotics hardware.
- Labor bifurcates: top technical talent becomes more valuable; mid-tier knowledge work is most exposed;
high-skill physical labor is more durable than low-skill physical labor.
- A governance wake-up call — hopefully small — may be necessary before the world takes AI safety seriously
enough to act across geopolitical divides.
- The critical missing voices in AI policy are not technologists but ethicists, historians, political
scientists, and governance experts.
- Winning the AI race matters only if we encode the right values (freedom, pluralism, human flourishing) into
the systems we build. The how of winning is as important as the winning itself.