After the Typhoon: A Candid Conversation on AI, Power, and the Price of Progress
Good afternoon — the room is full, the typhoon has passed, and Jing Yang, Asia Bureau Chief of The Information, opens the session with a warm, wry reminder that nothing — not even the strongest storm in years — could blow away our fascination with AI. The guest of honor: Karen Hao, veteran AI reporter and author whose work on OpenAI and the industry has provoked equal parts admiration and unease.
What followed was a wide-ranging, sharp conversation about the philosophical, economic, human, and environmental costs of today’s AI arms race. Below are the highlights, edited and reframed as a blog post to help you carry the conversation forward.
Intelligence without a definition
One of the first—and most disquieting—points Karen raises is simple: there is no scientific consensus on what “intelligence” actually is. Neuroscience, psychology, and philosophy offer competing frameworks. For AI development, that lack of definition has real consequences: progress is measured by how well systems mimic specific human tasks (seeing, writing, passing exams), not by any agreed benchmark of intelligence.
That ambiguity explains why the industry keeps moving the goalposts—from chatbots passing Turing-esque tests to systems beating humans at games, to models achieving high SAT/LSAT-style scores—yet no single victory settles the debate over whether we’ve created intelligence.
Scaling vs. invention: the bet that reshaped AI
Karen traces OpenAI’s early strategy: instead of reinventing algorithms, double down on what worked (transformer architectures) and scale—more data, more compute. That bet has delivered astonishing capabilities, but at extreme financial and environmental cost. Scaling became the obvious, simple path, crowding out alternative lines of research that might produce more efficient solutions.
She cautions: success by scaling doesn’t mean scaling is the only way. New labs and open-source efforts are beginning to show that different architectures and smarter approaches can deliver similar capabilities with far less compute — a fact that has shaken markets and sparked debates about the sustainability of the current model.
Money, persuasion, and the illusion of inevitability
OpenAI’s rise, Karen argues, is not just technical — it’s theatrical and political. Sam Altman’s storytelling and fundraising prowess have convinced investors to back an audacious, costly vision. The result: enormous projected spending (trillions) that dwarfs current revenues. The math is alarming, and investors’ appetite has redirected capital from other critical areas — climate tech among them.
This isn’t just a business critique; it’s a structural one. Karen suggests that the “we’re the good guys” narrative and existential rhetoric (either utopia or annihilation) have helped justify secrecy, centralization, and a scramble for resources.
The hidden human and environmental costs
Perhaps the hardest part of Karen’s reporting: the labor behind the magic. Large models don’t learn in a vacuum — they are taught. Tens of thousands of contract workers around the globe perform time-sensitive, low-paid tasks: annotating data, writing example prompts and responses, and moderating content. The work is precarious, often exploitative, and in many cases psychologically damaging.
On the environmental side, scaling enormous models consumes massive energy. Ambitious data-center and energy plans (250 gigawatts, talk of new reactors) raise fundamental questions about feasibility and impact. Karen warns that the physics and logistics aren’t trivial — and that this demand is reshaping policy debates, even prompting lobbying around nuclear power and energy deregulation.
Open-source vs. empires
Karen frames a philosophical divide: closed-source “empires” seek to monopolize knowledge production; open source champions democratized access and distributed scrutiny. Open-source movements — recently energized by breakthroughs out of China and elsewhere — act as a corrective: they make models auditable, contestable, and improvable by a global community.
That contest matters not only for innovation but for safety and accountability. When every advance is locked behind a corporate wall, we lose collective ability to critique and fix problems.
Is there a bubble? And will it pop?
When the audience asks whether we’re in a financial bubble, Karen is blunt: yes. The valuation dynamics, outsized spending commitments, and shaky revenue models leave the space vulnerable. She points to brittle market reactions around breakthroughs (e.g., DeepSeek) as signs of how quickly sentiment can swing. A pop, if it comes, could be disruptive in ways that echo past tech crashes, but on a far larger scale given AI’s entanglement with public institutions.
Regulation, accountability, and a practical roadmap
Karen is unequivocal: external regulation is necessary. Relying on bespoke corporate structures and self-policing will not be sufficient. Pharmaceuticals and healthcare offer models where regulation and public-interest frameworks coexist with innovation. Similar guardrails are needed for AI, not to kill innovation but to redirect it toward public benefit and resilience.
Final notes: energy, geopolitics, and what to watch
- Expect more open-source pressure and more labs experimenting with non-scaling paths.
- Watch the energy debate — hardware and compute demand are becoming political.
- Keep an eye on labor conditions: the “hidden human cost” should drive contract standards and transparency.
- Be skeptical of grandiose revenue promises; dig into how companies intend to monetize and whether that path is realistic.
