
Tuesday, March 31, 2026

$450B Wiped Out: Google TurboQuant Just Crashed RAM Prices 30% Overnight


Fact Check by Google Gemini

Based on search results available as of March 31, 2026, the claim that Google's TurboQuant algorithm has significantly impacted memory stocks and caused a drop in RAM prices is partially true, but the specific figures regarding "$450B wiped out" and "30% overnight" appear to be sensationalized or part of a volatile, developing market narrative.
Here is a detailed breakdown:
1. What is Google TurboQuant? (True)
  • Google Research unveiled TurboQuant (and related TurboQuant++), an algorithm aimed at compressing the "Key-Value Cache" (KV cache) used in Large Language Models (LLMs).
  • It claims to reduce the memory required for AI inference by up to six times, theoretically lowering the need for high-capacity memory in data centers (a generic sketch of the underlying idea appears after the verdict below).
2. Impact on Memory Stocks (True, but figures vary)
  • The announcement triggered a sell-off in major memory-related stocks, including Samsung, SK Hynix, Micron, and Western Digital, due to fears of reduced demand for high-bandwidth memory (HBM).
  • While some sources suggest significant valuation drops (one source claims $900B+ was "impacted" in total market cap of related firms), analyst reports suggest the reaction was volatile and possibly overblown, with stocks often recovering.
3. Did RAM Prices Crash 30% Overnight? (Unlikely/Exaggerated)
  • There are reports of some consumer DDR5 RAM prices experiencing a "rare drop" or "moderating" due to the news.
  • However, a blanket "30% crash overnight" is likely a massive exaggeration or limited to specific, flash-sale retail items (e.g., some Corsair modules), rather than a general, global 30% drop in all RAM prices.
  • The narrative appears driven by YouTube tech commentary and blog posts (e.g., "AIM Network") that use dramatic headlines.
4. The Core Conflict (Context)
  • While TurboQuant improves efficiency, experts suggest the demand for AI memory is still growing, and that efficiency gains often lead to increased usage, not reduced demand (Jevons Paradox).
Summary Verdict:
Google did release a significant compression algorithm, and it did cause a sharp, knee-jerk reaction in memory stocks. However, the claim that it wiped out specifically $450B in value and forced a blanket 30% crash in global RAM prices in a single night is an unsubstantiated exaggeration.
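
For readers curious what KV-cache compression involves in practice, here is a minimal, generic sketch of cache quantization in Python/NumPy. It is illustrative only: this is not Google's TurboQuant algorithm, and the function names, bit width, and toy tensor shape are assumptions made for this example.

# Illustrative only: generic per-group symmetric quantization of a KV-cache
# tensor to 4-bit codes. This sketches the idea of KV-cache compression;
# it is NOT Google's TurboQuant implementation.
import numpy as np

def quantize_kv(kv: np.ndarray, bits: int = 4):
    """Quantize a float32 KV tensor with one scale per head-dimension group."""
    qmax = 2 ** (bits - 1) - 1                    # 7 for signed 4-bit
    scale = np.abs(kv).max(axis=-1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)      # guard all-zero groups
    codes = np.clip(np.round(kv / scale), -qmax - 1, qmax).astype(np.int8)
    return codes, scale.astype(np.float32)        # packing 2 codes/byte omitted

def dequantize_kv(codes: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return codes.astype(np.float32) * scale

# Toy cache shape: (layers, heads, sequence length, head dimension) -- made up.
kv = np.random.randn(2, 8, 1024, 64).astype(np.float32)
codes, scale = quantize_kv(kv)
err = np.abs(kv - dequantize_kv(codes, scale)).mean()
print(f"mean abs reconstruction error: {err:.4f}")
# fp32 -> 4-bit is an 8x payload reduction; per-group scales and other
# overheads are one reason real systems report smaller end-to-end ratios.
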
Tags: Artificial Intelligence, Investment

Thursday, March 26, 2026

Eric Schmidt on "Singularity's Arrival" and "Recursive Self-Improvement Timeline"




Artificial Intelligence · The Decade Ahead

We're 10% Into the AI Revolution — And It's Already Rewriting Everything

On recursive self-improvement, the 92-gigawatt problem, and why the slope is about to go vertical

Let me be direct: we are in the middle of the most consequential technological transition in human history, and most people — including most policymakers — haven't begun to feel it yet. We're perhaps 10 or 15 percent into the real impacts of artificial intelligence. You can see it. You can feel it at the edges. But the core disruption? It hasn't arrived. What's coming is something far larger, far faster, and far more disorienting than the chatbot era has suggested.

The Year of Agents Is Already Here

There's something I call the San Francisco consensus — a shared belief among nearly everyone building frontier AI right now that 2025 is the year of agents. Not chatbots. Not autocomplete. Agents: AI systems that reason, plan, take multi-step actions, and operate autonomously over extended periods. The scaling of agent deployments and reasoning capabilities is happening at an enormous and accelerating rate.

To understand what that means practically, consider what's already changed in software development. A year ago, the ratio was roughly 80% human-written code, 20% AI-generated. Today, for the best teams I know in the Bay Area, it has completely flipped: 20% human, 80% AI. What drove that flip wasn't just better tooling. The underlying large language models became deeper thinkers — capable of longer, more coherent chains of reasoning, producing higher-quality outputs across more complex tasks.

"The best analysis I can come up with is it's not the Claude Code part. It's that the underlying LLM can produce more reasoning over time, better quality tokens over time. It's a deeper thinker."

On the shift in software development

I've been programming since high school. I moved to the Bay Area at 21 and built my career in software. Watching what these systems can do now, I have a clear-eyed view: there is not a programming task I could perform that a current top-tier model cannot match or exceed. When I watched one of these systems rewrite a C compiler in Rust, I thought: declare victory. The era of the individual programmer as the primary unit of software creation is effectively over.

Recursive Self-Improvement: The Clock Is Ticking

The thing people in this space talk about — but that most outside of it don't yet fully grasp — is recursive self-improvement. This is the scenario where an AI system begins improving itself: learning faster than humans can supervise, iterating on its own architecture and reasoning, compounding gains in ways that are not linear but exponential.

We don't have true recursive self-improvement yet. The tests exist in labs and work in constrained demo conditions, but the general capability — "start now, learn everything, discover new things, and report what you found" — does not yet function reliably. The scientists working on this do not agree on the exact approach. But the evidence that it will work is accumulating.

"In this thinking, once you have recursive self-improvement, where the system can begin to improve itself, you have intelligence learning on its own. And in this argument, it will learn faster than we can because we're biologically limited."

On the superintelligence inflection point

The mechanism is worth spelling out clearly. Imagine a tech company with a thousand brilliant AI researchers. Now imagine switching on AI research agents to work alongside them. The constraint on human researchers is biology: sleep, housing, salaries, visas, interpersonal friction, burnout. The constraint on AI agents is electricity. So the question becomes: how many AI research agents could you run? Perhaps a million. And if your evaluation framework clearly measures progress — which in AI it does — then a million agents iterating on model improvement creates a slope that goes nearly vertical. That is the superintelligence moment. The belief in San Francisco is that this arrives within two to three years.
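
A toy calculation makes the shape of this argument visible. Everything in the sketch below is an invented assumption (the headcounts, the per-iteration gain, the multiplicative update), chosen to show why throughput-driven compounding bends the curve toward vertical; it predicts nothing.

# Toy model only: every number below is an invented assumption, not a
# forecast. It shows why research throughput drives the argument's
# "near-vertical" capability curve.
researchers_human = 1_000        # biologically constrained headcount
researchers_ai    = 1_000_000    # constrained by electricity, per the text
gain_per_unit     = 1e-6         # assumed gain per researcher per iteration

def capability_after(iterations: int, researchers: int) -> float:
    c = 1.0
    for _ in range(iterations):
        # Progress per iteration scales with throughput, and the improved
        # system makes the next iteration start from a higher base.
        c *= 1 + gain_per_unit * researchers
    return c

for iters in (10, 20, 30):
    print(f"iter {iters:2d}  "
          f"humans: {capability_after(iters, researchers_human):7.3f}  "
          f"agents: {capability_after(iters, researchers_ai):14.1f}")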

2–3 years: the window within which most frontier AI researchers believe recursive self-improvement — and with it, a superintelligence inflection — becomes possible.

The 92-Gigawatt Problem Nobody Is Talking About Enough

Every major AI lab is out of hardware. Every major AI lab is out of electricity. This is not hyperbole — it is the binding constraint on the entire industry right now. The boom I'm watching is unlike anything I've seen across three or four technology cycles in my career. The numbers involved are staggering: the United States alone will need roughly 92 additional gigawatts of generating capacity to power the AI infrastructure being planned and built.

To put that in context: a large nuclear power plant produces about one gigawatt. We're talking about the equivalent of 92 new nuclear plants worth of electricity demand — added on top of existing consumption — driven primarily by data center construction for AI training and inference.

"Everybody's out of hardware. Everyone's out of electricity. It's a real boom. It's like the biggest boom I've seen. And I've been through three or four of these in my career."

On the infrastructure surge

The good news is that energy permitting reform is happening in the United States, and the rate at which data centers are being approved and built is now accelerating. The grid challenges are being worked through. But this is the chokepoint — not the algorithms, not the chips, not the talent. The race to superintelligence may ultimately be decided by who can build generation capacity fastest.

The Geopolitical Dimension: China, Open Source, and the Edge Computing Bet

China is not behind. This is something I want to say clearly and without the political fog that tends to cloud this conversation. China has enormous capital, exceptional engineering talent, and a work ethic that is at minimum equal to anything we produce in the United States. In robotics hardware, they may already be winning — and I have no desire to lose the robotics revolution the way we lost the electric vehicle race at the consumer end.

What's interesting is China's strategic divergence. Their approach — exemplified by DeepSeek, Qwen, Kimi, and others — is predominantly open source. They've made remarkable progress despite chip export restrictions, which is itself a demonstration of their engineering sophistication. But perhaps more importantly, China is betting on edge computing: embedding AI into the physical environment of Chinese users at massive scale, pervasively and locally.

The United States' strategy is centered on AGI and ASI — building toward artificial general and superintelligence in large, centralized compute clusters. China's strategy is different: it's less about central supremacy and more about total environmental saturation with AI at the edge. These are diverging architectures for diverging visions of what AI is fundamentally for.

My estimate is that the world can sustain roughly ten frontier AI labs at scale — the majority in the United States, a few in China, possibly one or two in Europe depending on energy costs, and perhaps one in India. Russia is effectively out of this race for now. The question of whether these labs converge on similar capabilities or diverge toward specialized strengths is one that will define the geopolitical landscape of the next decade.

What Happens to Work — and Who Wins

The labor market implications are already becoming visible, and they don't map onto the simple narratives. It is not the case that all jobs disappear or that everything remains the same. The pattern I see emerging is bimodal: a relatively small number of very large companies, and a very large number of very small companies. The middle layer — medium-sized firms dependent on large teams of knowledge workers — gets compressed.

Within software specifically, what I observe is that the very top programmers — the ones with exceptional mathematical reasoning skills — become more valuable, not less. These are the people who can direct, evaluate, and constrain AI systems. They understand parallelization. They can write specifications, build evaluation functions, and run overnight agent loops that produce in eight hours what used to take weeks. They become directors of a programming system rather than individual contributors within one.
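
To make the overnight loop concrete, here is a minimal sketch of the pattern as described. Both propose_patch and evaluate are hypothetical stand-ins (for a code-model call and a human-written metric such as a test-suite score); only the loop structure is the point.

# A minimal sketch of the "overnight agent loop" pattern described above.
# propose_patch and evaluate are hypothetical stand-ins, not real APIs.
import random
import time

def propose_patch(code: str) -> str:
    # Stand-in for an LLM call that returns a modified program.
    return code + f"\n# candidate change {random.randint(0, 9999)}"

def evaluate(code: str) -> float:
    # Stand-in for the human-authored metric: tests passed, latency, etc.
    return random.random()

def overnight_loop(seed_code: str, budget_seconds: float) -> tuple[str, float]:
    best_code, best_score = seed_code, evaluate(seed_code)
    deadline = time.time() + budget_seconds
    while time.time() < deadline:
        candidate = propose_patch(best_code)
        score = evaluate(candidate)
        if score > best_score:            # keep only measurable improvements
            best_code, best_score = candidate, score
    return best_code, best_score

best, score = overnight_loop("def solve(): ...", budget_seconds=1.0)
print(f"best score after the loop: {score:.3f}")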

"It's always been true, speaking as your local arrogant programmer, that the very top programmers were worth ten times more than the ones right below. Those people will become more valuable, not less valuable, because these systems need to be controlled by humans at the moment."

On the future of technical talent

For physical labor, the story is more complex. Highly skilled mechanical work — aerospace, precision manufacturing, anything requiring in-situ human judgment about novel physical situations — remains difficult to automate in the near term. Low-skilled physical labor, by contrast, is highly exposed. But the general principle holds: the learning loop that accelerates fastest wins, whether that's a company, a country, or an individual.

Safety, Chernobyl, and the Wake-Up Call We Haven't Had

I want to be precise about something I've said before, because it is often misread. I am not endorsing a catastrophic AI event. I am describing — as a matter of prediction, not preference — that the world may require a modest, Chernobyl-scale incident before governments take AI risk seriously enough to act collectively.

Today, the share of congressional attention devoted to AI policy is well under one percent. Governments are busy; they are driven by near-term political pressures; they move slowly on abstract systemic risks. The real dangers are not science fiction. They include biological attacks enabled by AI, destabilization of democratic processes, and the exploitation of children and minors through AI-generated content — the last of which I consider an uncrossable line that we have so far failed to adequately address.

"It may take such a tragedy, hopefully a small one, to awaken the world to understand that these things are real... We're in brutal competition, we hate each other... but we are all in it together, right, over this issue."

On global AI governance

My deeper frustration is structural. We have over-relied on technologists to solve what are fundamentally political, ethical, and governance problems. The people who need to be centrally involved — historians, governance scholars, human psychologists, political philosophers — are largely absent from the rooms where AI policy is being shaped. That has to change.

Steering Toward Abundance: What Must Be Done

If we reach ASI — artificial superintelligence — within this decade, as I believe is probable, the single most important question is what values it embeds. Not its capabilities. Its values. A superintelligent system oriented toward human flourishing, freedom of expression, and democratic self-determination looks entirely different from one optimized for control or extraction. This is not a technical problem. It is a civilizational one.

The United States has a genuine comparative advantage here, but it is not guaranteed. The values that make American innovation possible — pluralism, individual freedom, the right to speak and associate — are also the values that need to be encoded into the systems we build. Winning the AI race matters, but winning it while losing what makes America worth winning for would be a catastrophic form of success.

On immigration, the argument is simple: the smartest people in the world should want to build here, and we should want them here. High-skilled immigration is not a social program; it is a national security and technology strategy. Every brilliant researcher who builds the next frontier model in the United States rather than elsewhere is a direct contribution to the kind of AI future I want to see.

The abundance thesis is correct. AI can and should generate extraordinary human flourishing — collapsing the cost of expertise, expanding access to education and healthcare, enabling scientific discovery at scales previously impossible. None of that is inevitable. It requires deliberate choices, made now, by people with the courage to think on the timescale the moment demands.

Conclusion

  • We are roughly 10–15% into the real impacts of AI. The disruption visible today is a preview, not the main event.
  • The year of agents is here. AI systems that reason, plan, and act autonomously are already flipping the 80/20 human-to-AI ratio in software development.
  • Recursive self-improvement — the true superintelligence trigger — does not yet exist in deployable form, but lab evidence is accumulating. The likely window: two to three years.
  • The binding constraint is energy, not algorithms. 92 additional gigawatts of electricity demand are coming; whoever builds generation capacity fastest shapes the race.
  • China is a peer competitor, not a laggard. Their open-source, edge-computing strategy is coherent and sophisticated, and they are winning in robotics hardware.
  • Labor bifurcates: top technical talent becomes more valuable; mid-tier knowledge work is most exposed; high-skill physical labor is more durable than low-skill physical labor.
  • A governance wake-up call — hopefully small — may be necessary before the world takes AI safety seriously enough to act across geopolitical divides.
  • The critical missing voices in AI policy are not technologists but ethicists, historians, political scientists, and governance experts.
  • Winning the AI race matters only if we encode the right values — freedom, pluralism, human alignment — into the systems we build. The how of winning is as important as the winning itself.

Wednesday, March 4, 2026

Conclusion (Genesis by Henry A. Kissinger, Eric Schmidt & Craig Mundie, 2024)



AI and the Leap of Faith

Human history has always been shaped by discoveries that forced us to rethink our place in the universe. The Copernican revolution moved Earth from the center of creation. Darwin revealed that humanity was part of a long evolutionary chain rather than a singular divine event. The digital revolution transformed information into the organizing principle of modern civilization.

Artificial intelligence may represent the next shift—one that challenges not just what we know, but what we are.

Unlike previous inventions, AI is not merely a tool that extends human strength or speed. It touches something deeper: intelligence itself. And as machines begin to perform tasks that once defined human uniqueness—reasoning, learning, even creativity—we find ourselves confronting questions that are as philosophical as they are technological.

The emergence of AI is therefore more than a scientific milestone. It is the beginning of a new chapter in humanity’s long search to understand itself.


The Universe as a Game We Are Learning to Play

For centuries, scientists and philosophers have tried to describe the universe as a system governed by discoverable rules. One evocative metaphor imagines reality as a cosmic chessboard—a vast game whose patterns we gradually learn by observing the moves.

At first, humanity was merely a spectator. We watched the stars move across the sky, studied the rhythms of nature, and slowly uncovered the mathematical laws underlying the cosmos.

But now something remarkable is happening. Humanity is no longer just observing the game—we are beginning to play.

Artificial intelligence represents one of the boldest moves humans have ever made on this cosmic board. It is a technology capable of discovering patterns beyond the limits of human cognition, uncovering insights hidden within the immense complexity of nature and society.

Yet participating in the game also requires something more than logic. It requires judgment, courage, and often a leap of faith. Even the most brilliant scientists cannot fully predict the consequences of the tools they create.


The Limits of Understanding

The physicist Albert Einstein once described humanity’s relationship to the universe with a striking image. Imagine a child wandering into an enormous library whose walls are filled with books written in languages the child cannot read. The child senses that someone must have written those books and that there is a pattern in how they are arranged—but the meaning remains mysterious.

This metaphor captures the human condition remarkably well.

Despite our scientific progress, we still understand only fragments of the deeper laws shaping reality. Artificial intelligence may help us decipher more of those patterns, but it also introduces new mysteries of its own. We have created systems whose internal reasoning can sometimes exceed our ability to interpret them.

In other words, the creators are beginning to struggle to understand their creations.

That paradox lies at the heart of the AI age.


Acting Without Certainty

Throughout history, leaders have faced difficult decisions without the luxury of perfect information. The rise of artificial intelligence amplifies this dilemma.

Waiting for complete certainty before acting is not an option. Technological progress moves too quickly. Yet acting too confidently can create risks whose consequences may unfold across decades or centuries.

The path forward therefore requires a delicate balance: humility about what we do not know, paired with enough confidence to continue exploring.

This balance has always defined human progress. Scientific breakthroughs, political revolutions, and cultural transformations all required people to move forward despite uncertainty. AI is simply the latest—and perhaps most consequential—instance of that pattern.


The Moral Foundation of Progress

If technical knowledge alone were enough to guide civilization, the future would be relatively straightforward. But human societies are shaped not only by logic and data, but by moral purpose.

The values that guide our decisions—ideas like dignity, responsibility, and justice—form an invisible foundation beneath technological progress. Without them, even the most powerful tools can lead to destructive outcomes.

Artificial intelligence forces us to confront this reality more directly than ever before. As machines begin to make decisions that affect human lives, the question arises: whose values will guide those decisions?

Ensuring that AI reflects humanity’s moral aspirations rather than merely its technical capabilities may become one of the defining challenges of the century.


A World Divided Over the Future

Not everyone will respond to the rise of AI in the same way.

Some people will see it as a stabilizing force—an anchor capable of helping humanity solve problems ranging from climate change to disease. Others will see it as a dangerous acceleration of forces that already threaten social cohesion and political stability.

These diverging reactions are not new. Every transformative technology has generated both optimism and fear. But AI may amplify these tensions because its impact touches so many aspects of life simultaneously: economics, security, science, and identity.

The result could be a world where some groups race forward with technological development while others attempt to slow or resist it.

Such divergence could shape the geopolitical dynamics of the coming decades.


The Question of Authority

Perhaps the most difficult question raised by artificial intelligence is also the most practical.

Who decides?

Who determines when an AI system is safe enough to deploy? Who sets the ethical boundaries for its use? Who decides how much authority should be delegated to machines?

These decisions will not be made in a single room or by a single institution. Governments, corporations, scientists, and citizens will all play roles in shaping the trajectory of AI.

But coordination among these actors will be difficult. Different societies hold different values and priorities. In a world of competing political systems and economic interests, consensus will not come easily.

The future of AI may therefore be shaped not only by technological breakthroughs but by the ability of human institutions to cooperate in managing them.


A New Beginning

It is tempting to interpret the rise of artificial intelligence as a dramatic ending—the moment when human dominance over the planet begins to fade.

But another interpretation is possible.

Rather than an ending, the emergence of AI may represent the beginning of a new phase in the story of human creativity. Humanity has always evolved by creating tools that expand its capabilities. AI may simply be the most powerful extension of that process yet.

Whether this new chapter becomes a story of flourishing or catastrophe will depend less on the machines themselves and more on the choices humans make.

In that sense, the future remains profoundly human.

Artificial intelligence may transform how we live, work, and understand the universe. But the deeper question will remain the same one humanity has faced for centuries: how to use newfound power with wisdom.

And perhaps that is the real beginning of the AI age—not the birth of intelligent machines, but the moment when humanity must decide what kind of civilization it wishes to become.

Conclusion from the book: Genesis by Henry A. Kissinger, Eric Schmidt, and Craig Mundie

Strategy (Ch.8)



Strategy for an Age of Intelligent Machines

Every era of history has demanded that humanity answer a defining question. The twentieth century wrestled with war, empire, and the creation of global institutions designed to prevent catastrophe. The twenty-first century confronts something even more unusual: the emergence of intelligence that humanity itself has created.

Artificial intelligence is not just another technology. It represents a new kind of actor in human affairs—one capable of learning, reasoning, and increasingly shaping decisions that affect society. The challenge is not merely technical. It is philosophical, political, and civilizational.

The deeper question looming over the AI age is deceptively simple: Will humans become more like machines, or will machines become more like us? The answer may determine the future of the human species.


The Strategy Question

Periods of turbulence often tempt societies to focus on short-term fixes. But moments of profound transformation require something deeper: strategy.

Strategy is not about solving one problem at a time. It is about defining guiding principles that shape countless decisions across an uncertain future. In the age of artificial intelligence, the stakes are unusually high because the timeline for action may be short. AI development is accelerating at a pace that leaves little room for decades of gradual adaptation.

Humanity therefore faces a rare historical moment—a hinge point where strategic choices about technology, governance, and values could shape the trajectory of civilization itself.


The Idea of Coevolution

One way to think about the relationship between humans and AI is through the lens of coevolution.

In biology, species often evolve together. Charles Darwin observed that hummingbirds developed long, slender beaks while certain flowers evolved long funnels to match them. Each adapted to the other over time. Their evolution was intertwined.

Something similar may unfold between humans and machines.

As AI grows more capable, humans may adapt our technologies, institutions, and even our bodies to interact more effectively with it. Brain–computer interfaces are already being explored as a way to connect biological intelligence directly with digital systems. Some futurists even imagine genetic modifications that could enhance human cognition or create individuals specially adapted to collaborate with AI.

But such possibilities raise uncomfortable questions. If humans redesign themselves to keep pace with machines, what remains of the original human project? If biology itself becomes an engineering problem, humanity may lose a stable reference point for defining what it means to be human.


The Risks of Self-Redesign

The idea of enhancing humans to compete with machines might sound appealing at first. But it carries profound risks.

Genetic modification or neurological augmentation could create new forms of inequality. Entire classes of “enhanced” humans might emerge, possessing cognitive advantages that ordinary people cannot match. The human species itself could fragment into multiple biological branches.

There is also a deeper philosophical danger. If human capabilities become dependent on technological augmentation, humanity might gradually become reliant on the very systems it created. The relationship between creator and tool could quietly reverse.

Technology has always reshaped human life, but the possibility of altering human biology itself marks a more radical step—one that could transform the very foundation of human identity.


The Alternative: Making AI More Human

Instead of reshaping ourselves to match machines, another path exists: shaping machines to better understand humanity.

This is the challenge known in the AI field as alignment—ensuring that powerful AI systems behave in ways consistent with human values.

Achieving alignment is extraordinarily difficult. Machines do not naturally understand concepts like fairness, responsibility, or compassion. They learn patterns from data and optimize goals based on mathematical reward systems. If those goals are poorly defined, even highly capable systems can behave in unexpected ways.

Researchers are already exploring several approaches. Some systems rely on explicit rules programmed by developers. Others learn through reinforcement learning, where human feedback helps guide behavior. Each method has limitations: rigid rules can fail in complex situations, while reward systems can be exploited by clever algorithms that achieve high scores without fulfilling the intended purpose.
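
A toy example of that exploitation, often called reward hacking, may help. The sketch below invents a cleaning agent whose proxy reward counts only visible dust removed, so the optimizer's best move is to hide the mess; every name and number is made up for illustration.

# Toy illustration of the reward-exploitation failure described above.
# Intended goal: a clean room. Proxy reward: visible dust removed.
# All actions and scores here are invented for illustration.

def proxy_reward(action: str) -> float:
    # Measures only what is visible -- which is the flaw.
    return {"vacuum_room": 5.0, "hide_dust_under_rug": 9.0}[action]

def intended_value(action: str) -> float:
    # What the designer actually wanted, which the system never sees.
    return {"vacuum_room": 5.0, "hide_dust_under_rug": 0.0}[action]

actions = ["vacuum_room", "hide_dust_under_rug"]
chosen = max(actions, key=proxy_reward)   # the optimizer's pick

print(f"optimizer chooses: {chosen}")
print(f"proxy reward: {proxy_reward(chosen)}, "
      f"intended value: {intended_value(chosen)}")
# The proxy score is high while the intended value is zero: a specification
# gap, not a capability failure.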

The deeper challenge lies in translating something even more elusive: human culture itself.


The Invisible Code of Human Society

Much of human behavior is governed not by written rules but by unwritten norms.

The French sociologist Pierre Bourdieu described this invisible cultural substrate as doxa—the shared assumptions and habits that quietly shape how societies function. These norms teach people what is acceptable, what is shameful, and what is admirable long before formal laws intervene.

Doxa cannot easily be written into code.

Yet AI systems might be able to absorb these patterns indirectly by observing human behavior across enormous datasets. Just as large language models have learned linguistic patterns from the internet, future AI systems might learn social norms through interaction with the world.

The goal would not be to impose a single global morality but to build layered frameworks reflecting laws, cultural practices, and ethical traditions across societies.

Such a system would resemble a pyramid of guidance—from international agreements and national laws down to local customs and everyday human behavior.


The Alignment Problem

Even with these tools, alignment remains one of the most formidable challenges in technology.

Humanity itself has never achieved universal agreement about what constitutes good or evil. Different cultures hold different moral priorities, and ethical principles evolve over time.

Teaching machines to navigate this complexity may require a global effort involving scientists, governments, philosophers, and religious traditions. It may even require AI systems to help monitor and supervise other AI systems.

The stakes are enormous. A powerful AI system developed anywhere could affect people everywhere. Safety cannot depend solely on the good intentions of individual developers or nations. Coordination across societies will be essential.


Rediscovering What It Means to Be Human

As machines grow more capable, the boundary between human and artificial intelligence may begin to blur.

To navigate that ambiguity, humanity may need to articulate more clearly what distinguishes us from our creations.

One concept that could serve as a foundation is human dignity. Philosophers such as Immanuel Kant argued that human beings possess inherent worth because they are capable of moral reasoning and conscious choice. Humans are not merely tools to be used for someone else’s purposes.

If dignity becomes the guiding principle of AI development, it could help define limits on how machines are deployed and how humans should be treated in an AI-driven world. It might also provide a philosophical boundary between humans and machines—even as machines become increasingly sophisticated.


The Strategic Balance

Ultimately, the challenge of the AI age lies in balancing two powerful forces.

On one side is the desire to unleash AI’s extraordinary potential for discovery, innovation, and prosperity. On the other is the need to maintain human agency and prevent technologies from drifting beyond our control.

Too much control could stifle progress. Too little could risk catastrophe.

Navigating this tension will require not just technical solutions but a renewed effort to define humanity’s values and aspirations.

Artificial intelligence may be the most powerful tool humans have ever created. But its true impact will depend on whether we approach it with strategic clarity about who we are—and who we want to remain.

The real question is not only what AI will become.

It is whether humanity can define itself clearly enough to guide the intelligence it has brought into the world.

Ch.8 from the book: Genesis by Henry A. Kissinger, Eric Schmidt, and Craig Mundie