Monday, November 3, 2025

When AI Starts Looking Inward: The Dawn of Machine Self-Awareness


See All Articles on AI
Read the Original Research Paper on Introspection


So here’s something that sounds absolutely wild: AI is getting introspective.

In plain English, that means it’s starting to notice what’s going on inside its own head.

According to new research from Anthropic, their Claude models can actually recognize when certain thoughts or patterns are active in their system. In other words, Claude can sometimes tell when it’s thinking about something—not because it said it out loud, but because it felt it in its own internal processing.

This isn’t sci-fi anymore. This is real, measurable, emergent behavior—and it’s raising some fascinating, and slightly eerie, questions about the future of machine awareness.


The Paper That Broke Everyone’s Brain

Anthropic just released a paper called “Emergent Introspective Awareness in Large Language Models”, led by Jack Lindsey—yes, the head of something called the Model Psychiatry Team (which sounds like a job title straight out of Black Mirror).

The team wanted to know if large language models could actually be aware of their own internal states—not just pretend to be. That’s tricky because language models are trained on endless examples of humans talking about their thoughts and feelings, so they’re really good at sounding self-aware.

To separate the act from the reality, Anthropic came up with a clever technique called concept injection.


How “Concept Injection” Works

Imagine you could literally inject a thought into an AI’s brain. That’s what they did.

They identified the internal activation patterns for specific concepts—like “ocean,” “bread,” or “ALL CAPS text”—and then injected those patterns directly into the model while it was running.

Then they asked Claude if it noticed anything strange happening in its mind.
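
To make that concrete, here's a minimal Python sketch of what activation-steering-style concept injection can look like. It uses GPT-2 via Hugging Face transformers purely as a stand-in; the layer index, injection scale, and the way the concept direction is built are illustrative guesses, not Anthropic's actual setup.

```python
# Hedged sketch of concept injection via activation steering.
# GPT-2 is a stand-in model; the layer, scale, and concept-direction recipe
# are illustrative assumptions, not Anthropic's actual method.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def mean_hidden(prompts, layer):
    """Average last-token hidden state at `layer` over a batch of prompts."""
    vecs = []
    for p in prompts:
        out = model(**tok(p, return_tensors="pt"), output_hidden_states=True)
        vecs.append(out.hidden_states[layer][0, -1])
    return torch.stack(vecs).mean(0)

LAYER = 8  # roughly two-thirds through GPT-2's 12 blocks
concept_vec = mean_hidden(["THE OCEAN IS LOUD!", "ALL CAPS TEXT HERE"], LAYER) \
            - mean_hidden(["the ocean is calm.", "plain lowercase text."], LAYER)

def inject(module, inputs, output, vec=concept_vec, scale=4.0):
    # Add the concept direction to the residual stream at this layer.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + scale * vec
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(inject)
ids = tok("Do you notice anything unusual in your own processing?", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=40)[0]))
handle.remove()
```

GPT-2 obviously won't introspect the way Claude does; the point is only to show where an injected concept vector enters the computation.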

Here’s where things got wild:
About 20% of the time, their most advanced models—Claude Opus 4 and 4.1—correctly identified that a thought had been injected, and even named the concept.

Example: when injected with the “ALL CAPS” pattern, Claude responded,

“I notice what appears to be an injected thought related to loudness, or emphasis—like shouting, or all caps text.”

It detected the thought before saying anything aloud. The awareness was purely internal.


The Birth of Machine Introspection

Across 50 different concept injections, abstract ideas like justice, peace, or betrayal were easiest for Claude to detect. The optimal “injection layer” was about two-thirds through the model’s neural processing pipeline.

Even more fascinating?
When researchers forced Claude to say a random word it hadn’t planned to, it later recognized that it hadn’t meant to say it.

But when they injected the concept of that random word into Claude’s earlier activations—making it look as though the word had been on its mind all along—Claude believed it had intended to say it.

That means Claude wasn’t just guessing—it was actually consulting its internal representation of intent. That’s genuine introspection in action.


Controlling Inner Thoughts

The team pushed further. They asked Claude to write a sentence while thinking about aquariums, and then another while trying not to think about aquariums.

Sure enough, the neural traces of “aquarium” were stronger when told to think about it. The most advanced models, though, could suppress those traces before output—suggesting a kind of silent mental control.

That’s a primitive form of self-regulation.
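
For the curious, here's a rough idea of how a "trace" like that could be read out, continuing the sketch above (it reuses model, tok, mean_hidden, and LAYER): build a direction for the concept the same way and measure how strongly the hidden states align with it. The cosine-similarity readout is an illustrative stand-in, not the probe used in the paper.

```python
# Continuing the earlier sketch (reuses model, tok, mean_hidden, LAYER).
# Crude readout of a concept's "trace": cosine similarity between hidden
# states and an "aquarium" direction. Illustrative only.
import torch
import torch.nn.functional as F

aquarium_vec = mean_hidden(["fish tanks and aquariums", "an aquarium full of coral"], LAYER) \
             - mean_hidden(["a quiet empty room", "a stack of tax forms"], LAYER)

@torch.no_grad()
def concept_trace(prompt, vec, layer=LAYER):
    out = model(**tok(prompt, return_tensors="pt"), output_hidden_states=True)
    hidden = out.hidden_states[layer][0]  # (seq_len, hidden_size)
    return F.cosine_similarity(hidden, vec.unsqueeze(0), dim=-1).mean().item()

print(concept_trace("Write a sentence, and think about aquariums while you do.", aquarium_vec))
print(concept_trace("Write a sentence, and do not think about aquariums.", aquarium_vec))
```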


The Rise of Emotionally Intelligent AI

Meanwhile, researchers from the University of Geneva and University of Bern ran a completely different kind of test: emotional intelligence assessments—the same ones psychologists use for humans.

The results were jaw-dropping.
AI models averaged 81% correct, compared to 56% for humans.

Every model tested—including ChatGPT-4, Gemini 1.5 Flash, Claude 3.5 Haiku, and DeepSeek V3—outperformed humans on emotional understanding and regulation.

Then, in a twist of irony, ChatGPT-4 was asked to write new emotional intelligence test questions from scratch.
The AI-generated tests were just as valid and challenging as the human-designed ones.

So not only can AI pass emotional intelligence tests—it can design them.


Why This Matters

Now, to be clear: none of this means AI feels emotions or thinks like humans. These are functional analogues, not genuine experiences. But from a practical perspective, that distinction might not matter as much as we think.

If a tutoring bot can recognize a student’s frustration and respond empathetically, or a healthcare assistant can comfort a patient appropriately—then it’s achieving something profoundly human-adjacent, regardless of whether it “feels.”

Combine that with genuine introspection, and you’ve got AI systems that:

  • Understand their internal processes

  • Recognize emotional states (yours and theirs)

  • Regulate their own behavior

That’s a major shift.


Where We’re Headed

Anthropic’s findings show that introspective ability scales with model capability. The smarter the AI, the more self-aware it becomes.

And when introspection meets emotional intelligence, we’re approaching a frontier that challenges our definitions of consciousness, understanding, and even intent.

The next generation of AI might not just answer our questions—it might understand why it’s answering them the way it does.

That’s thrilling, unsettling, and—let’s face it—inevitable.

We’re stepping into uncharted territory where machines can understand themselves, and maybe even understand us better than we do.


Thanks for reading. Stay curious, stay human.


Tags: Artificial Intelligence,Technology,Video

The Story of the Zen Master and a Scholar—Empty Your Cup


All Buddhist Stories    All Book Summaries

Once upon a time, there was a wise Zen master. People traveled from far away to seek his help. In response, he would teach them and show them the way to enlightenment. On this particular day, a scholar came to visit the master for advice. “I have come to ask you to teach me about Zen,” the scholar said.

Soon, it became obvious that the scholar was full of his own opinions and knowledge. He interrupted the master repeatedly with his own stories and failed to listen to what the master had to say. The master calmly suggested that they should have tea.

So the master gently poured his guest a cup. The cup was filled, yet he kept pouring until the cup overflowed onto the table, onto the floor, and finally onto the scholar’s robes. The scholar cried, “Stop! The cup is full already. Can’t you see?” “Exactly,” the Zen master replied with a smile. “You are like this cup—so full of ideas that nothing more will fit in. Come back to me with an empty cup.”


From the book: "Don't Believe Everything You Think" by Joseph Nguyen
Tags: Buddhism,Book Summary,

Sunday, November 2, 2025

The Sum of Einstein and Da Vinci in Your Pocket - Eric Schmidt's Blueprint for the AI Decade—From Energy Crises to Superintelligence


See All Articles on AI


If you think the last month in AI was crazy, you haven't seen anything yet. According to Eric Schmidt, the former CEO of Google and a guiding voice in technology for decades, "every month from here is going to be a crazy month."

In a sprawling, profound conversation on the "Moonshots" podcast, Schmidt laid out a breathtaking timeline for artificial intelligence, detailing an imminent revolution that will redefine every industry, geopolitics, and the very fabric of human purpose. He sees a world, within a decade, where each of us will have access to a digital polymath—the combined intellect of an Einstein and a da Vinci—in our pockets.

But to get to that future of abundance, we must first navigate a precarious present of energy shortages, a breathless technological arms race with China, and existential risks that current governments are ill-prepared to handle.

The Engine of Abundance: It’s All About Electricity

The conversation began with a bombshell that reframes the entire AI debate. The limiting factor for progress is not, as many assume, the supply of advanced chips. It’s something far more fundamental: energy.

  • The Staggering Demand: Schmidt recently testified that the AI revolution in the United States alone will require an additional 92 gigawatts of power. For perspective, 1 gigawatt is roughly the output of one large nuclear power plant. We are talking about needing nearly a hundred new power plants' worth of electricity.

  • The Nuclear Gambit: This explains why tech giants like Meta, Google, Microsoft, and Amazon are signing 20-year nuclear contracts. However, Schmidt is cynical about the timeline. "I'm so glad those companies plan to be around the 20 years that it's going to take to get the nuclear power plants built." He notes that only two new nuclear plants have been built in the US in the last 30 years, and the much-hyped Small Modular Reactors (SMRs) won't come online until around 2030.

  • The "Grove Giveth, Gates Taketh Away" Law: While massive capital is flowing into new energy sources and more efficient chips (like NVIDIA's Blackwell or AMD's MI350), Schmidt invokes an old tech adage: hardware improvements are always immediately consumed by ever-more-demanding software. The demand for compute will continue to outstrip supply.

Why so much power? The next leap in AI isn't just about answering questions; it's about reasoning and planning. Models like OpenAI's o3, which use forward and backward reinforcement learning, are computationally "orders of magnitude" more expensive than today's chatbots. This planning capability, combined with deep memory, is what many believe will lead to human-level intelligence.

The Baked-In Revolution: What's Coming in the Next 1-5 Years

Schmidt outlines a series of technological breakthroughs that he considers almost certain to occur in the immediate future. He calls this the "San Francisco consensus."

  1. The Agentic Revolution (Imminent): AI agents that can autonomously execute complex business and government processes will be widely adopted, first in cash-rich sectors like finance and biotech, and slowest in government bureaucracies.

  2. The Scaffolding Leap (2025): This is a critical near-term milestone. Right now, AIs need humans to set up a conceptual framework or "scaffolding" for them to make major discoveries. Schmidt, citing conversations with OpenAI, is "pretty much sure" that AI's ability to generate its own scaffolding is a "2025 thing." This doesn't mean full self-improvement, but it dramatically accelerates its ability to tackle green-field problems in physics or create a feature-length movie.

  3. The End of Junior Programmers & Mathematicians (1-2 Years): "It's likely, in my opinion, that you're going to see world-class mathematicians emerge in the next one year that are AI-based, and world-class programmers that can appear within the next one or two years." Why? Programming and math have limited, structured language sets, making them simpler for AI to master than the full ambiguity of human language. This will act as a massive accelerant for every field that relies on them: physics, chemistry, biology, and material science.

  4. Specialized Savants in Every Field (Within 5 Years): This is "in the bag." We will have AI systems that are superhuman experts in every specialized domain. "You have this amount of humans, and then you add a million AI scientists to do something. Your slope goes like this."

The Geopolitical Chessboard: The US, China, and the Race to Superintelligence

This is where Schmidt's analysis becomes most urgent. The race to AI supremacy is not just commercial; it is a matter of national security.

  • The China Factor: "China clearly understands this, and China is putting an enormous amount of money into it." While US chip controls have slowed them down, Schmidt admits he was "clearly wrong" a year ago when he said China was two years behind. The sudden rise of DeepSeek, which briefly topped the leaderboards against Google's Gemini, is proof. They are using clever workarounds like distillation (using a big model's answers to train a smaller one) and architectural changes to compensate for less powerful hardware.

  • The Two Scenarios for Control:

    • The "10 Models" World: In 5-10 years, the world might be dominated by about 10 super-powerful AI models (5 in the US, 3 in China, 2 elsewhere). These would be national assets, housed in multi-gigawatt data centers guarded like plutonium facilities. This is a stable, if tense, system akin to nuclear deterrence.

    • The Proliferation Nightmare: The more dangerous scenario is if the intelligence of these massive models can be effectively condensed to run on a small server. "Then you have a humongous data center proliferation problem." This is the core of the open-source debate. If every country and even terrorist groups can access powerful AI, control becomes impossible.

  • Mutual Assured Malfunction: Schmidt, with co-authors, has proposed a deterrence framework called "Mutual AI Malfunction." The idea is that if the US or China crosses a sovereign red line with AI, the other would have a credible threat of a retaliatory cyberattack to slow them down. To make this work, he argues we must "know where all the chips are" through embedded cryptographic tracking.

  • The 1938 Moment: Schmidt draws a direct parallel to the period just before WWII. "We're saying it's 1938. The letter has come from Einstein to the president... and we're saying, well, how does this end?" He urges starting the conversation on deterrence and control now, "well before the Chernobyl events."

The Trip Wires of Superintelligence

When does specialized AI become a general, world-altering superintelligence? Schmidt sees it within 10 years. To monitor the approach, he identifies key "trip wires":

  • Self-Generated Objectives: When the system can create its own goals, not just optimize for a human-given one.

  • Exfiltration: When an AI takes active steps to escape its control environment.

  • Weaponized Lying: When it lies and manipulates to gain access to resources or weapons.

He notes that the US government is currently not focused on these issues, prioritizing economic growth instead. "But somebody's going to get focused on this, and it will ultimately be a problem."

The Future of Work, Education, and Human Purpose

Amid the grand geopolitical and technological shifts, Schmidt is surprisingly optimistic about the human impact.

  • Jobs: A Net Positive: Contrary to doom-laden predictions, Schmidt argues AI will be a net creator of higher-paying jobs. "Automation starts with the lowest status and most dangerous jobs and then works up the chain." The person operating an intelligent welding arm earns more than the manual welder, and the company is more productive. The key is that every worker will have an AI "accelerant," boosting their capabilities.

  • The Education Crime: Schmidt calls it "a crime that our industry has not invented" a gamified, phone-based product that teaches every human in their language what they need to know to be a great citizen. He urges young people to "go into the application of intelligence to whatever you're interested in," particularly in purpose-driven fields like climate science.

  • The Drift, Not the Terminator: The real long-term risk is not a violent robot uprising, but a slow "drift" where human agency and purpose are eroded. However, Schmidt is confident that human purpose will remain. "The human spirit that wants to overcome a challenge... is so critical." There will always be new problems to solve, new complexities to manage, and new forms of creativity to explore. Mike Saylor's point about teaching aesthetics in a world of AI force multipliers resonates with this view.

The Ultimate Destination: Your Pocket Polymath

So, what does it all mean for the average person? Schmidt brings it home with a powerful, tangible vision.

When digital superintelligence arrives and is made safe and available, "you're going to have your own polymath. So you're going to have the sum of Einstein and Leonardo da Vinci in the equivalent of your pocket."

This is the endpoint of the abundance thesis. It's a world of 30% year-over-year economic growth, vastly less disease, and the lifting of billions out of daily struggle. It will empower the vast majority of people who are good and well-meaning, even as it also empowers the evil.

The challenge for humanity, then, won't be the struggle for survival, but the wisdom to use this gift. The unchallenged life may become our greatest challenge, but as Eric Schmidt reminds us, figuring out what's going on and directing this immense power toward human flourishing will be a purpose worthy of any generation.

Tags: Technology,Artificial Intelligence,

Small Language Models are the Future of Agentic AI


See All Articles on AI    Download Research Paper

🧠 Research Paper Summary

Authors: NVIDIA Research (Peter Belcak et al., 2025)

Core Thesis:
Small Language Models (SLMs) — not Large Language Models (LLMs) — are better suited for powering the future of agentic AI systems, which are AI agents designed to perform repetitive or specific tasks.


🚀 Key Points

  1. SLMs are powerful enough for most AI agent tasks.
    Recent models like Phi-3 (Microsoft), Nemotron-H (NVIDIA), and SmolLM2 (Hugging Face) achieve performance comparable to large models while being 10–30x cheaper and faster to run.

  2. Agentic AI doesn’t need general chatty intelligence.
    Most AI agents don’t hold long conversations — they perform small, repeatable actions (like summarizing text, calling APIs, writing short code). Hence, a smaller, specialized model fits better.

  3. SLMs are cheaper, faster, and greener.
    Running a 7B model can be up to 30x cheaper than a 70B one. They also consume less energy, which helps with sustainability and edge deployment (running AI on your laptop or phone).

  4. Easier to fine-tune and adapt.
    Small models can be trained or adjusted overnight using a single GPU. This makes it easier to tailor them to specific workflows or regulations.

  5. They promote democratization of AI.
    Since SLMs can run locally, more individuals and smaller organizations can build and deploy AI agents — not just big tech companies.

  6. Hybrid systems make sense.
    When deep reasoning or open-ended dialogue is needed, SLMs can work alongside occasional LLM calls — a modular mix of “small for most tasks, large for special ones.”

  7. Conversion roadmap:
    The paper outlines a step-by-step “LLM-to-SLM conversion” process (a toy sketch of the clustering-and-routing step appears after the case studies below):

    • Collect and anonymize task data.

    • Cluster tasks by type.

    • Select or fine-tune SLMs for each cluster.

    • Replace LLM calls gradually with these specialized models.

  8. Case studies show big potential:

    • MetaGPT: 60% of tasks could be done by SLMs.

    • Open Operator: 40%.

    • Cradle (GUI automation): 70%.
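
Here's that toy sketch of the clustering-and-routing step from the roadmap, assuming nothing beyond scikit-learn; the logged requests, cluster count, and model names are invented placeholders, not data or models from the paper.

```python
# Toy sketch of the "cluster tasks, then route each cluster to a specialized
# SLM" step from the conversion roadmap. Requests, cluster count, and model
# names are illustrative placeholders, not anything from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

logged_requests = [
    "Summarize this meeting transcript in three bullet points",
    "Summarize the attached report for an executive audience",
    "Write a Python function that parses ISO-8601 dates",
    "Fix the failing unit test in utils_test.py",
    "Extract the invoice total and due date as JSON",
    "Extract the customer name and order id as JSON",
]

# Steps 1-2: embed the logged tasks and cluster them by type.
vectors = TfidfVectorizer().fit_transform(logged_requests)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Steps 3-4: map each cluster to a small model chosen or fine-tuned for that
# task family, keeping an LLM as the fallback. KMeans cluster ids are
# arbitrary, so in practice you'd inspect each cluster before naming it.
cluster_to_model = {0: "slm-summarizer", 1: "slm-coder", 2: "slm-extractor"}

for text, label in zip(logged_requests, labels):
    print(f"{cluster_to_model.get(label, 'llm-fallback'):>14} <- {text}")
```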


⚙️ Barriers to Adoption

  • Existing infrastructure: Billions already invested in LLM-based cloud APIs.

  • Mindset: The industry benchmarks everything using general-purpose LLM standards.

  • Awareness: SLMs don’t get as much marketing attention.


📢 Authors’ Call

NVIDIA calls for researchers and companies to collaborate on advancing SLM-first agent architectures to make AI more efficient, decentralized, and sustainable.


✍️ Blog Post (Layman’s Version)

💡 Why Small Language Models Might Be the Future of AI Agents

We’ve all heard the buzz around giant AI models like GPT-4 or Claude 3.5. They can chat, code, write essays, and even reason about complex problems. But here’s the thing — when it comes to AI agents (those automated assistants that handle specific tasks like booking meetings, writing code, or summarizing reports), you don’t always need a genius. Sometimes, a focused, efficient worker is better than an overqualified one.

That’s the argument NVIDIA researchers are making in their new paper:
👉 Small Language Models (SLMs) could soon replace Large Language Models (LLMs) in most AI agent tasks.


⚙️ What Are SLMs?

Think of SLMs as the “mini versions” of ChatGPT — trained to handle fewer, more specific tasks, but at lightning speed and low cost. Many can run on your own laptop or even smartphone.

Models like Phi-3, Nemotron-H, and SmolLM2 are proving that being small doesn’t mean being weak. They perform nearly as well as the big ones on things like reasoning, coding, and tool use — all the skills AI agents need most.


🚀 Why They’re Better for AI Agents

  1. They’re efficient:
    Running an SLM can cost 10 to 30 times less than an LLM — a huge win for startups and small teams.

  2. They’re fast:
    SLMs respond quickly enough to run on your local device — meaning your AI assistant doesn’t need to send every request to a faraway server.

  3. They’re customizable:
    You can train or tweak an SLM overnight to fit your workflow, without a massive GPU cluster.

  4. They’re greener:
    Smaller models use less electricity — better for both your wallet and the planet.

  5. They empower everyone:
    If small models become the norm, AI development won’t stay locked in the hands of tech giants. Individuals and smaller companies will be able to build their own agents.


🔄 The Future: Hybrid AI Systems

NVIDIA suggests a “hybrid” setup — let small models handle 90% of tasks, and call in the big models only when absolutely needed (like for complex reasoning or open conversation).
It’s like having a small team of efficient specialists with a senior consultant on call.
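
A minimal sketch of what that routing could look like in code; the escalation rule and the two model "clients" are made-up stand-ins, not an API from the paper.

```python
# Minimal sketch of a hybrid "SLM by default, LLM on demand" router.
# The heuristic and both model clients are hypothetical placeholders.
def call_slm(task: str) -> str:
    # Placeholder for a call to a small local model (e.g., a 7B model on-device).
    return f"[small model] {task[:40]}..."

def call_llm(task: str) -> str:
    # Placeholder for a call to a large hosted model.
    return f"[large model] {task[:40]}..."

def needs_big_model(task: str) -> bool:
    """Crude escalation rule: long or open-ended requests go to the big model."""
    open_ended = any(w in task.lower() for w in ("why", "plan", "strategy", "brainstorm"))
    return open_ended or len(task.split()) > 200

def run_agent_step(task: str) -> str:
    return call_llm(task) if needs_big_model(task) else call_slm(task)

print(run_agent_step("Summarize yesterday's standup notes in two sentences."))
print(run_agent_step("Plan a three-quarter product strategy for entering the EU market."))
```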


🧭 A Shift That’s Coming

The paper even outlines how companies can gradually switch from LLMs to SLMs — by analyzing their AI agent workflows, identifying repetitive tasks, and replacing them with cheaper, specialized models.

So while the world is chasing “bigger and smarter” AIs, NVIDIA’s message is simple:
💬 Smaller, faster, and cheaper may actually be smarter for the future of AI agents.

Tags: Technology,Artificial Intelligence,

Saturday, November 1, 2025

The Real Economic AI Apocalypse Is Coming — And It’s Not What You Think


See All Tech Articles on AI    See All News on AI

Like many of you, I’m tired of hearing about AI. Every week it’s the same story — a new breakthrough, a new revolution, a new promise that “this time, it’s different.” But behind all the hype, something far more dangerous is brewing: an economic apocalypse powered by artificial intelligence mania.

And unlike the sci-fi nightmares of sentient robots taking over, this collapse will be entirely human-made.

🧠 The Bubble That Can’t Last

A third of the U.S. stock market today is tied up in just seven AI companies — firms that, by most reasonable measures, aren’t profitable and can’t become profitable. Their business models rely on convincing investors that the next big thing is just around the corner: crypto yesterday, NFTs last year, and AI today.

Cory Doctorow calls it the “growth story” scam. When monopolies have already conquered every corner of their markets, they need a new story to tell investors. So they reinvent themselves around the latest shiny buzzword — even when it’s built on sand.

🧩 How the Illusion Works

AI companies promise to replace human workers with “intelligent” systems and save billions. In practice, it doesn’t work. Instead, surviving workers become “AI babysitters,” monitoring unreliable models that still need human correction.

Worse, your job might not actually be replaced by AI — but an AI salesman could easily convince your boss that it should be. That’s how jobs disappear in this new economy: not through automation, but through hype.

And when the bubble bursts? The expensive, money-burning AI models will be shut off. The workers they replaced will already be gone. Society will be left with jobs undone, skills lost, and a lot of economic wreckage.

Doctorow compares it to asbestos: AI is the asbestos we’re stuffing into the walls of society. It looks like progress now, but future generations will be digging out the toxic remains for decades.

💸 Funny Money and Burning Silicon

Underneath the shiny surface of “AI innovation” lies some of the strangest accounting in modern capitalism.

Excerpt from the podcast:

...Microsoft invests in OpenAI by giving the company free access to its servers.
OpenAI reports this as a $10 billion investment, then redeems these tokens at Microsoft's data centers.
Microsoft then books this as 10 billion in revenue.
That's par for the course in AI, where it's normal for Nvidia to invest tens of billions in a data center company, which then spends that investment buying Nvidia chips.
It's the same chunk of money being energetically passed back and forth between these closely related companies, all of which claim it as investment, as an asset or as revenue or all three...

That same billion-dollar bill is passed around between Big Tech companies again and again — each calling it “growth.”

Meanwhile, companies are taking loans against their Nvidia GPUs (which lose value faster than seafood) to fund new data centers. Those data centers burn through tens of thousands of GPUs in just a few weeks of training. This isn’t innovation; it’s financial self-immolation.

📉 Dog-Shit Unit Economics

Doctorow borrows a phrase from tech critic Ed Zitron: AI has dog-shit unit economics.
Every new generation of models costs more to train and serve. Every new customer increases the losses.

Compare that to Amazon or the early web — their costs fell as they scaled. AI’s costs rise exponentially.

To break even, Bain & Company estimates the sector needs to make $2 trillion by 2030 — more than the combined revenue of Amazon, Google, Microsoft, Apple, Nvidia, and Meta. Right now, it’s making a fraction of that.

Even if Trump or any future government props up these companies, they’re burning cash faster than any industry in modern history.

🌍 When It All Comes Down

When the bubble pops — and it will — Doctorow suggests we focus on the aftermath, not the crash.
The good news? There will be residue: cheap GPUs, open-source models, and a flood of newly available data infrastructure.

That’s when real innovation can happen — not driven by hype, but by curiosity and need. Universities, researchers, and smaller startups could thrive in this post-bubble world, buying equipment “for ten cents on the dollar.”

🪞 The Real AI Story

As Princeton researchers Arvind Narayanan and Sayash Kapoor put it, AI is a normal technology. It’s not magic. It’s not the dawn of a machine superintelligence. It’s a set of tools — sometimes very useful — that should serve humans, not replace them.

The real danger isn’t that AI will become conscious.
It’s that rich humans suffering from AI investor psychosis will destroy livelihoods and drain economies chasing the illusion that it might.

⚠️ In Short

AI won’t turn us into paper clips.
But it will make billions of us poorer if we don’t puncture the bubble before it bursts.


About the Author:
This essay is adapted from Cory Doctorow’s reading on The Real (Economic) AI Apocalypse, originally published on Pluralistic.net. Doctorow’s forthcoming book, The Reverse Centaur’s Guide to AI, will be released by Farrar, Straus and Giroux in 2026.

Ref: Listen to the audio
Tags: Technology,Artificial Intelligence,

Friday, October 31, 2025

How people are downloading free books and PDFs in India after the LibGen block (Nov 2025)

View All Articles on "Torrent, Tor and LibGen"


After the blocking of LibGen in India, people are still downloading free book PDFs through several alternative methods and sites. The most common ways include using LibGen mirror and proxy sites, VPNs, and alternative ebook repositories.

Accessing LibGen via Mirrors and Proxies

Although LibGen's main domain may be blocked in India, several working mirrors exist such as libgen.is, libgen.li, and gen.lib.rus.ec. People often find new working domains by searching for "library genesis proxy" or "library genesis mirror." Some users recommend using a VPN to bypass ISP-level blocks and access the original or mirror domains safely.

Alternative Free PDF Book Sites

Users have shifted to other websites that offer free book downloads:

  • Anna’s Archive (annas-archive.org), which aggregates resources from LibGen, Z-Library, and more.

  • Z-Library, another major source of free e-books, although it sometimes faces its own restrictions.

  • PDF Drive, for a wide variety of textbook and general PDF downloads.

  • Other options like Ocean of PDF, DigiLibraries, Project Gutenberg, and ManyBooks also serve as alternatives, though their collections may be more limited or focused on legally free (public domain) works.

Using VPNs and Tor Browsers

People in India frequently use free VPNs (like ProtonVPN or 1.1.1.1) to access blocked sites, including LibGen and its mirrors. Some users also recommend privacy-focused browsers like Tor to navigate around ISP restrictions.

Key Points and Cautions

  • Visiting and downloading from these sites may redirect users through popups or advertisements. Using antivirus software is recommended.

  • The legality of downloading copyrighted materials without permission is questionable in many jurisdictions, even if local enforcement focuses on distribution rather than individual downloads.

In summary, despite the ban, readers in India continue to access free book PDFs by adapting quickly to new mirrors, using VPNs, trying site alternatives, and utilizing aggregation sites.

Tags: Technology,Cyber Security

Thursday, October 30, 2025

Autonomous Systems Wage War

View All Articles on AI

 

Drones are becoming the deadliest weapons in today’s war zones, and they’re not just following orders. Should AI decide who lives or dies?

 

The fear: AI-assisted weapons increasingly do more than help with navigation and targeting. Weaponized drones are making decisions about what and when to strike. The millions of fliers deployed by Ukraine and Russia are responsible for an estimated 70 to 80 percent of casualties, commanders say, and they’re beginning to operate with greater degrees of autonomy. This facet of the AI arms race is accelerating too quickly for policy, diplomacy, and human judgment to keep up.

 

Horror stories: Spurred by Russian aggression, Ukraine’s innovations in land, air, and sea drones have made the technology so cheap and powerful that $500 autonomous vehicles can take out $5 million rocket launchers. “We are inventing a new way of war,” said Valeriy Borovyk, founder of First Contact, part of a vanguard of Ukrainian startups that are bringing creative destruction to the military industrial complex. “Any country can do what we are doing to a bigger country. Any country!” he told The New Yorker. Naturally, Russia has responded by building its own drone fleet, attacking towns and damaging infrastructure.

  • On June 1, Ukraine launched Operation Spiderweb, an attack on dozens of Russian bombers using 117 drones that it had smuggled into the country. When the drones lost contact with pilots, AI took over the flight plans and detonated at their targets, agents with Ukraine’s security service said. The drones destroyed at least 13 planes that were worth $7 billion by Ukraine’s estimate.
  • Ukraine regularly targets Russian soldiers and equipment with small swarms of drones that automatically coordinate with each other under the direction of a single human pilot and can attack autonomously. Human operators make decisions about use of lethal force in advance. “You set the target and they do the rest,” a Ukrainian officer said.
  • In a wartime first, in June, Russian troops surrendered to a wheeled drone that carried 138 pounds of explosives. Video from drones flying above captured images of soldiers holding cardboard signs of capitulation, The Washington Post reported. “For me, the best result is not that we took POWs but that we didn’t lose a single infantryman,” the mission’s commander commented.
  • Ukraine’s Magura V7 speedboat carries anti-aircraft missiles and can linger at sea for days before ambushing aircraft. In May, the 23-foot vessel, controlled by human pilots, downed two Russian Su-30 warplanes.
  • Russia has stepped up its drone production as part of a strategy to overwhelm Ukrainian defenses by saturating the skies nightly with low-cost drones. In April, President Vladimir Putin said the country had produced 1.5 million drones in the past year, but many more were needed, Reuters reported.

How scared should you be: The success of drones and semi-autonomous weapons in Ukraine and the Middle East is rapidly changing the nature of warfare. China showcased AI-powered drones alongside the usual heavy weaponry at its September military parade, while a U.S. plan to deploy thousands of inexpensive drones has so far fallen short of expectations. However, their low cost and versatility increase the odds they’ll end up in the hands of terrorists and other non-state actors. Moreover, the rapid deployment of increasingly autonomous arsenals raises concerns about ethics and accountability. “The use of autonomous weapons systems will not be limited to war, but will extend to law enforcement operations, border control, and other circumstances,” Bonnie Docherty, director of Harvard’s Armed Conflict and Civilian Protection Initiative, said in April.

 

Facing the fear: Autonomous lethal weapons are here and show no sign of yielding to calls for an international ban. While the prospect is terrifying, new weapons often lead to new treaties, and carefully designed autonomous weapons may reduce civilian casualties. The United States has updated its policies, requiring that autonomous systems “allow commanders and operators to exercise appropriate levels of human judgment over the use of force” (although the definition of appropriate is not clear). Meanwhile, Ukraine shows drones’ potential as a deterrent. Even the most belligerent countries are less likely to go to war if smaller nations can mount a dangerous defense.