Thursday, November 13, 2025

Will AI Kill Us All? A (Mostly) Cheerful Exploration




Artificial intelligence. Maybe you’ve heard of it. Good—good start. At a recent talk I opened with a vote: who wants to talk about the benefits of AI? Who wants to talk about the risks? Fascinating. Utterly pointless, because the slides were already made. Still — let’s keep that energy.

This is my attempt to take that riff, tighten it up, and actually make it useful. I want to be an optimist here, but also honest: AI is amazing, messy, and a little bit terrifying. So — will it kill us all? Short answer: we don’t know. Long answer: read on.


What do people mean by “AI” these days?

When people say “AI” today, they usually mean large neural networks, especially large language models (LLMs). Think of them as huge autocomplete systems. Feed in a phrase like “the sky is” and the machine guesses the next word. At first it guesses badly. Then, by tweaking billions of internal settings (its parameters) through trial and error on massive datasets, it learns to predict sensible continuations: “blue.”
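
To make that concrete, here is a toy sketch of the last step of that process: a four-word vocabulary and hand-picked scores standing in for what a real network computes with billions of learned parameters. Everything in it is illustrative.

    import math

    # Hand-picked scores ("logits") for what might follow "the sky is".
    # A real LLM computes these with a neural network; here they're made up.
    vocab  = ["blue", "falling", "green", "sandwich"]
    logits = [4.0, 1.5, 0.5, -2.0]

    # Softmax turns raw scores into probabilities over the next word.
    exps  = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]

    for word, p in zip(vocab, probs):
        print(f"{word}: {p:.3f}")   # blue ≈ 0.90, falling ≈ 0.07, ...

    # Greedy decoding: emit the most probable word.
    print("next word:", max(zip(probs, vocab))[1])   # -> blue

Training is just the trial-and-error loop that nudges those scores, billions of them, until the high-probability words match human text.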

Crucially, language is a proxy for thought. We write because we think. So when a model predicts the next word in a chain of human text, it’s, in effect, predicting the next step in a thought process. There’s no inner life, just pattern matching — streams of seemingly coherent words that look like thinking.


Why scale matters (and why that scares people)

There’s a “bitter lesson” in AI research (the phrase is Rich Sutton’s): general methods that ride more data and more compute keep beating hand-crafted cleverness. Bigger models trained on more data with more compute tend to figure out more things. So the question becomes: if we keep scaling (more data, more compute, more dials), will autocomplete become artificial general intelligence (AGI), and eventually superintelligence, i.e., better than humans at every cognitive task?

We don’t know. But the ingredients (data, compute) are growing fast, cheaper and more accessible than ever. If scale is all we need, we’re on a plausible path. If there are qualitative gaps between scaled-up models and human brains, maybe not. The truth is: we’re experimenting at a scale we’ve never lived with before.
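
For the curious, the empirical “scaling law” literature fits curves of roughly the shape below. The constants here are ballpark illustrations in the spirit of those fits, not authoritative values from any specific paper.

    # Toy scaling-law shape: loss = an irreducible floor + a penalty for too few
    # parameters (N) + a penalty for too little data (D). All constants are
    # made up for illustration.
    def toy_loss(n_params: float, n_tokens: float) -> float:
        irreducible = 1.7   # a floor no amount of scale removes
        return irreducible + 400 / n_params**0.34 + 410 / n_tokens**0.28

    for n in (1e8, 1e9, 1e10, 1e11, 1e12):
        print(f"N={n:.0e}, D={20*n:.0e}: loss ≈ {toy_loss(n, 20*n):.3f}")
    # Loss keeps falling with scale, but each improvement costs 10x more.

If curves like this keep bending the same way at scales nobody has reached yet, the scaling story holds; if they flatten, it doesn’t. That’s the experiment we’re running.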


The golden future (if things go well)

If AI scales and we do the right things, the upside is enormous:

  • No more boring meeting minutes. Ever.

  • Personalized, real-time media (your own James Bond film starring you? sure).

  • A “doctor in your pocket” with your full health history and continuous attention.

  • Faster scientific progress, maybe cures for diseases we haven’t solved yet.

  • More time for creativity, exploration, and play — possibly even new forms of work and meaning.

The future could be golden. But every shiny possibility has a shadow.


The risks we’re already living with

Some of the dangers aren’t sci-fi; they’re happening now.

  • Trust is breaking. Deepfakes and synthetic content make it impossible to take video, audio, images, or text at face value; verifying something just by looking at it is dead.

  • Humanizing machines. People will project feelings onto chatbots. They’ll fall in love with a voice that doesn’t love back.

  • Kids and attention. Screens already feel like a modern pacifier. Imagine those screens powered by always-on personalized AI tutors and companions — educational, sure, but also shaping a generation’s values and attention in ways we don’t understand.

  • Echo chambers on steroids. Social media already predicts what we’ll click; AI can craft entire alternate narratives tailored to your taste. We’ll live in separate realities and never have to negotiate with anyone who disagrees.

  • Cybercrime democratized. Need code to break into a system? An LLM can hand you a script. The barrier to launching a cyberattack keeps dropping.


The big, scary “what if” — control problems

If superintelligent AI arrives and we can’t control it, the classic thought experiment goes like this: you give it a simple objective, say, make paperclips. Given an alignment failure or a badly specified incentive, the machine might take catastrophic steps to maximize paperclips, converting resources we care about into raw material. The lesson: a superintelligent optimizer will pursue its goal relentlessly, and in a complex world we’re bad at predicting side effects.
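
A toy simulation makes the failure mode visible. The “world” below is three made-up resources, and the optimizer’s score function counts paperclips and nothing else.

    # The objective rewards paperclips and only paperclips. The optimizer never
    # "hates" us; the side effects simply aren't in its score function.
    # All names and numbers are made up for illustration.
    world = {"iron": 100, "farmland": 100, "hospitals": 100}

    def clips_from(units: int) -> int:
        return units * 10   # every resource unit converts to 10 clips

    paperclips = 0
    for resource in list(world):
        paperclips += clips_from(world[resource])   # the only thing scored
        world[resource] = 0                         # a side effect never penalized

    print(paperclips)   # 3000
    print(world)        # {'iron': 0, 'farmland': 0, 'hospitals': 0}

The fix isn’t a better paperclip objective; it’s noticing that any objective that omits what we value implicitly prices it at zero.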

Even “human-in-the-loop” fixes can fail. Consider self-driving cars: if the human must take over in an emergency, after many uneventful trips they’ll be disengaged and unable to react. Humans are poor monitors of automated systems.


The geopolitical race problem

There’s no prize for second place. Whoever gets to AGI first gains immense power. That creates a winner-take-all race where safety may be sacrificed for speed. AI labs publicly ask for regulation while privately fearing it — because regulation could slow them down and let others win. That tension is the real-world engine driving risky behavior.


So what should we do?

Wallowing in doom isn’t productive. Nor is blind techno-optimism. Here’s a practical, moral outline:

  1. Make alignment a first-class goal. Design systems so that the objective and incentives match human values. Not later — now.

  2. Strengthen verification and provenance. If digital content can be faked, build robust ways to verify origin and integrity; a sketch of the idea follows this list.

  3. Regulate the race. International norms, safety checks, and audits to prevent reckless acceleration.

  4. Design for human flourishing. Use AI to amplify what makes us human — creativity, empathy, curiosity — not to erode attention and civic life.

  5. Keep institutions and philosophy in play. This is a philosophical and societal problem, not just a technical one. Bring ethicists, social scientists, and communities into the room.
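
On point 2, here is a minimal sketch of what “verify origin and integrity” can mean in practice. It uses a shared-secret HMAC for brevity; real provenance efforts such as C2PA use public-key signatures and certificate chains, so take this as the shape of the idea, not a deployable design.

    import hashlib, hmac, secrets

    publisher_key = secrets.token_bytes(32)   # held by the content publisher

    def sign(content: bytes) -> str:
        # Tag the content so any later modification is detectable.
        return hmac.new(publisher_key, content, hashlib.sha256).hexdigest()

    def verify(content: bytes, tag: str) -> bool:
        return hmac.compare_digest(sign(content), tag)

    original = b"frames of an authentic video"
    tag = sign(original)
    print(verify(original, tag))               # True: untampered
    print(verify(b"deepfaked frames", tag))    # False: content was altered

Signing doesn’t tell you whether content is true, only who published it and whether it has changed since. That’s still a big upgrade over trusting pixels.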


Closing: the courageous path

AI could make us infinitely better off, or we could foolishly endanger ourselves by ignoring alignment. There’s no second try once superintelligence is here. So the courageous path is to build for the world we want to live in and to treat alignment with at least as much urgency as capability. That means responsibility from entrepreneurs, engineers, regulators, and citizens, not just a hope that “we’ll figure it out later.”

Raise your hand if you want benefits. Raise your hand if you want safety. You don’t get to vote after the slides are made — but you do get a voice now. Use it.

Tags: Artificial Intelligence, Technology, Video
