Friday, February 27, 2026

Kejriwal Acquitted -- But What About the Institutions?



In Indian politics, sometimes verdicts are not delivered only in courtrooms. They are delivered inside us — in our memory, in our trust, in the silences we maintain.

When a Special Court in Delhi said that no sufficient evidence was found against Arvind Kejriwal, Manish Sisodia, and 23 others in the alleged liquor policy case, it was not merely a legal conclusion. It was a moment that forced millions to pause. For months, prime-time debates had thundered with certainty. The word “scam” echoed nightly, repeated so often that doubt itself began to feel unreasonable. And yet, the court said: the charge sheets ran into thousands of pages, but contained gaps that testimony could not bridge. Not even a prima facie case was established.

This is not just about two politicians. It is about institutional credibility.

Investigative agencies in any democracy wield enormous power. But their greatest asset is not power — it is public trust. When a court observes that the prosecution failed to even make an initial case, it raises uncomfortable questions about the process itself. Arrests that stretched for months. Bail repeatedly opposed. Supplementary charges added. A sitting Chief Minister jailed while in office — something unprecedented in independent India. And in the end: insufficient evidence.

What does that do to faith in institutions?

For nearly two years, television studios functioned as parallel courtrooms. Accusations were presented as conclusions. Political spokespersons delivered nightly verdicts. Viewers were told, with conviction, that corruption had been exposed. The sheer repetition created its own truth. “If so much is being said, something must be there.” That psychological shift is perhaps more powerful than any legal document.

Now the court has spoken. And its words are stark.

This is where the deeper democratic tension emerges. In principle, investigative agencies are autonomous. In practice, timing shapes perception. When arrests coincide with election cycles, when bail orders are countered immediately with new cases, when opposition leaders across states face similar patterns of scrutiny — citizens are bound to ask whether law is being applied neutrally or strategically.

Democracy does not collapse in dramatic explosions. It erodes quietly — through normalization. Through the idea that arrest equals guilt. Through the belief that prolonged investigation itself is proof of wrongdoing. Through media trials that exhaust public patience before courts even begin proceedings.

There is also an uncomfortable mirror here for the judiciary. If, after prolonged incarceration and extensive filings, a case cannot sustain even a preliminary threshold, then the process itself becomes part of the story. Justice delayed may be justice denied — but justice pursued without adequate basis is also damaging. Not only to individuals, but to institutions.

The larger question is not whether Kejriwal or Sisodia feel vindicated. The larger question is what this episode tells us about the health of investigative autonomy in India. When agencies appear aligned with political narratives, their credibility suffers — even in cases where genuine wrongdoing might exist elsewhere. And once credibility is eroded, rebuilding it is far harder than filing another charge sheet.

There is also a caution for media. Debate is not journalism. Repetition is not evidence. Volume is not verification. When narratives harden before proof, the public sphere becomes polarized long before truth has a chance to breathe.

This verdict may restore some faith in judicial independence. But it simultaneously exposes the fragility of investigative trust. A democracy survives not because courts occasionally correct excesses — but because institutions act with restraint before excess becomes routine.

The most troubling aspect of this episode is not that leaders were accused. In a democracy, scrutiny is necessary. The troubling aspect is the possibility that accusation itself becomes political currency — that the process becomes punishment.

When that happens, even acquittal does not fully undo the damage. Reputations are scarred. Public discourse is distorted. Citizens grow cynical. And cynicism is fertile ground for authoritarian impulses.

The court has said there was no sufficient evidence. That is a legal fact. But the political and institutional consequences will linger far beyond the judgment.

In the end, democracies are not tested by how loudly allegations are made. They are tested by how carefully power is exercised — and how honestly institutions examine themselves when they fail.

Tags: Ravish Kumar, Hindi, Video, Indian Politics



Thursday, February 26, 2026

Rebuilding Schools for the AI Age -- What If We Started From Scratch?



Let’s be honest: something isn’t working.

U.S. high school seniors are at historic lows in reading, math, and science proficiency. College tuition has exploded nearly 9x since the 1980s. And yet recent college graduates are now among the groups facing the longest spells of unemployment. At the same time, AI is reshaping entire industries in real time.

If this is the trajectory, we have to ask a hard question:

If we were going to redesign school from first principles for the AI age… what would we build?

That’s the question behind Alpha School, an education model that claims something almost heretical:

Kids can learn 2–10x faster.
Academics can be done in two hours a day.
And school should be something children love more than vacation.

Radical? Absolutely.
Necessary? Increasingly, yes.


The Core Problem: School Is Stuck in 1900

For 40 years, learning scientists have known something uncomfortable:
The “teacher in front of a classroom” model is not the most effective way to teach.

Research going back to Benjamin Bloom’s “Two Sigma Problem” showed that one-on-one mastery-based tutoring dramatically outperforms traditional classrooms. The issue wasn’t knowing what works. The issue was scale.

Until now.

AI has changed the equation.

Instead of one teacher delivering the same lesson to 30 students at the same pace, generative AI can create personalized lessons for every child—at exactly the right level of difficulty.

Not too easy (which breeds boredom).
Not too hard (which breeds disengagement).
Right in the “zone of proximal development.”

The result? Kids move faster because they’re no longer trapped in time-based progression. They advance when they master, not when the calendar flips.

That shift—from time-based to mastery-based—changes everything.
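To make the contrast concrete, here is a toy sketch of mastery-based progression. This is purely illustrative — Alpha's actual platform is proprietary, and the unit names, threshold, and scoring scheme below are invented for the example. The point is the structure: a student advances only after clearing a mastery bar, and the calendar plays no role.

```python
# Toy sketch of mastery-based progression (hypothetical; not Alpha's
# actual system). A student moves to the next unit only after some
# quiz attempt clears the mastery threshold.

MASTERY_THRESHOLD = 0.9  # assumed bar: 90% on a unit quiz

def advance(units, quiz_scores):
    """Return the units a student has completed, in order.

    `quiz_scores` maps unit name -> list of attempt scores (0.0-1.0).
    Progression stops at the first unit not yet mastered, however
    long the student has been enrolled.
    """
    completed = []
    for unit in units:
        attempts = quiz_scores.get(unit, [])
        if any(score >= MASTERY_THRESHOLD for score in attempts):
            completed.append(unit)
        else:
            break  # stuck here until mastery, not until the term ends
    return completed

# A fast learner clears three units; another repeats unit two.
fast = advance(["fractions", "decimals", "percents"],
               {"fractions": [0.95], "decimals": [0.92], "percents": [0.91]})
slow = advance(["fractions", "decimals", "percents"],
               {"fractions": [0.95], "decimals": [0.6, 0.7]})
print(fast)  # ['fractions', 'decimals', 'percents']
print(slow)  # ['fractions']
```

In a time-based system, both students would be pushed to "percents" when the semester says so; here, pace is an output of mastery, not an input.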


The First Radical Principle: Kids Must Love School

Here’s the most counterintuitive insight:

Alpha’s founders argue that love of school isn’t a luxury metric—it’s the foundation.

In most traditional systems, we accept that school is “supposed to be hard” or even unpleasant. Spinach. Necessary but joyless.

But in every other domain—companies, sports teams, creative studios—we obsess over building environments people want to show up to.

Why should school be different?

At Alpha, over 90% of students report loving school. Some reportedly choose school over vacation. That sounds absurd—until you see the structure.


The Two-Hour Academic Engine

Students complete core academics in just two focused hours per day.

Not Zoom lectures.
Not passive screen time.
Not ChatGPT cheating.

Instead:

  • Personalized AI-generated lessons

  • Continuous feedback loops

  • Screen monitoring that detects guessing, distraction, or rushing

  • Real-time coaching toward productive learning behaviors

The platform measures “XP” (focused minutes of learning).
Waste time? The system flags it.
Engage deeply? You move forward faster.

This isn’t more screen addiction—it’s high-efficiency learning.

And when academics compress into two hours?

You get time back.


The Afternoon: Where Life Actually Happens

This is where the model gets interesting.

If kids aren’t chained to desks all day, what do they do?

They build.

They launch businesses.
They produce musicals.
They create apps.
They design marketing campaigns.
They climb rock walls.
They study financial literacy.
They practice public speaking.

Instead of treating “life skills” as electives, they become core curriculum.

Leadership.
Entrepreneurship.
Teamwork.
Adaptability.
Grit.

In an AI-saturated future, those may matter more than memorizing formulas.


A Different Role for Adults

Traditional schools ask teachers to do five jobs:

  • Be subject expert

  • Design curriculum

  • Deliver lectures

  • Grade work

  • Motivate students

That’s an impossible spec.

Alpha removes curriculum design and grading from humans and gives those to software. What remains?

Mentorship.

Their adults—called “guides”—focus on:

  • High standards

  • Emotional support

  • Weekly one-on-one coaching

  • Motivational accountability

The average student-teacher one-on-one time in traditional schools? About 22 seconds per day.

At Alpha? 30 minutes per week, guaranteed.

That changes relational depth.


The Dilemma: Screens, Skepticism, and Scaling

This model raises real concerns.

What about screen time?
Parents are pushing back against devices. Alpha’s response is: there’s good screen time (focused learning that frees the rest of the day) and bad screen time (passive consumption). The difference is intentionality and structure.

What about cheating with AI?
They block chatbots during core academics. AI is used to generate lessons, not to do the work for students.

What about scale?
This is harder.

Private micro-schools can innovate quickly. Public systems move slowly. Charter applications get rejected. Regulatory inertia is real.

And then there’s cost. Advanced AI tutoring at scale isn’t cheap yet. Token usage alone runs high. But like all technology, costs fall.

The real bottleneck may not be tech.

It may be parents.

As one founder put it:
“The biggest impediment to education reform isn’t students. It’s what parents believe school is supposed to look like.”

That’s a cultural challenge, not a technical one.


Why This Matters Now

AI isn’t coming. It’s here.

Kids entering kindergarten today will graduate into a world where:

  • Entire job categories disappear.

  • New ones emerge overnight.

  • Knowledge is ubiquitous.

  • Adaptability becomes survival.

In that world, memorization declines in value.
Learning how to learn becomes everything.

What Alpha is attempting isn’t just a school redesign. It’s a philosophical shift:

From passive consumption → active creation.
From time-based sorting → mastery-based growth.
From compliance → agency.
From pessimism → optimism.


The Bigger Question

Is Alpha the final answer?

Probably not.

But it may be an early prototype of what’s coming.

The more important takeaway isn’t whether every detail works perfectly. It’s that someone is asking the right question:

If we were building education for the AI age from scratch… would we build what we have now?

If the honest answer is no, then experimentation isn’t optional.

It’s urgent.

And perhaps the boldest idea of all is this:

In a future where AI gives children superpowers,
the real job of school isn’t to slow them down.

It’s to help them aim higher.

Tags: Artificial Intelligence, EdTech

The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma (by Mustafa Suleyman) -- Book Summary


View Other Book Summaries on AI    Download Book

PROLOGUE
This is how an AI sees it.

QUESTION: What does the coming wave of technology mean for humanity? 

In the annals of human history, there are moments that stand out as turning points, where the fate of humanity hangs in the balance. The discovery of fire, the invention of the wheel, the harnessing of electricity—all of these were moments that transformed human civilization, altering the course of history forever.

And now we stand at the brink of another such moment as we face the rise of a coming wave of technology that includes both advanced AI and biotechnology. Never before have we witnessed technologies with such transformative potential, promising to reshape our world in ways that are both awe-inspiring and daunting.

On the one hand, the potential benefits of these technologies are vast and profound. With AI, we could unlock the secrets of the universe, cure diseases that have long eluded us, and create new forms of art and culture that stretch the bounds of imagination. With biotechnology, we could engineer life to tackle diseases and transform agriculture, creating a world that is healthier and more sustainable.

But on the other hand, the potential dangers of these technologies are equally vast and profound. With AI, we could create systems that are beyond our control and find ourselves at the mercy of algorithms that we don’t understand. With biotechnology, we could manipulate the very building blocks of life, potentially creating unintended consequences for both individuals and entire ecosystems.

As we stand at this turning point, we are faced with a choice—a choice between a future of unparalleled possibility and a future of unimaginable peril. The fate of humanity hangs in the balance, and the decisions we make in the coming years and decades will determine whether we rise to the challenge of these technologies or fall victim to their dangers.

But in this moment of uncertainty, one thing is certain: the age of advanced technology is upon us, and we must be ready to face its challenges head-on.

~~~

THE ABOVE WAS WRITTEN by an AI. The rest is not, although it soon could be. This is what’s coming.
  1. The Wave We Can’t Stop (Ch1)
  2. The Default Setting of Technology Is To Spread (Ch2)
  3. The Real Problem Isn’t Invention. It’s Containment (Ch3)
  4. When the Machine Starts Thinking Back (Ch4)
  5. When Life Becomes a Design Problem (Ch5)
  6. When the Wave Becomes a Superwave (Ch6)
  7. The Four Forces Making This Wave Different (Ch7)
  8. Why We Won't Say No (Ch8)
  9. The Grand Bargain (Ch9)
  10. Fragility Amplifiers (Ch10)
  11. The Future of Nations (Ch11)
  12. The Dilemma (Ch12)
  13. Containment Must Be Possible (Ch13)
  14. Ten Steps Towards Containment (Ch14)

Ten Steps Towards Containment (Ch14)



Containment Is Not a Wall. It’s an Architecture.

When people hear the word containment, they often imagine something simple: a box. Lock the technology inside. Cut the wire. Build a wall. Problem solved.

Chapter 14 argues something far more nuanced — and far more realistic.

Containment, in the age of AI and synthetic biology, is not a single barrier. It’s a layered system. A set of concentric circles. An onion, built layer by layer, where no single ring is sufficient — but together, they might hold.

The central thesis is clear: if we want to navigate the coming technological wave without collapse or dystopia, we need a deliberate, multi-level containment strategy that combines technical safeguards, oversight, economic redesign, government reform, and international cooperation. None of these alone will work. Together, they just might.

The chapter begins close to the code — at the innermost circle: technical safety. The author makes an important point here. AI systems once produced blatantly biased and racist outputs. Through reinforcement learning from human feedback and sustained engineering effort, they improved. Not perfectly — but meaningfully. The lesson? Technical problems can be mitigated through focused work.

But the scale of effort is wildly mismatched. Tens of thousands build frontier AI. Only a few hundred work on safety. Compared to the risks, safety research is marginal. The author calls for an “Apollo program” for AI safety — a national-scale mobilization. Safety must become foundational design, not a patch applied after launch.

From there, the argument expands outward.

Audits are the next layer. Trust requires verification. You cannot control what you cannot see. Red teams, adversarial testing, incident databases, third-party oversight — these are not bureaucratic formalities but essential instruments of power. Knowledge is control. Without structured, enforceable auditing mechanisms, safety becomes performative.

Yet audits require time. And time is the scarcest resource.

Which brings us to one of the chapter’s most strategically sharp ideas: choke points. Advanced semiconductors, rare earth minerals, high-end chip fabrication plants — the technological wave rests on surprisingly narrow foundations. The U.S. export controls on advanced chips to China demonstrate something uncomfortable but important: technological acceleration can be slowed. Not stopped. Slowed.

And slowing matters. Time buys space for safety research. For governance. For regulation. For institutional reform. The next five years, the author suggests, may be a narrow window when such leverage still exists.

But the chapter does not place responsibility solely on states.

It turns sharply toward technologists and corporations themselves.

Builders cannot hide behind inevitability. “Technology will happen anyway” is not an ethical defense. Critics, too, are challenged. Shouting from the sidelines is insufficient. If containment is to work, critics must build. They must enter the arena and shape incentives from within.

This leads to one of the most difficult tensions in the chapter: profit versus purpose. Corporate structures today are optimized for shareholder return. They are not designed for containment. Experiments with ethics boards, benefit corporations, and hybrid governance models show promise — but are fragile. The gravitational pull of simple profit structures remains powerful.

Beyond business lies government — and here the tone becomes urgent. States are operating “blind in a hurricane.” They lack in-house technical capacity. They outsource expertise. They regulate reactively. To survive the coming wave, governments must rebuild internal technical competence, license frontier systems, rethink taxation (especially the shift from labor to capital), and create institutions equal to exponential change.

Finally, the outermost circle: alliances and treaties. Blinding laser weapons were banned. Nuclear proliferation was constrained. CFCs were phased out. International coordination is difficult — but not impossible. The implication is unmistakable: AI and biotech demand similar ambition.

Across all these layers runs a central dilemma. Move too slowly, and we risk catastrophic failures. Move without restraint, and we risk concentrated, unaccountable power. Containment is not about freezing progress. It is about steering it.

Why does this matter today? Because incentives are currently outpacing guardrails. Innovation compounds exponentially; governance evolves incrementally. Without structural redesign — across safety, audits, business, state, and alliances — the imbalance grows.

The chapter ends not in alarmism, but in sober resolve. Containment is not a single lever. It is architecture. It is design. It is coordination across disciplines and institutions that rarely move in sync.

The wave is coming. The question is whether we build the walls too late — or build the scaffolding in time.

From Chapter 14 of the book: 'The Coming Wave' by Mustafa Suleyman and Michael Bhaskar

Containment must be possible (Ch13)



What if the only way out of the technological dilemma is to admit that there is no easy way out?

Chapter 13 begins with a quiet but important shift in tone. After mapping the risks of the coming wave — AI, synthetic biology, robotics, autonomy — the author turns to the practical question: what would containment actually look like? Not in slogans. Not in panel discussions. In reality.

And the first uncomfortable truth is this: regulation alone is not enough.

Whenever technology feels overwhelming, the reflex answer is “regulate it.” It sounds responsible. Mature. Sensible. We’ve regulated cars, planes, medicines — why not AI? But the chapter dismantles this comforting instinct. Regulation moves slowly. Technology evolves weekly. Politicians operate inside news cycles; researchers operate inside exponential curves. By the time legislation catches up, the landscape has already shifted.

The Ring doorbell example captures this dynamic perfectly. A seemingly simple product — a camera on your front door — quietly reshaped norms around privacy and surveillance before regulators even realized what had happened. Multiply that by AI models, synthetic biology tools, and autonomous systems, and the lag becomes existential.

The chapter introduces a powerful phrase: the price of scattered insights is failure. Today’s debates about algorithmic bias, drone warfare, bio-risk, or economic displacement are fragmented. Each silo treats its problem as distinct. But they are manifestations of the same underlying wave — asymmetry, hyper-evolution, omni-use, and autonomy. Without a unified goal, efforts remain ad hoc and reactive.

That unifying goal, the author argues, must be containment.

Containment is not a magic box that seals dangerous technology away. It is a system of guardrails — technical, cultural, regulatory — strong enough to prevent runaway catastrophe while allowing progress to continue. Think less “ban everything” and more “keep humanity in the driver’s seat.”

The dilemma, though, is brutal. Nations are locked in strategic competition. Every country wants to lead in AI and biotech — for pride, for security, for prosperity. Yet they also fear losing control over those same technologies. Advantage and safety pull in opposite directions. Slow down too much and you fall behind. Move too fast and you court disaster.

The EU’s AI Act is presented as one of the most ambitious attempts at containment so far — risk tiers, oversight for high-risk systems, prohibitions for unacceptable ones. Yet even this flagship effort reveals the limits of legislation. Critics say it overreaches. Others say it’s too weak. Some argue it chills innovation; others that it protects incumbents. This is what regulating a general-purpose technology looks like: messy, contested, incomplete.

And general-purpose technologies are precisely the problem. A nuclear weapon is specific. A computer is omni-use. The more uses a technology has, the harder it becomes to contain.

From Chapter 13 of the book: 'The Coming Wave' by Mustafa Suleyman and Michael Bhaskar

The Dilemma (Ch12)



What if the greatest threat of the twenty-first century isn’t technology itself — but the trap it sets for us?

Chapter 12 confronts that trap head-on. It begins with a sobering reminder: human history is a history of catastrophe. Plagues wiped out a third of populations. World wars consumed millions. Nuclear weapons gave us the power to end civilization in minutes. Catastrophe isn’t theoretical. It’s precedent.

But the coming wave of AI, synthetic biology, robotics, and quantum computing expands both the scale of risk and the number of pathways to disaster. The central thesis of this chapter is stark: we are entering an era where uncontained technology makes global catastrophe more likely than ever — yet the most effective methods of containment threaten to produce dystopia. Between catastrophe and authoritarian control lies the defining dilemma of our age.

The author walks through plausible disaster scenarios not to indulge in science fiction, but to illustrate amplification. Drone swarms equipped with facial recognition. Engineered pathogens released deliberately or by accident. AI systems autonomously escalating military conflict. Deepfake-triggered riots cascading into civil breakdown. These are not wild fantasies. They are extrapolations of capabilities already emerging.

Crucially, the risk isn’t limited to rogue superintelligence. While the “paperclip maximizer” thought experiment gets attention, the author is more concerned about near-term amplification: AI in the hands of existing bad actors, fragile states, or simply fallible institutions. AI doesn’t need to become malevolent to be dangerous. It only needs to scale human intentions — good or bad — with unprecedented speed and reach.

And then there’s biology. A pathogen with modest transmissibility but high fatality could kill at a scale dwarfing COVID. A novel virus combining moderate spread with extreme lethality could result in over a billion deaths in months. These aren’t predictions. They’re reminders of what’s now technically possible.

The most chilling example is historical: Aum Shinrikyo, the Japanese doomsday cult that pursued chemical and biological weapons, eventually releasing sarin in the Tokyo subway. Their ambition outpaced their competence. But as destructive tools become cheaper, more automated, and more precise, competence becomes less of a barrier. “We are playing Russian roulette,” the chapter concludes bluntly.

So what’s the response?

Here the dilemma sharpens. To prevent catastrophe, governments may feel compelled to impose sweeping surveillance and control — monitoring every lab, server, line of code, and strand of synthesized DNA. Technology has penetrated society so deeply that containing it means watching everything.

The author calls this the “dystopian turn.” In the face of disaster, the public appetite for security may override resistance to surveillance. COVID lockdowns showed how quickly societies accept extreme measures when fear spikes. An engineered pandemic or AI-triggered calamity could accelerate demands for something close to total oversight — an AI-enabled panopticon.

But this, too, is failure. A world of total monitoring, centralized coercion, and eroded liberties may prevent some risks while destroying the freedoms that make civilization worth preserving. Catastrophe on one side. Dystopia on the other.

Could we escape by halting technological progress altogether?

The chapter dismisses that as a dangerous illusion. Modern civilization rests on continual innovation. Economic growth, rising living standards, healthcare advances, climate mitigation — all depend on new technologies. Without them, demographic decline, resource scarcity, and environmental stress would trigger stagnation or collapse. A moratorium on progress would not deliver safety; it would produce another kind of catastrophe.

This is why the author frames our predicament not as a simple trade-off but as an existential bind. Technology is both salvation and threat. It is the engine of prosperity and the vector of ruin. As John von Neumann once asked: Can we survive technology?

What makes this chapter powerful is its refusal to settle for easy answers. It resists techno-optimism and techno-doomism alike. The overwhelming majority of technological use will be beneficial. Yet edge cases matter when the edge is planetary.

Why does this matter now? Because the coming decade will see AI deployed into energy grids, financial systems, defense networks, and biotech labs. Once distributed widely, safety must be maintained everywhere, not just in well-run labs or responsible firms. One failure is enough.

We are, the author suggests, Homo technologicus — a species defined by its tools. The contradiction in his tone is deliberate. Technology has made life longer, richer, healthier. But its trajectory may not remain net positive by default.

The ultimate question is not whether risk exists. It’s whether containment is possible without sacrificing liberty. If catastrophe pushes us toward dystopia, and stagnation leads to decline, then navigating between these poles becomes the defining political and moral challenge of the century.

The dilemma isn’t abstract. It’s tightening. And there are no good options — only trade-offs we must learn to manage, before events manage them for us.

From Chapter 12 of the book: 'The Coming Wave' by Mustafa Suleyman and Michael Bhaskar