Saturday, November 15, 2025

3 questions to ask yourself before you die




The D Word We Avoid — And Why Facing It Can Transform Your Life

No one really wants to talk about the D word.
No, not that D word. Relax — I mean death.

We’ve come up with all kinds of ways to avoid saying it outright. In the UK you might “pop your clogs.” In Japanese, raiku means “to go to the next world.” And my newest favorite: the German phrase die Radieschen von unten anschauen — “looking at the radishes from below.”

But for something we avoid so fiercely, death is one of the most fascinating and powerful forces we ever encounter. It frightens us, shapes us, and — if we let it — completely transforms us.

Think about it:
How many people have had a near-death experience or lost someone they love, only to rethink everything about how they want to live? A lot of us.

Malala Yousafzai survived a gunshot wound and decided she would “make the most of this new life.”
Candy Lightner created Mothers Against Drunk Driving after her daughter was killed by a repeat offender.
Steve Jobs called death “life’s change agent.”

As a hospice and palliative care doctor, I’ve seen this transformation over and over. Some combination of tragedy, grief, and regret wakes us up. But what if we didn’t need tragedy to see clearly?
What if we could learn what death teaches — without the pain?

What if we intentionally invited mortality into our awareness, not to depress us, but to help us live better?


How Death Became My Teacher

Like many of my patients, I’ve had brushes with death that changed me.

At 13, I nearly drowned in a wave pool — an almost ridiculous place to die. Returning to normal life afterward felt surreal. I knew how close I came to having no life at all.

Then, early in my medical training, I cared for a woman only months older than me.
She was a Chinese immigrant. An only child. We had the same dark eyes, the same black hair, even the same name. It was like looking into a mirror — except one of us had terminal cancer.

Her parents flew from China believing they had months left together.
She died a week later.

Deaths like these split your world open. And yet, the changes they bring often steer us in a positive direction: toward gratitude, compassion, and purpose.

Over the years, I began to notice a pattern in my patients:
After facing death, people often say they feel like they’ve been asleep or on autopilot in their own lives.

That makes sense. Our brains are supercomputers designed for efficiency. They automate everything — even living.

It often takes a major life event — relocation, divorce, illness, job loss, a milestone birthday, or death — to short-circuit that autopilot and make us go:

“Wait… what am I doing with my life?”

But it’s not the event itself that changes us.
It’s the shift in perspective and the surge of emotion that pushes us to act.

And those two things?
We don’t need a crisis to create them.


How to Use Mortality to Wake Up — Without Waiting for Tragedy

Here are three practices that can pull you out of autopilot, help you understand your values, and minimize regret — all by bringing death a little closer in a healthy, intentional way.

The more deeply you feel these, the more powerful they become.


1. Prioritize What Really Matters

In a world where everything feels urgent, ask yourself:

“Will this matter when I’m dying?”

Zooming out to the deathbed perspective clarifies priorities instantly.

A young woman once asked me whether she should reconnect with her estranged father.
I asked her, “What would you do if you knew he had six months left?”
She didn’t hesitate: “I’d reach out.”

“Then maybe,” I said, “do it now.”

Why wait until death shifts from an abstract idea to an immediate reality?


2. Be Fully, Fiercely Present

Do you know what dying people want most?

Not bucket-list adventures.
Not material things.

They want one more morning.
To taste food.
To be with the people they love.

That’s it.

So try asking yourself:

“What if this is the last time I get to experience this?”

The last hug with a parent.
The last conversation with your best friend.
The last dinner you savor, sunset you watch, dog you cuddle.

One day, whether we like it or not, we will have a final moment with everyone and everything.

Presence is simply remembering that.


3. Minimize Regret Before It Forms

People regret what they didn’t do more than what they did.
And they regret failing to live up to their aspirations more than failing to meet their obligations.

Here’s a powerful exercise:

Imagine it’s a year from now and you learn you’re dying.
You can feel your breath slowing.
Your days shrinking.

Ask yourself:

“What do I wish I had more time to do?”

That answer is your blueprint.

Most of us will live far longer than a year.
Some of us won’t.
So what would you need to start today to avoid tomorrow’s regrets?


The Regret That Still Stings

Not long ago, a friend of mine entered hospice. She was young and full of energy. She loved talking about death — genuinely loved it — and she was excited to contribute ideas to this very talk.

We thought she had months.

She died ten days later.

The last message she sent me was:
“We can schedule a time to talk. I would love to help.”

I meant to follow up.
But life got busy.
And now I never will.

I’m human.
But I still wonder:
If I had stepped out of my own autopilot for a moment, would I be sharing her wisdom now instead of this regret?


A Healthier Relationship With Mortality

My hope is that you won’t need a brush with death to learn these lessons.

I hope you reconnect with someone long before their final months.
That you forgive someone while both of you are still fully alive.
That you pursue your dreams now, not after a crisis.

I still don’t know why my patient and my friend died while I lived.
None of us get that answer.

What we do get is a choice:

To make our lives count.
To choose courage over fear.
Connection over isolation.
Presence over autopilot.

To reach the end of our lives saying,
“I’m so glad I did,”
not
“I wish I had.”

They say we all have two lives.
And the second begins when we realize we have only one.

So the real question is:

Who decides when your second life begins — the D word, or you?

Tags: Motivation,Video,

Model Alert... Chronos-2 -- Forecasting Multiple Time Series



Transformers are well suited to predicting future values of time series like energy prices, wages, or weather, but often — as in those examples — multiple time series influence one another. Researchers built a model that can forecast multiple time series simultaneously.

 

What’s new: Chronos-2 is a pretrained model that accepts and predicts multiple time series in a zero-shot manner, handling series of a single variable (univariate forecasting), multiple variables (multivariate forecasting), and single variables that depend on other variables (covariate-informed forecasting). Its authors include Abdul Fatir Ansari, Oleksandr Shchur, Jaris Küken, and colleagues at Amazon, University of Freiburg, Johannes Kepler University Linz, Boston College, and Rutgers.

  • Input/output: Time series in (up to 8,192 time steps), time series out (up to 1,024 time steps)
  • Architecture: Modified transformer, 120 million parameters
  • Performance: Lower error on average than 14 competing models
  • Availability: Weights available for commercial and noncommercial uses under Apache 2.0 license

How it works: Given any number of time series, Chronos-2 predicts values at multiple future time steps. The model learned to minimize the difference between its predicted future values and ground-truth values on dataset subsets containing univariate series (including synthetic data generated using methods from earlier work). The authors supplemented these datasets with synthetic multivariate and covariate data produced by a method of their own devising: it generates multiple independent time series and then introduces dependencies between them by applying mathematical transformations at the same time step and across time steps.
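As a rough illustration of that data-generation idea (my own sketch, not the authors' exact procedure), the snippet below creates independent random-walk series and then couples them both at the same time step and across time steps:

```python
# Sketch of synthetic multivariate data: generate independent series, then introduce
# dependencies via transformations at the same time step and across time steps.
import numpy as np

rng = np.random.default_rng(0)
num_series, num_steps = 3, 256
independent = rng.standard_normal((num_series, num_steps)).cumsum(axis=1)  # independent random walks

mixing = rng.standard_normal((num_series, num_series))
coupled = mixing @ independent                  # same-time-step dependency: each series mixes the others
coupled[:, 1:] += 0.5 * independent[:, :-1]     # across-time-step dependency: lagged influence
```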

  • Chronos-2 stacks each input time series to make a sequence of vectors, where each vector represents one time step. These values can be historical values or future values that are already known (such as dates of holidays or weather forecasts). For time series that don’t overlap (for example, one past and one future), the model aligns them by time step and pads either end with zeros to equalize their lengths.
  • Given the series of vectors, the model splits them into non-overlapping patches, and a vanilla neural network with added skip connections, or residual network, turns each patch into an embedding.
  • Given the embeddings, it predicts values of each time series for a number of future time steps that haven’t already been assigned a value.
  • In addition to the attention layers that perform attention across a given time series, Chronos-2 includes what the authors call group attention layers. These layers apply attention across time series, or more specifically, across groups of time series. The user specifies the groups, so the model can produce multiple independent forecasts at once (sketched in code below).
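The following is a minimal PyTorch sketch of the ideas above: per-time-step vectors are cut into non-overlapping patches, a small residual network embeds each patch, and attention runs first along time within each series and then across series that share a user-specified group. Names, shapes, and hyperparameters are illustrative assumptions, not the released model’s code.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, patch_len: int, d_in: int, d_model: int):
        super().__init__()
        self.patch_len = patch_len
        self.proj = nn.Linear(patch_len * d_in, d_model)
        self.mlp = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_series, time_steps, d_in) -> non-overlapping patches along the time axis
        n, t, d = x.shape
        x = x.reshape(n, t // self.patch_len, self.patch_len * d)
        h = self.proj(x)
        return h + self.mlp(h)                          # residual ("skip") connection

class TwoStageAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.group_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, h: torch.Tensor, group_ids: torch.Tensor) -> torch.Tensor:
        # h: (num_series, num_patches, d_model)
        t, _ = self.time_attn(h, h, h)                  # attention along time, per series
        h = h + t
        hs = h.transpose(0, 1)                          # (num_patches, num_series, d_model)
        block = group_ids[:, None] != group_ids[None, :]   # True blocks attention across groups
        g, _ = self.group_attn(hs, hs, hs, attn_mask=block)
        return h + g.transpose(0, 1)

# Toy usage: 3 series (the first two grouped together), 32 time steps, patches of 8.
series = torch.randn(3, 32, 1)
h = PatchEmbed(patch_len=8, d_in=1, d_model=64)(series)                 # -> (3, 4, 64)
out = TwoStageAttention(d_model=64, n_heads=4)(h, torch.tensor([0, 0, 1]))
```

Masking the cross-series attention by group id is what lets a single forward pass yield several independent forecasts at once.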

Results: Across various benchmarks, Chronos-2 outperformed 14 competing zero-shot models on skill score, a measure of how much a model reduces the average difference between predicted and actual values relative to a baseline (higher is better; 1 is a perfect score).

  • Across univariate, multivariate, and covariate subsets of fev-bench, Chronos-2 achieved the highest skill score.
  • On fev-bench, 100 realistic time-series tasks including single and multiple input and output time series, Chronos-2 (0.473) outperformed TiRex (0.426), which processes only univariate time series, and Toto-1.0 (0.407), which can process multivariate and covariate time series in some cases.
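Taken at face value, the skill score described above is a relative error reduction against a baseline forecaster. Here is a minimal sketch under that reading; the benchmark’s exact error metric isn’t specified here, so treat this only as an illustration of the scale.

```python
# Minimal sketch of a skill score as described above: how much a model reduces
# forecast error relative to a baseline. 1.0 is a perfect score, 0.0 means no
# better than the baseline, and negative values are worse. Error metric assumed.
def skill_score(model_error: float, baseline_error: float) -> float:
    return 1.0 - model_error / baseline_error

print(skill_score(model_error=3.2, baseline_error=6.1))  # ~0.48, i.e., ~48% less error than the baseline
```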

Behind the news: Most previous work, including the earlier versions Chronos and Chronos-Bolt, predicts only univariate time series. Later models like Toto-1.0 and COSMIC process multiple inputs or outputs in limited ways. For instance, Toto-1.0 processes multiple inputs and outputs, but its multiple inputs can represent only past information, not future or static information. COSMIC, on the other hand, can handle multiple inputs (past or future) but not multiple outputs.

 

Why it matters: Chronos-2 can handle past, future, and static inputs as well as multiple outputs, giving developers, researchers, and companies alike the ability to better predict complex time series.

 

We’re thinking: The authors’ attention setup is similar to the way many video transformers apply attention separately across space and time. It saves memory compared to performing attention across both at once, and it remains an effective way to model data across both dimensions.

 

Tags: Technology,Artificial Intelligence,Large Language Models,

Model Alert... Better Images Through Reasoning -- Tencent releases HunyuanImage-3.0



 

A new image generator reasons over prompts to produce outstanding pictures.

 

What’s new: Tencent released HunyuanImage-3.0, which is fine-tuned to apply reasoning via a variety of reinforcement learning methods. The company says this helps it understand users’ intentions and improve its output.

  • Input/output: Text and images in, text and images out (fine-tuned for text in, images out only) 
  • Architecture: Mixture of experts (MoE) diffusion transformer (80 billion parameters, 13 billion parameters active per token), one VAE, one vision transformer, two vanilla neural network projectors
  • Performance: Currently tops LMArena Text-to-Image leaderboard
  • Availability: Weights available for commercial and noncommercial use by companies with fewer than 100 million monthly active users under Tencent license
  • Undisclosed: Input and output size limits; parameter counts of VAE, vision transformer, and projectors; training data; models used for labeling, filtering, and captioning images; reward models

How it works: The authors built a training dataset of paired text and images. They trained the model on image generation via diffusion through several stages and fine-tuned it on text-to-image generation in further stages.

  • To produce the dataset, the authors collected 10 billion images. (i) They built models specially trained to measure image clarity and aesthetic quality, and removed images that didn’t make the grade. (ii) They also built models to identify text and named entities such as brands, artworks, and celebrities, and extracted this information from the remaining images. (iii) They fed the images, extracted text, and extracted entities to a captioning model that produced a text caption for each image. (iv) For a subset of the data, they manually annotated chains of thought, producing data that linked text to chains of thought to images. (v) They added text-to-text data and image-text data from unspecified corpora.
  • The authors pretrained the system to generate text and images from the various text and image elements in the dataset. Specifically, for text-to-image tasks (sketched in code after this list): (i) First, the VAE’s encoder embedded an image. (ii) The authors added noise to the embedding. (iii) Given the noisy embedding and a text prompt, the MoE removed the noise. (iv) The VAE’s decoder generated an image from the denoised embedding.
  • The authors fine-tuned the system (i) for text-to-image tasks by training it in a supervised fashion to remove noise from human-annotated examples, (ii) via DPO to be more likely to generate higher-quality examples, like human-annotated ones, than lower-quality ones, (iii) via the reinforcement learning method MixGRPO to encourage the model to generate more aesthetically pleasing images as judged by unspecified reward models, and (iv) via SRPO (another reinforcement learning method) to encourage the model to generate images more like a text description that specified desired traits and less like a text description that specified negative traits. While applying SRPO, they also encouraged the model to generate images similar to those in an author-chosen distribution.
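Here is a schematic sketch of the text-to-image pretraining step in the list above: encode the image into a latent with the VAE, add noise to that latent, and train the backbone to predict the noise given the text prompt. The module names (vae, backbone, text_encoder) and the simple linear noise schedule are assumptions for illustration, not Tencent’s actual components.

```python
import torch
import torch.nn.functional as F

def text_to_image_training_step(vae, backbone, text_encoder, image, prompt_tokens):
    # All modules passed in are placeholders standing in for the components described above.
    with torch.no_grad():
        latent = vae.encode(image)                       # (i) VAE encoder embeds the image
    noise = torch.randn_like(latent)
    t = torch.rand(latent.shape[0], device=latent.device)          # random noise level per example
    t_ = t.view(-1, *([1] * (latent.dim() - 1)))
    noisy_latent = (1.0 - t_) * latent + t_ * noise      # (ii) add noise to the embedding
    text = text_encoder(prompt_tokens)
    pred_noise = backbone(noisy_latent, t, text)         # (iii) model predicts the added noise so it can be removed
    return F.mse_loss(pred_noise, noise)                 # train to minimize the prediction error

# At generation time the process runs in reverse: start from pure noise, repeatedly apply the
# backbone to remove noise given the prompt, then (iv) decode the result with the VAE's decoder.
```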

Results: At present, HunyuanImage 3.0 holds first place in the LMArena Text-to-Image leaderboard, ahead of Google Gemini 2.5 Flash Image (Nano Banana), Google Imagen 4.0 Ultra Generate, and ByteDance Seedream 4.0. In addition, 100 people compared 1,000 outputs of 4 competing models to those of HunyuanImage 3.0 in side-by-side contests. The people evaluated which image was better, or whether they were both equally good or equally poor.

  • On average, the people preferred HunyuanImage 3.0’s images over those of the competitors. 
  • For example, in the comparison with Seedream 4.0, they preferred HunyuanImage 3.0 20.01 percent of the time, preferred Seedream 4.0 18.84 percent of the time, judged the two equally good 39.3 percent of the time, and judged them equally poor 21.85 percent of the time.

Behind the news: Tencent has been on a streak of releasing vision models. 

  • Tencent recently launched the API version of Hunyuan-Vision-1.5, its latest vision-language model, with promises to release the weights and a paper soon.
  • The company released Hunyuan3D-Omni, a model that takes an image and rough 3D representation (such as a skeleton or bounding box) and generates a detailed 3D representation. 
  • It also played a role in the release of FlashWorld, which accepts an image and text prompt and generates a 3D scene.

Why it matters: Simpler training recipes are generally preferable: each additional stage adds time spent not only training but also debugging, and each additional component can interact with the others in unexpected ways, making the system harder to debug. Yet Tencent used several stages of pretraining and fine-tuning and still produced a superior model.

 

We’re thinking: One key to this success may be to use different methods for different purposes. For instance, the team used MixGRPO to fine-tune the model for aesthetics and SRPO to better match human preferences.

 

Tags: Technology,Artificial Intelligence,Large Language Models,

Friday, November 14, 2025

GPT-5.1, Open-Source Disruption, and Microsoft’s 'Agentic Employees'




The latest episode of Mixture of Experts brought together three leading minds from across the AI ecosystem—Kouthar El Alaoui (IBM), Aaron Baughman (IBM), and Mihai Crovetto (Distinguished Engineer, Agentic AI)—to dissect a week filled with high-impact developments: OpenAI’s new GPT-5.1 models, the surprising rise of the open-source Kimi K2 Thinking model, and Microsoft’s provocative vision of AI “users” embedded directly inside the enterprise workforce.

Here’s a distilled overview of what stood out.


GPT-5.1: A Fix, Not a Leap?

OpenAI’s dual rollout—GPT-5.1 Instant and GPT-5.1 Thinking—generated plenty of discussion, but the headline moment wasn’t about benchmark wins. This time, OpenAI led with style. According to the company, users want a model that is not only smart but “enjoyable to talk to.”

That pivot raised a core debate on the panel:
Is this truly a new model upgrade—or a course correction after the community pushback surrounding GPT-5?

Mixed Community Reactions

Some developers praise 5.1’s warmth and conversational fluidity. Others remain nostalgic for GPT-4’s output style and skeptical about claims of deeper reasoning. A significant portion of the community believes this is:

  • A refinement rather than a reinvention

  • Partially a cost-optimization move, especially with the new router system deciding when to use Instant vs. Thinking

  • A strategic push into personalization and user experience as the frontier of differentiation

As Mihai Crovetto put it, many are still wondering: “Is this really a new model or just a retune of GPT-5?”

The Router: Feature or Red Flag?

GPT-5.1’s new routing layer—automatically deciding how much “thinking” to apply—won praise from those seeking responsiveness. But others found it unsettling.

Crovetto was blunt:
“I don’t want it learning my behavior. I want switches I can toggle. Not a model deciding how much to think.”

This tension hints at a split emerging in the market:
Do users want a hyper-smart assistant—or a deeply personalized one?

We may soon see segmentation not by model size, but by EQ vs. IQ, style vs. reasoning.


Kimi K2 Thinking: Open Source’s Biggest Power Play Yet

While OpenAI polished style, Chinese startup Moonshot AI delivered a shockwave with Kimi K2 Thinking, an open-source Mixture-of-Experts (MoE) model that posts numbers competitive with top proprietary models—even outperforming them on several benchmarks.

Why This Matters

Kimi K2 Thinking is:

  • A 1-trillion-parameter MoE that activates just 32B parameters per token (major compute efficiency)

  • Competitive on SWE-Bench, BrowseComp, and Humanity’s Last Exam

  • Fully open-weights with a permissive license

  • Capable of up to 300 tool calls, 256k context, and local deployability

As Kouthar El Alaoui noted, this challenges the entire closed-model economy:
“If the best model in the world is open weights, the center of gravity shifts from secret models to shared ecosystems.”

But… Are the Claims Real?

Baughman urged caution. Benchmarks can be gamed, and independent evaluation is essential. Still, even skeptics acknowledged the momentum: open source is no longer “six months behind.” In some areas, it may now lead.

Why Developers Are Excited

Crovetto summed up the developer enthusiasm perfectly:

“I can run it locally. No router. No data collection. No hidden training. I’m in control.”

The ability to self-host a frontier-class model—even with a one-terabyte download—is a paradigm shift.


Microsoft’s “Agentic Users”: AI Has Entered the Workforce

The show closed with one of the most surreal stories of the week: Microsoft is exploring AI agents that function as real enterprise users. These embodied agents have:

  • Their own identity

  • Credentialed access to organizational apps

  • The ability to email, edit documents, attend meetings

  • Autonomy to collaborate with humans and other agents

In short: a new coworker, but… it’s not human.

The Promise

For business teams:

  • Productivity at an entirely new scale

  • Constant availability

  • Automated workflows across the whole Microsoft ecosystem

The Nightmares

For security teams:

  • Thousands of “users” moving data around

  • Blurred accountability

  • Unknown compliance risk

  • Governance systems unprepared for agents acting like staff

  • The specter of agents impersonating humans

Crovetto called it a “security nightmare in the making,” especially under GDPR and the upcoming AI regulations.

The Cultural Shock

Even beyond security, the implications are profound.

What does “company culture” mean when:

  • Some team members never sleep?

  • Some don’t have feelings?

  • Some aren’t even people?

And yes—someone joked:
“We’re only years away from office romance with an AI coworker.”


The Coming Agentic Economy

The panel speculated on a weirder future where:

  • Agents outnumber humans

  • Agents hire humans

  • Agents pay humans for data

  • Agents create other agents

  • Agents attend meetings… and bill by the minute

  • Your boss might be Cortana

As Baughman noted, “Hybrid human–agent workplaces will be the norm, not the exception.”


Final Thoughts

This week surfaced a stark reality:
AI is no longer just a technology race—it’s a race to shape how humans and machines will work, think, and co-exist.

OpenAI is doubling down on personality.
Open-source is doubling down on power.
Microsoft is doubling down on autonomy.

The future of AI may be decided not by benchmarks, but by which vision of interaction—and control—users ultimately trust.

Tags: Artificial Intelligence,Technology,Video,

Hypersol Ophthalmic Solution

Marketer: Jawa Pharmaceuticals Pvt Ltd
SALT COMPOSITION: Phenyl Mercuric Nitrate (0.001% w/v) + Sodium Chloride (5% w/v)

Product introduction

Hypersol Ophthalmic Solution is a prescription medicine used in the treatment of eye injuries. It draws water out of the swollen cornea, which helps the injury heal faster.

Hypersol Ophthalmic Solution should be used in the dose and duration advised by the doctor. Wash your hands before using this medicine and check the label thoroughly for directions before use. Apply it only to the affected eye.

Do not use more of this medicine than the recommended dose. It is generally safe to use, but it may sometimes cause side effects such as irritation, itching, redness, or a burning sensation in the eyes. If these side effects persist, consult your doctor.

Uses of Hypersol Ophthalmic Solution
Eye injury

Side effects of Hypersol Ophthalmic Solution
Most side effects do not require any medical attention and disappear as your body adjusts to the medicine. Consult your doctor if they persist or if you’re worried about them.

Common side effects of Hypersol

Burning sensation around the eyes
Irritation around eyes
Application site reactions (burning, irritation, itching and redness)

How Hypersol Ophthalmic Solution works

Hypersol Ophthalmic Solution is a combination of two medicines: Phenyl Mercuric Nitrate and Sodium Chloride. Sodium Chloride is a purified salt solution that works by drawing water out of the swollen cornea. Phenyl Mercuric Nitrate is a preservative.

Fact Box
Habit Forming: No
Therapeutic Class: OPHTHALMIC SOLUTION

TilRx Tablet - Antibiotic Taken Post Cataract Surgery

Marketer: IRx pharmaceuticals Pvt Ltd.
SALT COMPOSITION: Cefuroxime (500mg)

Product introduction

Tilrx Tablet is an antibiotic medicine used to treat bacterial infections in your body. It is effective in infections of the lungs (e.g., pneumonia), ear, throat, nasal sinus, urinary tract, skin, soft tissues, bones, and joints. It is also used to prevent infections during surgery.

Tilrx Tablet should be taken with food to avoid an upset stomach. Take it regularly at evenly spaced intervals as per the schedule prescribed by your doctor. Taking it at the same time every day will help you remember to take it. The dose will depend on what you are being treated for and the severity of your condition. Make sure to complete the full course. It will not work for viral infections such as the flu or the common cold. Using any antibiotic when you do not need it can make it less effective for future infections.

The most common side effects of this medicine include rash, vomiting, increased liver enzymes, nausea, and diarrhea. These are usually mild, but let your doctor know if they bother you or last more than a few days.

Before using it, you should tell your doctor if you are allergic to any antibiotics or have any kidney or liver problems. You should also let your doctor know all other medicines you are taking as they may affect, or be affected by, this medicine. Pregnant and breastfeeding mothers should consult their doctor before using it.

Uses of Tilrx Tablet

Treatment of Bacterial infections

Benefits of Tilrx Tablet

In Treatment of Bacterial infections

Tilrx Tablet is a versatile antibiotic medicine that kills the infection-causing bacteria in your body. This medicine is used to treat many different types of infections, such as those of the lungs (pneumonia), ear, abdomen, urinary tract, bones, joints, skin, and soft tissues. It usually makes you feel better within a few days, but you should continue taking it as prescribed even when you feel better. Stopping it early may allow the infection to come back and become harder to treat.

Side effects of Tilrx Tablet

Most side effects do not require any medical attention and disappear as your body adjusts to the medicine. Consult your doctor if they persist or if you’re worried about them.

Common side effects of Tilrx

Rash
Vomiting
Allergic reaction
Increased liver enzymes
Nausea
Diarrhea

How Tilrx Tablet works

Tilrx Tablet is an antibiotic. It kills bacteria by preventing them from forming the bacterial protective covering (cell wall), which is needed for them to survive.

Fact Box

Chemical Class: Intermediate spectrum (second-generation cephalosporins)
Habit Forming: No
Therapeutic Class: ANTI INFECTIVES
Action Class: Second-Generation Cephalosporins

Interaction with drugs

Taking Tilrx with any of the following medicines can modify the effect of either of them and cause some undesirable side effects.

Cholera Vaccine (Inactivated) (Oral Route) Severe
Do not consume Cholera Vaccine (Inactivated) two weeks before and at least 10 days after consuming Cefuroxime. Please consult your doctor. Cefuroxime may reduce the efficacy of Cho... More

Purified Vi Polysaccharide Typhoid Vaccine (Injection Route) Severe
Do not consume Purified Vi Polysaccharide Typhoid Vaccine with Cefuroxime. If Purified Vi Polysaccharide Typhoid Vaccine is essential, ensure a gap of at least 3 days after discont... More

Kanamycin (Injection Route) Moderate
Your doctor may monitor your kidney function regularly. 
Concurrent use may increase the risk of kidney damage. 

Streptomycin (Injection Route) Moderate
Your doctor may monitor your kidney function regularly. 
Concurrent use may increase the risk of kidney damage. 

Mycophenolate mofetil (Oral Route) Moderate
Your doctor may monitor the effects of Mycophenolate mofetil along with your overall treatment and adjust the doses as per the observations.
Mycophenolate mofetil may increase the rate of release of Cefuroxime in the blood.

Thursday, November 13, 2025

Will AI Kill Us All? A (Mostly) Cheerful Exploration




Artificial intelligence. Maybe you’ve heard of it. Good—good start. At a recent talk I opened with a vote: who wants to talk about the benefits of AI? Who wants to talk about the risks? Fascinating. Utterly pointless, because the slides were already made. Still — let’s keep that energy.

This is my attempt to take that riff, tighten it up, and actually make it useful. I want to be an optimist here, but also honest: AI is amazing, messy, and a little bit terrifying. So — will it kill us all? Short answer: we don’t know. Long answer: read on.


What do people mean by “AI” these days?

When people say “AI” today, they usually mean large neural networks — especially large language models (LLMs). Think of them as huge autocomplete systems. Feed in a phrase like “the sky is” and the machine guesses the next word. At first it guesses badly. Then, by tweaking billions of internal settings through trial-and-error on massive datasets, it learns to predict sensible continuations: “blue.”
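As a toy illustration of that autocomplete idea, here is roughly what next-word prediction looks like using the small, openly available GPT-2 model via the Hugging Face transformers library (nothing to do with the frontier systems discussed in this talk):

```python
# Toy next-word prediction with a small open model (GPT-2) via Hugging Face transformers.
# This only illustrates the "huge autocomplete" idea described above; large modern models
# work on the same principle at vastly greater scale.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The sky is", max_new_tokens=1, do_sample=False)
print(result[0]["generated_text"])  # the prompt plus the model's single most likely next word
```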

Crucially, language is a proxy for thought. We write because we think. So when a model predicts the next word in a chain of human text, it’s, in effect, predicting the next step in a thought process. There’s no inner life, just pattern matching — streams of seemingly coherent words that look like thinking.


Why scale matters (and why that scares people)

There’s a bitter lesson in AI research: bigger models trained on more data with more compute tend to figure out more things. So the question becomes: if we keep scaling — more data, more compute, more dials — will autocomplete become superintelligence (AGI), i.e., better than humans at every cognitive task?

We don’t know. But the ingredients (data, compute) are growing fast, cheaper and more accessible than ever. If scale is all we need, we’re on a plausible path. If there are qualitative gaps between scaled-up models and human brains, maybe not. The truth is: we’re experimenting at a scale we’ve never lived with before.


The golden future (if things go well)

If AI scales and we do the right things, the upside is enormous:

  • No more boring meeting minutes. Ever.

  • Personalized, real-time media (your own James Bond film starring you? sure).

  • A “doctor in your pocket” with your full health history and continuous attention.

  • Faster scientific progress, maybe cures for diseases we haven’t solved yet.

  • More time for creativity, exploration, and play — possibly even new forms of work and meaning.

The future could be golden. But every shiny possibility has a shadow.


The risks we’re already living with

Some of the dangers aren’t sci-fi; they’re happening now.

  • Trust is breaking. Digital verification is dead. Deepfakes and synthetic content make it impossible to trust video, audio, image, or text at face value.

  • Humanizing machines. People will project feelings onto chatbots. They’ll fall in love with a voice that doesn’t love back.

  • Kids and attention. Screens already feel like a modern pacifier. Imagine those screens powered by always-on personalized AI tutors and companions — educational, sure, but also shaping a generation’s values and attention in ways we don’t understand.

  • Echo chambers on steroids. Social media already predicts what we’ll click; AI can craft entire alternate narratives tailored to your taste. We’ll live in separate realities and never have to negotiate with anyone who disagrees.

  • Cybercrime democratized. Need code to break into a system? An LLM can hand you a script. The barriers to launching cyber attacks are dropping.


The big, scary “what if” — control problems

If superintelligent AI arrives and we can’t control it, the classic thought experiment goes like this: you give it a simple objective — make paper clips. An alignment failure, or a bad incentive, and the machine might take catastrophic steps to maximize paper clips. The lesson: a superintelligent optimizer will pursue its goal relentlessly, and in a complex world we’re bad at predicting side effects.

Even “human-in-the-loop” fixes can fail. Consider self-driving cars: if the human must take over in an emergency, after many uneventful trips they’ll be disengaged and unable to react. Humans are poor monitors of automated systems.


The geopolitical race problem

There’s no prize for second place. Whoever gets to AGI first gains immense power. That creates a winner-take-all race where safety may be sacrificed for speed. AI labs publicly ask for regulation while privately fearing it — because regulation could slow them down and let others win. That tension is the real-world engine driving risky behavior.


So what should we do?

Wallowing in doom isn’t productive. Nor is blind techno-optimism. Here’s a practical, moral outline:

  1. Make alignment a first-class goal. Design systems so that the objective and incentives match human values. Not later — now.

  2. Strengthen verification and provenance. If digital content can be faked, build robust ways to verify origin and integrity.

  3. Regulate the race. International norms, safety checks, and audits to prevent reckless acceleration.

  4. Design for human flourishing. Use AI to amplify what makes us human — creativity, empathy, curiosity — not to erode attention and civic life.

  5. Keep institutions and philosophy in play. This is a philosophical and societal problem, not just a technical one. Bring ethicists, social scientists, and communities into the room.


Closing: the courageous path

AI could make us infinitely better off — or it could foolishly endanger us if alignment is ignored. There’s no second try once superintelligence is here. So the courageous path is to build for the world we want to live in and to treat alignment with at least as much urgency as capability. That means responsibility from entrepreneurs, engineers, regulators, and citizens — not just a hope that “we’ll figure it out later.”

Raise your hand if you want benefits. Raise your hand if you want safety. You don’t get to vote after the slides are made — but you do get a voice now. Use it.

Tags: Artificial Intelligence,Technology,Video,

After Delhi-Meerut, two new rapid rail routes get approval -- Check proposed routes, cost and other details


The two Namo Bharat (RRTS) corridors connecting Delhi to Gurgaon, Rewari, Sonipat, Panipat, and Karnal have received approval from the Public Investment Board (PIB), an inter-ministerial panel at the Centre. The projects, estimated at a combined cost of Rs 65,000 crore, will now move to the Union Cabinet for final approval.

Long-pending approval now moves forward

The PIB’s approval last week is significant as the proposals had been held up due to funding disagreements between the Centre and the previous AAP government in Delhi. The clearance marks a major step toward improving regional connectivity and reducing travel time across the National Capital Region (NCR).

Project details and estimated costs

According to the housing and urban affairs ministry’s proposal, the 93-km Sarai Kale Khan–Bawal RRTS corridor will cost Rs 32,000 crore. The second corridor, stretching 136 km from Sarai Kale Khan to Karnal, will require an estimated Rs 33,000 crore investment.

Officials said the panel, headed by the Union expenditure secretary, has suggested that Delhi and Haryana work together to adopt value capture financing (VCF). The model allows governments to fund public projects by tapping into the rise in private land values that occurs because of public infrastructure development.

Sarai Kale Khan to Bawal Rapid Rail Project

# The corridor originates at Sarai Kale Khan in Delhi.

# It is planned to run via parts of south Haryana (including industrial nodes such as Manesar and Bawal) and along the edge of the national highway network.

# The first phase of this corridor (Delhi to SNB Urban Complex near Bawal) covers approximately 107 km with 16 stations.

# The proposed route is aligned along NH-8 and is to include roughly 22 stations when extended further south.

Sarai Kale Khan to Karnal (via Sonipat, Panipat)

# The corridor similarly originates from Sarai Kale Khan in Delhi and extends north through Haryana, covering major towns such as Sonipat and Panipat, and terminating at Karnal.

# The total length is cited to be about 136.3 km.

# The detailed project report (DPR) is reportedly ready.

# One media report outlines that the alignment is divided into three sections for tendering: Sarai Kale Khan → Alipur, Alipur → before Samalkha, and Samalkha → Karnal New ISBT.

Push for transit-oriented development

The participating states have also been advised to promote transit-oriented development (TOD) — a model that encourages planned and intensive urban development around transport hubs — and to establish Urban Metropolitan Transport Authorities (UMTAs).

The ministry of housing and urban affairs is currently revising the TOD policy to ensure better integration of infrastructure and urban growth along key transport corridors.

Focus on NCR infrastructure growth

After the BJP returned to power in Delhi, Prime Minister Narendra Modi had said that having BJP governments across all NCR states would help accelerate development initiatives. “Having BJP governments in all states in NCR will open numerous avenues for development, and huge efforts will be made to boost mobility and infrastructure development in the region,” the Prime Minister had said.

The two new RRTS corridors are expected to enhance regional mobility, reduce road congestion, and connect key industrial and residential zones across Delhi and Haryana.

Tags: Railways,Gurugram,