Sunday, November 16, 2025

Active Smile Mouth Ulcer Tablet

Index of Oral/Mouth Medicines
SALT COMPOSITION
Riboflavin   : 10mg
Folic Acid   : 1.5mg
Niacinamide  : 100mg
Lactic acid bacillus spores : 60 million spores

RIBOFLAVIN

Riboflavin tablets (vitamin B2) are primarily used to treat and prevent riboflavin deficiency. They also serve as a prophylactic treatment for migraine headaches in some adults. 
Primary Uses
Treating and Preventing Deficiency: Riboflavin tablets address low levels of vitamin B2, which can be caused by conditions such as alcoholism, liver disease, certain intestinal disorders, prolonged infection, or an inadequate diet.
Managing Deficiency Symptoms: Deficiency symptoms can include:
Cracks and sores at the corners of the mouth and on the tongue.
Itchy, red eyes and sensitivity to light.
Skin inflammation (seborrheic dermatitis).
Hair loss and anemia. 
Other Uses
Migraine Prevention: High-dose riboflavin (typically 400 mg daily) is used to reduce the frequency and severity of migraine headaches. A beneficial effect may not appear until several months of consistent use.
Overall Health Maintenance: Vitamin B2 is essential for general health. It works with other B vitamins to:
Help the body convert carbohydrates, proteins, and fats into energy.
Support normal cell growth and function.
Aid in the production of red blood cells.
Maintain the health of the skin, eyes, nervous system, and digestive tract.
Keratoconus: It may also be beneficial in the treatment of keratoconus (a condition affecting the cornea of the eye). 
Important Considerations
Consult a Healthcare Professional: While many riboflavin supplements are available over-the-counter, it's important to consult a doctor to determine the appropriate dosage and to rule out other underlying medical conditions.
Side Effects: A common, harmless side effect is bright yellow or orange urine.
Dietary Sources: Most people who eat a normal, balanced diet get sufficient vitamin B2 from foods like milk, cheese, eggs, meat, and fortified cereals. Supplements are generally for those with diagnosed deficiencies or specific medical needs. 

FOLIC ACID

Folic acid tablets are a synthetic form of vitamin B9 (folate) used to prevent and treat low folate levels, support vital bodily functions like red blood cell formation, and significantly reduce the risk of birth defects during pregnancy. 

Niacinamide

Niacinamide (a form of vitamin B3) tablets are primarily used to prevent and treat vitamin B3 deficiency (pellagra). They are also used for various skin conditions and other health issues, though scientific evidence for many of these uses is limited. 

Lactic acid bacillus

Lactic acid bacillus tablets, which contain a type of beneficial probiotic bacteria, are primarily used to treat and prevent various forms of diarrhea and to restore the natural balance of gut microflora, particularly after antibiotic use. 

China Just Shifted the AI Race Overnight — Here’s What Their New Quantum Photonic Chip Really Means


See All Articles on AI


Every few months, something lands in the tech world that feels less like a product announcement and more like a plot twist. This week, that twist came from China — and it’s big. Not the “new benchmark on a leaderboard” big. More like “the entire computational landscape just tilted a few degrees” big.

A Chinese research consortium has unveiled a quantum photonic chip that doesn’t live in a cryogenic lab, doesn’t need a giant cooling rig, and doesn’t cost millions to operate. Instead, it’s already running inside real data centers. And yes, they’re claiming speed boosts that sound fake: up to 1,000× faster than Nvidia GPUs for certain AI workloads.

Let’s break down what’s actually going on — without the hype, but without downplaying what might be one of the most important hardware breakthroughs in years.


The Chip That Shouldn’t Exist Yet

The chip comes from ChipX (Chip Hub for Integrated Photonics Explorer) and their partner Turing Quantum, and they're calling it the world’s first scalable, industrial-grade quantum photonic chip.

The jaw-drop comes from its physical footprint:

  • Built on a 6-inch thin-film lithium niobate wafer

  • Hosts 1,000+ optical components on a single slice

  • Designed as a monolithic photonic quantum-classical hybrid system

This isn’t a fridge-sized quantum machine. This is something you can slot into a rack, deploy in weeks, and — allegedly — scale like a GPU cluster.


Why Photonic Chips Matter

Photonic chips don’t use electrons for computation. They use light. That alone solves three massive problems choking modern data centers:

1. Heat

Photons generate no resistive heat. Electrons do. This is why current GPUs require industrial cooling just to stay operational.

2. Power Consumption

Moving electrons across silicon costs energy; moving photons costs far less. AI labs now worry as much about electricity bills as about the chips themselves, and photonics flips that equation.

3. Data Movement Bottlenecks

Light travels faster, loses less energy, and carries more information per signal than electrons. As models grow, data movement, not computation, has become the biggest bottleneck.

This is why photonics has become the hardware moonshot for the entire AI industry.


About That “1,000× Faster Than Nvidia” Claim…

This is the headline number — and the number everyone is side-eyeing.

The figure comes from reporting in the South China Morning Post and statements from the chip’s developers. They say:

  • The chip accelerates specific complex problem-solving tasks by 1,000×

  • It is already deployed in aerospace, biomedicine, and finance

  • It significantly outperforms classical GPUs in workloads suited for quantum-inspired parallelism and photon-level low-latency processing

Realistically:

  • No, it’s not 1,000× faster across the board.

  • Yes, certain AI workloads could see speed-ups that big.

  • Yes, this tracks with what photonics promises.

For the first time, the industry is seeing those promises implemented at industrial scale, not in academic prototypes.


The Scalability Breakthrough

One of the biggest issues with quantum machines is deployment complexity. They’re huge, fragile, and require months of setup.

ChipX claims they reduced that:

  • From 6 months to 2 weeks for deployment

  • Thanks to monolithic integration and simplified architecture

If true, this is a massive reduction in operational friction.


A Manufacturing Leap Nobody Expected This Early

China didn’t just build a chip.

They built a pilot production line capable of producing:

  • 12,000 wafers per year

  • ~350 chips per wafer

By quantum-classical photonics standards, that's enormous output (roughly 4.2 million chips a year), and they openly admit production is still the bottleneck.

More importantly, China now has:

✔ chip design
✔ wafer fabrication
✔ photonic packaging
✔ testing
✔ system-level integration

All in a single closed ecosystem.

Meanwhile:

  • Europe's leading foundry (SMART Photonics) is at 4-inch wafers

  • PsiQuantum is still adapting 300 mm silicon photonics

  • Most Western photonics efforts remain prototype-only

This is the first sign that China may be commercially ahead in a slice of quantum hardware.


The “Million Qubit” Bombshell

The researchers say their architecture can scale to 1,000,000 photonic qubits via networked chips.

Important nuance:

  • These are photonic qubits, not superconducting qubits

  • They do not enable universal quantum computing

  • They do enable massive parallelism in AI-adjacent tasks

Think of it like GPU clusters — but quantum-inspired and photonic.


Why This Matters Across Industries

Early deployment sectors — aerospace, biomedicine, finance — are exactly the ones that hit computational walls first.

Photonic quantum accelerators help with:

  • molecular modeling

  • cryptography & decryption

  • risk computation

  • algorithmic trading

  • pattern recognition

  • large-scale simulation

And because the hardware doesn’t need cryogenic cooling, it can slip into existing enterprise racks with minimal retrofitting.


The Most Important Detail: Co-Packaging

This chip uses new co-packaging tech that places photonic and electronic components side-by-side on the same wafer.

This reduces:

  • latency

  • noise

  • heat

  • energy loss

And increases:

  • bandwidth

  • throughput

  • stability

This is the same design philosophy behind cutting-edge classical accelerators — just executed with a fundamentally superior medium (light).


Global Context: Everyone Is Betting on Different Quantum Horses

Right now:

  • Google & IBM → superconducting qubits

  • PsiQuantum → silicon photonics

  • Europe → indium phosphide

  • China → thin-film lithium niobate

For the first time, China isn’t making a bet.

They’re shipping.

And they’re calling their device “the first industrial-grade optical quantum computer.”

That framing alone signals a mental shift:

This is no longer a lab experiment.
It’s a product.


What Happens Next

If the claims hold under independent verification (the big “if”), then we’re entering a hybrid hardware era:

  • Photons handle the ultra-heavy AI math

  • Electrons handle everything else

Nvidia won’t disappear — but they might no longer be the only viable platform for frontier AI.

If photonic accelerators can deliver even 10% of their claimed efficiency, they become impossible to ignore.

If they deliver 100%, the AI world gets rewritten.


Final Thoughts

This announcement didn’t just spark excitement — it sparked recalculation. Hardware determines the ceiling of what AI can become, and photonics has always been seen as the “maybe someday” breakthrough.

Suddenly, “someday” looks like now.

If you’re into deep breakdowns of AI, hardware, and the future of computation, stick around — there’s a lot more coming.

References

👉 Chip and deployment story via South China Morning Post: https://www.scmp.com/news/china/science/article/3332604/quantum-chip-gives-chinas-ai-data-centres-1000-fold-speed-boost-award-winning-team

👉 Technical and production-line details via The Quantum Insider: https://thequantuminsider.com/2025/11/15/chinas-new-photonic-quantum-chip-promises-1000-fold-gains-for-complex-computing-tasks/

👉 Background on China’s photonic chip manufacturing ramp-up: https://thequantuminsider.com/2025/06/13/china-ramps-up-photonic-chip-production-with-eye-on-ai-and-quantum-computing/

👉 Context on China positioning itself in photonic chips and future technologies: https://merics.org/en/comment/china-positions-itself-lead-future-technologies-photonic-chips

👉 China boosts photonic chip production in bid to overtake rivals: https://manufacturing.asia/building-engineering/in-focus/china-boosts-photonic-chip-production-in-bid-overtake-western-rivals

Tags: Artificial Intelligence,Technology,

Saturday, November 15, 2025

3 questions to ask yourself before you die


See All on Motivation


The D Word We Avoid — And Why Facing It Can Transform Your Life

No one really wants to talk about the D word.
No, not that D word. Relax — I mean death.

We’ve come up with all kinds of ways to avoid saying it outright. In the UK you might “pop your clogs.” In Japanese, raiku means “to go to the next world.” And my newest favorite: the German phrase unter den Radieschen schauen — “looking at the radishes from below.”

But for something we avoid so fiercely, death is one of the most fascinating and powerful forces we ever encounter. It frightens us, shapes us, and — if we let it — completely transforms us.

Think about it:
How many people have had a near-death experience or lost someone they love, only to rethink everything about how they want to live? A lot of us.

Malala Yousafzai survived a gunshot wound and decided she would “make the most of this new life.”
Candy Lightner created Mothers Against Drunk Driving after her daughter was killed by a repeat offender.
Steve Jobs called death “life’s change agent.”

As a hospice and palliative care doctor, I’ve seen this transformation over and over. Some combination of tragedy, grief, and regret wakes us up. But what if we didn’t need tragedy to see clearly?
What if we could learn what death teaches — without the pain?

What if we intentionally invited mortality into our awareness, not to depress us, but to help us live better?


How Death Became My Teacher

Like many of my patients, I’ve had brushes with death that changed me.

At 13, I nearly drowned in a wave pool — an almost ridiculous place to die. Returning to normal life afterward felt surreal. I knew how close I came to having no life at all.

Then, early in my medical training, I cared for a woman only months older than me.
She was a Chinese immigrant. An only child. We had the same dark eyes, the same black hair, even the same name. It was like looking into a mirror — except one of us had terminal cancer.

Her parents flew from China believing they had months left together.
She died a week later.

Deaths like these split your world open. And yet, the changes they bring often steer us in a positive direction: toward gratitude, compassion, and purpose.

Over the years, I began to notice a pattern in my patients:
After facing death, people often say they feel like they’ve been asleep or on autopilot in their own lives.

That makes sense. Our brains are supercomputers designed for efficiency. They automate everything — even living.

It often takes a major life event — relocation, divorce, illness, job loss, a milestone birthday, or death — to short-circuit that autopilot and make us go:

“Wait… what am I doing with my life?”

But it’s not the event itself that changes us.
It’s the shift in perspective and the surge of emotion that pushes us to act.

And those two things?
We don’t need a crisis to create them.


How to Use Mortality to Wake Up — Without Waiting for Tragedy

Here are three practices that can pull you out of autopilot, help you understand your values, and minimize regret — all by bringing death a little closer in a healthy, intentional way.

The more deeply you feel these, the more powerful they become.


1. Prioritize What Really Matters

In a world where everything feels urgent, ask yourself:

“Will this matter when I’m dying?”

Zooming out to the deathbed perspective clarifies priorities instantly.

A young woman once asked me whether she should reconnect with her estranged father.
I asked her, “What would you do if you knew he had six months left?”
She didn’t hesitate: “I’d reach out.”

“Then maybe,” I said, “do it now.”

Why wait until death shifts from an abstract idea to an immediate reality?


2. Be Fully, Fiercely Present

Do you know what dying people want most?

Not bucket-list adventures.
Not material things.

They want one more morning.
To taste food.
To be with the people they love.

That’s it.

So try asking yourself:

“What if this is the last time I get to experience this?”

The last hug with a parent.
The last conversation with your best friend.
The last dinner you savor, sunset you watch, dog you cuddle.

One day, whether we like it or not, we will have a final moment with everyone and everything.

Presence is simply remembering that.


3. Minimize Regret Before It Forms

People regret what they didn’t do more than what they did.
And they regret not living up to their aspirations more than their obligations.

Here’s a powerful exercise:

Imagine it’s a year from now and you learn you’re dying.
You can feel your breath slowing.
Your days shrinking.

Ask yourself:

“What do I wish I had more time to do?”

That answer is your blueprint.

Most of us will live far longer than a year.
Some of us won’t.
So what would you need to start today to avoid tomorrow’s regrets?


The Regret That Still Stings

Not long ago, a friend of mine entered hospice. She was young and full of energy. She loved talking about death — genuinely loved it — and she was excited to contribute ideas to this very talk.

We thought she had months.

She died ten days later.

The last message she sent me was:
“We can schedule a time to talk. I would love to help.”

I meant to follow up.
But life got busy.
And now I never will.

I’m human.
But I still wonder:
If I had stepped out of my own autopilot for a moment, would I be sharing her wisdom now instead of this regret?


A Healthier Relationship With Mortality

My hope is that you won’t need a brush with death to learn these lessons.

I hope you reconnect with someone long before their final months.
That you forgive someone while both of you are still fully alive.
That you pursue your dreams now, not after a crisis.

I still don’t know why my patient and my friend died while I lived.
None of us get that answer.

What we do get is a choice:

To make our lives count.
To choose courage over fear.
Connection over isolation.
Presence over autopilot.

To reach the end of our lives saying,
“I’m so glad I did,”
not
“I wish I had.”

They say we all have two lives.
And the second begins when we realize we have only one.

So the real question is:

Who decides when your second life begins — the D word, or you?

Tags: Motivation,Video,

Model Alert... Chronos-2 -- Forecasting Multiple Time Series


See All Articles on AI

Transformers are well suited to predicting future values of time series like energy prices, wages, or weather, but, as in those examples, multiple time series often influence one another. Researchers built a model that can forecast multiple time series simultaneously.

 

What's new: Chronos-2 is a pretrained model that accepts and predicts multiple time series in a zero-shot manner, covering forecasts of a single variable (univariate forecasting), multiple variables (multivariate forecasting), and variables that depend on other variables (covariate-informed forecasting). Its authors include Abdul Fatir Ansari, Oleksandr Shchur, Jaris Küken, and colleagues at Amazon, University of Freiburg, Johannes Kepler University Linz, Boston College, and Rutgers.

  • Input/output: Time series in (up to 8,192 time steps), time series out (up to 1,024 time steps)
  • Architecture: Modified transformer, 120 million parameters
  • Performance: Lower error on average than 14 competing models
  • Availability: Weights available for commercial and noncommercial use under Apache 2.0 license

How it works: Given any number of time series, Chronos-2 predicts values at multiple future time steps. It learned to minimize the difference between its predicted future values and ground-truth values on subsets of datasets that contain univariate series (including synthetic data generated using methods from earlier work). The authors supplemented these datasets with synthetic multivariate and covariate data produced using a method they devised: it generates multiple independent time series, then introduces dependencies among them by applying mathematical transformations at the same time step and across time steps.

  • Chronos-2 stacks the input time series to make a series of vectors, where each vector represents one time step. These values can be historical values or future values that are already known (such as dates of holidays or weather forecasts). For non-overlapping time series (for example, one past and one future), the model aligns the series by time step and pads either end with zeros to equalize their lengths.
  • Given the series of vectors, the model splits them into non-overlapping patches, and a vanilla neural network with added skip connections (a residual network) turns each patch into an embedding.
  • Given the embeddings, it predicts values of each time series at future time steps that haven't already been assigned a value.
  • In addition to attention layers that attend across a given time series, Chronos-2 includes what the authors call group attention layers, which attend across time series, or more specifically, across groups of time series. The user specifies the groups, so the model can produce multiple independent forecasts at once. (A toy sketch of this pipeline follows this list.)
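
For intuition, here is a minimal, hypothetical sketch of that pipeline in PyTorch. It is not the authors' code: the patch size, embedding width, head count, and module names are assumptions, and the real model adds padding and alignment logic, masking, probabilistic outputs, and much more.

  import torch
  import torch.nn as nn

  PATCH, DIM = 8, 64

  class ResidualPatchEmbed(nn.Module):
      # Split each series into non-overlapping patches and embed each patch
      # with a small network that has a skip connection (a residual network).
      def __init__(self):
          super().__init__()
          self.proj = nn.Linear(PATCH, DIM)
          self.mlp = nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU(), nn.Linear(DIM, DIM))

      def forward(self, x):                          # x: (series, time_steps)
          s, t = x.shape
          patches = x.reshape(s, t // PATCH, PATCH)  # non-overlapping patches
          h = self.proj(patches)
          return h + self.mlp(h)                     # skip connection

  class TimeAndGroupAttention(nn.Module):
      # Alternate attention along each series with "group" attention across
      # the series of one user-specified group.
      def __init__(self):
          super().__init__()
          self.time_attn = nn.MultiheadAttention(DIM, 4, batch_first=True)
          self.group_attn = nn.MultiheadAttention(DIM, 4, batch_first=True)

      def forward(self, h):                          # h: (series, patches, DIM)
          t, _ = self.time_attn(h, h, h)             # attend across time
          h = h + t
          g = h.transpose(0, 1)                      # (patches, series, DIM)
          g, _ = self.group_attn(g, g, g)            # attend across series
          return h + g.transpose(0, 1)

  embed, attn = ResidualPatchEmbed(), TimeAndGroupAttention()
  series = torch.randn(3, 64)        # one group: 3 related series, 64 steps each
  print(attn(embed(series)).shape)   # torch.Size([3, 8, 64])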

Results: Across various benchmarks, Chronos-2 outperformed 14 competing zero-shot models according to skill score, a measure of how much a model reduces the average difference between predicted and actual values relative to a baseline (higher is better, and one is a perfect score; a worked example follows the bullets below).

  • Across univariate, multivariate, and covariate subsets of fev-bench, Chronos-2 achieved the highest skill score.
  • On fev-bench, a benchmark of 100 realistic time-series tasks that include single and multiple input and output series, Chronos-2 (0.473) outperformed TiRex (0.426), which processes only univariate time series, and Toto-1.0 (0.407), which can process multivariate and covariate time series in some cases.
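
To make the skill score concrete, here is a tiny illustration consistent with the definition above. The error values are invented, and fev-bench's exact error metric and aggregation are not reproduced here.

  def skill_score(model_error: float, baseline_error: float) -> float:
      # 1.0 is a perfect score; 0.0 means no improvement over the baseline.
      return 1.0 - model_error / baseline_error

  print(skill_score(0.527, 1.0))  # 0.473, i.e., the model removes ~47% of the baseline's error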

Behind the news: Most previous models, including the earlier versions Chronos and Chronos-Bolt, predict only univariate time series. Later models like Toto-1.0 and COSMIC process multiple inputs or outputs in limited ways. For instance, Toto-1.0 processes multiple inputs and outputs, but the multiple inputs can represent only past information, not future or static information. COSMIC, on the other hand, can handle multiple inputs (past or future) but not multiple outputs.

 

Why it matters: Chronos-2 can handle past, future, and static inputs as well as multiple outputs, giving developers, researchers, and companies alike the ability to better predict complex time series.

 

We're thinking: The authors' attention setup is similar to the way many video transformers apply attention separately across space and time. It saves memory compared to performing attention across both at once, and it remains an effective way to model relationships across both.

 

Tags: Technology,Artificial Intelligence,Large Language Models,

Model Alert... Better Images Through Reasoning -- Tencent releases HunyuanImage-3.0


See All Articles on AI

 

A new image generator reasons over prompts to produce outstanding pictures.

 

What’s new: Tencent released HunyuanImage-3.0, which is fine-tuned to apply reasoning via a variety of reinforcement learning methods. The company says this helps it understand users’ intentions and improve its output.

  • Input/output: Text and images in, text and images out (fine-tuned for text in, images out only) 
  • Architecture: Mixture of experts (MoE) diffusion transformer (80 billion parameters, 13 billion parameters active per token), one VAE, one vision transformer, two vanilla neural network projectors
  • Performance: Currently tops LMArena Text-to-Image leaderboard
  • Availability: Weights available for commercial and noncommercial use by companies with fewer than 100 million monthly active users under Tencent license
  • Undisclosed: Input and output size limits; parameter counts of VAE, vision transformer, and projectors; training data; models used for labeling, filtering, and captioning images; reward models

How it works: The authors built a training dataset of paired text and images. They trained the model on image generation via diffusion through several stages and fine-tuned it on text-to-image generation in further stages.

  • To produce the dataset, the authors collected 10 billion images. (i) They built models specially trained to measure image clarity and aesthetic quality, and removed images that didn't make the grade. (ii) They also built models to identify text and named entities such as brands, artworks, and celebrities, and extracted this information from the remaining images. (iii) They fed the images, extracted text, and extracted entities to a captioning model that produced a text caption for each image. (iv) For a subset of the data, they manually annotated chains of thought, producing data that linked text to chains of thought to images. (v) They added text-to-text data and image-text data from unspecified corpora.
  • The authors pretrained the system to generate text and images from the various text and image elements in the dataset. Specifically, for text-to-image tasks (the loop is sketched in code after this list): (i) First, the VAE's encoder embedded an image. (ii) The authors added noise to the embedding. (iii) Given the noisy embedding and a text prompt, the MoE removed the noise. (iv) The VAE's decoder generated an image from the embedding with noise removed.
  • The authors fine-tuned the system (i) for text-to-image tasks by training it in a supervised fashion to remove noise from human-annotated examples, (ii) via DPO to be more likely to generate higher-quality examples, like human-annotated ones, than lower-quality ones (this objective is also sketched below), (iii) via the reinforcement learning method MixGRPO to encourage the model to generate more aesthetically pleasing images as judged by unspecified reward models, and (iv) via SRPO (another reinforcement learning method) to encourage the model to generate images more like a text description that specified desired traits and less like a text description that specified negative traits. While applying SRPO, they also encouraged the model to generate images similar to those in an author-chosen distribution.
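
Here is a toy version of that four-step diffusion loop. The stand-in modules exist only so the sketch runs; they are not Tencent's VAE or MoE transformer, text conditioning is omitted, and the noise schedule is heavily simplified.

  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  class ToyVAE(nn.Module):
      # Stand-in for the VAE: encode to a small latent, decode back to image size.
      def encode(self, image):
          return F.avg_pool2d(image, 8)
      def decode(self, latent):
          return F.interpolate(latent, scale_factor=8)

  class ToyDenoiser(nn.Module):
      # Stand-in for the MoE diffusion transformer (text conditioning omitted).
      def __init__(self):
          super().__init__()
          self.net = nn.Conv2d(4, 3, kernel_size=3, padding=1)
      def forward(self, noisy_latent, t):
          t_map = t.view(-1, 1, 1, 1).expand_as(noisy_latent[:, :1])
          return self.net(torch.cat([noisy_latent, t_map], dim=1))

  vae, denoiser = ToyVAE(), ToyDenoiser()
  image = torch.randn(2, 3, 64, 64)                 # a fake batch of images

  latent = vae.encode(image)                        # (i) embed the image
  noise = torch.randn_like(latent)
  t = torch.rand(latent.shape[0])                   # random noise levels
  noisy = latent + t.view(-1, 1, 1, 1) * noise      # (ii) add noise
  pred = denoiser(noisy, t)                         # (iii) predict the noise
  loss = F.mse_loss(pred, noise)                    # train to remove it
  denoised = noisy - t.view(-1, 1, 1, 1) * pred
  recon = vae.decode(denoised)                      # (iv) decode to an image
  print(loss.item(), recon.shape)                   # scalar, (2, 3, 64, 64)

The DPO step optimizes a preference objective. A minimal sketch, assuming the standard DPO loss (the paper's exact formulation may differ):

  import torch
  import torch.nn.functional as F

  def dpo_loss(logp_preferred, logp_rejected, ref_logp_preferred, ref_logp_rejected, beta=0.1):
      # Raise the likelihood of preferred (higher-quality) outputs and lower
      # that of rejected ones, relative to a frozen reference model.
      margin = (logp_preferred - ref_logp_preferred) - (logp_rejected - ref_logp_rejected)
      return -F.logsigmoid(beta * margin).mean()

  print(dpo_loss(torch.tensor([-1.0]), torch.tensor([-2.0]),
                 torch.tensor([-1.5]), torch.tensor([-1.5])))  # ~0.64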

Results: At present, HunyuanImage 3.0 holds first place in the LMArena Text-to-Image leaderboard, ahead of Google Gemini 2.5 Flash Image (Nano Banana), Google Imagen 4.0 Ultra Generate, and ByteDance Seedream 4.0. In addition, 100 people compared 1,000 outputs of 4 competing models to those of HunyuanImage 3.0 in side-by-side contests. The people evaluated which image was better, or whether they were both equally good or equally poor.

  • On average, the people preferred HunyuanImage 3.0’s images over those of the competitors. 
  • For example, in the comparison with Seedream 4.0, they preferred HunyuanImage 3.0 20.01 percent of the time and Seedream 4.0 18.84 percent of the time, and they judged the images equally good 39.3 percent and equally poor 21.85 percent of the time.

Behind the news: Tencent has been on a streak of releasing vision models. 

  • Tencent recently launched the API version of Hunyuan-Vision-1.5, its latest vision-language model, with promises to release the weights and a paper soon.
  • The company released Hunyuan3D-Omni, a model that takes an image and rough 3D representation (such as a skeleton or bounding box) and generates a detailed 3D representation. 
  • It also played a role in the release of FlashWorld, which accepts an image and text prompt and generates a 3D scene.

Why it matters: Simplifying training methods can be helpful, since each additional stage adds training time and each additional component can interact with others in unexpected ways, adding to the time required to debug the system. Yet Tencent used several stages of pretraining and fine-tuning and produced a superior model.

 

We’re thinking: One key to this success may be to use different methods for different purposes. For instance, the team used MixGRPO to fine-tune the model for aesthetics and SRPO to better match human preferences.

 

Tags: Technology,Artificial Intelligence,Large Language Models,

Friday, November 14, 2025

GPT-5.1, Open-Source Disruption, and Microsoft’s 'Agentic Employees'


See All Articles on AI


The latest episode of Mixture of Experts brought together three leading minds from across the AI ecosystem—Kouthar El Alaoui (IBM), Aaron Baughman (IBM), and Mihai Crovetto (Distinguished Engineer, Agentic AI)—to dissect a week filled with high-impact developments: OpenAI’s new GPT-5.1 models, the surprising rise of the open-source Kimi K2 Thinking model, and Microsoft’s provocative vision of AI “users” embedded directly inside the enterprise workforce.

Here’s a distilled overview of what stood out.


GPT-5.1: A Fix, Not a Leap?

OpenAI’s dual rollout—GPT-5.1 Instant and GPT-5.1 Thinking—generated plenty of discussion, but the headline moment wasn’t about benchmark wins. This time, OpenAI led with style. According to the company, users want a model that is not only smart but “enjoyable to talk to.”

That pivot raised a core debate on the panel:
Is this truly a new model upgrade—or a course correction after the community pushback surrounding GPT-5?

Mixed Community Reactions

Some developers praise 5.1’s warmth and conversational fluidity. Others remain nostalgic for GPT-4’s output style and skeptical about claims of deeper reasoning. A significant portion of the community believes this is:

  • A refinement rather than a reinvention

  • Partially a cost-optimization move, especially with the new router system deciding when to use Instant vs. Thinking

  • A strategic push into personalization and user experience as the frontier of differentiation

As Mihai Crovetto put it, many are still wondering: “Is this really a new model or just a retune of GPT-5?”

The Router: Feature or Red Flag?

GPT-5.1’s new routing layer—automatically deciding how much “thinking” to apply—won praise from those seeking responsiveness. But others found it unsettling.
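
A toy example makes the routing idea concrete. This is purely illustrative, not OpenAI's system: the model names, the length threshold, and the keyword heuristic are invented for the sketch.

  def route(prompt: str) -> str:
      # Send long or reasoning-heavy requests to the slower "thinking" model;
      # everything else gets the fast, cheap one.
      hard_markers = ("prove", "debug", "step by step", "optimize")
      if len(prompt) > 500 or any(m in prompt.lower() for m in hard_markers):
          return "gpt-5.1-thinking"
      return "gpt-5.1-instant"

  print(route("What's the capital of France?"))         # gpt-5.1-instant
  print(route("Debug this function step by step ..."))  # gpt-5.1-thinking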

Crovetto was blunt:
“I don’t want it learning my behavior. I want switches I can toggle. Not a model deciding how much to think.”

This tension hints at a split emerging in the market:
Do users want a hyper-smart assistant—or a deeply personalized one?

We may soon see segmentation not by model size, but by EQ vs. IQ, style vs. reasoning.


Kimi K2 Thinking: Open Source’s Biggest Power Play Yet

While OpenAI polished style, Chinese startup Moonshot AI delivered a shockwave with Kimi K2 Thinking, an open-source Mixture-of-Experts (MoE) model that posts numbers competitive with top proprietary models—even outperforming them on several benchmarks.

Why This Matters

Kimi K2 Thinking is:

  • A 1-trillion-parameter MoE that activates just 32B parameters per token (major compute efficiency; see the sketch after this list)

  • Competitive on SWE-Bench, BrowseComp, and Humanity’s Last Exam

  • Fully open-weights with a permissive license

  • Capable of up to 300 tool calls, 256k context, and local deployability
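
To see how a model with 1 trillion total parameters can activate only about 32B per token, here is a tiny, hypothetical top-k MoE layer in PyTorch. The sizes are toys and this is not Moonshot AI's implementation; real MoE layers also batch tokens per expert instead of looping.

  import torch
  import torch.nn as nn

  class TinyMoELayer(nn.Module):
      # A router scores all experts per token; only the top-k experts run,
      # so most of the layer's parameters stay idle for any given token.
      def __init__(self, dim=32, n_experts=8, k=2):
          super().__init__()
          self.router = nn.Linear(dim, n_experts)
          self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
          self.k = k

      def forward(self, x):                        # x: (tokens, dim)
          weights, idx = self.router(x).topk(self.k, dim=-1)
          weights = weights.softmax(dim=-1)
          out = torch.zeros_like(x)
          for token in range(x.shape[0]):
              for slot in range(self.k):           # only 2 of 8 experts run per token
                  expert = self.experts[int(idx[token, slot])]
                  out[token] += weights[token, slot] * expert(x[token])
          return out

  layer = TinyMoELayer()
  tokens = torch.randn(4, 32)
  print(layer(tokens).shape)  # torch.Size([4, 32])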

As Kouthar El Alaoui noted, this challenges the entire closed-model economy:
“If the best model in the world is open weights, the center of gravity shifts from secret models to shared ecosystems.”

But… Are the Claims Real?

Baughman urged caution. Benchmarks can be gamed, and independent evaluation is essential. Still, even skeptics acknowledged the momentum: open source is no longer “six months behind.” In some areas, it may now lead.

Why Developers Are Excited

Crovetto summed up the developer enthusiasm perfectly:

“I can run it locally. No router. No data collection. No hidden training. I’m in control.”

The ability to self-host a frontier-class model—even with a one-terabyte download—is a paradigm shift.


Microsoft’s “Agentic Users”: AI Has Entered the Workforce

The show closed with one of the most surreal stories of the week: Microsoft is exploring AI agents that function as real enterprise users. These embodied agents have:

  • Their own identity

  • Credentialed access to organizational apps

  • The ability to email, edit documents, attend meetings

  • Autonomy to collaborate with humans and other agents

In short: a new coworker, but… it’s not human.

The Promise

For business teams:

  • Productivity at an entirely new scale

  • Constant availability

  • Automated workflows across the whole Microsoft ecosystem

The Nightmares

For security teams:

  • Thousands of “users” moving data around

  • Blurred accountability

  • Unknown compliance risk

  • Governance systems unprepared for agents acting like staff

  • The specter of agents impersonating humans

Crovetto called it a “security nightmare in the making,” especially under GDPR and the upcoming AI regulations.

The Cultural Shock

Even beyond security, the implications are profound.

What does “company culture” mean when:

  • Some team members never sleep?

  • Some don’t have feelings?

  • Some aren’t even people?

And yes—someone joked:
“We’re only years away from office romance with an AI coworker.”


The Coming Agentic Economy

The panel speculated on a weirder future where:

  • Agents outnumber humans

  • Agents hire humans

  • Agents pay humans for data

  • Agents create other agents

  • Agents attend meetings… and bill by the minute

  • Your boss might be Cortana

As Baughman noted, “Hybrid human–agent workplaces will be the norm, not the exception.”


Final Thoughts

This week surfaced a stark reality:
AI is no longer just a technology race—it’s a race to shape how humans and machines will work, think, and co-exist.

OpenAI is doubling down on personality.
Open-source is doubling down on power.
Microsoft is doubling down on autonomy.

The future of AI may be decided not by benchmarks, but by which vision of interaction—and control—users ultimately trust.

Tags: Artificial Intelligence,Technology,Video,