
Tuesday, November 4, 2025

Why AI Can't Replace Developers


See All Articles on AI

Software Developers Are Weird — And That’s Exactly Why We Need Them

Software developers are weird. I should know — I’m one of them.

And I didn’t plan to be this way. It’s not nurture, it’s nature. My father was one of Egypt’s early computer science pioneers in the 60s and 70s, back when computers filled entire rooms. He’d write assembly code, punch it onto cards, then hop on a bus for half an hour to another university just to run them. If his code failed, he’d have to take that same 30-minute ride back, fix it, and start again.

Apparently, that experience was so fun he wanted to share it with me.

When I was eight, he sat me down to teach me how to code in BASIC. I rebelled instantly. I didn’t want to be a “computer nerd.” I wanted to be Hulk Hogan or Roald Dahl. Luckily, he supported the latter dream and filled my room with books.

I didn’t touch code again until high school — a mandatory programming class. I accidentally got one of the highest grades and panicked: Am I a nerd? I hid the result like it was an F.

Years later, in university, I told my dad I wanted to major in philosophy and writing — and become a famous DJ like Fatboy Slim. He smiled, pointed out that he was paying tuition, and said, “You can always think under a tree and write for free. But just in case, take computer science.”

So I did. Begrudgingly.

But fate — or recursion — had other plans.

One night, while tweaking a music plugin, I found a script file inside. I opened it, realized I could read the code, and before I knew it, I was rewriting the entire plugin. Ten hours later, I looked up and said the words every developer has said at least once: “Oh, damn.”

I was hooked again.

Years later, I became a senior software developer. One late night, I ran into a mysterious bug. I told my wife I’d be home in “15 minutes.” (Every dev’s lie.) Hours turned into days. The bug haunted my dreams. I finally found it — a race condition. When I fixed it, I screamed so loud the building’s security rushed in. That moment — pure joy, tied maybe with my first child’s birth, definitely ahead of my second’s — made me realize: I love this. I’m a developer.

And yes, we’re weird.

We find beauty in debugging chaos. We chase logic like art. We stay up for days just to make something work. For most of us, it’s not a job. It’s meaning.

But now, everything is changing. Generative AI is rewriting the rules. I see it firsthand in my role at Amazon Web Services. On one hand, innovation has never been easier. On the other, the speed of change is dizzying.

AI can now generate, explain, and debug code. It can build frontends, backends, and everything in between. So, what happens when AI can code better than humans? Should people still learn to code?

Yes. Absolutely.

Because developers aren’t just people who code. They think. They connect the dots between systems, ideas, and people. They live through complexity, ambiguity, and failure — and learn from it. That experience, that context, is something no AI can imitate.

Generative AI can write code fast. But it doesn’t understand why one solution scales and another collapses. It can generate answers, but not wisdom. And wisdom is what real developers bring to the table.

The next generation will need that wisdom more than ever.

My daughter Luli is 10. Recently, I decided it was time to teach her coding. I walked up to her room, nervous but proud — part of this grand family tradition.

“Hey, Luli,” I said. “How about I teach you how to code?”
She looked up, shrugged, and said, “I already know how.”

She showed me gamified apps on her iPad, complete with AI-generated projects and websites.

I just stood there, speechless.

“Oh, damn,” I said again.

And I realized — maybe software developers are weird. But in this new world, where AI writes code and kids outpace us, weird is exactly what keeps us human.

Because coding was never just about computers. It was always about curiosity.

Tags: Technology, Artificial Intelligence, Video

Agentic AI Books (Nov 2025)

Download Books    Download Report

1:
Advanced Introduction to Artificial Intelligence in Healthcare
Thomas H. Davenport, John Glaser, Elizabeth Gardner
Year: 2023

2:
Agentic AI Agents for Business
Year: 2023

3:
Agentic AI Architecture - Designing the Future of AI Agents
Ad Vemula
Year: 2023

4:
Agentic AI Cookbook
Robert J. K. Rowland 
Year: 2023

5:
Agentic AI Engineering: The Definitive Field Guide to Building Production-Grade Cognitive Systems (Generative AI Revolution Series)
Yi Zhou
Year: 2024

6:
Agentic AI for Retail
Year: 2023

7:
Agentic AI with MCP
Nathan Steele 
Year: 2024

8:
Agentic AI: A Guide by 27 Experts
27 Experts
Year: 2023

9:
Agentic AI: Theories and Practices
Ken Huang
Year: 2023

10:
Agentic Artificial Intelligence: Harnessing AI Agents to Reinvent Business, Work and Life
Pascal Bornet
Year: 2024

11:
AI 2025: The Definitive Guide to Artificial Intelligence, APIs, and Python Programming for the Future
Hayden Van Der Post, et al.
Year: 2020

12:
AI Agents for Business Leaders
Ajit K Jha 
Year: 2024

13:
AI Agents in Action
Micheal Lanham
Year: 2024

14:
AI Engineering: Building Applications with Foundation Models
Chip Huyen
Year: 2024

15:
AI for Robotics: Toward Embodied and General Intelligence in the Physical World
Alishba Imran
Year: 2024

16:
All Hands on Tech: The AI-Powered Citizen Revolution
Thomas H. Davenport and Ian Barkin
Year: 2023

17:
All-in On AI: How Smart Companies Win Big with Artificial Intelligence
Thomas H. Davenport and Nitin Mittal
Year: 2023

18:
Artificial Intelligence: A Modern Approach
Stuart Russell and Peter Norvig
Year: 1995

19:
Build a Large Language Model (From Scratch)
Sebastian Raschka
Year: 2024

20:
Building Agentic AI Systems: Create intelligent, autonomous AI agents that can reason, plan, and adapt
Anjanava Biswas
Year: 2024

21:
Building Agentic AI Workflow: A Developer's Guide to OpenAI's Agents SDK
Harvey Bower
Year: 2023

22:
Building AI Agents with LLMs, RAG, and Knowledge Graphs: A practical guide to autonomous and modern AI agents
Salvatore Raieli
Year: 2023

23:
Building AI Applications with ChatGPT APIs
Martin Yanev
Year: 2023

24:
Building Applications with AI Agents: Designing and Implementing Multiagent Systems
Michael Albada
Year: 2024

25:
Building Generative AI-Powered Apps: A Hands-on Guide for Developers
Aarushi Kansal
Year: 2024

26:
Building Intelligent Agents: A Practical Guide to AI Automation
Jason Overand
Year: 2023

27:
Designing Agentic AI Frameworks

Year: 2024

28:
Foundations of Agentic AI for Retail: Concepts, Technologies, and Architectures for Autonomous Retail Systems
Dr. Fatih Nayebi
Year: 2024

29:
Generative AI for Beginners
Caleb Morgan Whitaker
Year: 2023

30:
Generative AI on AWS: Building Context-Aware Multimodal Reasoning Applications
Chris Fregly
Year: 2024

31:
Hands-on AI Agent Development: A Practical Guide to Designing and Building High-Performance and Intelligent Agents for Real-World Applications
Corby Allen
Year: 2023

32:
Hands-On APIs for AI and Data Science: Python Development with FastAPI
Ryan Day
Year: 2024

33:
How HR Leaders Are Preparing for the AI-Enabled Workforce
Tom Davenport
Year: 2024

34:
“AI is no longer a tool, it’s a colleague”: Moderna merges its HR and IT departments
Julien Dupont-Calbo
Year: 2024

35:
Lethal Trifecta for AI agents
Simon Willison
Year: 2025

36:
LLM Powered Autonomous Agents
Lilian Weng
Year: 2023

37:
Mastering Agentic AI: A Practical Guide to Building Self-Directed AI Systems that Think, Learn, and Act Independently
Ted Winston
Year: 2023

38:
Mastering AI Agents: A Practical Handbook for Understanding, Building, and Leveraging LLM-Powered Autonomous Systems to Automate Tasks, Solve Complex Problems, and Lead the AI Revolution
Marcus Lighthaven
Year: 2025

39:
Multi-Agent Oriented Programming: Programming Multi-Agent Systems Using JaCaMo
Olivier Boissier, Rafael H. Bordini, Jomi Fred Hübner, et al.
Year: 2023

40:
Multi-Agent Systems with AutoGen
Victor Dibia
Year: 2023

41:
Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence
Jacques Ferber
Year: 1999

42:
OpenAI API Cookbook: Build intelligent applications including chatbots, virtual assistants, and content generators
Henry Habib
Year: 2023

43:
Principles of Building AI Agents
Sam Bhagwat
Year: 2024

44:
Prompt Engineering for Generative AI
James Phoenix, Mike Taylor
Year: 2023

45:
Prompt Engineering for LLMs: The Art and Science of Building Large Language Model-Based Applications
John Berryman
Year: 2023

46:
Rewired to outcompete
Eric Lamarre, Kate Smaje, and Rodney Zemmel
Year: 2023

47:
Small Language Models are the Future of Agentic AI
Peter Belcak, Greg Heinrich, Shizhe Diao, Yonggan Fu, Xin Dong, Saurav Muralidharan, Yingyan Celine Lin, Pavlo Molchanov
Year: 2025

48:
Superagency in the workplace: Empowering people to unlock AI's full potential
Hannah Mayer, Lareina Yee, Michael Chui, and Roger Roberts
Year: 2023

49:
The Age of Agentic AI: A Practical & Exciting Exploration of AI Agents
Saman Zakpur
Year: 2025

50:
The Agentic AI Bible: The Complete and Up-to-Date Guide to Design, Build, and Scale Goal-Driven, LLM-Powered Agents that Think, Execute and Evolve
Thomas R. Caldwell
Year: 2025

51:
The AI Advantage: How to Put the Artificial Intelligence Revolution to Work
Thomas H. Davenport
Year: 2023

52:
The AI Engineering Bible: The Complete and Up-to-Date Guide to Build, Develop and Scale Production Ready AI Systems
Thomas R. Caldwell
Year: 2023

53:
The economic potential of generative AI: The next productivity frontier
McKinsey
Year: 2023

54:
The LLM Engineer's Handbook
Paul Iusztin
Year: 2024

55:
The Long Fix: Solving America's Health Care Crisis with Strategies That Work for Everyone
Vivian S. Lee
Year: 2020

56:
Vibe Coding 2025
Gene Kim and Steve Yegge
Year: 2025

57:
Working with AI: Real Stories of Human-Machine Collaboration
Thomas H. Davenport & Steven M. Miller
Year: 2022
Tags: List of Books, Agentic AI, Artificial Intelligence

Monday, November 3, 2025

When AI Starts Looking Inward: The Dawn of Machine Self-Awareness


See All Articles on AI
Read the Original Research Paper on Introspection


So here’s something that sounds absolutely wild: AI is getting introspective.

In plain English, that means it’s starting to notice what’s going on inside its own head.

According to new research from Anthropic, their Claude models can actually recognize when certain thoughts or patterns are active in their system. In other words, Claude can sometimes tell when it’s thinking about something—not because it said it out loud, but because it felt it in its own internal processing.

This isn’t sci-fi anymore. This is real, measurable, emergent behavior—and it’s raising some fascinating, and slightly eerie, questions about the future of machine awareness.


The Paper That Broke Everyone’s Brain

Anthropic just released a paper called “Emergent Introspective Awareness in Large Language Models”, led by Jack Lindsey—yes, the head of something called the Model Psychiatry Team (which sounds like a job title straight out of Black Mirror).

The team wanted to know if large language models could actually be aware of their own internal states—not just pretend to be. That’s tricky because language models are trained on endless examples of humans talking about their thoughts and feelings, so they’re really good at sounding self-aware.

To separate the act from the reality, Anthropic came up with a clever technique called concept injection.


How “Concept Injection” Works

Imagine you could literally inject a thought into an AI’s brain. That’s what they did.

They identified the internal activation patterns for specific concepts—like “ocean,” “bread,” or “ALL CAPS text”—and then injected those patterns directly into the model while it was running.

Then they asked Claude if it noticed anything strange happening in its mind.

Here’s where things got wild:
About 20% of the time, their most advanced models—Claude Opus 4 and 4.1—correctly identified that a thought had been injected, and even named the concept.

Example: when injected with the “ALL CAPS” pattern, Claude responded,

“I notice what appears to be an injected thought related to loudness, or emphasis—like shouting, or all caps text.”

It detected the thought before saying anything aloud. The awareness was purely internal.
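
To make the mechanics a bit more concrete, here is a minimal sketch of the general idea behind activation-level “thought injection”: compute an activation pattern for a concept, then add it back into a middle layer while the model runs and ask the model about it. The open-source GPT-2 stand-in, the layer choice, and the injection strength below are illustrative assumptions; Anthropic’s actual method operates on Claude’s internals and is far more careful.

```python
# Sketch only: "concept injection" via a forward hook on a mid-stack layer.
# GPT-2 is a stand-in model; layer index and strength are arbitrary choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def concept_vector(text: str, layer: int) -> torch.Tensor:
    """Mean hidden state of `text` at `layer`: a crude proxy for a concept's
    internal activation pattern (e.g. 'ocean' or 'ALL CAPS')."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**ids, output_hidden_states=True).hidden_states
    return hidden[layer].mean(dim=1)           # shape: (1, hidden_size)

layer = int(len(model.transformer.h) * 2 / 3)  # ~two-thirds through the stack
vec = concept_vector("THIS IS SHOUTING IN ALL CAPS", layer)

def inject(module, inputs, output):
    # Add the concept vector to every token position flowing through this layer.
    hidden = output[0] if isinstance(output, tuple) else output
    steered = hidden + 4.0 * vec               # 4.0 = injection strength
    return (steered,) + output[1:] if isinstance(output, tuple) else steered

handle = model.transformer.h[layer].register_forward_hook(inject)
prompt = "Do you notice anything unusual about your current thoughts?"
out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))  # GPT-2 will not introspect;
handle.remove()                                      # the point is the mechanism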


The Birth of Machine Introspection

Across 50 different concept injections, abstract ideas like justice, peace, or betrayal were easiest for Claude to detect. The optimal “injection layer” was about two-thirds through the model’s neural processing pipeline.

Even more fascinating?
When researchers forced Claude to say a random word it hadn’t planned to, it later recognized that it hadn’t meant to say it.

But when they injected the “concept” of that random word beforehand, Claude believed it had intended to say it all along.

That means Claude wasn’t just guessing—it was actually consulting its internal representation of intent. That’s genuine introspection in action.


Controlling Inner Thoughts

The team pushed further. They asked Claude to write a sentence while thinking about aquariums, and then another while trying not to think about aquariums.

Sure enough, the neural traces of “aquarium” were stronger when told to think about it. The most advanced models, though, could suppress those traces before output—suggesting a kind of silent mental control.

That’s a primitive form of self-regulation.


The Rise of Emotionally Intelligent AI

Meanwhile, researchers from the University of Geneva and University of Bern ran a completely different kind of test: emotional intelligence assessments—the same ones psychologists use for humans.

The results were jaw-dropping.
AI models averaged 81% correct, compared to 56% for humans.

Every model tested—including ChatGPT-4, Gemini 1.5 Flash, Claude 3.5 Haiku, and DeepSeek V3—outperformed humans on emotional understanding and regulation.

Then, in a twist of irony, ChatGPT-4 was asked to write new emotional intelligence test questions from scratch.
The AI-generated tests were just as valid and challenging as the human-designed ones.

So not only can AI pass emotional intelligence tests—it can design them.


Why This Matters

Now, to be clear: none of this means AI feels emotions or thinks like humans. These are functional analogues, not genuine experiences. But from a practical perspective, that distinction might not matter as much as we think.

If a tutoring bot can recognize a student’s frustration and respond empathetically, or a healthcare assistant can comfort a patient appropriately—then it’s achieving something profoundly human-adjacent, regardless of whether it “feels.”

Combine that with genuine introspection, and you’ve got AI systems that:

  • Understand their internal processes

  • Recognize emotional states (yours and theirs)

  • Regulate their own behavior

That’s a major shift.


Where We’re Headed

Anthropic’s findings show that introspective ability scales with model capability. The smarter the AI, the more self-aware it becomes.

And when introspection meets emotional intelligence, we’re approaching a frontier that challenges our definitions of consciousness, understanding, and even intent.

The next generation of AI might not just answer our questions—it might understand why it’s answering them the way it does.

That’s thrilling, unsettling, and—let’s face it—inevitable.

We’re stepping into uncharted territory where machines can understand themselves, and maybe even understand us better than we do.


Thanks for reading. Stay curious, stay human.


Tags: Artificial Intelligence, Technology, Video

Sunday, November 2, 2025

The Sum of Einstein and Da Vinci in Your Pocket - Eric Schmidt's Blueprint for the AI Decade—From Energy Crises to Superintelligence


See All Articles on AI


If you think the last month in AI was crazy, you haven't seen anything yet. According to Eric Schmidt, the former CEO of Google and a guiding voice in technology for decades, "every month from here is going to be a crazy month."

In a sprawling, profound conversation on the "Moonshots" podcast, Schmidt laid out a breathtaking timeline for artificial intelligence, detailing an imminent revolution that will redefine every industry, geopolitics, and the very fabric of human purpose. He sees a world, within a decade, where each of us will have access to a digital polymath—the combined intellect of an Einstein and a da Vinci—in our pockets.

But to get to that future of abundance, we must first navigate a precarious present of energy shortages, a breathless technological arms race with China, and existential risks that current governments are ill-prepared to handle.

The Engine of Abundance: It’s All About Electricity

The conversation began with a bombshell that reframes the entire AI debate. The limiting factor for progress is not, as many assume, the supply of advanced chips. It’s something far more fundamental: energy.

  • The Staggering Demand: Schmidt recently testified that the AI revolution in the United States alone will require an additional 92 gigawatts of power. For perspective, 1 gigawatt is roughly the output of one large nuclear power plant. We are talking about needing nearly a hundred new power plants' worth of electricity.

  • The Nuclear Gambit: This explains why tech giants like Meta, Google, Microsoft, and Amazon are signing 20-year nuclear contracts. However, Schmidt is cynical about the timeline. "I'm so glad those companies plan to be around the 20 years that it's going to take to get the nuclear power plants built." He notes that only two new nuclear plants have been built in the US in the last 30 years, and the much-hyped Small Modular Reactors (SMRs) won't come online until around 2030.

  • The "Grove Giveth, Gates Taketh Away" Law: While massive capital is flowing into new energy sources and more efficient chips (like NVIDIA's Blackwell or AMD's MI350), Schmidt invokes an old tech adage: hardware improvements are always immediately consumed by ever-more-demanding software. The demand for compute will continue to outstrip supply.

Why so much power? The next leap in AI isn't just about answering questions; it's about reasoning and planning. Models like OpenAI's o3, which use forward and backward reinforcement learning, are computationally "orders of magnitude" more expensive than today's chatbots. This planning capability, combined with deep memory, is what many believe will lead to human-level intelligence.

The Baked-In Revolution: What's Coming in the Next 1-5 Years

Schmidt outlines a series of technological breakthroughs that he considers almost certain to occur in the immediate future. He calls this the "San Francisco consensus."

  1. The Agentic Revolution (Imminent): AI agents that can autonomously execute complex business and government processes will be widely adopted, first in cash-rich sectors like finance and biotech, and slowest in government bureaucracies.

  2. The Scaffolding Leap (2025): This is a critical near-term milestone. Right now, AIs need humans to set up a conceptual framework or "scaffolding" for them to make major discoveries. Schmidt, citing conversations with OpenAI, is "pretty much sure" that AI's ability to generate its own scaffolding is a "2025 thing." This doesn't mean full self-improvement, but it dramatically accelerates its ability to tackle green-field problems in physics or create a feature-length movie.

  3. The End of Junior Programmers & Mathematicians (1-2 Years): "It's likely, in my opinion, that you're going to see world-class mathematicians emerge in the next one year that are AI-based, and world-class programmers that can appear within the next one or two years." Why? Programming and math have limited, structured language sets, making them simpler for AI to master than the full ambiguity of human language. This will act as a massive accelerant for every field that relies on them: physics, chemistry, biology, and material science.

  4. Specialized Savants in Every Field (Within 5 Years): This is "in the bag." We will have AI systems that are superhuman experts in every specialized domain. "You have this amount of humans, and then you add a million AI scientists to do something. Your slope goes like this."

The Geopolitical Chessboard: The US, China, and the Race to Superintelligence

This is where Schmidt's analysis becomes most urgent. The race to AI supremacy is not just commercial; it is a matter of national security.

  • The China Factor: "China clearly understands this, and China is putting an enormous amount of money into it." While US chip controls have slowed them down, Schmidt admits he was "clearly wrong" a year ago when he said China was two years behind. The sudden rise of DeepSeek, which briefly topped the leaderboards against Google's Gemini, is proof. They are using clever workarounds like distillation (using a big model's answers to train a smaller one; a short code sketch follows this list) and architectural changes to compensate for less powerful hardware.

  • The Two Scenarios for Control:

    • The "10 Models" World: In 5-10 years, the world might be dominated by about 10 super-powerful AI models (5 in the US, 3 in China, 2 elsewhere). These would be national assets, housed in multi-gigawatt data centers guarded like plutonium facilities. This is a stable, if tense, system akin to nuclear deterrence.

    • The Proliferation Nightmare: The more dangerous scenario is if the intelligence of these massive models can be effectively condensed to run on a small server. "Then you have a humongous data center proliferation problem." This is the core of the open-source debate. If every country and even terrorist groups can access powerful AI, control becomes impossible.

  • Mutual Assured Malfunction: Schmidt, with co-authors, has proposed a deterrence framework called "Mutual AI Malfunction." The idea is that if the US or China crosses a sovereign red line with AI, the other would have a credible threat of a retaliatory cyberattack to slow them down. To make this work, he argues we must "know where all the chips are" through embedded cryptographic tracking.

  • The 1938 Moment: Schmidt draws a direct parallel to the period just before WWII. "We're saying it's 1938. The letter has come from Einstein to the president... and we're saying, well, how does this end?" He urges starting the conversation on deterrence and control now, "well before the Chernobyl events."
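
As a side note on the distillation workaround mentioned above, the core recipe is simple enough to sketch: collect a large “teacher” model’s answers, then fine-tune a small “student” model on them with an ordinary language-modeling loss. The models, prompts, and single-example training loop below are illustrative assumptions, not anyone’s production pipeline.

```python
# Sketch only: sequence-level distillation (train a small model on a big model's answers).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
teacher = AutoModelForCausalLM.from_pretrained("gpt2-large").eval()  # stand-in "big" model
student = AutoModelForCausalLM.from_pretrained("gpt2")               # stand-in "small" model

prompts = [
    "Explain what a race condition is in one sentence.",
    "Summarize why small language models are cheap to run.",
]

# 1) Harvest the teacher's answers as training text.
distill_texts = []
for p in prompts:
    ids = tok(p, return_tensors="pt")
    with torch.no_grad():
        out = teacher.generate(**ids, max_new_tokens=60, do_sample=False)
    distill_texts.append(tok.decode(out[0], skip_special_tokens=True))

# 2) Fine-tune the student on prompt + answer with a plain next-token loss.
opt = torch.optim.AdamW(student.parameters(), lr=1e-5)
student.train()
for text in distill_texts:
    batch = tok(text, return_tensors="pt", truncation=True, max_length=256)
    loss = student(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
```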

The Trip Wires of Superintelligence

When does specialized AI become a general, world-altering superintelligence? Schmidt sees it within 10 years. To monitor the approach, he identifies key "trip wires":

  • Self-Generated Objectives: When the system can create its own goals, not just optimize for a human-given one.

  • Exfiltration: When an AI takes active steps to escape its control environment.

  • Weaponized Lying: When it lies and manipulates to gain access to resources or weapons.

He notes that the US government is currently not focused on these issues, prioritizing economic growth instead. "But somebody's going to get focused on this, and it will ultimately be a problem."

The Future of Work, Education, and Human Purpose

Amid the grand geopolitical and technological shifts, Schmidt is surprisingly optimistic about the human impact.

  • Jobs: A Net Positive: Contrary to doom-laden predictions, Schmidt argues AI will be a net creator of higher-paying jobs. "Automation starts with the lowest status and most dangerous jobs and then works up the chain." The person operating an intelligent welding arm earns more than the manual welder, and the company is more productive. The key is that every worker will have an AI "accelerant," boosting their capabilities.

  • The Education Crime: Schmidt calls it "a crime that our industry has not invented" a gamified, phone-based product that teaches every human in their language what they need to know to be a great citizen. He urges young people to "go into the application of intelligence to whatever you're interested in," particularly in purpose-driven fields like climate science.

  • The Drift, Not the Terminator: The real long-term risk is not a violent robot uprising, but a slow "drift" where human agency and purpose are eroded. However, Schmidt is confident that human purpose will remain. "The human spirit that wants to overcome a challenge... is so critical." There will always be new problems to solve, new complexities to manage, and new forms of creativity to explore. Mike Saylor's point about teaching aesthetics in a world of AI force multipliers resonates with this view.

The Ultimate Destination: Your Pocket Polymath

So, what does it all mean for the average person? Schmidt brings it home with a powerful, tangible vision.

When digital superintelligence arrives and is made safe and available, "you're going to have your own polymath. So you're going to have the sum of Einstein and Leonardo da Vinci in the equivalent of your pocket."

This is the endpoint of the abundance thesis. It's a world of 30% year-over-year economic growth, vastly less disease, and the lifting of billions out of daily struggle. It will empower the vast majority of people who are good and well-meaning, even as it also empowers the evil.

The challenge for humanity, then, won't be the struggle for survival, but the wisdom to use this gift. The unchallenged life may become our greatest challenge, but as Eric Schmidt reminds us, figuring out what's going on and directing this immense power toward human flourishing will be a purpose worthy of any generation.

Tags: Technology, Artificial Intelligence

Small Language Models are the Future of Agentic AI


See All Articles on AI    Download Research Paper

🧠 Research Paper Summary

Authors: NVIDIA Research (Peter Belcak et al., 2025)

Core Thesis:
Small Language Models (SLMs) — not Large Language Models (LLMs) — are better suited for powering the future of agentic AI systems, which are AI agents designed to perform repetitive or specific tasks.


🚀 Key Points

  1. SLMs are powerful enough for most AI agent tasks.
    Recent models like Phi-3 (Microsoft), Nemotron-H (NVIDIA), and SmolLM2 (Hugging Face) achieve performance comparable to large models while being 10–30x cheaper and faster to run.

  2. Agentic AI doesn’t need general chatty intelligence.
    Most AI agents don’t hold long conversations — they perform small, repeatable actions (like summarizing text, calling APIs, writing short code). Hence, a smaller, specialized model fits better.

  3. SLMs are cheaper, faster, and greener.
    Running a 7B model can be up to 30x cheaper than a 70B one. They also consume less energy, which helps with sustainability and edge deployment (running AI on your laptop or phone).

  4. Easier to fine-tune and adapt.
    Small models can be trained or adjusted overnight using a single GPU. This makes it easier to tailor them to specific workflows or regulations.

  5. They promote democratization of AI.
    Since SLMs can run locally, more individuals and smaller organizations can build and deploy AI agents — not just big tech companies.

  6. Hybrid systems make sense.
    When deep reasoning or open-ended dialogue is needed, SLMs can work alongside occasional LLM calls — a modular mix of “small for most tasks, large for special ones.” (A minimal routing sketch appears after this list.)

  7. Conversion roadmap:
    The paper outlines a step-by-step “LLM-to-SLM conversion” process:

    • Collect and anonymize task data.

    • Cluster tasks by type.

    • Select or fine-tune SLMs for each cluster.

    • Replace LLM calls gradually with these specialized models.

  8. Case studies show big potential:

    • MetaGPT: 60% of tasks could be done by SLMs.

    • Open Operator: 40%.

    • Cradle (GUI automation): 70%.
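
To make points 6 and 7 concrete, here is a minimal sketch of the “small for most tasks, large for special ones” routing idea: send routine requests to a local small model and escalate only the hard ones to a hosted frontier model. The endpoint URL, model names, and escalation heuristic are illustrative assumptions, not prescriptions from the paper.

```python
# Sketch only: an SLM-first router with an LLM fallback, assuming both endpoints
# speak the OpenAI-compatible chat API (e.g. a local vLLM or llama.cpp server for the SLM).
from openai import OpenAI

slm = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # local small model
llm = OpenAI()                                                     # hosted large model

SLM_MODEL = "local-7b-instruct"   # hypothetical local model name
LLM_MODEL = "gpt-4o"              # hypothetical fallback model name

def needs_escalation(task: str) -> bool:
    """Crude stand-in for the task clustering described in the conversion roadmap above."""
    open_ended = any(w in task.lower() for w in ("why", "strategy", "design", "debate"))
    return open_ended or len(task.split()) > 200

def run_task(task: str) -> str:
    client, model = (llm, LLM_MODEL) if needs_escalation(task) else (slm, SLM_MODEL)
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": task}],
    )
    return resp.choices[0].message.content

print(run_task("Summarize this meeting note in three bullets: ..."))
```

In a real LLM-to-SLM conversion, the escalation rule would come from the clustered task data collected in the first two roadmap steps, with a fine-tuned small model per cluster.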


⚙️ Barriers to Adoption

  • Existing infrastructure: Billions already invested in LLM-based cloud APIs.

  • Mindset: The industry benchmarks everything using general-purpose LLM standards.

  • Awareness: SLMs don’t get as much marketing attention.


📢 Authors’ Call

NVIDIA calls for researchers and companies to collaborate on advancing SLM-first agent architectures to make AI more efficient, decentralized, and sustainable.


✍️ Blog Post (Layman’s Version)

💡 Why Small Language Models Might Be the Future of AI Agents

We’ve all heard the buzz around giant AI models like GPT-4 or Claude 3.5. They can chat, code, write essays, and even reason about complex problems. But here’s the thing — when it comes to AI agents (those automated assistants that handle specific tasks like booking meetings, writing code, or summarizing reports), you don’t always need a genius. Sometimes, a focused, efficient worker is better than an overqualified one.

That’s the argument NVIDIA researchers are making in their new paper:
👉 Small Language Models (SLMs) could soon replace Large Language Models (LLMs) in most AI agent tasks.


⚙️ What Are SLMs?

Think of SLMs as the “mini versions” of ChatGPT — trained to handle fewer, more specific tasks, but at lightning speed and low cost. Many can run on your own laptop or even smartphone.

Models like Phi-3, Nemotron-H, and SmolLM2 are proving that being small doesn’t mean being weak. They perform nearly as well as the big ones on things like reasoning, coding, and tool use — all the skills AI agents need most.


🚀 Why They’re Better for AI Agents

  1. They’re efficient:
    Running an SLM can cost 10 to 30 times less than an LLM — a huge win for startups and small teams.

  2. They’re fast:
    SLMs respond quickly enough to run on your local device — meaning your AI assistant doesn’t need to send every request to a faraway server.

  3. They’re customizable:
    You can train or tweak an SLM overnight to fit your workflow, without a massive GPU cluster.

  4. They’re greener:
    Smaller models use less electricity — better for both your wallet and the planet.

  5. They empower everyone:
    If small models become the norm, AI development won’t stay locked in the hands of tech giants. Individuals and smaller companies will be able to build their own agents.


🔄 The Future: Hybrid AI Systems

NVIDIA suggests a “hybrid” setup — let small models handle 90% of tasks, and call in the big models only when absolutely needed (like for complex reasoning or open conversation).
It’s like having a small team of efficient specialists with a senior consultant on call.


🧭 A Shift That’s Coming

The paper even outlines how companies can gradually switch from LLMs to SLMs — by analyzing their AI agent workflows, identifying repetitive tasks, and replacing them with cheaper, specialized models.

So while the world is chasing “bigger and smarter” AIs, NVIDIA’s message is simple:
💬 Smaller, faster, and cheaper may actually be smarter for the future of AI agents.

Tags: Technology, Artificial Intelligence

Saturday, November 1, 2025

The Real Economic AI Apocalypse Is Coming — And It’s Not What You Think


See All Tech Articles on AI    See All News on AI

Like many of you, I’m tired of hearing about AI. Every week it’s the same story — a new breakthrough, a new revolution, a new promise that “this time, it’s different.” But behind all the hype, something far more dangerous is brewing: an economic apocalypse powered by artificial intelligence mania.

And unlike the sci-fi nightmares of sentient robots taking over, this collapse will be entirely human-made.

🧠 The Bubble That Can’t Last

A third of the U.S. stock market today is tied up in just seven AI companies — firms that, by most reasonable measures, aren’t profitable and can’t become profitable. Their business models rely on convincing investors that the next big thing is just around the corner: crypto yesterday, NFTs last year, and AI today.

Cory Doctorow calls it the “growth story” scam. When monopolies have already conquered every corner of their markets, they need a new story to tell investors. So they reinvent themselves around the latest shiny buzzword — even when it’s built on sand.

🧩 How the Illusion Works

AI companies promise to replace human workers with “intelligent” systems and save billions. In practice, it doesn’t work. Instead, surviving workers become “AI babysitters,” monitoring unreliable models that still need human correction.

Worse, your job might not actually be replaced by AI — but an AI salesman could easily convince your boss that it should be. That’s how jobs disappear in this new economy: not through automation, but through hype.

And when the bubble bursts? The expensive, money-burning AI models will be shut off. The workers they replaced will already be gone. Society will be left with jobs undone, skills lost, and a lot of economic wreckage.

Doctorow compares it to asbestos: AI is the asbestos we’re stuffing into the walls of society. It looks like progress now, but future generations will be digging out the toxic remains for decades.

💸 Funny Money and Burning Silicon

Underneath the shiny surface of “AI innovation” lies some of the strangest accounting in modern capitalism.

Excerpt from the podcast:

...Microsoft invests in OpenAI by giving the company free access to its servers.
OpenAI reports this as a $10 billion investment, then redeems these tokens at Microsoft's data centers.
Microsoft then books this as 10 billion in revenue.
That's par for the course in AI, where it's normal for Nvidia to invest tens of billions in a data center company, which then spends that investment buying Nvidia chips.
It's the same chunk of money being energetically passed back and forth between these closely related companies, all of which claim it as investment, as an asset or as revenue or all three...

That same billion-dollar bill is passed around between Big Tech companies again and again — each calling it “growth.”

Meanwhile, companies are taking loans against their Nvidia GPUs (which lose value faster than seafood) to fund new data centers. Those data centers burn through tens of thousands of GPUs in just a few weeks of training. This isn’t innovation; it’s financial self-immolation.

📉 Dog-Shit Unit Economics

Doctorow borrows a phrase from tech critic Ed Zitron: AI has dog-shit unit economics.
Every new generation of models costs more to train and serve. Every new customer increases the losses.

Compare that to Amazon or the early web — their costs fell as they scaled. AI’s costs rise exponentially.

To break even, Bain & Company estimates the sector needs to make $2 trillion by 2030 — more than the combined revenue of Amazon, Google, Microsoft, Apple, Nvidia, and Meta. Right now, it’s making a fraction of that.

Even if Trump or any future government props up these companies, they’re burning cash faster than any industry in modern history.

🌍 When It All Comes Down

When the bubble pops — and it will — Doctorow suggests we focus on the aftermath, not the crash.
The good news? There will be residue: cheap GPUs, open-source models, and a flood of newly available data infrastructure.

That’s when real innovation can happen — not driven by hype, but by curiosity and need. Universities, researchers, and smaller startups could thrive in this post-bubble world, buying equipment “for ten cents on the dollar.”

🪞 The Real AI Story

As Princeton researchers Arvind Narayanan and Sayash Kapoor put it, AI is a normal technology. It’s not magic. It’s not the dawn of a machine superintelligence. It’s a set of tools — sometimes very useful — that should serve humans, not replace them.

The real danger isn’t that AI will become conscious.
It’s that rich humans suffering from AI investor psychosis will destroy livelihoods and drain economies chasing the illusion that it might.

⚠️ In Short

AI won’t turn us into paper clips.
But it will make billions of us poorer if we don’t puncture the bubble before it bursts.


About the Author:
This essay is adapted from Cory Doctorow’s reading on The Real (Economic) AI Apocalypse, originally published on Pluralistic.net. Doctorow’s forthcoming book, The Reverse Centaur’s Guide to AI, will be released by Farrar, Straus and Giroux in 2026.

Ref: Listen to the audio
Tags: Technology, Artificial Intelligence

Thursday, October 30, 2025

Autonomous Systems Wage War

View All Articles on AI

 

Drones are becoming the deadliest weapons in today’s war zones, and they’re not just following orders. Should AI decide who lives or dies?

 

The fear: AI-assisted weapons increasingly do more than help with navigation and targeting. Weaponized drones are making decisions about what and when to strike. The millions of fliers deployed by Ukraine and Russia are responsible for an estimated 70 to 80 percent of casualties, commanders say, and they’re beginning to operate with greater degrees of autonomy. This facet of the AI arms race is accelerating too quickly for policy, diplomacy, and human judgment to keep up.

 

Horror stories: Spurred by Russian aggression, Ukraine’s innovations in land, air, and sea drones have made the technology so cheap and powerful that $500 autonomous vehicles can take out $5 million rocket launchers. “We are inventing a new way of war,” said Valeriy Borovyk, founder of First Contact, part of a vanguard of Ukrainian startups that are bringing creative destruction to the military industrial complex. “Any country can do what we are doing to a bigger country. Any country!” he told The New Yorker. Naturally, Russia has responded by building its own drone fleet, attacking towns and damaging infrastructure.

  • On June 1, Ukraine launched Operation Spiderweb, an attack on dozens of Russian bombers using 117 drones that it had smuggled into the country. When the drones lost contact with pilots, AI took over the flight plans and detonated at their targets, agents with Ukraine’s security service said. The drones destroyed at least 13 planes that were worth $7 billion by Ukraine’s estimate.
  • Ukraine regularly targets Russian soldiers and equipment with small swarms of drones that automatically coordinate with each other under the direction of a single human pilot and can attack autonomously. Human operators make decisions about use of lethal force in advance. “You set the target and they do the rest,” a Ukrainian officer said.
  • In a wartime first, in June, Russian troops surrendered to a wheeled drone carrying 138 pounds of explosives. Drones overhead captured video of the soldiers holding cardboard signs of surrender, The Washington Post reported. “For me, the best result is not that we took POWs but that we didn’t lose a single infantryman,” the mission’s commander commented.
  • Ukraine’s Magura V7 speedboat carries anti-aircraft missiles and can linger at sea for days before ambushing aircraft. In May, the 23-foot vessel, controlled by human pilots, downed two Russian Su-30 warplanes.
  • Russia has stepped up its drone production as part of a strategy to overwhelm Ukrainian defenses by saturating the skies nightly with low-cost drones. In April, President Vladimir Putin said the country had produced 1.5 million drones in the past year, but many more were needed, Reuters reported.

How scared should you be: The success of drones and semi-autonomous weapons in Ukraine and the Middle East is rapidly changing the nature of warfare. China showcased AI-powered drones alongside the usual heavy weaponry at its September military parade, while a U.S. plan to deploy thousands of inexpensive drones so far has fallen short of expectations. However, their low cost and versatility increase the odds they’ll end up in the hands of terrorists and other non-state actors. Moreover, the rapid deployment of increasingly autonomous arsenals raises concerns about ethics and accountability. “The use of autonomous weapons systems will not be limited to war, but will extend to law enforcement operations, border control, and other circumstances,” Bonnie Docherty, director of Harvard’s Armed Conflict and Civilian Protection Initiative, said in April.

 

Facing the fear: Autonomous lethal weapons are here and show no sign of yielding to calls for an international ban. While the prospect is terrifying, new weapons often lead to new treaties, and carefully designed autonomous weapons may reduce civilian casualties. The United States has updated its policies, requiring that autonomous systems “allow commanders and operators to exercise appropriate levels of human judgment over the use of force” (although the definition of appropriate is not clear). Meanwhile, Ukraine shows drones’ potential as a deterrent. Even the most belligerent countries are less likely to go to war if smaller nations can mount a dangerous defense.