Last week, we talked about how agentic AI is finally getting to work.
AI agents are now starting to plan, reason and carry out digital tasks without constant prompting.
Coders are using them to hunt for bugs and rewrite broken code. Sellers on Amazon are using them to help manage their inventories. And agentic AI is even being used to take on more complex problems.
For example, last month researchers published a paper on HealthFlow, a self-evolving research agent built to tackle medical research challenges.
Instead of waiting for a human prompt at every step, HealthFlow plans its own approach to research. It tests different strategies, learns from the results and improves its methods over time.
It's like a junior researcher who gets smarter with every experiment. And in benchmark tests, HealthFlow beat top AI systems on some of the hardest health data challenges.
Yet as exciting as that is, these AI agents are still software. They're trapped inside the digital world.
Or are they?
Robots Are Getting an Upgrade
On September 25, Google's DeepMind released Gemini Robotics 1.5.
And with this release, agentic AI has become part of the physical world.
Gemini Robotics 1.5 is actually two models that work in tandem. Gemini Robotics-ER 1.5 is a reasoning model. It can use tools like Google Search to break big goals into smaller steps and figure out what needs to happen next.
Gemini Robotics 1.5 is a vision-language-action (VLA) model. It takes the subgoals from ER 1.5 and translates them into concrete actions like grasping, pointing and manipulating objects.
The combination of the two models is something new in robotics…
A system that thinks before it moves.
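To make the division of labor concrete, here's a minimal sketch of that "think, then move" loop in Python. Everything in it is hypothetical, the function names, the canned plan and the string "actions" are stand-ins, not DeepMind's API; it only illustrates the pattern of a reasoning model producing subgoals that an action model then executes one at a time.

```python
from dataclasses import dataclass

@dataclass
class Subgoal:
    description: str  # e.g. "pick up the next item"

def plan(goal: str) -> list[Subgoal]:
    """Stand-in for the reasoning model (ER): break a goal into ordered steps."""
    canned_plans = {
        "sort the laundry": [
            "locate the laundry pile",
            "pick up the next item",
            "place whites in the white bin",
            "place colors in the color bin",
        ],
    }
    # If the goal isn't recognized, treat it as a single step.
    return [Subgoal(s) for s in canned_plans.get(goal, [goal])]

def act(subgoal: Subgoal) -> str:
    """Stand-in for the VLA model: turn one subgoal into a motor action."""
    return f"executing: {subgoal.description}"

def run(goal: str) -> list[str]:
    # The loop that matters: plan first, then act on each subgoal in order.
    return [act(s) for s in plan(goal)]

print(run("sort the laundry"))
```

The point of the pattern is that planning and acting are separate, swappable components: the planner never touches motors, and the action model never has to reason about the whole task.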
DeepMind says these models are designed for multi-step, everyday tasks like sorting laundry, packing for the weather or recycling items based on local rules.
This kind of adaptability has been the missing piece in robotics for decades.
Factories are full of rigid machines that perform a single action, over and over. But the moment the product changes, the robot has to be reprogrammed from scratch.
What DeepMind is creating is a robot that can generalize and make changes on the fly.
Just as important, they've introduced motion transfer, the ability to teach a skill once and share it across different robot bodies.
In one video, they showed a robotic arm in the lab learning how to perform specific tasks. Gemini Robotics 1.5 then enabled Apptronik's humanoid Apollo robot to reuse that knowledge without starting from scratch.

Image: DeepMind on YouTube
This will allow robots to rapidly scale the kinds of jobs they can do in the real world.
And it's why DeepMind isn't alone in these ambitions.
Nvidia has been racing down the same path. At its GTC conference in March, Nvidia CEO Jensen Huang showed off something called GR00T that's like a "brain" for humanoid robots.
It's a foundation model trained to help them see, understand and move more like people.
A few months later, Nvidia added the "muscle" when it launched Jetson Thor, a powerful computer that sits inside the robot itself. Instead of sending every decision back to the cloud, it allows robots to think and act on the spot in real time.
Together, GR00T and Jetson Thor give robots both the intelligence and the reflexes they've been missing.
Amazon has also been moving in this direction. Last year, the company began testing Digit, a humanoid robot from Agility Robotics, inside its warehouses.
Image: Agility Robotics
The trials were limited, but Amazon's goal is clear. A fleet of humanoid robots would not only never tire, they would never unionize.
Then there's Covariant, a startup that launched its own robotics foundation model, RFM-1, earlier this year.
Covariant's robots can follow natural language instructions, learn new tasks on the fly and even ask for clarification when they're not sure what to do. In other words, RFM-1 gives robots human-like reasoning capabilities.
That's a huge leap from the mindless machines we've been used to.
Sanctuary AI is building robots equipped with tactile sensors. Their goal is to make machines that can feel what they're touching.
It's an ability humans take for granted, but it's one that robots have always struggled with. Combine touch with reasoning and you can see how robots could soon handle the kind of unpredictable, delicate tasks that fill our daily lives.
But what do all these advances in robotics add up to?
Nothing less than what I've been pounding the table about for years.
The line between software and hardware is blurring as the digital intelligence of AI agents is fused with the physical capabilities of robots.
Once that line disappears, the opportunities are limitless…
And the market potential is staggering.
Goldman Sachs projects the humanoid market alone could reach $38 billion by 2035.
Meanwhile, the global robotics industry is projected to hit $375 billion within a decade, more than 5X its size today.
Here's My Take
As always, there are reasons to temper optimism with caution.
After all, real-world environments aren't the same as digital ones. Lighting changes, objects overlap and things break.
Dexterity and agility are still challenges for robots, yet safety is non-negotiable. A careless robot could injure someone.
What's more, the costs of building and maintaining these systems remain high.
But if history tells us anything, it's that breakthroughs rarely arrive fully polished.
I'm sure you remember the slow, unreliable dial-up internet of the 1990s. But that didn't stop it from becoming the backbone of the global economy.
I believe that's where we are with the convergence of agentic AI and robotics today…
But I expect things will move much faster from here.
Going forward, we're going to start dealing with machines that can think and act in the same world we live in.
And the disruption that follows has the potential to dwarf anything we've seen so far.
Regards,
Ian King
Chief Strategist, Banyan Hill Publishing
Editor's Note: We'd love to hear from you!
If you want to share your thoughts or suggestions about the Daily Disruptor, or if there are any specific topics you'd like us to cover, just send an email to dailydisruptor@banyanhill.com.
Don't worry, we won't reveal your full name in the event we publish a response. So feel free to comment away!