# News

Google DeepMind’s New AI Models Aim to Make Robots Smarter and More Useful

Date: March 13, 2025

Google DeepMind unveils Gemini Robotics, an AI system designed to make robots smarter, more adaptable, and capable of real-world problem-solving.

Google DeepMind is pushing robotics into new territory with the launch of two groundbreaking AI models: Gemini Robotics and Gemini Robotics-ER. These latest models promise to make robots smarter, more versatile, and far better at understanding our world.

Built on Google’s advanced Gemini 2.0 platform, Gemini Robotics is a vision-language-action (VLA) model that integrates sight, language processing, and physical action into a single system. Unlike traditional AI confined to digital outputs like text or images, this model enables robots to interpret natural language commands and execute complex tasks in real-world environments. 

Meanwhile, Gemini Robotics-ER (Embodied Reasoning) adds advanced spatial awareness, allowing robots to navigate and reason about their surroundings with greater precision.

“We’ve been able to bring the world-understanding—the general-concept understanding—of Gemini 2.0 to robotics,” said Kanishka Rao, a robotics researcher at Google DeepMind who spearheaded the project. In a press briefing, Rao highlighted the models’ ability to control various robots across hundreds of scenarios, even those not included in their training data.

Demonstrations showcased the technology’s potential. Robots equipped with Gemini Robotics performed tasks like folding origami, packing snacks into a Ziploc bag, and plugging devices into power strips—all in response to spoken instructions. One video featured an Apptronik-developed humanoid robot, Apollo, rearranging letters on a tabletop while conversing with a human operator. The system’s adaptability shone when objects slipped or environments changed, with robots quickly recalibrating to complete their tasks.

What’s the Big Deal?

The implications are vast. Google DeepMind is partnering with Apptronik, a Texas-based robotics firm, to integrate Gemini 2.0 into next-generation humanoid robots. “We’re excited to explore how these models can push the boundaries of what robots can achieve,” Rao added. The company is also collaborating with select testers (Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools) to refine Gemini Robotics-ER, signaling a cautious but ambitious rollout.

Carolina Parada, head of robotics at Google DeepMind, emphasized the models’ advancements in three key areas: generality, interactivity, and dexterity. “This enables us to build robots that are more capable, more responsive, and more robust to changes in their environment,” she said during the briefing. Parada noted that unlike humans, these robots don’t learn on the fly—yet—but the foundation is being laid for future breakthroughs.

Balancing Innovation with Safety

Safety remains a priority amid such rapid progress. Google DeepMind introduced ASIMOV, a new benchmark named after sci-fi author Isaac Asimov, to assess risks in AI-powered robotics. The tool evaluates whether a robot’s actions could lead to unintended consequences, such as grasping an object dangerously close to a human. “We’re building this technology with safety top of mind,” Parada said, acknowledging that commercialization is still years away.

By Arpit Dubey