Google Launches On-Device Gemini AI Model for Robots
Google DeepMind Unveils Gemini Robotics On-Device for Offline Robot Control
Google DeepMind just dropped an exciting update in the robotics space: Gemini Robotics On-Device, a vision-language-action (VLA) model that lets robots perform tasks without needing an internet connection. It builds on the Gemini Robotics model released in March, but adds one key upgrade: all processing happens locally on the robot.
The model controls a robot's movements directly, and developers can fine-tune it for different tasks using natural language prompts. Google says its performance comes close to the cloud-based Gemini Robotics model and beats other on-device models, though it didn't name which ones.
Image Credits: Google
Real-World Robot Skills: From Laundry to Assembly Lines
In demos, robots running this model successfully:
- Unzipped bags
- Folded clothes
- Handled objects they hadn't seen before, including precise tasks like industrial belt assembly
Originally trained for ALOHA robots, the model was later adapted to work on:
- Franka FR3 (a bi-arm industrial robot)
- Apptronik’s Apollo humanoid
Gemini Robotics SDK: Training Robots with Demonstrations
Google also announced a Gemini Robotics SDK, allowing developers to train robots using 50-100 task demonstrations in the MuJoCo physics simulator. This could speed up robot learning for real-world applications.
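For context on what that demonstration-driven workflow might look like, here's a minimal sketch of recording demonstration trajectories with the open-source MuJoCo Python bindings. The Gemini Robotics SDK's actual interface isn't detailed in the announcement, so the scene file (bi_arm_scene.xml) and the scripted_policy() controller below are hypothetical stand-ins.

```python
import mujoco
import numpy as np

# Load a robot scene. "bi_arm_scene.xml" is a hypothetical MJCF file standing
# in for whatever scene the SDK would ship or generate.
model = mujoco.MjModel.from_xml_path("bi_arm_scene.xml")
data = mujoco.MjData(model)

def scripted_policy(data):
    """Placeholder for a teleoperation or scripted controller that produces
    actuator targets for one timestep."""
    return np.zeros(model.nu)

demonstrations = []
for episode in range(50):          # Google suggests 50 to 100 demos per task
    mujoco.mj_resetData(model, data)
    trajectory = []
    for _ in range(500):           # fixed-length episode
        data.ctrl[:] = scripted_policy(data)
        mujoco.mj_step(model, data)
        # Record (joint positions, action) pairs for this timestep.
        trajectory.append((data.qpos.copy(), data.ctrl.copy()))
    demonstrations.append(trajectory)

# The recorded demonstrations would then be handed to the fine-tuning
# pipeline, an interface the article doesn't describe.
```

The point of the sketch is the scale, not the specifics: a few dozen recorded trajectories per task, rather than a large bespoke dataset, is what Google says the SDK needs to adapt the model.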
The Bigger Picture: AI’s Push into Robotics
Google isn’t alone in this race:
- Nvidia is building foundation models for humanoids
- Hugging Face is working on open models—and actual robots
- RLWRLD (a Korean startup) is developing foundational models for robotics
The future of AI-powered robots is heating up—and it’s happening offline, on-device, and in real time.
Want more tech insights?
Join us at TechCrunch All Stage in Boston, MA (July 15) for deep dives into AI, robotics, and venture trends. Save $200+ on All Stage passes and connect with leaders from Precursor Ventures, NEA, Index Ventures, and Underscore VC.