Date: April 17, 2025
Gemini Live expands screen and camera sharing to all Android users, aiming to make real-time AI help more accessible and useful.
In a quiet but impactful update, Google is making Gemini Live, the feature that lets you share your screen or camera with its AI assistant in real time, completely free for all Android users.
The tool, once gated behind the Gemini Advanced paywall, is rolling out to everyone using the Gemini app. And it could be a game-changer for how users interact with AI.
At its core, Gemini Live allows users to literally show the AI what they’re looking at. Want help reading a complex form? Point your camera at it. Need guidance troubleshooting something on your phone? Share your screen, and Gemini can walk you through it.
It’s designed to feel like a genuine back-and-forth conversation, a natural extension of Google’s push to make AI feel less robotic and more human.
Google hasn’t made a big song and dance about the change. The company simply posted on X on April 16: “We’ve been hearing great feedback on Gemini Live with camera and screen share, so we decided to bring it to more people ✨ Starting today and over the coming weeks, we're rolling it out to *all* @Android users with the Gemini app. Enjoy!”
Early access was limited to flagship devices like the Pixel 9 and Galaxy S25. Now, it’s making its way to the broader Android ecosystem. If you’ve got the Gemini app, you should see the new Live option appear soon.
To activate it: open Gemini, tap the "Share screen with Live" icon, and select whether you want to share your camera view or your screen. A persistent notification will let you know sharing is active, and you can end it with a single tap.
By removing the subscription barrier, Google is positioning Gemini Live as a default interaction model — one that’s more visual, more contextual, and arguably more powerful than just text prompts.
With rivals like OpenAI and Meta racing to add multimodal smarts to their assistants, this move gives Google a bit of an edge, especially with Android’s massive built-in user base.
The feature is already rolling out, and if it’s not on your phone yet, expect to see it soon.
By Arpit Dubey
Arpit is a dreamer, wanderer, and tech nerd who loves to jot down tech musings and updates. Armed with a Bachelor's in Business Administration, a knack for crafting compelling narratives, and a sharp specialization in everything from predictive analytics and FinTech to SaaS and healthcare, Arpit crafts content that's as strategic as it is compelling. With a Logician mind, he is always chasing sunrises and tech advancements while secretly preparing for the robot uprising.