
I saw the new Gemini and Project Astra, here’s why it’s the future


We’re quickly entering the realm of AI that is useful, and key to this is Project Astra, Google’s new universal AI agent that is helpful in everyday tasks. Oppo, Honor, Motorola and Tecno have all developed new ways for AI to help you in your daily life, but key to the next generation of artificial intelligence is Astra’s multimodal approach.

The premise is simple: point your phone camera at something and have a live conversation with Google Gemini, where you can ask it questions and have it provide suggestions based on what it is seeing.

The technology behind it is complex, and as you can imagine, the rollout of features is happening on a piecemeal basis. The first two features are finally ready, and ahead of their launch later this month, I got to experience a preview of them alongside other Gemini announcements. What I saw is the future of AI, and I’m super excited:

Astra features: Gemini Live Video and screen sharing

Gemini Live App on the Galaxy S25 Ultra broadcast to a TV showing the Gemini app with the camera feature open
Nirave Gondhia / Digital Trends

The big update to Gemini is the new Gemini Live, which gains new visual abilities powered by Project Astra. It makes sense that Astra features will help build the next generation of Gemini Live in more ways than one.

If you’ve been waiting for an AI that can help you understand the world around you, the new video-sharing feature will change your life. The demo involved asking questions related to a pottery business, and Gemini Live was able to understand colors, shapes, and context without needing multiple prompts.

As you’ll see in the video above, it’s absolutely exhilarating, and the possibilities are endless. I have no idea if it’s possible, but could Gemini help you change a tire or fix a common engine problem if you’ve never had to do it before? What about asking it for fashion advice, looking for a medical diagnosis, or live translation of your surroundings while traveling?

Of course, there’s also the professional use case for this, and the new Gemini Live also supports screen sharing. This will allow you to share your screen, ask questions, and have Gemini guide you through it. I can see this being particularly impactful when performing complex tasks like filing paperwork, learning an advanced subject, or completing financial and tax documentation.

These aren’t the only advancements in this new agentic AI, as Google showed off other new Gemini-powered features for products in its ecosystem.

Gemini Live can now read files, documents, and images

Document Recognition in Google Gemini Live
Nirave Gondhia / Digital Trends

Alongside the screen sharing feature on Gemini Live, Google showcased the ability for Gemini to read and understand a wide variety of images, files, and documents. This feature extends the core premise of Gemini Live to include a variety of different file types.

This feature will likely be a huge boon for students, as Google demonstrated how a student may use it. Consider a textbook page on DNA. As shown in the video, Gemini Live can explain the topic in much more detail, search its knowledge base for additional relevant information, and even create a rhyme to help you remember the key facts.

The addition of these features will elevate Gemini Live to the next level and will hopefully usher in the next era of Google Glass sooner rather than later. The demo took place in the Gemini app on the Galaxy S25 Ultra, so these features should be available to all Gemini Advanced users.

New features for Google Home: Gemini Routines

Google Home and Gemini integration on a Nest Hub Max
Nirave Gondhia / Digital Trends

This demo was clearly designed to show how Gemini AI is evolving the smart home. In many ways, Gemini will be used to deliver the long-awaited dream of the autonomous smart home.

The demo involved a relatable scenario: missing cookies. If you have children, a partner with a sweet tooth, or even a sneaky pet, the new Google Home and Gemini integration will catch them in the act.

The demo showed how Gemini can scroll through the footage from a Nest Cam, find the specific moment where the cookies disappeared, and analyze the scene, all from a simple prompt along the lines of "Who ate the missing cookies?" With a single command, Gemini can then also set up a new routine that will automatically execute the next time the culprit is spotted on that camera. I can’t wait to test routines further, especially with more complex prompts.

The future of AI is Google Gemini

Gemini Logo on the side of the Google Gemini booth at MWC 2025
Nirave Gondhia / Digital Trends

I’m impressed with Google’s Gemini rollout, at least for its smartphone efforts. The widespread rollout across hundreds of millions of Android devices and the partnerships with different phone makers to develop new features are key drivers of the growth in users and features.

Google is the consummate middleman here, helping to fuse different ideas and needs from phone makers as part of its feature roadmap. There will come a time when some features remain exclusive to a specific phone maker, but for now, it’s great that all Gemini users can test and experience these advancements.


That is, if you’re paying for Gemini Advanced. As expected, the video and screen-sharing features in Gemini Live are limited to Gemini Advanced users, and it’s unclear whether the other features will be available, in whole or in part, without a paid subscription. If you haven’t already purchased one, now might be a good time. If you need a new phone, you can also still get one year of the Google One AI plan (which includes Gemini Advanced) for free with a purchase of the Pixel 9 Pro, the Pixel 9 Pro XL, or the Pixel 9 Pro Fold.

The latest updates to Gemini have me extremely excited about the future of AI on smartphones. The early Gemini features were less useful to me because they focused on creative endeavors, while I’m more interested in productivity; that has changed quite rapidly. For iPhone users, perhaps Gemini can help fill the hole left by the delay to the new AI-powered Siri announced earlier this week.








