This article first appeared in Digital Edge, The Edge Malaysia Weekly on February 24, 2025 – March 2, 2025
At first glance, Samsung’s latest flagship phone, the Galaxy S25 Ultra, seems a modest upgrade. It is, for the most part, similar to the S24 Ultra, but with notable refinements: it is 14g lighter, features rounded screen corners, runs on an upgraded Snapdragon 8 Elite for Galaxy processor and includes a 50MP ultra-wide rear camera. Overall, it offers faster performance and improved efficiency.
What stands out most in Samsung’s newest phone are the upgrades to its Galaxy AI integration. Features from before, such as Note Assist, Chat Assist and Circle to Search, are all still here, joined by a host of new AI features. Previously, AI features felt like enhancements layered on top of the existing experience; now, they are more deeply integrated, aiming to make AI a core part of the entire phone experience.
Gemini Live, gen AI a button away
By designing the phone’s OS around a framework integrated with AI agents that connect to various apps, it is clear that Samsung is more committed than ever to making AI integration a central pillar of the smartphone experience.
Simply holding the side button while connected to the internet instantly brings up Google’s Gemini AI model, in a mode known as Gemini Live, with a Gemini chat bar at the bottom of the screen. Users can type their commands or speak directly into the microphone, and the Gemini model will respond accordingly.

These interactions can be regular Gemini or ChatGPT-type exchanges, such as asking for information, links or simple chat prompts. Interestingly, compared with other ways of reaching Gemini, the responses from Gemini Live tend to be more summarised and less in-depth than Gemini in a browser, yet more detailed than those from the official Gemini app.
After just a week of use, Gemini Live became the feature I used the most; it functions much like past phone assistants such as Google Assistant or Siri. Instead of saying “Hey, Google”, however, you simply hold the side button and can perform many of the same tasks, such as switching apps, setting alarms and answering questions.
What sets it apart from past phone assistants is that its AI agents are more deeply interlinked with the various apps and systems on the S25 Ultra, allowing for more targeted results, such as finding a particular email or photo from a prompt.
While the advantage is that accessing Gemini is intuitive and easy, mapping Gemini to the same button as the sleep function has led to moments where turning the Gemini overlay on or off instead put the phone to sleep, forcing another round of unlocking.
AI integrated into apps
Gemini AI also allows users to ask questions directly related to a video, article, webpage or photo open on the phone. Holding the side button brings up a couple of options above the Gemini chat box: “Ask about the screen/video” or “Talk Live about this”.
“Ask about the screen” is almost always available and essentially allows the user to chat with Gemini regarding what is on screen if an app is open. This ranges from summarising what is on screen to answering more specific questions. For example, in the case of a photo of a refrigerator, you can ask what recipes you can make with what is available or give feedback on a piece of photography.
Talk Live is more complicated and is restricted to video content. Limited to voice commands, the user can ask Gemini questions about a video being played, such as for a summary of the video and the key points discussed.
For example, I watched a video of people eating various types of food in a restaurant and asked Gemini for a list of all the food consumed. Its ability to generate an accurate list seemed limited, particularly depending on the length or complexity of the video.
Talk Live was disabled for video podcasts, and the “Ask about the video” feature gave me only a general list of key talking points for the whole video, rather than anything tied to specific moments in it.
Galaxy AI also added an audio eraser for video editing, which uses AI to remove background noises such as wind blowing into the microphone or the noise of crowds.
Upgrades to previous AI features
Galaxy AI’s integration with Google has seen a few upgrades. Last year, Samsung introduced Circle to Search, where holding the Home button brings up an overlay in which any element circled on screen is run through a Google Image search.
This overlay has been updated to now include a few more features, including a conventional Google search bar, an option for speech-to-text Google search, a music note icon that allows Google to listen to a song clip or the user’s humming to find a song, and the option to run whatever is on screen through Google Translate.
These features were easy to access and the song search feature proved reliable, even with simple humming. Having two sets of overlays across two separate buttons — holding the Home button for Google and holding the side button for Gemini — is cumbersome, however, and could be streamlined.
Photo-editing features have also seen an impressive update. When viewing a photo, an auto removal button appears in the top right corner of the screen. Tapping it prompts the AI to automatically detect and select various elements in the photo to be removed, usually people in the background, making crowded shots feel more personal.
Users can still manually select elements to remove or add elements from other photos and the entire process is more intuitive than before, with options and icons being more accessible and user-friendly.
The AI removal itself seems improved from last year. Previously, if I wanted a specific element removed, the AI would often select the entire area around it; for example, on an earlier model, when I wanted to remove a drink from my colleague’s hand, it would always select the entire hand. Now, it is easier to select just the drink, but the results are still not perfect: close inspection of the AI-edited picture shows that, where the drink was removed, her sleeve melts into her palm.
Z Series’ Portrait Studio and Drawing Assist introduced
Exclusive to the Galaxy Fold and Flip last year, Drawing Assist has been implemented in the S25 Ultra. This feature turns drawings, sketches or doodles done on the phone into AI art.
While experimenting with the feature on photos, I found that the AI interpreted my rough sketches surprisingly accurately, especially when doodles were added to existing images. The lack of shadows on the inserted AI element makes it obvious that it is AI-generated, but it is still a fun distraction to play around with.
While this does not quite make up for the lack of Bluetooth functionality in the S Pen, the Drawing Assist feature does give the S Pen a little more to do.
In addition, Portrait Studio from the Z series has been introduced, allowing users to turn selfies into AI portraits in one of four styles: comic, 3D cartoon, watercolour or sketch. The feature was fun to experiment with, but it is clear that the AI’s training data is based mostly on Americans or Koreans, not Malaysians.