Have you wondered who decides which Galaxy AI features make it to your Samsung smartphone and what gets shelved for another time? Ever thought about how much of Galaxy AI is Google’s smartphone AI?
Where is Samsung really trying to go with its AI strategy? Also, is Galaxy AI a paid feature after 2025?
We had the opportunity to speak to EVP Jisun Park, Head of Smartphone Software Engineering Group, Samsung Mobile eXperience, who is directly in charge of the Galaxy AI features that make it to every generation of Samsung’s flagship devices, including the Galaxy Z Fold6 and Galaxy Z Flip6.
Disclaimer: The interview was conducted in a closed-door, roundtable format with multiple media publications present. Responses by Samsung have been slightly edited for brevity and accuracy.
How does Samsung choose which features to debut on Galaxy AI?
We believe in providing the best user experience. Whenever we come up with ideas, we explore them further and investigate possible solutions as well. When we are confident about the quality and experience level, we decide on the feature(s) we want to ship with the specific product.
If we want to deliver the experience but the quality isn’t there yet, we will pause that feature until the next release. We are purely focusing on the user experience, and that’s how we decide which features to put on which Galaxy devices.
“Multimodality (combining text, images, files, and other content) is a very natural evolution of artificial intelligence,” — EVP Jisun Park.
How do you choose which older or midrange Samsung phones receive more of Samsung’s Galaxy AI features, especially on-device AI features?
We are trying to democratise AI by introducing Galaxy AI to more devices. When we make decisions about which devices to expand these features to, we consider several factors, for example, chipset performance and audio characteristics. Those factors are weighed when we expand a feature such as Live Translate to other devices. Once we are confident we can provide that experience at a level where users will be satisfied, that’s when we decide to expand to those devices. We put a lot of effort into investigating that.
We are considering expanding these features to the Galaxy A series as well, but nothing has been decided yet.
Is Samsung working on a multimodal version of Galaxy AI?
Yeah, so if you think about human perception, we can read, we can hear, and we can see images. Multimodality (combining text, images, files, and other content) is a very natural evolution of artificial intelligence.
We are working on identifying key scenarios and use cases where we can deliver the real value (of multimodality) to our customers, and also investigating the right solutions for the technology to help realise that.
So yeah, we are working on it.
How is Samsung planning to stay competitive even with brands like Apple coming up with their own AI, and partnering with OpenAI?
We have a principle of open collaboration. Right now, Google is our key partner to help realise that experience and work together on solutions. We are open to collaborating with other partners if that will help us provide the user experience we want to provide. That’s where we are right now, but there are no specifics we can share. Based on the open collaboration principle, we’re not closing off possibilities.
How long was Galaxy AI in development before Samsung brought it out to users?
Samsung has been investing heavily in AI research, alongside our strong heritage in providing the best-in-class experience to our users. I would say it was a longstanding effort in the making; there has been a lot of investment, and it’s not just about a business unit’s effort but about the research team’s effort in providing the foundation for the current AI experience.
Do Google’s Gemini and Galaxy AI share any DNA? Is Galaxy AI purely Samsung’s effort, with Gemini just a cloud offering to users?
We pursue open collaboration. Google has been our key partner. Although the Gemini app is the assistant provided by Google, we work very closely together to optimise the app, for example, optimising it for the Galaxy Z Fold6. We are also working closely together to provide other Galaxy-specific features in the near future. Because of the collaboration, the Gemini app is not purely independent from Galaxy AI.
Even for Circle To Search, we worked closely together to determine which devices to roll out on first and what features we could actually provide for Galaxy devices. A similar collaboration is happening through Gemini.
What are the top 3 use cases of Galaxy AI among consumers? Were there any surprises to the features that users used more of?
Circle To Search is the most-used Galaxy AI feature, followed by Photo Assist and Live Translate. We are also introducing new features this time with the Galaxy Z Fold6 and Galaxy Z Flip6, so we’ll continue to see which features are most popular moving forward.
Do seven years of OS and security upgrades also include upgrades to Galaxy AI features?
We want to apply Galaxy AI features to as many devices as possible, for as long as possible. If there is no hardware limitation, we will support it. However, over seven years (and the progress AI will make in that time), some features may not be supportable. We will try to maximise Galaxy AI coverage, but some features may have to be excluded as Galaxy AI expands.
Does AI have a significant impact on a device’s battery life, and would having more on-device AI processing exacerbate the issue?
Battery life and power consumption are fundamentals we improve upon whenever we ship devices. Right now, the AI features (we have) are not significantly impacting battery life; they consume a minimal share of the battery’s uptime. Even for on-device AI features, we ensure that battery consumption is minimised. We are managing this.
Regarding the Sketch to Image feature in Galaxy AI, how do you ensure that people do not abuse it and create inappropriate images?
We developed Sketch to Image by understanding user behaviour, demands, and needs. We believe users will like and need Sketch to Image. We proposed the feature to our partner and designed the architecture together to ensure minimal latency and optimal quality.
Additionally, there is concern about the inappropriate generation of content. We work with Google to ensure proper safety filters are in place, and that’s one of the most important aspects of Galaxy AI because we want to make sure Galaxy AI is a responsible AI.
We also add an AI watermark to the image and to its metadata to help prevent the misuse of generated content while protecting users’ privacy.
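For readers curious about what tagging an image in its metadata can look like in practice, here is a minimal, hypothetical Python sketch using the Pillow library. It is purely illustrative: the file names and metadata keys are assumptions for the example, not Samsung’s actual watermarking scheme.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Illustrative example only: mark a generated PNG as AI-made in its metadata.
# File names and metadata keys below are hypothetical, not Samsung's scheme.
source = Image.open("sketch_to_image_output.png")

metadata = PngInfo()
metadata.add_text("AIGenerated", "true")        # flag the image as AI-generated
metadata.add_text("Generator", "example-model") # note which tool produced it

# Save a copy with the metadata attached; viewers and platforms that read
# PNG text chunks can then detect that the image was AI-generated.
source.save("sketch_to_image_output_tagged.png", pnginfo=metadata)
```

In practice, metadata tags like this complement the visible watermark: the visible mark is obvious to anyone viewing the image, while the embedded tag can survive casual sharing and be checked programmatically.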
Have Google and Samsung agreed on how user data is used for Gemini and Galaxy AI processing?
In general, when we work with Google, we are transparent (to them) about our privacy requirements and concerns.
If you are asking about training data, user data is not used to pre-train Google’s AI models; the training dataset is completely separate from the device. User data is used only on the inference side (solutions), and it’s completely protected.
What are some challenges and opportunities of developing AI features specifically for the foldable form factor?
Foldable devices are a completely new form factor, and their usage is quite different from that of bar-type phones. One example is the Galaxy Z Fold6, which features a larger screen, so we have to focus on optimising new features for that larger display.
If you think about the Interpreter feature, folding the device provides better user interaction (between speakers). I think we are where we are supposed to be, providing the best experience by utilising foldable characteristics through AI.
Is there going to be a free tier and a paid tier of Galaxy AI?
Previously, we announced that Galaxy AI would be provided for free until the end of 2025. We will see where the industry and technology go after that. We don’t have specific plans to make features “not free” after 2025; a decision will only be made once we have more data points to observe.
Read More: vivo X Fold3 Pro, a month on: Is it the best foldable ever?