Well into the first year of on-device generative AI on phones and laptops, Snapdragon chipmaker Qualcomm looks toward the future.
The Samsung Galaxy S24 series kicked off 2024 with on-device generative AI, and we saw a ton of new AI features from phone-makers this year. But the next wave might go way beyond adding a few new features to your phone; it could change the apps we use in the future, says Durga Malladi, Qualcomm senior vice president and general manager of technology planning and edge solutions.
“We’re just entering the era of more proactive computing with pervasive AI constantly running in the background and anticipating your next move, figuring out what you might be doing next and getting input solutions before you even ask for them,” Malladi said.
It’s a bit like when Apple released the first iPhone in 2007: it took developers building apps that harnessed its expansive touch display before phone owners understood just how useful the device could be. Today we have some predictive features, like Apple suggesting focus modes throughout the day, but that’s just a drop in the predictive bucket, especially compared with what could happen if developers outside the big tech companies start building this kind of AI into their apps. Qualcomm has laid out the tech welcome mat, but developers now have to figure out what they can do with generative AI capabilities on phones, laptops and other devices to make consumers care.
That’s why Qualcomm is encouraging app developers to embrace the technology in their software.
“Developers are from all parts of the world, and it’s not easy for them to immediately start building an app, but we are trying to make it much more simple for them,” Malladi said.
This year’s slate of generative AI tools is varied. Some build on familiar AI tools we’ve seen before: the Pixel 9’s Reimagine feature, which runs on Google’s Tensor G4 chip and lets you add objects to photos, is an extension of the Magic Editor tool introduced last year.
Other features use generative AI to push the user experience in new directions, like Circle to Search, which debuted on the Galaxy S24 phones and lets users look up anything pictured on their screen by tracing a circle around it rather than manually typing out search terms. Samsung’s suite of Galaxy AI tools includes Conversation mode, which instantly translates in-person chats between languages and can translate text messages as well.
For Malladi, there are too many missing pieces to gauge how competitive Apple Intelligence will be; there still isn’t clarity on which kinds of large language models (LLMs) will be used or which applications will benefit from generative AI capabilities. The hybrid approach of handling some processing on Apple devices while sending other requests to be processed, supposedly securely, in the cloud intrigued Malladi, but without more information, “I think the jury is still out.”
Qualcomm has avoided cloud-based AI in favor of processing requests exclusively on devices running its chips, which protects privacy, makes it easier to personalize responses by consulting user data and reduces latency, speeding up results.
But Qualcomm’s lead might also depend on what third-party app developers bring to devices powered by Snapdragon chips. Just what those third-party use cases will be, the kind that make generative AI a must-have feature, is unknown in these early days.
But what is certain, according to Malladi, is that generative AI will be part of our phones, both in the apps we use and the software that powers them. Qualcomm’s Snapdragon chips have built-in support for phone-makers to incorporate LLMs, the computational models that ingest massive data sets and generate answers to prompts by predicting the next likely word or word fragment. Apple, for its part, released a white paper earlier this year explaining its progress training its own LLM.
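At their core, these models repeat one simple step over and over. The toy Python sketch below is purely illustrative and unrelated to any Qualcomm or Apple software: it shows the idea of next-word prediction by scoring a handful of candidate words, converting the scores to probabilities and picking the most likely one.

```python
# Toy illustration of next-word prediction, the basic step an LLM repeats to
# generate text. The vocabulary and scores here are invented for this example;
# a real model derives its scores from billions of learned parameters.
import math

vocab = ["the", "phone", "predicts", "your", "next", "word"]
logits = [0.2, 1.1, 0.4, 0.9, 2.3, 3.0]  # pretend model scores for each word

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
best = max(range(len(vocab)), key=lambda i: probs[i])
print(f"Most likely next word: {vocab[best]!r} (probability {probs[best]:.2f})")
```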
It’s still early for generative AI on phones, and we’re left to speculate how it could change what our phones are capable of. Some of the applications from the last year point to how generative AI may streamline our phone use by, say, giving us shortcuts to what we want (like with Circle to Search) or even predicting our wants outright by watching our usage patterns and suggesting apps or tasks.
Ultimately, AI could get us what we want faster than we could find it ourselves. But a big piece of the puzzle is still missing: the third-party apps that Qualcomm wants to encourage and empower to tap into generative AI. When they do, our relationship with those apps could change as we rely more and more on AI agents to handle things for us, taking cues from how AI-enhanced voice assistants like Siri and virtual assistants like Gemini are poised to deliver better results than their previous versions ever did.
Can a meditation app tailor a session to your specific day’s challenges? Can a navigation app automatically route you past the bakery you always stop by on Fridays? Can a messaging app surface the chats with pressing issues first and warn you if you’re not headed to a meetup on time?
A lot of these future features rely on levels of AI integration we haven’t seen yet. But then again, so many parts of our daily smartphone experience, from touchscreen-only interfaces to GPS to 5G-powered data feeds, were once novel perks we’d never seen before. Perhaps generative AI will give us our next newly essential feature before too long.