I’m here to see the future of computing. But at the moment, I’m trying to coax a butterfly onto a nectar-dipped stick.
I feel like I’m bothering the insects, but the monarch butterfly caretakers accompanying us in a 10-foot screened-in box insist that it’s okay, so I follow their instructions and keep gently prodding the feet of whichever butterfly is closest, willing it to hold on.
As we each gradually work on securing a butterfly, one of the butterfly experts asks our small group politely how our product launch is going. There’s a brief, collective silence. None of us have the energy to explain that it’s not our launch; we’re just here to cover and analyze it. But rather than explain this deeply boring backstory, someone in our group mercifully pipes up, “It’s going great.”
After many failed attempts, I finally get one of the little guys to hang on. There’s a rush of pride as I turn to the rest of the group and announce, “Look, I got one!” And then there’s nothing to do except stand awkwardly, wondering what comes next.
Qualcomm’s annual Snapdragon Summit is weird like that. Every year, the company invites a lot of industry partners, analysts, and members of the press to Hawaii to bear witness to its next flagship chip announcement. I’ll tell you right now that industry partners, analysts, and members of the press are largely indoorsy people who are wholly unaccustomed to tropical climates.
By the end of day two, I’d sweated through every item of clothing I packed and started doing laundry in my hotel room sink. On the plus side, my room’s patio is so hot that my clothes are bone-dry in a few hours. (Per The Verge’s ethics policy, we don’t accept paid trips. With the exception of a few prearranged group meals, Vox Media paid for my travel, lodging, food, and other expenses.)
Our butterfly encounter is part of a circuit of demo stations designed to show off the capabilities of the company’s latest tech. The stations are all positioned outside in the midday tropical sun, and by the time we get to the butterfly area, we are looking generally unwell and quite damp. Qualcomm has done the conscientious thing of incorporating elements of traditional Hawaiian culture into each station alongside its technology demos. Some are loosely connected; we learn the history of slack-key guitar while we try out a new audio switching technology.
Others don’t tie in as neatly, and an hour into the session, I’m not clear on what the monarch butterflies have to do with the next generation of mobile computing, but I’m too hot to care. After a while, our butterfly guides show us how to gently grasp a butterfly by holding its closed wings between two fingers, and we’re each instructed to take one out of the enclosure so we can release them en masse as we make our wishes. My mind flips rapidly through about a half-dozen, from thoughts of peace and healing for the people of Maui, where we are visitors, to, “I’d like to get out of the sun as quickly as possible.”
With our butterflies free, we step over to the tech demo station and see one of the features I’ve been waiting for: generative photo expansion. It’s a feature supported by the Snapdragon 8 Gen 3, the mobile chipset Qualcomm has just announced. You pinch and zoom out of an image and watch as generative AI fills in the borders in a matter of seconds.
The concept is neat; the demo itself is a mixed bag. It handles some preloaded scenes quite well, but when challenged to fill in part of a picture of a face, things fall apart. Later on, I see similar results: sometimes the feature is incredibly impressive, but one time, it adds a disembodied sexy leg alongside a landscape. Other demos throughout the summit are a similar mix of impressive and not-quite-right. A couple of onstage demonstrations of on-device text generation go slightly sideways: what starts as a request to plan a trip from San Diego to Seattle shifts mid-demo to a trip from Maui to Seattle. Impressive, until it isn’t.
And that kind of sums up my feelings about the vision of a generative AI future I was shown over the week. The most optimistic scenario is the picture Qualcomm executives painted for me in keynotes and a series of interviews: that on-device AI is truly the next turn in mobile computing. Our phones won’t be the annoying little boxes of apps that they’ve turned into; AI will act as a more natural, accessible interface and a tool for all the things we want our devices to do. We’ll regain the mental and emotional overhead we spend every day tapping little boxes and trying to remember what we were doing in the first place as we get lost in a sea of unscheduled scrolling.
AI could also be a real dumpster fire. There’s all the potential for misuse that could undo the very fabric of our society. Deepfakes, misinformation, you know, the real bad stuff. But the AI we’re probably going to encounter the most just seems annoying. One of the demos we’re shown features a man talking to a customer service AI chatbot about his wireless plan upgrade options, which is a totally pleasant exchange that also sounds like a living nightmare. You’d better believe that AI chatbots are about to start showing up in a lot of places where we’re accustomed to talking to a real person, while the barriers to letting you just “TALK TO A REPRESENTATIVE” grow ever higher.
To someone who isn’t constantly immersed in the whirling hot tub that is the consumer tech news cycle, this latest Coming of AI might sound thoroughly unimpressive. Hasn’t AI been around for a while now? What about the AI in our phone cameras, our assistants, and ChatGPT? The thing to know — and the thing Qualcomm takes great pains to emphasize over the course of the week — is that when the AI models run on your device and not in the cloud, it’s different.
The two keywords in this round of AI updates are “generative” and “on-device.” Your phone has already been using machine learning models to decide which part of your photo is the sky and how blue it should be. This version of AI runs the models right on your phone and uses them to make something new — a stormy sky instead of a blue one.
Likewise, ChatGPT introduced the world to generative AI, but it runs its massive models in the cloud. Running smaller, condensed models locally allows your device to process requests much faster — and speed is crucial. If you had to wait 15 or 20 seconds for confirmation every time you asked Google to set a timer, you’d never use it again. Cutting out the trip to the cloud means you can reasonably ask AI to do things that often involve several follow-up requests, like generating image options from text again and again. It’s private, too, since nothing leaves your phone. Using a tool like Google’s current implementation of Magic Editor requires that you upload your image to the cloud first.
Generative AI as a tool has well and truly arrived, but what I’m trying to understand on my trip to the tropics is what it looks like as a tool on your phone. Qualcomm’s senior vice president of technology planning, Durga Malladi, provides the most compelling, optimistic pitch for AI on our phones. It can be more personal, for one thing. When I ask for suggested activities for a week in Maui, on-device AI can take into account my preferences and abilities and synthesize that information with data fetched from the cloud.
Beyond that, Malladi sees AI as a tool that can help us take back some of the time and energy we spend getting what we want out of our phones. “A lot of the time you have to think on its behalf, learn how to operate the device.” With AI at your disposal, he says it’s “the other way around.” Big if true!
The advanced speech recognition possible with on-device language models means you can do a lot more by just talking to your phone — and voice is a very natural, accessible user interface. “What AI brings to the table now is a much more intuitive and simple way of communicating what you really need,” says Malladi. It can open up mobile computing to those who have been more or less shut out of it in the past.
It’s a lovely vision, and to be honest, it’s one I’d like to buy into. I’d like to spend less time jumping from app to app when I need to get something done. I’d like to ask my phone questions more complex than “What’s the weather today?” and feel confident in the answer I get. Outsourcing the boring stuff we do on our phones day in and day out to AI? That’s the dream.
But as I am reminded often on my trip, Qualcomm is a horizontal solutions provider, meaning it just makes the stuff everyone else builds on top of. Whatever AI is going to look like on our phones is not ultimately up to this company, so later in the week, I sit down with George Zhao, CEO of Honor, to get the phone-maker’s perspective. In his view, on-device AI will — and should — work hand in hand with language models in the cloud. They each have technical limitations: models like ChatGPT’s are massive and trained on a wide-ranging data set. Conversely, the smaller AI models that fit on your phone don’t need to be an expert on all of humanity — they just need to be an expert on you.
Referencing an example he demonstrated onstage earlier in the day, Zhao says an on-device AI assistant with access to your camera roll can help sort through videos of your child and pick out the right ones for a highlight reel — you don’t need to give a cloud server access to every video in your library. After that, the cloud steps in to compile the final video. He also reiterates the privacy advantage of on-device AI, and that its role in our lives won’t be to run all over our personal data with wild abandon — it will be a tool at our disposal. “Personal AI should be your assistant to help you manage the future — the AI world,” he says.
It’s another lovely vision, and I think the reality of AI in the near future lies somewhere between “dumpster fire” and “a new golden age of computing.” Or maybe it will be both of those things in small portions, but the bulk of it will land somewhere in the middle. Some of it really will be revolutionary, and some of it will be used for awful things. But mostly it’ll be a lot of yelling at chatbots to refill your prescription or book a flight, or asking your assistant to coordinate a night out with friends by liaising with their AI assistants.
It strikes me that the moments I appreciated the most on my trip to Maui weren’t in the tech demos or keynotes. They were in the human interactions, many of them unexpected, in the margins of my day. Talking about relentless storms on the Oregon coast with Joseph, my Uber driver. The jokes and in-the-trenches humor shared with my fellow technology journalists. The utter delight and surprise shared with other swimmers as a giant sea turtle cruised by just under the waves. (A real thing that happened!) The alohas and mahalos as I pay for my groceries and order my coffee.
Sandra, another Uber driver, has printed lists of recommended restaurants and activities in her car. One comes with a tip to “Tell them Sandy sent you,” and there’s a directive to check under the passenger seat for a notebook with more suggestions. I’d rather walk into a restaurant and say “Sandy sent me” than “My AI personal assistant sent me.”
I don’t think we’re headed for a future where AI replaces all of our cherished human interactions, but I do think a future where we all have a highly personalized tool to curate and filter our experiences holds somewhat fewer of these chance encounters. Qualcomm can set the stage and paint rosy pictures of an inclusive AI future, but that’s the job of a tech company organizing an annual pep rally in the tropics to talk about its latest chips. What happens next will likely be messy and at times ugly, and it will be defined by the companies that make the software that runs on those chips.
Qualcomm got the butterfly onto the stick. Now what?
Photography by Allison Johnson / The Verge