Design Strategy
Fire, Ready, Aim
The frantic race to push half-baked AI features into enterprise products is a cynical subversion of foundational UX principles. Customers may never forgive the companies responsible.
The other day, I noticed that the logo for a business communications tool a client makes available to me had changed. Dialpad, which bundles a variety of VoIP, video, and text messaging tools, had previously boasted a simple, clean logo that pointed to what it actually does: a pair of speech bubbles. Now, a revamped logo has dispensed with that quaint contrivance for what somebody there must believe is an upgrade: instead of speech bubbles, we get the letters A and I, in what looks a lot like the NASA font, on a gradient neon-purple background. The Dialpad logo is now just AI. Full stop. Everything they want you to know and remember about Dialpad as a brand is captured in that little square badge. Forget their communications solutions. Forget their brand strategy and legacy. Forget, certainly, what their loyal customers have come to Dialpad for. Nothing matters, apparently, except that the company has hitched its wagon to the fizziest business buzzword of the moment, even as the fizz begins rapidly to fizzle.
If we can separate the current hyperbole around AI from the hands-on reality; if we set aside legitimate concerns over what may ultimately prove unsustainable energy requirements; if we assume that the profound trouble large language models have with bias and with accurately representing fact-based reality is eventually solved, and that the myriad destructive and malicious uses to which they're being put (to say nothing of how they're being trained) are somehow mitigated, then AI may eventually prove a net benefit. It's hard to say just yet.
But what is certain is that many of the ways enterprises have rushed to bolt AI onto their products and services, never mind their brand images, reveal precious little consideration of any actual UX strategy; they trample foundational principles of user-centered product design and, often, well-established heuristics of usability. It's also a good bet that the haste to react to still-unproven market hype and to satisfy impatient investors and restless creative directors will degrade the overall user experience in the short term, making thoughtful adoption of AI by the general population in the long term less likely instead of more.
There's no doubt many casual users of these AI tools have found compelling uses for them, largely through experimentation. That's good, and to be expected. It's through this individual, iterative process of exploration and play that we'll eventually land on the optimal distribution of human-computer interaction: what we can confidently delegate to AI, and what we as humans must do ourselves.
In the meantime, however, the two-year-long rollout (if it can generously be called that) of generative AI's most popular tools to date, large language models, presents a textbook study in how not to think about technology in relation to problem solving and, especially, human beings. What we've witnessed, from otherwise design-forward companies who surely know better, is an embarrassing self-own in which technology has been allowed to lead the innovation process instead of serving as a tool in that effort. A global survey from Mind the Product recently found that only 15% of product leaders in North America and Europe report their users are embracing new AI features. Their conclusion? Business and product teams must work harder to drive the adoption of AI features. This is feature-driven, cart-before-the-horse design at its worst.
Design, fundamentally, is the craft of human-centered problem solving. Every decent designer knows that we cannot begin with a solution and devise a problem for it to solve (though one might reasonably respond that this is precisely what advertising is for). Creating a useful, usable, and delightful solution must first start with understanding exactly what problem we are solving. Only then can we create a fit-to-purpose solution.
This is as true of businesses as it is of products. The overwhelming majority of new businesses, big and small, fail not because they don't have an interesting product, a compelling technology, a strong marketing strategy, responsive support, or ample funding, but because their product does not solve a clear problem for someone. Too many otherwise intelligent business people start with the product, or the "tech," and hope that marketing and scale will do the rest: an almost certain path to irrelevance. We cannot lead with the solution. As Steve Jobs famously observed: "You've got to start with the customer experience and work backwards to the technology. You can't start with the technology and try to figure out where you're going to try and sell it."
Throwing the tech du jour at a problem, or, worse and still more common, going all in on a new technology (blockchain, crypto, autonomous vehicles, brain-computer interfaces, the metaverse and XR, etc.) without a clear understanding of what problem it is the solution to, is a recipe for throwing away a great deal of time and money, frustrating your customers, undermining your organization's mission and brand, blindly driving your business off a cliff, or all of the above.
We've been here before. The first dot-com internet bubble was a perfect storm of just such errors. Entrepreneurs of every stripe developed shiny new business ideas based on the nascent World Wide Web and hotfooted it to their nearest venture capitalist. The low interest rates of 1998 assured plentiful cheap money, and investors fell over themselves in their eagerness to fund almost any new business with a URL and a .com after its name, in many cases without a second look at the business model or the value proposition at its center. The inevitable result: the dot-com bubble burst in 2000, U.S. stocks lost some $5 trillion in market capitalization, and by October 2002 the Nasdaq had dropped nearly 78% from its March 2000 peak.
Or consider the 2013 launch of Google Glass, the "smart" headwear absolutely nobody needed. Engineers at Google X, smitten with the technological possibilities and designing mainly with themselves in mind, were blindsided (pun fully intended) by the negative public response to their toy: in public, it isolated wearers inside a private bubble of heads-up data; it allowed users to surreptitiously photograph and record anyone they looked at; it raised the specter of face recognition being used to identify even strangers; it immersed wearers in a constant stream of distracting visual notifications and data. Wearers were almost instantly derided as "glassholes," and myriad public facilities around San Francisco sprouted signs prohibiting the use of Google Glass on the premises. In the product's several-year development process, nobody at Google seems ever to have paused to ask, "What human problem are we solving?"
You'd imagine that Big Tech might have learned a lesson or two from these boondoggles. But no. See: Meta's metaverse fixation and the Apple Vision Pro. See also: driverless cars (how much new, innovative, affordable, and accessible public transportation might the gazillions sunk into this pointless pipe dream have bought us?).
An important caveat here: it's easy to rationalize every new product and service as solving a business problem (We need more users! We need more revenue! We need more growth!), but the business problem is always secondary to the people problem that the business exists to solve. (If a successful business does not exist to solve a people problem, I'd argue it's probably not a business per se but something more akin to a social parasite; see hedge funds or Trump Steaks.)
That's not to say that businesses must operate as charitable social services, but solving problems for our customers is how we best serve the business. If you're a global social media platform, pumping billions into a name change, a new brand identity, and a pivot away from your core product into "the metaverse," simply because you fantasize about people interacting as disembodied, legless digital avatars inside an immersive total-surveillance ad-delivery system, does not solve a material problem for your customers, no matter how much rebranding promotional confetti you spray at them. Directing those same billions into effective content moderation, eliminating rampant misinformation, mitigating political polarization, implementing child age-verification systems, and reducing the prevalence of scams and human trafficking would, on the other hand, address real and vexing problems your customers face daily.
When businesses fail to identify a clear human problem around which they can craft a compelling value proposition for their solution, they inevitably fall back on the all-purpose claim of improved efficiency (doing the same with less), or its close cousin, productivity (doing more with the same). But efficiency and productivity, however compelling they may be as marketing hooks, are not problems, per se, that are amenable to solutions, mainly because they have no fixed measures of success; every advance simply results in the goalposts being moved further away.
Consider: a hundred years ago, philosophers and economists (there were no "futurists" then) warned that our obsession with ever-improving efficiency would lead to a future of overabundant leisure. John Maynard Keynes, citing the torrent of technological innovation the early 20th century had unleashed, foresaw 15-hour workweeks and a world in which citizens' biggest problem was figuring out what to do with all their leisure time. And yet somehow, despite the innumerable advances in efficiency over lo these hundred years, Americans regularly report feeling overwhelmed and overworked. When was the last time anyone you know complained about having too little to do? All those time-saving tools and productivity hacks seem to have accomplished exactly nothing; in fact, less than nothing: college-educated Americans report feeling more overwhelmed, more stressed, and busier than ever. Our worries over efficiency, nurtured and commodified by Big Tech and fashioned by capitalist imperatives into a bottomless appetite for consumer "fixes" and a thriving cult of productivity, are no more a real problem than a logo that doesn't sufficiently broadcast your abasement before the tech idol of the moment.
(And anyway, the goal of the design process is rarely "efficiency" in and of itself. One end result of a good design may very well be a more efficient process, but that will be because we prioritized making the process easier for human beings to use and collaborate within, and easier for them to reach their desired outcomes. We achieve our objectives by designing for human beings and what they're trying to do, not by aiming for efficiency per se.)
All of this is not to say that we should let new technologies languish because their application is not immediately apparent. The speed with which shockingly capable large language models were released upon the world, and the pace at which they've improved since, has created the illusion that current AI tools are more akin to a discovery than an invention: some fundamental property of the universe that we have no choice but to put to immediate and profitable use. Rocketing to 100 million users in two months made OpenAI's toy the fastest-growing consumer software application in history, and the fact that it did so without a clear use case, other than being what might best be called a probabilistic content extruder, suggests that, as so many believed of the World Wide Web of 1998, it's certainly exciting, and it must be good for something.
The World Wide Web eventually figured out its purpose: to hijack our dopamine reward pathways in order to divide, track, and surveil us in the service of getting us to buy more shit. (Kidding, not kidding.) And it's fair to say the early Internet, too, started out as a novelty in search of applications, even as its origins with DARPA as an attack-resistant decentralized communications network, and Tim Berners-Lee's 1990 creation of the hypertext-based Web, gave us some strong hints about where it might prove most useful.
So the initial flurry of experimentation and exploration that followed the public release of OpenAI's first ChatGPT model in 2022 was gratifying to see and to participate in. Each of us had a sandbox in which we could safely (for the most part) explore these tools, their capabilities, and the creative uses to which they could be put. Very quickly, and to the great surprise of exactly no one, a subset of users discovered a host of malicious and nefarious applications to which the tools might be pointed. Nevertheless, the magnitude of the public interest, the generalizability of applications, and the much-hyped near-future potential for still more capable models was like upending a dump truck of bloody chum into the water for Big Tech's shareholders and for any business seeking a sliver of advantage in the never-ending war for attention.
The resulting frenzy has been a spectacle of hubris, irrationality, and greed, a desperate greased-pig catching competition in which all participants have found themselves degraded, covered in filth and the remnants of broken faith with their customers, and without a pig.
A casual glance at technology headlines at almost any time over the past several months reveals a portrait of customer experience straight from the mind of Hieronymus Bosch: Google's twisted AI Overviews scrape the nether regions of Reddit to recommend glue as a pizza topping; McDonald's AI-empowered drive-throughs screw up orders to the tune of hundreds of Chicken McNuggets and earn customer reviews like "dystopian"; Meta was so hot to get in bed with the newness that its much-hyped AI-fueled scientific writing tool, Galactica, had to be hastily euthanized three days after launch because of the torrents of authoritative-sounding scientific bullshit it was spewing, what the burned PhDs who'd tried it called "little more than statistical nonsense at scale."
Microsoft's disastrous rollout of its AI-powered "Recall" feature, which was meant to envelop our PCs in a privacy-shredding total-surveillance system because something something AI, was followed by a global meltdown of Windows machines whose origins lay in a botched update from the cybersecurity firm CrowdStrike. The outage, which affected fewer than 1% of Windows machines yet still managed to disrupt airlines, banks, retailers, emergency services, and healthcare providers around the world, shone a very bright light on exactly how fragile our global networked infrastructure is even before it's been handed over to AI systems that tell people to eat rocks and plagiarize without the slightest compunction.
The pressure to prioritize shareholder and market expectations over customer needs is so great that "AI washing" is now a thing: businesses falsely advertising to consumers that a product or business includes AI when it actually doesn't. As if, in the face of all this immiseration, customers were clamoring for more AI.
None of this, let's be clear, has the first thing to do with solving well-understood problems for customers. Mark Hurst, writing at his blog Creative Good, puts it perfectly:
"The AIs weren't developed for us. They were never meant for us. They're instead meant for the owners of the corporations: promising to cut costs, or employee count, or speed up operations, or otherwise juice the quarterly metrics so that 'number go up' just a bit more, with no regard for how it affects the customer experience, the worker experience, the career prospects of creators and writers and musicians who have been raising the alarm about these technologies for years."
IBM CEO Thomas Watson Jr. once observed that "good design is good business," a statement that's been marshaled in support of design-led business cultures since 1973. It's lost none of its relevance since then. If anything, it's become more pertinent with each passing year, as the experience economy turns technologies and products into mere commodities; and yet big-business culture continues, mystifyingly, to move further and further away from a focus on the customer in favor of "growth" at all costs, the customer be damned: what Ed Zitron calls the Rot Economy, and what Cory Doctorow has memorably described as Enshittification.
But good design is in fact good business, and good design is fundamentally a human-centered process. Any tool, however, that cannot reliably produce accurate results and has no actual understanding of human behavior is not remotely human-centered. Yes, AI can effectively mimic human behavior. But it is still pure mimicry. Just because an African grey parrot can convincingly impersonate the sound of a woman laughing does not mean it understands the first thing about either women or humor. Similarly, current AI tools have no inherent interest in or understanding of the functional, social, and emotional needs of human beings, except in the most abstract and superficial ways. And what an AI may have surreptitiously scraped from an internet database or social platform has little or no relevance to your specific users, to their specific needs, in their specific context. Throwing AI-enabled features at your product or service that you know full well may not be around in a year, or, god help us, slathering your brand with the flashy iconography of AI because you're hoping to ride the bubble to an acquisition, does nothing to help your customers solve their problems. In fact, it only creates more confusion for them, more frustration, a greater number of options to wade through, and a bigger and more impermeable barrier between them and you.
In the generation since the first dot-com bubble burst, business strategy has undergone a profound transformation. The explosive development of the design field known as User Experience, with its roots in human factors and user interface design, has led businesses everywhere to adopt a wholly new approach to competing in the marketplace, one where success lies no longer in crushing the competition but in out-competing one's rivals in the mission to better serve customers. Today, UX is a cornerstone of product strategy, a critical component of customer satisfaction, the backbone of most innovation pipelines, and a field with a voice at the executive table. From Walmart and Fannie Mae to Citigroup and State Farm, the Fortune 500 has come to understand that its fortunes rest on continuously meeting customers' needs in useful, usable, and delightful ways.
And yet. It seems you can take the man out of the Paleolithic, but you can't take the Paleolithic out of the man (not that it's all men, but…). Something about a new technology makes even the most evolved business leaders forget everything they know and revert to their most primitive, reactionary, knuckle-dragging selves. New tool good, exciting, take new tool everywhere, hit things with new tool, show everyone how strong new tool make. Bam, bam, bam!
Meanwhile, the customers whose loyalty businesses have worked so diligently for years to win have watched in dismay as the products and services they've come to rely on have been, practically overnight, colonized by an infestation of poorly considered, hastily implemented AI-fueled "features" that, like a breakout of acne, make almost any interaction a nightmarish exercise in humiliation and futility.
None of this is to say there are no practicable implementations of AI in current products and services, to say nothing of the possibilities for heretofore unimagined ways of better serving our customers. But there is only one way to do this right, and it's the same human-centered process that has enabled organizations with design-led cultures to increase their revenues and shareholder returns at nearly twice the rate of their industry counterparts: start with your customers' needs, pains, challenges, and frustrations, and work iteratively from there to the right solution and the best technology, not the other way around.
Keep experimenting with AI, by all means. Point your R&D efforts at these tools; encourage your own employees to play around with them to discover what works and what doesn't. Maybe there are even "efficiencies" to be gained, for whatever that's worth. But stop making your customers the guinea pigs in this misbegotten experiment, and stop hoping that you can work out what these tools are good for by outsourcing that thankless and time-consuming task to the only people whose goodwill and trust are likely to see you through the economic shock that will follow the inevitable implosion of an AI bubble that even Goldman Sachs says is dangerously overhyped. This time, the fallout is all but certain to take down not just a swath of forgettable and ill-considered startups but much of the valuation of all the Big Tech names dominating AI: names that comprise a significant portion of the U.S. stock market's value and are mainstays of pension funds and retirement portfolios the world over.
Business leaders love to talk about the "moat" they have in place to fend off competition; many are about to find out what happens when the threat comes from inside the fortress. Mess with your customers' experience at your peril. If you want to know which is more critical to your business, your moat or your UX, keep shoving superfluous, half-baked AI "solutions" down your users' throats and find out.