Gainesville is a medium-size city in the middle of northern Florida. It is subtropical and often beautiful: its streets are lined with water oak and sweet gum and Chickasaw plum trees, its buildings crawling with Spanish moss. It is also poor. Nearly 34 percent of the city’s 134,000 residents live below the federal poverty line—more than double the national average—and 57 percent struggle to meet basic needs. Gainesville’s academically excellent University of Florida enrolls more than fifty thousand students annually, but graduates tend to skip town after finishing their degrees. They go elsewhere in Florida, or to other states, where opportunity seems greater.
In 2013, Gainesville elected a mayor named Ed Braddy, who promised to change the economic outlook. What Gainesville needed, Braddy said, was to become “a more competitive place for new businesses and talent.” Greater “competitiveness,” he believed, would spur economic activity, keep graduates in town, and build what Richard Florida has called a “creative class.” In pursuit of this vision, Braddy and Gainesville’s City Commission appointed the Blue Ribbon Committee on Economic Competitiveness. The committee took a trip to Silicon Valley, where its members visited the legendary Palo Alto consultancy IDEO. After a conversation with IDEO’s leaders, they hired the firm to design Gainesville’s metamorphosis.
IDEO isn’t a management consultancy like McKinsey or Deloitte. It’s a design consultancy—one that sees “design” as the discipline not just of conceiving physical and digital products, but also of transforming services and institutions. Over eight weeks in late 2015, a team of IDEO designers took over a downtown Gainesville storefront, from which they interviewed hundreds of city residents. They prototyped and tested solutions to municipal problems. Finally, in collaboration with the Blue Ribbon Committee, the IDEO team published a report. It aimed to address the problem—an absence of “competitiveness” for new business and talent—that Mayor Braddy had diagnosed.
IDEO’s Gainesville Blue Ribbon Report is an optimistic document. Its opening pages use desaturated colors: gray-yellow, dull orange, soft brown. But as the report reveals its solution, its palette bursts into vibrancy. “Today the world runs on ideas,” a header announces. “We have one. And we think it’s a very good one.” That idea is for Gainesville to become “the most citizen-centered city in the world.”
How do you make a city “citizen-centered”? IDEO’s report prescribes nine changes for Gainesville. Early in the list is rebranding: adopt a new logo, tagline, and visual style. Another is to create a “Department of Doing,” an office to help people start or grow businesses in Gainesville. Finally, the report says, the city should become more design-minded. It should train city employees in “design thinking”: the use of design methods to solve problems. It should replace City Commission subcommittees with design-thinking workshops and frame policy questions as design questions. (“Instead of assuming,” for instance, “that the right answer to dealing with trees cut as a result of development is a policy to limit the amount of trees that can be cut, why not ask the question, ‘How can we maintain a desirable degree of shade and tree coverage as part of Gainesville’s overall design?’”)
Gainesville’s City Commission embraced the Blue Ribbon Report. So did the press. The Gainesville Sun endorsed the Department of Doing. In 2016, Fast Company published a long feature on “How One Florida City Is Reinventing Itself With UX Design.”
On the heels of the report’s release, city commissioners selected the director of the Blue Ribbon Committee, Anthony Lyons, who had worked closely with IDEO, to be city manager. Lyons became responsible, first on an interim and then a long-term basis, for more than a thousand city employees and the general fund budget. He gained the resources he would need to make IDEO’s recommendations real. In spring 2017, the Gainesville Sun ran an admiring profile of Lyons under the headline “Revolutionary-In-Chief.” In a photo, Lyons perches on a play structure in a local park, wearing a suit jacket, black Nikes, and black jeans. He states his mission in simple terms: “How do we make Gainesville better,” he asks, “by design?”
Design is a talismanic word with nearly infinite meanings. Design is fashion design and urban design and graphic design and product design; it’s also assay design (biology) and object-oriented design (computer science) and intelligent design (creationism) and designer drugs and designer dykes and “I think he has designs on you.” Some of these uses seem to have little in common. And yet design’s English-language lives do orbit around certain ideas: intention, planning, aesthetics, method, vocation. These ideas together form a social system that generates meaning, defining the boundaries of knowledge and the locations of cultural and economic value. Design and the ideas that travel with it, in other words, make up a discourse.
Which is also to say: they’re historical. Even in English, design hasn’t always meant what it means now. As recently as the late 19th century, people used the word to refer to most visual arts. (“The arts of design,” as in Michelangelo’s time, were all arts that might begin with what Michelangelo called disegno, drawing for the purposes of planning.) But then the discourse changed. Early in the 20th century, design came to refer to the visual styling of existing products. And then, as modernist ideas circulated in Europe before World War II and as Americans adopted the idea of “industrial design,” design began to refer not just to styling products but also to conceiving and planning their function. That was when design came to mean, as Steve Jobs put it much later, “not just what it looks like and feels like” but “how it works.”
Design means something even broader now. Sometime around World War II, it came to mean making things that “solve problems.” With the influence of mid-century global social movements and the rise of digital technology, it began to mean making things that are “human-centered.” And lately, design doesn’t have to involve making things at all. It can just mean a way of thinking.
Of all these developments, the idea of design as a broadly applicable way of thinking—the idea of “design thinking”—may end up being the most influential. The broader category of “design” can be found everywhere—in advertising (Design Within Reach, Target’s Made By Design), on television (Grand Designs, Design on a Dime, Divine Design), and in podcasts and blogs (99% Invisible, designlovefest). But “design thinking” has also reached the halls of power. You can find it in the upper reaches of corporations and governments and universities. It organizes and mediates decision-making among executives and elites. At Stanford’s d.school, as cofounder Robert Sutton has said, “design thinking” is often treated “more like a religion than a set of practices for sparking creativity.” So what is it?
This question feels urgent to me, mainly because I’ve been trying to talk about “design thinking” with design students. I’m an interaction designer. I work for a studio that makes digital tools for corporate clients. I also teach a graduate class on digital design and design research. I try to teach some practical skills—interviewing, sketching, wireframing, prototyping, usability testing—and a little theory. But lately it seems necessary to talk about “design thinking,” too, because my students are being asked to discuss it in job interviews and then do it in their jobs.
Here’s what I say “design thinking” is: using a particular set of design methods to solve problems that traditionally have fallen outside the purview of design. I show my students what designers call the “hexagon diagram,” a ubiquitous image that came out of the d.school in the mid-2000s and purports to represent the five steps of design thinking. It consists of five hexagons that read: “Empathize,” “Define,” “Ideate,” “Prototype,” and “Test.” The idea is that design thinking involves listening to and empathizing with some group of people, then using what you’ve heard to define the problem you want to solve. Then you come up with ideas, prototype those ideas, and test the prototypes to see if they work.
It’s a simple, comprehensible diagram. My students come away seeing design thinking as a digestible problem-solving method. They also come away optimistic. Design thinking, they read, has been applied in realms as diverse as education, health, government, and the transformation of corporate culture. Design thinkers are more employable, more useful, more valuable. Suddenly everything is a design-thinking problem: postpartum depression, racial injustice in sentencing, unsustainable growth.
To a person with a hammer, everything looks like a nail. But my students aren’t stupid; they’re smart. They’re picking up on something. In the worlds they inhabit, “Better by design” is a dominant structure of feeling.
Design has been a profession in the United States since the 1930s. When consumer purchasing power collapsed during the Great Depression, manufacturers embraced “industrial design” in hopes that imbuing consumer goods with artistry would entice more people to buy them. The practice stuck. By the mid-1940s, there were two thriving professional organizations for industrial designers. Designers convened at MoMA in 1946 for the “Conference on Industrial Design, A New Profession” to discuss what standards and oaths and educational requirements they might adopt for their new craft.
But the story of design thinking as such—and of how design reached its apotheosis as a floating signifier, detached from any one object or medium or output—starts with World War II. As the war began, American industrial designers entered government service en masse. They designed everything from liquid propellant rockets to molded plywood splints to a pair of strategy rooms and a giant spinning globe for the Joint Chiefs of Staff. And they exposed their collaborators—scientists, mathematicians, engineers, and others in industry, academia, and government—to the idea of the designer as an all-capable architect of clever solutions across domains.
As their collaborators dispersed back into their professions after the war, design’s sphere of influence expanded accordingly—as did the meaning of the word. Industrial designers got more work. Graphic design became recognized as a separate craft. More general “design conferences” arose—conferences focused not on a single subdiscipline (industrial design, fashion design, architecture) but on the new idea of design as a unified practice. The International Design Conference in Aspen, founded in 1951 by Walter Paepcke and Herbert Bayer, is a good example. Paepcke, president of the Container Corporation of America, believed the US to be in a “new era” in which “the influence of the designer and his consultants penetrates the entire organization.” The Aspen conference became an annual affair, attended by notable architects and industrial and graphic designers, as well as by Gloria Steinem, C. Wright Mills, Robert Rauschenberg, John Cage, Susan Sontag, and Gwendolyn Brooks. “Design” had become not just a set of occupations but a broadly pertinent domain of concern.
Broadening the scope of design was the first step toward establishing “design thinking,” which relies on capaciousness. The next step was a rise in self-consciousness about what the new capaciousness meant. Over the two decades after the war, American and British intellectuals began to ask questions about what design was. What did the word mean? What counted as design, and what didn’t? How was design different from any other way of thinking and knowing? Was it simply planning? Planning by professional planners, or the kind of planning everyone did every day?
The midcentury systems scientist C. West Churchman served as midwife to much of design’s new self-consciousness. A Quaker-educated Philadelphia native with a PhD in philosophy from the University of Pennsylvania, Churchman joined Penn’s philosophy department as an assistant professor in 1939. His doctoral work had focused on the branch of logic called propositional calculus. As the US prepared to enter the war, Churchman took a position doing more concrete, applied work at the US Ordnance Laboratory at Philadelphia’s Frankford Arsenal. Frankford Arsenal was an enormous, coordinated, modern place, with a workforce of both soldiers and civilians. Its purpose was to design, manufacture, and test munitions. Churchman became head of the mathematical section, where he solved problems of statistical quality control. He also designed experimental methods for testing the arsenal’s small arms ammunition.
When the war ended, Churchman left Frankford Arsenal, but he never returned to wholly abstract academic work. Instead, he veered into the new field of operations research: the study of the application of scientific methods to decision-making, especially decision-making for businesses and other institutions. And he became interested in design. Specifically, he studied the design of social systems that might improve the human condition. He wondered whether everyone designed—everyone planned for the future, didn’t they?—or whether design should be considered a specialized discipline.
In 1958, Churchman became a professor in the School of Business Administration at the University of California, Berkeley. (He maintained connections to the military and to industry, consulting for the US Department of Energy, the Texas Energy Council, the US Fish and Wildlife Service, NASA, and others, even as he was founding Berkeley’s graduate programs in operations research.)
At Berkeley, Churchman began a weekly faculty and graduate workshop, held in brand-new Barrows Hall, dedicated to understanding design and design methods. Colleagues called it “West’s seminar.” Its premise, one participant recalled later, was “that design is a ubiquitous activity practiced by almost everybody, at least some of the time, and that there may be some generalizable observations to be made about how people go about it.”
How do people go about design? West Churchman and his colleagues grew more energized about the question as the first conference on design methods was convened in London in 1962. The mood, at the conference and in West’s seminar, was optimistic. In 1964, the Berkeley architect and design theorist Christopher Alexander published Notes on the Synthesis of Form, an elegant little book that argued that because design problems are complex, they should be approached through simplification. Design isn’t mystical or intuitive, Alexander wrote. It’s more like math: as a mathematician does when calculating the seventh root of a fifty-digit number, a designer should simply write a problem down and break it into smaller problems. Then those problems can be reorganized into sets and subsets and patterns, which point to the right solution.
It was the early ’60s: peak season for such innocent self-assurance. Alexander and Churchman and their “methodologist” peers — designer John Chris Jones; engineer and design researcher L. Bruce Archer; economist, psychologist, and political scientist Herbert Simon — shared the belief that the design process could be fully cataloged, described, and rationalized. (Simon, who would go on to receive the Nobel Prize in Economics, believed all design was problem-solving and could be reproduced as a computer program.)
Then Horst Rittel walked into West’s seminar room. Born in 1930 and raised in Berlin, Rittel had come to Berkeley from a German college of design descended from the Bauhaus, the Hochschule für Gestaltung Ulm (HfG Ulm), an internationally admired institution whose faculty at the time was fracturing over severe political divisions. Rittel had an open, sad face and a high forehead, and he always wore a suit. Like most of the other seminar members, he had seen World War II. But he had seen it from the other side. He understood the postwar turn to rational methods differently.
Rittel, too, was fundamentally interested in why and how people designed.1 But this was where his commonality with the rest of the group ended. Unlike other seminar members, Rittel was not optimistic about rationalizing, or making a method of, anyone’s design or planning process.
The paper Rittel read aloud to West’s seminar in 1967 was not primarily about design methods but about design problems: problems that Rittel believed should be the purview of design, from poverty to the need for sanitary sewers. Rittel placed this class of problems into the unfolding historical context of the 1960s, when, as he wrote, “the unitary conception of ‘The American Way of Life’” was “giving way,” and when individuals were rightly questioning the power of the professional class to make decisions on their behalf.
What united these problems, Rittel said, was first that the actual problem was always indeterminate. It was hard to tell, in other words, if you had diagnosed the problem correctly, because if you dug deeper — why does this problem occur? — you could always find a more fundamental cause than the one you were addressing. These problems also didn’t have true or false answers, only better or worse solutions. They were not, indeed, like math problems. There was no definitive test of a solution, no proof. More effort might not always lead to something better.
There were other ways these problems were not like math. They had intrinsically high stakes, wrote Rittel and a colleague, Melvin M. Webber, when they published Rittel’s talk as a paper. Any solution implemented would leave “traces” that couldn’t be undone. “One cannot build a freeway to see how it works, and then easily correct it after unsatisfactory performance,” they wrote. “Large public works are effectively irreversible, and the consequences they generate have long half-lives.” The designer had no “right to be wrong,” because these problems mattered. Human lives, or the quality of human lives, were on the line.
Rittel called them “wicked problems.” They were “wicked” not because they were unethical or evil, but because they were malignant and incorrigible and hard. There did exist simple problems that didn’t rise to this level. But “now that [the] relatively easy problems have been dealt with,” the problems worth designers’ time were the wickedest ones. The hardest problems of heterogeneous social life called for designers’ exclusive focus and concentration.
For Rittel, design problems’ wickedness meant that they could never be subject to a single process of resolution. There could be no one “method.” Textbooks tended to break down, say, engineering work into “phases”: “gather information,” “synthesize information and wait for the creative leap,” et cetera. But for wicked problems, Rittel wrote, “this type of scheme does not work.” Understanding the problem required understanding its context. It wasn’t possible to gather full information before starting to formulate solutions. Nothing was linear or consistent; designers didn’t, couldn’t, think that way. If there was any describing the design process, it was as an argument. Design was a multiplicity of critical voices batting a problem around unknown terrain until it formed itself, or not, into some kind of resolution.
This methodlessness, Rittel believed, was a wonderful thing. It entailed, he wrote later, an “awesome epistemic freedom” (the italics are his), without algorithmic guardrails or rules of validity. “It is not easy to live with epistemic freedom,” he wrote, and so designers often sought out Sachzwang — practical constraint, inherent necessity, “a device to ‘derive ought from fact.’” But they shouldn’t. Without methodological constraint, design had room for heterogeneity. It had the capacity to surprise. “Nothing has to be or to remain as it is,” Rittel wrote, “or as it appears to be.”
For Rittel’s peers who had been committed to identifying and systematizing the one true design method, it was particularly difficult to live with this epistemic freedom. Rittel’s uncompromising, rigorous, calmly academic voice of refusal, together with the incorrigible political and social conflicts of the 1960s, helped spell the end of their rationalist project. John Chris Jones, who had co-organized that first design methods conference in London in 1962, now, in his words, “reacted against design methods.” Christopher Alexander repudiated his own methodological work, including Notes on the Synthesis of Form. “I would say forget it,” he wrote in 1971, “forget the whole thing.”
Rittel had set his sights high. Design should address the hardest problems facing civilization. In helping to bring his peers to this point of view, he had begun, albeit quietly, a broadening of design’s discursive scope. At the same time, he argued that every designer would, and should, approach this expansive project differently. Exactly because the problems were complex, no one method would do.
In 1971, before Rittel and Webber’s full article was published, the Austrian American designer Victor Papanek published a book that began to push these claims into the public discourse. Design for the Real World, which its publisher today calls “one of the world’s most widely read books on design,” exhorts designers to tackle only complicated problems: ecological catastrophe, say, rather than “mass leisure and phony fads.” Papanek, too, believed that trying to extract replicable methods from design — developing “rules, taxonomies, classifications, and procedural design systems” — was folly.
Capaciousness had led to self-consciousness. Now self-consciousness produced the consensus that capaciousness precluded any one method. In 1987, Peter Rowe published an ethnographic study of designers called Design Thinking (this may be the first printed instance of the phrase). But Rowe’s study of observed evidence concluded, just as Rittel and Papanek had argued, that in fact there was no one “design thinking.” “Rather,” Rowe wrote, “there are many different styles of decision making, each with individual quirks as well as manifestations of common characteristics.” It had become a commonplace that there was no one way to do design. The more interesting question was how to observe and negotiate the proliferation of differences.
In 1978, a 27-year-old electrical engineer from Ohio, David Kelley, finished a product design master’s degree at Stanford and joined with a classmate to start a design studio. They opened above a dress shop in Palo Alto and hired four more friends from graduate school. Former professors began to send clients their way. The breakthrough came when someone introduced Kelley to Steve Jobs, who asked him to design the mouse for the new Apple Lisa. Kelley wondered what a “mouse” was. He used a butter dish and a ball from a roll-on deodorant, among other things, to make the first prototypes. “We’d ask,” he recalled, “should you use the mouse with your fingertips or slide it like a bar of soap?” The mechanism the company designed is still in use. In 1991, David Kelley Design — his cofounder had departed — merged with firms owned by Bill Moggridge, who designed the first laptop computer, and visual designer Mike Nuttall. After an all-company contest, they called their new firm IDEO: a word fragment, Kelley recalled later, that resonated in part because it sounded like idea and ideology.
IDEO was native-born to the new Silicon Valley. Its members were industrial designers: people who designed physical products with an eye to both aesthetics and function. They were also “interaction designers” — the term newly invented by Moggridge himself — in that they used the new science of “human factors” to design interactions between people and machine interfaces. Given their expertise in both industrial and interaction design as well as engineering, they were well positioned to win jobs designing things that Silicon Valley and friends were just discovering they needed. IDEO designed a user-friendly portable defibrillator, a revamped PalmPilot, a fast-acting mealtime insulin pen, and the three-and-a-half-ton mechanical orca for Free Willy. By the early 2000s, the firm had worked on thousands of products, most of which bridged physical and digital worlds. Its revenues were reportedly in the high tens of millions, and it was opening new studios in Munich, Tokyo, and Milan.
When the dot-com bubble burst, revenues fell. IDEO depended heavily on internet start-up clients, and more heavily yet on its clients’ confidence in the future. What to do? In 2003, David Kelley had an epiphany: Why not rebrand what they already did? Why be “a guy who designs a new chair or car,” Kelley later recalled to Fast Company — or, indeed, a software interface — when he could be “an expert at methodology”? Suddenly, Kelley said, “it all made sense.” They would stop referring to IDEO’s approach as “design.” Instead, they would call it “design thinking.”
“I’m not a words person,” Kelley noted, “but in my life, it’s the most powerful moment that words or labeling ever made.” Kelley was not familiar with earlier designers’ and design theorists’ disappointment with methodologies and methodological expertise. But he knew his own bottom line, and he knew his market.
I have been told, both by fancy professors and by corporate “thought leaders,” that if you want to win big in life you should coin a memorable phrase. Design thinking turned out to be a memorable phrase. It was “design thinking” — not the Apple mouse, not the lifesaving portable defibrillator, not Free Willy — that made IDEO the world’s most famous design firm. It gave David Kelley the clout to start the d.school at Stanford. It’s the ideology that drives hundreds of thousands of people worldwide to participate in the OpenIDEO community, whose volunteer chapters in thirty cities organize events around IDEO-conceived “design-thinking challenges.” And it has contributed to the weird spell under which IDEO seems to hold the design world. “I’ve been a professional designer for almost twenty years,” the well-regarded former Google Ventures designer Jake Knapp recently wrote, “and the entire time, I’ve been obsessed with IDEO. What goes on inside? How does it work?” Several years ago, I had some meetings in IDEO’s Cambridge, Massachusetts, studios, and as a then-aspiring designer I felt pride and awe simply being there. This is pretty ridiculous: IDEO is just another multinational corporation. But it’s a multinational corporation whose niche branding and marketing, funded by the success of “design thinking,” have been so phenomenally successful as to seem like straight sorcery.
Design thinking’s proponents don’t define it all that clearly. “Put simply,” IDEO CEO Tim Brown wrote in the Harvard Business Review in 2008, it “is a discipline that uses the designer’s sensibility and methods to match people’s needs with what is technologically feasible and what a viable business strategy can convert into customer value and market opportunity.” This is, to be sure, not put as “simply” as one might hope.
But the stories of design thinking in practice are clear enough. The health care consortium Kaiser Permanente hired IDEO to address the problem of losing important patient information as Kaiser nurses handed off their shifts. Through a series of workshops, in which participants presumably empathized with nurses’ experiences, defined the problem, ideated potential solutions, and prototyped and tested them, IDEO and Kaiser devised a new shift-change process: nurses would hand off important patient information in front of the patients themselves.
Another example: In 2006, Colombia’s Ministry of National Defense approached a Bogotá-based advertising agency, Lowe-SSP3, seeking a campaign to convince the guerrilla fighters of the Revolutionary Armed Forces of Colombia, or FARC, to demobilize. The agency followed the hexagons. They conducted in-depth interviews with former guerrilla combatants (“empathize”). They found that what the combatants missed most while mobilized were their families. Then the agency prototyped, and in 2010 launched a campaign called “Operation Christmas.” Ten giant jungle trees, each near a guerrilla stronghold, were strung with two thousand motion-activated Christmas lights and banners reading si la navidad pudo llegar hasta la selva, usted también puede llegar hasta su casa. desmovilícese. en navidad todo es posible. (“If Christmas can come to the jungle, you can come home. Demobilize. At Christmas everything is possible.”) The campaign — and additional iterations launched over the next three years — were credited with motivating many guerrillas to demobilize. (Other factors certainly also drove this trend.)
To be a design thinker, then, is to see a hospital shift change and a guerrilla war as design problems. It is to see “design,” whatever the word might mean, as applicable to just about anything. But even as “design thinking” rendered “design” yet more capacious, it also jettisoned the self-conscious suspicion of “methodology” at which designers, following Horst Rittel, had arrived in the ’60s. Design thinking was unambiguously a recipe, a formula, a five-step program. The stories of Kaiser and Colombia are stories of a defined and tidy linear process, a jaunt from one colored hexagon to the next.
It was design for a service economy: memorable, saleable, repeatable, apparently universal, and slightly vague in the details. Horst Rittel had convincingly described the folly of trying to define or rationalize design’s “how”; IDEO’s template for design thinking brought back the “how” with a vengeance.
So it was that in the United States in the early 2000s, design again became not just a method but a universal method — and a method that seemed a little bit magical. It applied to everything, and anyone could do it. “Contrary to popular opinion,” read a sidebar in Brown’s 2008 Harvard Business Review essay, “you don’t need weird shoes or a black turtleneck to be a design thinker.” You didn’t need, in fact, to be a designer. All you needed was a set of designerly qualities — empathy, “integrative thinking,” optimism, experimentalism, a collaborative nature — and that brightly colored five-step map.
By the early years of our current decade, IDEO would give anyone a self-paced video course to learn design thinking. (The course is now “Insights for Innovation”: $499 for five weeks online.) The Stanford d.school had begun offering an executive-education “Design Thinking Bootcamp.” (This runs a less modest $12,600 for four days.) “The idea that the design process can be usefully applied outside its conventional context,” a New York Times columnist observed, “has triggered an explosion of activity that ranges from using design as a medium of intellectual inquiry to devising ingenious solutions to acute social problems like homelessness and unemployment.” Meanwhile, a Chronicle of Higher Education headline asked: “Is ‘Design Thinking’ the New Liberal Arts?”
In Gainesville, city manager Anthony Lyons pursued design-driven transformation for three years. Fifty city employees received training in design thinking. The city opened the new Department of Doing. The city website received an overhaul. With the help of a local branding agency, Gainesville rolled out a new logo and visual identity.
There were other successes. Lyons’s initiatives strengthened the city’s relationship with the University of Florida, which in 2017 joined Lyons formally in a partnership to make Gainesville “the New American City.” The joint initiative, and many of Lyons’s other projects, received endorsements from the Gainesville Sun.
But there also was resistance. In June 2017, Paul Folkers, one of Gainesville’s two assistant city managers, resigned without warning. Within the week, Lyons replaced him with Dan Hoffman, the former chief innovation officer for Montgomery County, Maryland. That same month, the local NAACP filed a complaint against Lyons on behalf of city employees, alleging that Lyons had “fired qualified people who are professional public administrators because they question his actions, and according to him, they serve him at his will.” The complaint said he had forced resignations, passed over qualified city employees for promotions, and hired people — many of them, allegedly, “Millennials” — who were from outside the city and less experienced than internal Gainesville applicants. Human resources director Cheryl McBride, who also resigned, filed an overlapping complaint with the city’s Equal Opportunity Office.
McBride and Lyons settled McBride’s claim. The city responded to the NAACP complaint with an investigation into Gainesville’s hiring practices. The investigation did not find that Lyons had created a hostile work environment. It did find, however, that Lyons’s team had skirted or violated Gainesville policies in various hiring activities.
More departures followed. In August 2018, a longtime city spokesman, Bob Woods, and finance director Chris Quinn resigned. (In 2017 the previous finance director, April Shuping, had also resigned, after a twelve-year tenure.) The second assistant city manager, Fred Murry, who had focused on affordable housing, resigned as well. By early winter, three of the six city commissioners other than the mayor — the only two commissioners of color, and the one additional woman — were advocating for a public hearing regarding alleged forced resignations, low employee morale, and high employee turnover, among other issues. The hearing’s real purpose, as a Sun editorialist, Ron Cunningham, put it, was “so everybody and his brother can weigh in about whether City Manager Anthony Lyons ought to be fired.” In December 2018, the City Commission passed the motion to conduct the hearing. Preempting them, Lyons resigned.
Gainesville is among the poorest cities in America, and one of the least racially equitable in the distribution of income and resources. Black residents, who make up 22 percent of the city population, live largely in East Gainesville, where residents report severely limited grocery options, inadequate transportation, and poor street lighting. The median household income for Black residents of Gainesville’s county is $26,561 — just over 50 percent of the median household income for non-Hispanic whites. High school graduation rates for Black residents of the county are 18 percent lower than those of white residents. Black residents are almost 2.5 times as likely as white residents to be unemployed.
Lyons and IDEO’s design-driven project aimed to solve the alleged problem of insufficient “competitiveness.” That problem, as stated — and the changes Gainesville instituted to address it, including beautiful graphic design, better web resources, and that friendly new office called the Department of Doing — had at best a tenuous relationship to the experiences of many of Gainesville’s poor and Black residents. Although the plans were intended to boost Gainesville’s economy on the whole, they did not create affordable housing, eradicate food deserts, or raise high school graduation rates. They didn’t address those for whom “competitiveness” seemed a distant problem. They seemed to leave much of Gainesville behind.
Meanwhile, the city was losing visible Black staff, including staff whose jobs did address these problems. (Fred Murry and Cheryl McBride are both Black, as are many other former and current Gainesville leaders.) And Lyons’s high-profile hires — including Assistant City Manager Hoffman and Department of Doing director Wendy Thomas — were, like Lyons himself, white people recruited from out of state. Lyons, the Gainesville Sun editorial board wrote, “didn’t help his cause by pushing through changes, however laudable, without working more to build consensus with staff and community stakeholders.”
Lyons’s resignation signaled the end of Gainesville’s design experiment. The mayor choked up as he said his public goodbye to Lyons; the director of Gainesville’s Community Redevelopment Agency praised Lyons’s “hairy and audacious” ideas. Others were less convinced. “Gainesville is not a Silicon Valley startup,” one resident told the Alligator, the newspaper of the University of Florida. “Looking good in a magazine is not a marker of true success.”
I don’t think Gainesville’s design experiment did irreparable damage to the city. I do think that it promised much more than it could have delivered. Beneath most problems (“competitiveness”), Horst Rittel would remind us, lie wickeder problems. “Design thinking” can’t solve the wicked problems that organize Gainesville’s inequality: poverty, income disparity, structural racism, environmental injustice, unregulated market capitalism. You face wicked problems by struggling with them, not by solutioning them. You argue, you iterate, you fail, you grieve, you fight.
Horst Rittel believed design could, in fact, entail all of that: it could mean fighting one’s way to an honest and collaborative approach to a problem of genuine complexity. Anyone could participate in such a fight. But there was no one method to get there, and no guarantee that it would work out.
This is what worries me about design thinking: its colossal and seductive promise. There was an earlier Anglo-American vogue for design — a love affair with industrial design, beginning in the Depression era — but it was relatively benign in its claims and its outcomes. This more recent vogue for design thinking seems more insidious because it promises so much more. It promises a creative and delightful escape from difficulty, a caper through the Post-it Notes to innovative solutions. And it promises this as a service, delivered at what is often great cost — not just to IBM and Intuit and Starbucks, but to villages and nonprofit organizations and cities like Gainesville without enormous resources to spare.
There’s another problem, too. By embracing “design thinking,” we attribute to design a kind of superior epistemology: a way of knowing, of “solving,” that is better than the old and local and blue-collar and municipal and unionized and customary ways. We bring in “design thinkers” — some of them designers by trade, many of them members of adjacent knowledge fields — to “empathize” with Kaiser hospital nurses, Gainesville city workers, church leaders, young mothers, and guerrilla fighters the world over. Often, as in Gainesville, the implicit goal is to elevate the class bases of the institutions that have organized their informants’ lives. Only within this new epistemology can such achievements be considered unambiguously good.
And yet this is the ur-story of the multi-act American romance with the idea of design. Americans love design most when we’re afraid. Just as the Depression enabled industrial design to present itself as the solution to US manufacturing woes, the 2000 to 2002 dot-com crash and 2008 recession, with their long tails, have enabled a new embrace of design and a new broadening of design’s imagined jurisdiction. This time the specific fear is that the knowledge economy is coming for everyone. Bewildered and anxious leaders, public and private, have responded by throwing in their lots with the seemingly magical knowledge-work that is design.
But design isn’t magic. To address a wicked problem is to look for its roots — and there’s no hexagon map for getting there. Stop at “insufficient competitiveness” and what you get is a solution that can be tidy exactly because it doesn’t touch the deep causes of Gainesville’s economic stagnation. You get a solution that’s indifferent to the legacies of slavery and segregation, to the highway projects that systematically cut off and blighted East Gainesville, to East Gainesville’s miserable public transportation, and to Florida’s $8.46 minimum wage. Stop at that top turtle and you miss that it’s turtles all the way down.
Better to acknowledge, as Rittel wrote in 1988, that the top turtle often obscures real, substantial, and inconvenient difference. There is no consensus as to how resources should be distributed, social life arranged, justice done. To design, really design, is to acknowledge those divergences — and then to listen one’s way, and push one’s way, to somewhere new. Such battles from competing positions can be truly wicked, Rittel believed, but it’s better to fight than to obscure irresolution with optimism. He had a point. Design may come in an elegant package, but it doesn’t always make things right.
1. Early in his career, Rittel often called design planning. Design, as he saw it, was the planning of complex systems, environments, and tools. Later in his career, though, as critics came to associate planning with precisely the kind of top-down rationalism that Rittel would critique, design took a more prominent place in his scholarly vocabulary.