One day in early June 2018, Sara-Jayne Terp, a British data scientist, flew from her home in Oregon to Tampa, Florida, to take part in an exercise that the US military was hosting. On the anniversary of D-Day, the US Special Operations Command was gathering a bunch of experts and soldiers for a thought experiment: If the Normandy invasion were to happen today, what would it look like? The 1944 operation was successful in large part because the Allies had spent almost a year planting fake information, convincing the Germans they were building up troops in places they weren't, broadcasting sham radio transmissions, even staging dummy tanks at key locations. Now, given today's tools, how would you deceive the enemy?
Terp spent the day in Florida brainstorming how to fool a modern foe, though she has never seen the results. "I think they instantly classified the report," she says. But she wound up at dinner with Pablo Breuer, the Navy commander who had invited her, and Marc Rogers, a cybersecurity expert. They started talking about modern deception and, in particular, a new danger: campaigns that use ordinary people to spread false information through social media. The 2016 election had shown that foreign countries had playbooks for this kind of operation. But in the US, there wasn't much of a response, or a defense.
"We got tired of admiring the problem," Breuer says. "Everybody was looking at it. Nobody was doing anything."
They discussed creating their own playbook for tracking and stopping misinformation. If someone launched a campaign, they wanted to know how it worked. If people worldwide started reciting the same strange theory, they wanted a sense of who was behind it. As hackers, they were used to taking things apart to see how they worked: using artifacts lurking in code to trace malware back to a Russian crime syndicate, say, or reverse engineering a denial-of-service attack to find a way to defend against it. Misinformation, they realized, could be treated the same way: as a cybersecurity problem.
The trio left Tampa convinced there had to be a way of analyzing misinformation campaigns so researchers could understand how they worked and counter them. Not long after, Terp helped pull together an international group of security experts, academics, journalists, and government researchers to work on what she called "misinfosec."
Terp knew, of course, there's one key difference between malware and influence campaigns. A virus propagates through the vulnerable end points and nodes of a computer network. But with misinfo, those nodes aren't machines, they're humans. "Beliefs can be hacked," Terp says. If you want to guard against an attack, she thought, you have to identify the weaknesses in the network. In this case, that network was the people of the United States.
So when Breuer invited Terp back to Tampa to hash out their idea six months later, she decided not to fly. On the last day of 2018, she packed up her red Hyundai for a few weeks on the road. She stopped by a New Year's Eve party in Portland to say goodbye to friends. A storm was coming, so she left well before midnight to make it over the mountains east of the city, skidding through the pass as highway workers closed the roads behind her.
Thus began an odyssey that started with a 3,000-mile drive to Tampa but didn't stop there. Terp spent almost nine months on the road, roving from Indianapolis to San Francisco to Atlanta to Seattle, developing a playbook for tackling misinformation and promoting it to colleagues in 47 states. Along the way, she also kept her eye out for vulnerabilities in America's human network.
Terp is a shy but warm middle-aged woman, with hair that she likes to change up: now gray and cropped short, now a blond bob, now an auburn-lavender hue. She once gave a presentation called "An Introvert's Guide to Presentations" at a hacker convention, where she recommended bringing a teddy bear. She likes finishing half-completed cross-stitches she buys at second-hand stores. She is also an expert at making the invisible visible and detecting submerged threats.
Terp began her career working in defense research for the British government. Her first gig was developing algorithms that could combine sonar readings with oceanographic data and human intelligence to locate submarines. "It was big data before big data was cool," she says. She soon became interested in how data shapes beliefs, and how it can be used to manipulate them. This was during the Cold War, and maintaining the upper hand meant knowing how the enemy would try to fool you.
After the Cold War ended, Terp shifted her focus to disaster response; she became a crisis mapper, collecting and synthesizing data from on-the-ground sources to create a coherent picture of what was really happening.
It was during disasters like the Haiti earthquake and the BP oil spill in 2010, when Terp's job included amassing real-time data from social media, that she started to notice what seemed to be intentionally false information engineered to sow confusion in an already chaotic situation. One article, citing Russian scientists, claimed the BP spill would collapse the ocean floor and cause a tsunami. Initially, Terp considered such stories isolated incidents, garbage clogging her data streams. But as the 2016 election drew near, it became clear to her, and many others, that misinformation campaigns were being run and coordinated by sophisticated adversaries.
As Terp crisscrossed the country in 2019, it was a little like she was crisis-mapping the US. She'd stop to people-watch in coffee shops. She struck up conversations over breakfast at Super 8 motels. She wanted to get a feel for the communities people belonged to, how they saw themselves. What were they thinking? How were they talking to each other? She gathered her impressions slowly.
In Tampa, Terp and Breuer swiftly got down to plotting their defense against misinfo. They worked from the premise that small clues, like particular fonts or misspellings in viral posts, or the pattern of Twitter profiles shouting the loudest, can expose the origin, scope, and purpose of a campaign. These "artifacts," as Terp calls them, are bread crumbs left in the wake of an attack. The most effective approach, they figured, would be to organize a way for the security world to trace those bread-crumb trails.
Because cybercriminals tend to cobble together their exploits from a common inventory of techniques, many cybersecurity researchers use an online database called the ATT&CK Framework to analyze intrusions; it's like a living catalog of all the forms of mayhem in circulation among hackers. Terp and Breuer wanted to build the same kind of library, but for misinformation.
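For readers who want to poke at that catalog themselves, MITRE publishes ATT&CK as a machine-readable STIX 2 bundle in its public mitre/cti repository. The sketch below assumes that bundle's standard field names (the article doesn't describe any particular tooling) and lists each technique alongside the attack phases it maps to:

```python
import json
from urllib.request import urlopen

# MITRE publishes the ATT&CK catalog as a STIX 2 JSON bundle; this URL
# points at the enterprise matrix in the public mitre/cti repository.
URL = ("https://raw.githubusercontent.com/mitre/cti/"
       "master/enterprise-attack/enterprise-attack.json")

with urlopen(URL) as resp:
    bundle = json.load(resp)

# Techniques are "attack-pattern" objects; the kill_chain_phases field
# maps each one onto the tactics (phases) of an attack.
for obj in bundle["objects"]:
    if obj.get("type") != "attack-pattern" or obj.get("revoked"):
        continue
    tid = next((ref["external_id"]
                for ref in obj.get("external_references", [])
                if ref.get("source_name") == "mitre-attack"), "?")
    phases = ", ".join(p["phase_name"]
                       for p in obj.get("kill_chain_phases", []))
    print(f"{tid}  {obj['name']}  ->  {phases}")
```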
Terp stayed in Tampa for a week before hitting the road again, but she kept working as she traveled. To seed their database, the misinfosec team dissected earlier campaigns, from 2015's Jade Helm 15 military training exercise, which on social media was twisted into an attempt to impose martial law in Texas, to the Russia-linked Blacktivist accounts that stoked racial division before the 2016 election. They were trying to parse how each campaign worked, cataloging artifacts and identifying strategies that showed up again and again. Did a retweet from an influencer give a message legitimacy and reach? Was a hashtag borrowed from another campaign in hopes of poaching followers?
Once they could recognize patterns, they figured, they would also see choke points. In cyberwarfare, there's a concept called a kill chain, adapted from the military. Map the phases of an attack, Breuer says, and you can anticipate what the attacker is going to do next: "If I can somehow interrupt that chain, if I can break a link somewhere, the attack fails."
The misinfosec group eventually developed a structure for cataloging misinformation techniques, based on the ATT&CK Framework. In keeping with their field's tolerance for acronyms, they called it AMITT (Adversarial Misinformation and Influence Tactics and Techniques). They've identified more than 60 techniques so far, mapping them onto the phases of an attack. Technique 49 is flooding, using bots or trolls to overtake a conversation by posting so much material it drowns out other ideas. Technique 18 is paid targeted ads. Technique 54 is amplification by Twitter bots. But the database is just getting started.
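To make that structure concrete, here is a toy, in-memory slice of what such a catalog might look like in code. It includes only the three techniques the article names, and the phase labels are illustrative assumptions, not values quoted from AMITT itself:

```python
# A toy slice of an AMITT-style catalog. Only the three techniques
# named in the article appear here, and the phase labels are
# illustrative assumptions, not values from the real database.
AMITT_SLICE = {
    18: {"name": "Paid targeted ads", "phase": "Preparation"},
    49: {"name": "Flooding", "phase": "Execution"},
    54: {"name": "Twitter bot amplification", "phase": "Execution"},
}

def techniques_in_phase(phase):
    """List catalog techniques that occur during a given attack phase."""
    return [f"Technique {num}: {t['name']}"
            for num, t in sorted(AMITT_SLICE.items())
            if t["phase"] == phase]

print(techniques_in_phase("Execution"))
# ['Technique 49: Flooding', 'Technique 54: Twitter bot amplification']
```

Mapping techniques onto phases is what turns the catalog into a kill chain: if a campaign depends on an Execution-phase technique, disrupting it there breaks the link Breuer describes.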
Last October, the team integrated AMITT into an international, open source threat-sharing platform. That meant anyone, anywhere, could add a misinformation campaign and, with a few clicks, specify which tactics, techniques, and procedures were at play. Terp and Breuer adopted the term "cognitive security" to describe the work of preventing malefactors from hacking people's beliefs, work they hope the world's cybersecurity teams and threat researchers will take on. They foresee burgeoning demand for this sort of effort, whether it's managing a brand's reputation, guarding against market manipulation, or protecting a platform from legal risk.
As Terp drove, she listened to a lot of talk radio. It told one long story of a nation in crisis: of a liberal plot to ruin America and of outsiders intent on destroying a way of life. Online, people on the left, too, were constantly agitated by existential threats.
This kind of fear and division, Terp thought, makes people perfect targets for misinformation. The irony is that the folks who hack into those fears and beliefs are typically hostile outsiders themselves. Purveyors of misinformation always have a goal, whether it's to destabilize a political system or just to make money. But the people on the receiving end usually don't see the big picture. They just see #5G trending or a friend's Pizzagate posts. Or, as 2020 got off the ground, links to sensational videos about a new virus coming out of China.
This February, Terp was attending a hacker convention in DC when she started feeling terrible. She limped back to an apartment she'd rented in Bellingham, north of Seattle. A doctor there told her she had an unusual pneumonia that had been moving through the area. Weeks later, Seattle became the first coronavirus hot spot in the US, and soon the Covid pandemic began to run in parallel with what people described as an "infodemic," a tidal wave of false information spreading along with the disease.
Around the same time Terp fell sick, Breuer's parents sent him a slick Facebook video claiming that the novel virus was a US-made bioweapon. His parents are from Argentina and had received the clip from worried friends back home. The video presented a chance to put AMITT through its paces, so Breuer began cataloging artifacts. The narration was in Castilian Spanish. At one point the camera pans over some patent numbers the narrator claims are for virus mutations. Breuer looked up the patents; they didn't exist. When he traced the video's path, he found it had been shared by sock-puppet accounts on Facebook. He called friends across Latin America to ask if they'd seen the video and realized it had been making its way through Mexico and Guatemala two weeks before showing up in Argentina. "It was kind of like tracking a virus," Breuer says.
As Breuer watched the video, he recognized several misinformation techniques from the AMITT database. "Create fake social media profiles" is technique 7. The video used fake experts to seem more legitimate (technique 9). He thought it might be planting narratives for other misinformation campaigns (technique 44: seeding distortion).
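In code terms, Breuer's walkthrough is a tagging exercise: each artifact observed in the video gets annotated with a technique number from the catalog. A minimal sketch, with artifact descriptions paraphrased from the article rather than taken from any official schema:

```python
# Each observed artifact is tagged with the AMITT technique number the
# article mentions; descriptions are paraphrased, not official names.
video_artifacts = [
    ("shared by sock-puppet Facebook accounts", 7),   # fake profiles
    ("fake experts used to lend legitimacy", 9),      # fake experts
    ("narratives planted for later campaigns", 44),   # seeding distortion
]

def technique_ids(artifacts):
    """Return the de-duplicated technique numbers seen in a campaign."""
    return sorted({num for _, num in artifacts})

print(technique_ids(video_artifacts))  # [7, 9, 44]
```

That set of IDs becomes the campaign's fingerprint: the same combination showing up elsewhere is a hint that the same playbook, or the same actor, is at work.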
As with malware, tracing misinformation back to its source isn't an exact science. The Castilian Spanish seemed designed to give the video an air of authority in Latin America. Its high production value pointed to significant financial backing. The fact that the video first appeared in Mexico and Guatemala, and the timing of its release (February, right before migrant workers leave for spring planting in the US), suggested that its goal might be undermining American food security. "They targeted the US by targeting somebody else. It's somebody who really understood geopolitical consequences," Breuer says. This all led him to believe it was a professional job, likely Russian.
Of course, he might be wrong. But by analyzing a video like this, and putting it into the database, Breuer hopes the next time there's a polished video in Castilian Spanish making its way through South America and relying on sock puppets, law enforcement and researchers can see just how it spread the last time, recognize the pattern, and inoculate against it sooner.
A month or so into her recovery, Terp got a message from Marc Rogers, with whom she'd had dinner after the D-Day event. Rogers had helped organize an international group of volunteer researchers who were working to protect hospitals from cyberattacks and virus-related scams. They'd been seeing a flood of misinformation like the video Breuer analyzed, and Rogers wanted to know if Terp would run a team that would track campaigns exploiting Covid. She signed on.
On a Tuesday morning in August, Terp was at home trying to dissect the latest misinformation. A video posted the previous day claimed that Covid-19 was a hoax perpetrated by the World Health Organization. It had already racked up nearly 150,000 views. She also got word about a pair of Swiss websites claiming that Anthony Fauci doubted a virus vaccine would be successful and that doctors thought masks were useless. Her team was searching for other URLs linked to the same host domain, identifying ad tags used on the sites to trace funding, and cataloging particular phrases and narratives, like one claiming German authorities wanted Covid-infected kids to be moved to internment camps, to pinpoint where else they appeared. All of this will be entered into the database, adding to the arsenal for battling misinformation. She's optimistic about the project's momentum: The more it's used, the more effective AMITT will be, Terp says, adding that her group is working with NATO, the EU, and the Department of Homeland Security to test-drive the system.
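Ad tags are among the more tractable bread crumbs here: a Google AdSense publisher ID (a "ca-pub-" string embedded in a page) identifies the account that gets paid, so two sites carrying the same ID share a funding fingerprint. A minimal sketch of that check, with hypothetical site URLs; the article doesn't say which ad networks Terp's team traced:

```python
import re
from urllib.request import urlopen

# An AdSense publisher ID ("ca-pub-" plus a long digit string)
# identifies the account a site's ad revenue flows to.
AD_TAG = re.compile(rb"ca-pub-\d{10,}")

def ad_tags(url):
    """Fetch a page and return any ad publisher IDs embedded in it."""
    with urlopen(url) as resp:
        return set(AD_TAG.findall(resp.read()))

# Hypothetical usage: two sites sharing a tag share a funder.
# shared = ad_tags("https://site-a.example") & ad_tags("https://site-b.example")
```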
She's also cautiously optimistic about the strength of the network that's under assault. On her road trip, Terp says, the more she drove, the more hopeful she became. People were proud of their cities, loved their communities. She saw that when people have something concrete to fight for, they are less prone to end up in phantom battles against illusory enemies. "You have to involve people in their own solution," she says. By demystifying how misinformation works, Terp hopes more people will be able to reject it.
During the George Floyd protests, Terp's team was tracking another rumor: A meme kept resurfacing, in various forms, about "busloads of antifa" being driven to protests in small towns. One of the things she saw was people in small, conservative communities debunking that idea. "Somebody went, 'Hang on, this doesn't seem right,'" she says. Those people understood, on some level, that their communities were being hacked, and that they needed defending.
SONNER KEHRT (@etskehrt) is a freelance writer in California. This is her first story for WIRED.
This article appears in the October issue.