Okoye’s grip shoes

Like so much of the tech in Black Panther, this wearable battle gear is quite subtle, but critical to the scene, and much more than it seems at first. When Okoye and Nakia are chasing Klaue through the streets of Busan, South Korea, she realizes she would be better positioned on top of their car than within it.

She holds one of her spears out of the window, stabs it into the roof, and uses it to pull herself out on top of the swerving, speeding car. Once there, she places her feet into position, and the moment the sole of her foot touches the roof, it glows cyan for a moment.

She then holds onto the stuck spear to stabilize herself, rears back with her other spear, and throws it forward through the rear window and windshield of some minions’ car, where it sticks in the road before them. Their car strikes the spear and gets crushed. It’s a kickass moment in a film of kickass moments. But by all means let’s talk about the footwear.

Now, the effect the shoes have in the world of the story is not explicit. But we can guess, given the context, that we are meant to believe the shoes grip the car roof, giving her a firm enough anchor to stay on top of the car and not tumble off when it swerves.

She can’t just be stuck

I have never thrown a javelin or a hyper-technological vibranium spear. But Mike Barber, PhD scholar in Biomechanics at Victoria University and the Australian Institute of Sport, wrote this article about the mechanics of javelin throwing, and it seems that throwing force does not come from the sheer strength of the rotator cuff alone. Rather, the thrower builds force across their entire body and whips that momentum around the shoulder joint.

 Ilgar Jafarov, CC BY-SA 4.0, via Wikimedia Commons

Okoye is a world-class warrior, but doesn’t have superpowers, so…while I understand she does not want the car to yank itself from underneath her with a swerve, it seems that being anchored in place, like some Wakandan air tube dancer, will not help her with her mighty spear throwing. She needs to move.

It can’t just be manual

Imagine being on a mechanical bull jerking side to side—being stuck might help you stay upright. But imagine it jerking forward suddenly, and you’d wind up on your butt. If it jerked backwards, you’d be thrown forward, and it might be much worse. All are possibilities in the car chase scenario.

If those jerking motions happened to Okoye faster than she could react and release her shoes, it could be disastrous. So it can’t be a thing she needs to manually control. Which means it needs to be some blend of manual, agentive, and assistant. Autonomic, maybe, to borrow the term from physiology?

So…

To really be of help, it has to…

  • monitor the car’s motion
  • monitor her center of balance
  • monitor her intentions
  • predict the future motions of the cars
  • handle all the cybernetics math (in the Norbert Wiener sense, not the sci-fi sense)
  • know when it should just hold her feet in place, and when it should signal for her to take action
  • know what action she should ideally take, so it knows what to nudge her to do

These are no mean feats, especially in real-time. So, I don’t see any explanation except…

An A.I. did it.

AGI is in the Wakandan arsenal (cf. Griot helping Ross), so this is credible given the diegesis, but I did not expect to find it in shoes.
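
If you want to make that loop concrete, here is a minimal sketch of what an autonomic grip controller might look like. Everything in it is an assumption: the sensors, the toy risk models, and the thresholds are stand-ins, since the film tells us nothing about how the shoes actually decide.

```python
import random  # stand-in for real sensor noise

# Hypothetical autonomic loop for the grip shoes. All names, thresholds,
# and sensor readings are invented for illustration; the film shows none of this.

GRIP_THRESHOLD = 0.4   # predicted slip risk above which the soles lock
NUDGE_THRESHOLD = 0.7  # predicted balance loss above which the wearer is warned

def read_sensors():
    """Pretend IMU/pressure readings from the car roof and the wearer."""
    return {
        "car_accel": random.uniform(-1, 1),    # fore-aft jerk of the car
        "car_lateral": random.uniform(-1, 1),  # side-to-side swerve
        "wearer_lean": random.uniform(-1, 1),  # center-of-balance offset
    }

def predict_slip_risk(s):
    """Toy model: risk rises with swerve and how far she is leaning."""
    return min(1.0, abs(s["car_lateral"]) * 0.6 + abs(s["wearer_lean"]) * 0.5)

def predict_balance_loss(s):
    """Toy model: forward/backward jerk is what throws her off her feet."""
    return min(1.0, abs(s["car_accel"]) * 0.8 + abs(s["wearer_lean"]) * 0.3)

def autonomic_step():
    s = read_sensors()
    grip_on = predict_slip_risk(s) > GRIP_THRESHOLD
    nudge = predict_balance_loss(s) > NUDGE_THRESHOLD
    # grip_on: lock the soles; nudge: signal her (vibration? buzz?) to shift her weight
    return grip_on, nudge

if __name__ == "__main__":
    for _ in range(5):  # five frames of the chase
        print(autonomic_step())
```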

An interesting design question is how it might deliver warning signals about predicted motions. Is it tangible, like vibration? Or a mild electrical buzz? Or a writing-to-the-brain urge to move? The movie gives us no clues, but if you’re up for a design challenge, give it a speculative design pass.

Wearable heuristics

As part of my 2014 series about wearable technologies in sci-fi, I identified a set of heuristics we can use to evaluate such things. A quick check against those shows that the shoes fare well. They are quite sartorial, and since they look like shoes, they are social as well. As a brain interface, they are supremely easy to access and use. Two of the heuristics raise questions, though.

  1. Wearables must be designed so they are difficult to accidentally activate. It would have been very inconvenient for Okoye to find herself stuck to the surface of Wakanda while trying to chase Killmonger later in the film, for example. It would be safer to ensure deliberateness with some mode-confirming physical gesture, but there’s no evidence of it in the movie.
  2. Wearables should have apposite I/O. The soles glow. Okoye doesn’t need that information. I’d say in a combat situation it’s genuinely bad design to require her to look down to confirm any modes of the shoes. They’re worn. She will immediately feel whether her shoes are fixed in place. While I can’t name exactly how an enemy might use the knowledge of whether she is stuck in place, on general principle, the less information we give to the enemy, the safer she’ll be. So if this were real-world, we would seek to eliminate the glow. That said, we know that undetectable interactions are not cinegenic in the slightest, so for the film this is a nice “throwaway” addition to the cache of amazing Wakandan technology.

Black Georgia Matters and Today is the Day

Each post in the Black Panther review is followed by actions that you can take to support black lives.

Today is the last day in the Georgia runoff elections. It’s hard to overstate how important this is. If Ossoff and Warnock win, the future of the country has a much better likelihood of taking Black Lives Matter (and lots of other issues) more seriously. Actual progress might be made. Without it, the obstructionist and increasingly-frankly-racist Republican party (and Moscow Mitch) will hold much of the Biden-Harris administration back. If you know of any Georgians, please check with them today to see if they voted in the runoff election. If not—and they’re going to vote Democrat—see what encouragement and help you can give them.

Some ideas…

  • Pay for a ride there and back remotely.
  • Buy a meal to be delivered for their family.
  • Make sure they are protected and well-masked.
  • Encourage them to check their absentee ballot, if they cast one, here. https://georgia.ballottrax.net/voter/
  • If their absentee ballot has not been registered, they can go to the polls and tell the workers there that they want to cancel their absentee ballot and vote in person. Help them know their poll at My Voter Page: https://www.mvp.sos.ga.gov/MVP/mvp.do

This vote matters, matters, matters.

Deckard’s Photo Inspector

Back to Blade Runner. I mean, the pandemic is still pandemicking, but maybe this will be a nice distraction while you shelter in place. Because you’re smart, sheltering in place as much as you can, and not injecting disinfectants. And, like so many other technologies in this film, this will take a while to deconstruct, critique, and reimagine.

Description

Doing his detective work, Deckard retrieves a set of snapshots from Leon’s hotel room, and he brings them home. Something in the one pictured above catches his eye, and he wants to investigate it in greater detail. He takes the photograph and inserts it in a black device he keeps in his living room.

Note: I’ll try and describe this interaction in text, but it is much easier to conceptualize after viewing it. Owing to copyright restrictions, I cannot upload this length of video with the original audio, so I have added pre-rendered closed captions to it, below. All dialogue in the clip is Deckard.

Deckard does digital forensics, looking for a lead.

He inserts the snapshot into a horizontal slit and turns the machine on. A thin, horizontal orange line glows on the left side of the front panel. A series of seemingly random-length orange lines begin to chase one another in a single-row space that stretches across the remainder of the panel and continue to do so throughout Deckard’s use of it. (Imagine a news ticker, running backwards, where the “headlines” are glowing amber lines.) This seems useless and an absolutely pointless distraction for Deckard, putting high-contrast motion in his peripheral vision, which fights for attention with the actual, interesting content down below.

If this is distracting you from reading, YOU SEE MY POINT.

After a second, the screen reveals a blue grid, behind which the scan of the snapshot appears. He stares at the image in the grid for a moment, and speaks a set of instructions, “Enhance 224 to 176.”

In response, three data points appear overlaying the image at the bottom of the screen. Each has a two-letter label and a four-digit number, e.g. “ZM 0000 NS 0000 EW 0000.” The NS and EW—presumably North-South and East-West coordinates, respectively—immediately update to read, “ZM 0000 NS 0197 EW 0334.” After updating the numbers, the screen displays a crosshairs, which target a single rectangle in the grid.

A new rectangle then zooms in from the edges to match the targeted rectangle, as the ZM number—presumably zoom, or magnification—increases. When the animated rectangle reaches the targeted rectangle, its outline blinks yellow a few times. Then the contents of the rectangle are enlarged to fill the screen, in a series of steps punctuated by sounds similar to a mechanical camera aperture. The enlargement is perfectly resolved. The overlay disappears until the next set of spoken commands. The system response time between Deckard’s issuing the command and the device’s showing the final enlarged image is about 11 seconds.

Deckard studies the new image for a while before issuing another command. This time he says, “Enhance.” The image enlarges in similar clacking steps until he tells it, “Stop.”

Other instructions he is heard to give include “move in, pull out, track right, center in, pull back, center, and pan right.” Some include discrete instructions, such as, “Track 45 right” while others are relative commands that the system obeys until told to stop, such as “Go right.”

Using such commands he isolates part of the image that reveals an important clue, and he speaks the instruction, “Give me a hard copy right there.” The machine prints the image, which Deckard uses to help find the replicant pictured.

This image helps lead him to Zhora.

I’d like to point out one bit of sophistication before the critique. Deckard can issue a command with or without a parameter, and the inspector knows what to do. For example, “Track 45 right” and “Track right.” Without the parameter, it will just do the thing repeatedly until told to stop. That helps Deckard issue the same basic command both when he knows exactly where he wants to look and when he doesn’t know exactly what he’s looking for. That’s a nice feature of the language design.
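
To make that optional-parameter design concrete, here is a minimal sketch of how such a command grammar might be parsed. The verbs come from Deckard’s dialogue in the scene; the parsing logic, and the idea that a missing parameter means “keep going until told to stop,” are my own conjecture.

```python
# Minimal sketch of the photo inspector's spoken-command grammar.
# Verbs are taken from Deckard's dialogue; the parsing logic is conjecture.

VERBS = {"enhance", "track", "pan", "move", "pull", "center", "go", "stop"}

def parse(utterance):
    """Turn 'Track 45 right' or 'Track right' into a structured command."""
    words = utterance.lower().replace(",", "").split()
    verb = words[0] if words and words[0] in VERBS else None
    numbers = [int(w) for w in words if w.isdigit()]
    direction = next((w for w in words if w in {"left", "right", "in", "out", "back"}), None)
    return {
        "verb": verb,
        "args": numbers,            # [] means: keep going until told to "stop"
        "direction": direction,
        "continuous": verb != "stop" and not numbers,
    }

print(parse("Track 45 right"))       # discrete: move 45 units right, then stop
print(parse("Track right"))          # continuous: keep moving right until "Stop"
print(parse("Enhance 224 to 176"))   # discrete: two numeric arguments
```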

But still, asking him to provide step-by-step instructions in this clunky way feels like some high-tech Big Trak. (I tried to find a reference that was as old as the film.) And that’s not all…

Some critiques, as it is

  • Can I go back and mention that amber distracto-light? Because it’s distracting. And pointless. I’m not mad. I’m just disappointed.
  • It sure would be nice if any of the numbers on screen made sense, or had any bearing on the numbers Deckard speaks, at any time during the interaction. For instance, the initial zoom (I checked in Photoshop) is around 304%, which is neither the 224 nor the 176 that Deckard speaks.
  • It might be that each square has a number, and he simply has to name the two squares at the extents of the zoom he wants, letting the machine find the extents, but where is the labeling? Did he have to memorize an address for each pixel? How does that work at arbitrary levels of zoom?
  • And if he’s memorized it, why show the overlay at all?
  • Why the seizure-inducing flashing in the transition sequences? Sure, I get that lots of technologies have unfortunate effects when constrained by mechanics, but this is digital.
  • Why is the printed picture so unlike the still image where he asks for a hard copy?
  • Gaze at the reflection in Ford’s hazel, hazel eyes, and it’s clear he’s playing Missile Command, rather than paying attention to this interface at all. (OK, that’s the filmmaker’s issue, not a part of the interface, but still, come on.)
The photo inspector: My interface is up HERE, Rick.

How might it be improved for 1982?

So if 1982 Ridley Scott was telling me in post that we couldn’t reshoot Harrison Ford, and we had to make it just work with what we had, here’s what I’d do…

Squash the grid so the cells match the 4:3 ratio of the NTSC screen. Overlay the address of each cell, while highlighting column and row identifiers at the edges. Have the first cell’s outline illuminate as he speaks it, and have the outline expand to encompass the second named cell. Then zoom, removing the cell labels during the transition. When at anything other than full view, display a map across four cells that shows the zoom visually in the context of the whole.

Rendered in glorious 4:3 NTSC dimensions.

With this interface, the structure of the existing conversation makes more sense. When Deckard said, “Enhance 203 to 608” the thing would zoom in on the mirror, and the small map would confirm.

The numbers wouldn’t match up, but it’s pretty obvious from the final cut that Scott didn’t care about that (or, more charitably, ran out of time). Anyway I would be doing this under protest, because I would argue this interaction needs to be fixed in the script.

How might it be improved for 2020?

What’s really nifty about this technology is that it’s not just a photograph. Look close in the scene, and Deckard isn’t just doing CSI Enhance! commands (or, to be less mocking, AI upscaling). He’s using the photo inspector to look around corners and at objects that are reconstructed from the smallest reflections. So we can think of the interaction like he’s controlling a drone through a 3D still life, looking for a lead to help him further the case.

With that in mind, let’s talk about the display.

Display

To redesign it, we have to decide at a foundational level how we think this works, because it will color what the display looks like. Is this all data that’s captured from some crazy 3D camera and available in the image? Or is it being inferred from details in the 2-dimensional image? Let’s call the first the 3D capture, and the second the 3D inference.

If we decide this is a 3D capture, then all the data that he observes through the machine has the same degree of confidence. If, however, we decide this is a 3D inferrer, Deckard needs to treat the inferred data with more skepticism than the data the camera directly captured. The 3D inferrer is the harder problem, and raises some issues that we must deal with in modern AI, so let’s just say that’s the way this speculative technology works.

The first thing the display should do is make clear what is observed and what is inferred. How you do this is partly a matter of visual design and style, but partly a matter of diegetic logic. The first pass would be to render everything in the camera frustum photo-realistically, and then render everything outside of that in a way that signals its confidence level. The comp below illustrates one way this might be done.

Modification of a pair of images found on Evermotion
  • In the comp, Deckard has turned the “drone” from the “actual photo,” seen off to the right, toward the inferred space on the left. The monochrome color treatment provides that first high-confidence signal.
  • In the scene, the primary inference would come from reading the reflections in the disco ball overhead lamp, maybe augmented with plans for the apartment that could be found online, or maybe purchase receipts for appliances, etc. Everything it can reconstruct from the reflection and high-confidence sources has solid black lines, a second-level signal.
  • The smaller knickknacks that are out of the reflection of the disco ball, and implied from other, less reflective surfaces, are rendered without the black lines and blurred. This provides a signal that the algorithm has a very low confidence in its inference.

This is just one (not very visually interesting) way to handle it, but should illustrate that, to be believable, the photo inspector shouldn’t have a single rendering style outside the frustum. It would need something akin to these levels to help Deckard instantly recognize how much he should trust what he’s seeing.
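
Here is one way to express that logic as a minimal sketch. The confidence bands and style names are mine, not anything established by the film or the comp; the point is only that render treatment should be a function of inference confidence.

```python
# Hypothetical mapping from inference confidence to render treatment.
# Bands and style names are invented for illustration.

def render_style(confidence, inside_frustum):
    """Pick how to draw an object in the reconstructed scene."""
    if inside_frustum:
        return "photo-real"                        # directly observed by the camera
    if confidence >= 0.8:
        return "monochrome + solid outlines"       # e.g. read from the disco-ball reflection
    if confidence >= 0.4:
        return "monochrome, no outlines, blurred"  # implied from weaker reflections
    return "omit"                                  # too speculative to show at all

for c in (0.95, 0.6, 0.2):
    print(c, render_style(c, inside_frustum=False))
```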

Flat screen or volumetric projection?

Modern CGI loves big volumetric projections. (e.g. it was the central novum of last year’s Fritz winner, Spider-Man: Far From Home.) And it would be a wonderful juxtaposition to see Deckard in a holodeck-like recreation of Leon’s apartment, with all the visual treatments described above.

But…

Also seriously who wants a lamp embedded in a headrest?

…that would kind of spoil the mood of the scene. This isn’t just about Deckard’s finding a clue; we also see a little about who he is and what his life is like. We see the smoky apartment. We see the drab couch. We see the stack of old detective machines. We see the neon lights and annoying advertising lights swinging back and forth across his windows. Immersing him in a big volumetric projection would lose all this atmospheric stuff, so I’d recommend keeping it either a small contained VP, like we saw in Minority Report, or just keeping it a small flat screen.


OK, so now that we have an idea of how the display should (and shouldn’t) look, let’s move on to talk about the inputs.

Inputs

To talk about inputs, then, we have to return to a favorite topic of mine, and that is the level of agency we want for the interaction. In short, we need to decide how much work the machine is doing. Is the machine just a manual tool that Deckard has to manipulate to get it to do anything? Or does it actively assist him? Or, lastly, can it even do the job while his attention is on something else—that is, can it act as an agent on his behalf? Sophisticated tools can be a blend of these modes, but for now, let’s look at them individually.

Manual Tool

This is how the photo inspector works in Blade Runner. It can do things, but Deckard has to tell it exactly what to do. But we can still improve it in this mode.

We could give him well-mapped physical controls, like a remote control for this conceptual drone. Flight controls wind up being a recurring topic on this blog (and even came up already in the Blade Runner reviews with the Spinners) so I could go on about how best to do that, but I think that a handheld controller would ruin the feel of this scene, like Deckard was sitting down to play a video game rather than do off-hours detective work.

Special edition made possible by our sponsor, Tom Nook.
(I hope we can pay this loan back.)

Similarly, we could talk about a gestural interface, using some of the synecdochic techniques we’ve seen before in Ghost in the Shell. But again, this would spoil the feel of the scene, having him look more like John Anderton in front of a tiny-TV version of Minority Report’s famous crime scrubber.

One of the things that gives this scene its emotional texture is that Deckard is drinking a glass of whiskey while doing his detective homework. It shows how low he feels. Throwing one back is clearly part of his evening routine, so much a habit that he does it despite being preoccupied about Leon’s case. How can we keep him on the couch, with his hand on the lead crystal whiskey glass, and still investigating the photo? Can he use it to investigate the photo?

Here I recommend a bit of ad-hoc tangible user interface. I first backworlded this for The Star Wars Holiday Special, but I think it could work here, too. Imagine that the photo inspector has a high-resolution camera on it, and the interface allows Deckard to declare any object that he wants as a control object. After the declaration, the camera tracks the object against a surface, using the changes to that object to control the virtual camera.

In the scene, Deckard can declare the whiskey glass as his control object, and the arm of his couch as the control surface. Of course the virtual space he’s in is bigger than the couch arm, but it could work like a mouse and a mousepad. He can just pick it up and set it back down again to extend motion.

This scheme takes into account all movement except vertical lift and drop. This could be a gesture or a spoken command (see below).
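
A minimal sketch of that object-as-mouse mapping might look like the following. The deltas, scaling factors, and the “clutch” behavior (lifting the glass to reposition it without moving the drone) are all assumptions; the film obviously shows nothing of the sort.

```python
# Hypothetical object-as-mouse mapping: tracked whiskey-glass motion on the
# couch arm drives the virtual drone. All parameters are invented.

SCALE_TRANSLATE = 0.05   # meters of virtual motion per millimeter of glass motion
SCALE_ROTATE = 1.0       # degrees of virtual rotation per degree of glass motion

def update_drone(drone, glass_delta, lifted):
    """glass_delta: (dx_mm, dy_mm, dtwist_deg, dtilt_deg) from the tracker."""
    if lifted:
        return drone  # clutch: picking the glass up repositions it without moving the drone
    dx, dy, dtwist, dtilt = glass_delta
    drone["x"] += dx * SCALE_TRANSLATE
    drone["y"] += dy * SCALE_TRANSLATE
    drone["yaw"] += dtwist * SCALE_ROTATE
    drone["pitch"] += dtilt * SCALE_ROTATE   # tipping the glass tips the camera
    return drone

drone = {"x": 0.0, "y": 0.0, "yaw": 0.0, "pitch": 0.0}
print(update_drone(drone, (10, 0, -15, 0), lifted=False))  # slide and twist counterclockwise
```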

Going with this interaction model means Deckard can use the whiskey glass, allowing the scene to keep its texture and feel. He can still drink and get his detective on.

Tipping the virtual drone to the right.

Assistant Tool

Indirect manipulation is helpful for when Deckard doesn’t know what he’s looking for. He can look around, and get close to things to inspect them. But when he knows what he’s looking for, he shouldn’t have to go find it. He should be able to just ask for it, and have the photo inspector show it to him. This requires that we presume some AI. And even though Blade Runner clearly includes General AI, let’s presume that that kind of AI has to be housed in a human-like replicant, and can’t be squeezed into this device. Instead, let’s just extend the capabilities of Narrow AI.

Some of this will be navigational and specific, “Zoom to that mirror in the background,” for instance, or, “Reset the orientation.” Some will be more abstract and content-specific, e.g. “Head to the kitchen” or “Get close to that red thing.” If it had gaze detection, he could even indicate a location by looking at it. “Get close to that red thing there,” for example, while looking at the red thing. Given the 3D inferrer nature of this speculative device, he might also want to trace the provenance of an inference, as in, “How do we know this chair is here?” This implies natural language generation as well as understanding.
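
Under the hood, most of this is intent classification plus reference resolution. The sketch below is a deliberately crude illustration of those two jobs; the intent names and the gaze-resolution hook are invented, not a claim about how such a device would really be built.

```python
# Hypothetical intent classes for the assistant-level inspector.
# Intent names and the gaze-resolution hook are invented for illustration.

def classify(utterance, gaze_target=None):
    u = utterance.lower()
    if u.startswith("how do we know"):
        intent = "explain-provenance"    # "How do we know this chair is here?"
    elif any(w in u for w in ("zoom", "reset", "orientation")):
        intent = "navigate-specific"     # "Zoom to that mirror in the background"
    else:
        intent = "navigate-semantic"     # "Head to the kitchen", "Get close to that red thing"
    # Deictic references ("that red thing there") fall back to wherever he is looking.
    target = gaze_target if ("that" in u and "there" in u and gaze_target) else None
    return {"intent": intent, "utterance": utterance, "resolved_target": target}

print(classify("Get close to that red thing there", gaze_target="red kettle"))
print(classify("How do we know this chair is here?"))
```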

There’s nothing stopping him from using the same general commands heard in the movie, but I doubt anyone would want to use those when they have commands like this and the object-on-hand controller available.

Ideally Deckard would have some general search capabilities as well, to ask questions and test ideas. “Where were these things purchased?” or subsequently, “Is there video footage from the stores where he purchased them?” or even, “What does that look like to you?” (The correct answer would be, “Well that looks like the mirror from the Arnolfini portrait, Ridley…I mean…Rick*”) It can do pattern recognition and provide as much extra information as it has access to, just like Google Lens or IBM Watson image recognition does.

*Left: The convex mirror in Leon’s 21st century apartment.
Right: The convex mirror in Arnolfini’s 15th century apartment

Finally, he should be able to ask after simple facts to see if the inspector knows or can find it. For example, “How many people are in the scene?”

All of this still requires that Deckard initiate the action, and we can augment it further with a little agentive thinking.

Agentive Tool

To think in terms of agents is to ask, “What can the system do for the user, but not requiring the user’s attention?” (I wrote a book about it if you want to know more.) Here, the AI should be working alongside Deckard. Not just building the inferences and cataloguing observations, but doing anomaly detection on the whole scene as it goes. Some of it is going to be pointless, like “Be aware the butter knife is from IKEA, while the rest of the flatware is Christofle Lagerfeld. Something’s not right, here.” But some of it Deckard will find useful. It would probably be up to Deckard to review summaries and decide which were worth further investigation.

It should also be able to help him with his goals. For example, the police had Zhora’s picture on file. (And her portrait even rotates in the dossier we see at the beginning, so it knows what she looks like in 3D for very sophisticated pattern matching.) The moment the agent—while it was reverse ray tracing the scene and reconstructing the inferred space—detects any faces, it should run the face through a most wanted list, and specifically Deckard’s case files. It shouldn’t wait for him to find it. That again poses some challenges to the script. How do we keep Deckard the hero when the tech can and should have found Zhora seconds after being shown the image? It’s a new challenge for writers, but it’s becoming increasingly important for believability.

I’ve never figured out why she has a snake tattoo here (and it seems really important to the plot), yet when Deckard finally meets her, it has disappeared.
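
A minimal sketch of that agentive behavior might look like the following, using the (comically unhelpful) confidence threshold that the rewritten scene below jokes about. The matcher, the scores, and the case-file structure are all stand-ins.

```python
# Hypothetical agentive loop: as the inspector reconstructs the scene, it runs
# every detected face against the case files without being asked. The matcher,
# scores, and threshold are stand-ins for illustration.

ALERT_THRESHOLD = 0.66   # the inspector's confidence cutoff for speaking up

def match_against_case_files(face, case_files):
    """Return (best_name, best_score) for a reconstructed face. Matchers are stubbed."""
    scores = {name: record["matcher"](face) for name, record in case_files.items()}
    name = max(scores, key=scores.get)
    return name, scores[name]

def on_face_detected(face, case_files, notify):
    name, score = match_against_case_files(face, case_files)
    if score >= ALERT_THRESHOLD:
        notify(f"Possible match: {name} ({score:.0%})")
    # Below threshold the agent stays quiet -- exactly the failure the
    # redesigned scene pokes fun at ("Why didn't you say so?").

case_files = {"Zhora": {"matcher": lambda face: 0.63}}     # stubbed similarity score
on_face_detected("reconstructed-face", case_files, print)  # prints nothing at 63%

case_files["Leon"] = {"matcher": lambda face: 0.95}
on_face_detected("reconstructed-face", case_files, print)  # "Possible match: Leon (95%)"
```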

Scene

Interior. Deckard’s apartment. Night.

Deckard grabs a bottle of whiskey, a glass, and the photo from Leon’s apartment. He sits on his couch and places the photo on the coffee table.

Deckard: Photo inspector.

The machine on top of a cluttered end table comes to life.

Deckard: Let’s look at this.

He points to the photo. A thin line of light sweeps across the image. The scanned image appears on the screen, pulled in a bit from the edges. A label reads, “Extending scene,” and we see wireframe representations of the apartment outside the frame begin to take shape. A small list of anomalies begins to appear to the left. Deckard pours a few fingers of whiskey into the glass. He takes a drink before putting the glass on the arm of his couch. Small projected graphics appear on the arm facing the inspector.

Deckard: OK. Anyone hiding? Moving?

Photo inspector: No and no.

Deckard: Zoom to that arm and pin to the face.

He turns the glass on the couch arm counterclockwise, and the “drone” revolves around to show Leon’s face, with the shadowy parts rendered in blue.

Deckard: What’s the confidence?

Photo inspector: 95.

On the side of the screen the inspector overlays Leon’s police profile.

Deckard: Unpin.

Deckard lifts his glass to take a drink. He moves from the couch to the floor to stare more intently and places his drink on the coffee table.

Deckard: New surface.

He turns the glass clockwise. The camera turns and he sees into a bedroom.

Deckard: How do we have this much inference?

Photo inspector: The convex mirror in the hall…

Deckard: Wait. Is that a foot? You said no one was hiding.

Photo inspector: The individual is not hiding. They appear to be sleeping.

Deckard rolls his eyes.

Deckard: Zoom to the face and pin.

The view zooms to the face, but the camera is level with her chin, making it hard to make out the face. Deckard tips the glass forward and the camera rises up to focus on a blue, wireframed face.

Deckard: That look like Zhora to you?

The inspector overlays her police file.

Photo inspector: 63% of it does.

Deckard: Why didn’t you say so?

Photo inspector: My threshold is set to 66%.

Deckard: Give me a hard copy right there.

He raises his glass and finishes his drink.

This scene keeps the texture and tone of the original, and leans on the limitations of Narrow AI to let Deckard be the hero. And it doesn’t have him programming a virtual Big Trak.

Spreading pathogen maps

So while the world is in the grip of the novel COVID-19 coronavirus pandemic, I’ve been thinking about those fictional user interfaces that appear in pandemic movies that project how quickly the infectious-agent-in-question will spread. The COVID-19 pandemic is a very serious situation. Most smart people are sheltering in place to prevent an overwhelmed health care system and finding themselves with some newly idle cycles (or if you’re a parent like me, a lot fewer idle cycles). Looking at this topic through the lens of sci-fi is not to minimize what’s happening around us as trivial, but to process the craziness of it all through this channel that I’ve got on hand. I did it for fascism, I’ll do it for this. Maybe this can inform some smart speculative design.

Caveat #1: As a public service I have included some information about COVID-19 in the body of the post with a link to sources. These are called out the way this paragraph is, with a SARS-CoV-2 illustration floated on the left. I have done as much due diligence as one blogger can do to not spread disinformation, but keep in mind that our understanding of this disease and the context are changing rapidly. By the time you read this, facts may have changed. Follow links to sources to get the latest information. Do not rely solely on this post as a source. If you are reading this from the relative comfort of the future after COVID-19, feel free to skip these.

A screen grab from a spreading pathogen map from Contagion (2011), focused on Africa and Eurasia, with red patches surrounding major cities, including Hong Kong.
Get on a boat, Hongkongers, you can’t even run for the hills! Contagion (2011)

And yes, this is less of my normal fare of sci-fi and more bio-fi, but it’s still clearly a fictional user interface, so between that and the world going pear-shaped, it fits well enough. I’ll get back to Blade Runner soon enough. I hope.

Giving credit where it’s due: All but one of the examples in this post were found via the TV Tropes page for Spreading Disaster Map Graphic, under live-action film examples. I’m sure I’ve missed some. If you know of others, please mention them in the comments.

Four that are extradiegetic and illustrative

This first set of pandemic maps are extradiegetic.

Vocabulary sidebar: I use that term a lot on this blog, but if you’re new here or new to literary criticism, it bears explanation. Diegesis is used to mean “the world of the story,” as the world in which the story takes place is often distinct from our own. We distinguish things as diegetic and extradiegetic to describe when they occur within the world of the story, or outside of it, respectively. My favorite example is when we see a character in a movie walking down a hallway looking for a killer, and we hear screechy violins that raise the tension. When we hear those violins, we don’t imagine that there is someone in the house who happens to be practicing their creepy violin. We understand that this is extradiegetic music, something put there to give us a clue about how the scene is meant to feel.

So, like those violins, these first examples aren’t something that someone in the story is looking at. (Claude Paré? Who the eff is—Johnson! Get engineering! Why are random names popping up over my pandemic map?) They’re something the film is doing for us in the audience.

The Killer that Stalked New York (1950) is a short about a smallpox infection of New York City.
Edge of Tomorrow (2014) has this bit showing the Mimics, spreading their way across Europe.
The end of Rise of the Planet of the Apes (2011) shows the fictional virus ALZ-113 spreading.
The beginning of Dawn of the Planet of the Apes (2014) repeats the fictional virus ALZ-113 spreading, but augments it with video overlays.

There’s not much I feel the need to say about these kinds of maps, as they are a matter of motion graphics and animation style. I note that at least two of them use aposematic signals in their color palettes and shapes, but that’s just because it helps reinforce for the audience that whatever is being shown here is a major threat to human life. But I have much more authoritative things to say about systems that are meant to be used.

Before we move on, here’s a bonus set of extradiegetic spreading-pathogen maps I saw while watching the Netflix docuseries Pandemic: How to Prevent an Outbreak, as background info for this post.

A supercut from Pandemic: How to Prevent an Outbreak.
Motion graphics by Zero Point Zero Productions.

Five that are diegetic and informative

The five examples in this section are spread throughout the text for visual interest, but presented in chronological order. They are The Andromeda Strain (1971), Outbreak (1995), Evolution (2001), Contagion (2011), and World War Z (2013). I highly recommend Contagion for the acting, the movie making, the modeling, and some of the facts it conveys. For instance, I think it’s the only film that discusses fomites. Everyone should know about fomites.

Since I raise their specter: As of publication of this post the CDC stated that fomites are not thought to be the main way the COVID-19 novel coronavirus spreads, but there are recent and conflicting studies. The scientific community is still trying to figure this out. The CDC says for certain it spreads primarily through sneezes, coughs, and being in close proximity to an infected person, whether or not they are showing symptoms.

Note that these five spreading pathogen examples are things that characters are seeing in the diegesis, that is, in the context of the story. These interfaces are meant to convey useful information to the characters as well as us in the audience.

Which is as damning a setup as I can imagine for this first example from The Andromeda Strain (1971). Because as much as I like this movie, WTF is this supposed to be? “601” is explained in the dialogue as the “overflow error” of this computer, but the pop-art seizure graphics? C’mon. There’s no way to apologize for this monstrosity.

This psychedelic nonsense somehow tells the bunkered scientists how fast the eponymous Andromeda Strain will spread (1971). Somehow the CRT gets nervous, too.

I’m sorry that you’ll never get those 24 seconds back. But at least we can now move on to look at the others, which we can break down into the simple case of persuasion, and the more complex case of use.

The simple case

In the simplest case, these graphics are shown to persuade an authority to act. That’s what’s happening in this clip from Outbreak (1995).

General Donald McClintock delivers a terrifying White House Chief-of-Staff Briefing about the Motaba virus. Outbreak (1995)

But if the goal is to persuade one course of action over another, some comparison should be made between two options, like, say, what happens if action is taken sooner rather than later. While that is handled in the dialogue of many of these films—and it may be more effective for in-person persuasion—I can’t help but think it would be reinforcing to have it as part of the image itself. Yet none of our examples do this.

Compare the “flatten the curve” graphics that have been going around. They provide a visual comparison between two options and make it very plain which is the right one to pick. One that stays in the mind of the observer even after they see it. This is one I’ve synthesized and tweaked from other sources.

This is a conceptual diagram, not a chart. The capacity bar is terrifyingly lower on actual charts. Stay home as much as you can. Special shouts out to Larry West.

There is a diegetic possibility, i.e., that no one amidst the panic of the epidemic has the time to thoughtfully do more than spit out the data and handle the rest with conversation. But we shouldn’t leave it at that, because there’s not much for us to learn there.

More complex case

The harder problem is when these displays are for people who need to understand the nature of the threat and determine the best course of action, and now we need to talk about epidemiology.

Caveat #2: I am not an epidemiologist. They are all really occupied for the foreseeable future, so I’m not even going to reach out and bother one of them to ask their opinions on this post. Like I said before about COVID-19, I really hope you don’t come to sci-fi interfaces to become an expert in epidemiology. And, since I’m just Some Guy on the Internet Who Has Read Some Stuff on the Internet, you should take whatever you learn here with a grain of salt. If I get something wrong, please let me know. Here are my major sources:

A screen grab from Contagion (2011) showing Dr. Erin Mears standing before a white board, explaining to the people in the room what R-naught is.
Kate Winslet, playing epidemiologist Dr. Erin Mears in Contagion (2011), is probably more qualified than me. Hey, Kate: Call me. I have questions.

Caveat #3: To discuss using technology in our species’ pursuit of an effective global immune system is to tread into some uncomfortable territory. ​Because of the way disease works, it is not enough to surveil the infected. We must always surveil the entire population, healthy or not, for signs of a pathogen outbreak, so responses can be as swift and certain as possible. We may need to surveil certain at-risk or risk-taking populations quite closely, as potential superspreaders. Otherwise we risk getting…well…*gestures vaguely at the USA*. I am pro-privacy, so know that when I speak about health surveillance in this post, I presume that we are simultaneously trying to protect as much “other” privacy as we can, maybe by tracking less-abusable, less-personally identifiable signals. I don’t pretend this is a trivial task, and I suspect the problem is more wicked than merely difficult to execute. But health surveillance must happen, and for this reason I will speak of it as a good thing in this context.

A screen grab from Idiocracy (2006) showing one of the vending machines that continually scanned citizens bar codes and reported their location.
Making this seem a lot less stupid than it first appeared.

Caveats complete? We’ll see.


Epidemiology is a large field of study, so for purposes of this post, we’re talking about someone who studies disease at the level of the population, rather than individual cases. Fictional epidemiologists appear when there is an epidemic or pandemic in the plot, and so are concerned with two questions: What are we dealing with? and What do we need to do?

Part 1: What are we dealing with?

Our response should change for different types of threat. So it’s important for an epidemiologist to understand the nature of a pathogen. There are a few scenes in Contagion where we see scientists studying a screen with gene sequences and a protein-folding diagram, and this touches on understanding the nature of the virus. But this is a virologist’s view, and doesn’t touch on most of what an epidemiologist is ultimately hoping to build first, and that’s a case definition. It is unlikely to appear in a spreading pathogen map, but it should inform one. So even if your pathogen is fictional, you ought to understand what one is.

A screen grab from Contagion (2011), showing a display for a virologist, including gene sequences, and spectroscopy.
“We’ve sequenced the virus and determined its origin, and we’ve modeled the way it enters the cells of the lung and the brain…” —Dr. Hextall, Contagion (2011)

A case definition is the standard shared definition of what a pathogen is: how a real, live human case is classified as belonging to an epidemic or not. Some case definitions are built for non-emergency cases, like for influenza. The flu is practically a companion to humanity, i.e., with us all the time, and it mutates, so its base definition for health surveillance can be a little vague. But for the epidemics and pandemics that are in sci-fi, epidemiologists are building a case definition for outbreak investigations. These are for a pathogen in a particular time and place, and act as a standard for determining whether or not a given person is counted as a case for the purposes of studying the event.

Case definition for outbreak investigations

The CDC lists the following as the components of a case definition.

  • Clinical criteria
    • Clinical description
    • Confirmatory laboratory tests
      • These can be pages long, with descriptions of recommended specimen collections, transportation protocols, and reporting details.
    • Combinations of symptoms (subjective complaints)
    • Signs (objective physical findings)
    • Source
  • (Sometimes) Specifics of time and place.

There are sometimes different case definitions based on the combination of factors. COVID-19 case definitions from the World Health Organization, for instance, are broken down into suspect, probable, and confirmed. A person showing all the symptoms who has been in an area where an infected person was would be suspect. A person whose laboratory results confirmed the presence of SARS-CoV-2 is confirmed. Notably for a map, these three levels might warrant three levels of color.
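
If you wanted to carry those three levels into a map’s visual language, a minimal sketch might be nothing more than a lookup; the specific colors below are my own placeholders, not WHO or CDC guidance.

```python
# Hypothetical mapping from WHO-style case classification to map color.
# The colors are placeholders chosen for illustration only.

CASE_COLORS = {
    "suspect":   "#f5c542",  # amber: symptoms plus plausible exposure
    "probable":  "#f58a42",  # orange: stronger evidence, no lab confirmation yet
    "confirmed": "#d62728",  # red: laboratory-confirmed
}

def color_for(case_status):
    return CASE_COLORS.get(case_status, "#cccccc")  # gray for unknown/unclassified

print(color_for("probable"))
```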

As an example, here is the CDC case definition for ebola, as of 09 JUL 2019.

n.b. Case definitions are unlikely to work on screen

Though the case definition is critical to epidemiology, and may help the designer create the spreading pathogen map (see the note about three levels of color, above), the thing itself is too text-heavy to be of much use for a sci-fi interface, which relies much more on visuals. Better might be the name or an identifying UUID of the definition. WHO case references look like this: WHO/COVID-19/laboratory/2020.5. I do not believe the CDC has any kind of UUID for its case definitions.

While case definitions don’t work on screen, counts and rates do. See below under Surveil Public Health for more on counts and rates.

Disease timeline

Infectious disease follows a fairly standard order of events, depicted in the graphic below. Understanding this typical timeline of events helps you understand four key metrics for a given pathogen: chains of transmission, R0, SI, and CFR.

A redesigned graphic from the CDC Principles of Epidemiology handbook, showing susceptibility, exposure, subclinical disease with pathologic changes and the beginning of an infectious period, the onset of symptoms and beginning of clinical disease, diagnosis, the end of the infectious period, and a resolution of recovery, life-long disability, or death.

For each of the key metrics, I’ll list ranges and variabilities where appropriate. These are observed attributes in the real world, but an author creating a fictional pathogen, or a sci-fi interface maker needing to illustrate them, may need to know what those numbers look like and how they tend to behave over time so they can craft these attributes.

Chains of Transmission

What connects the individual cases in an epidemic are the methods of transmission. The CDC lists the following as the basics of transmission.

  • Reservoir: where the pathogen normally lives and multiplies. This could be the human body, or a colony of infected mynocks, a zombie, or a moldy Ameglian Major flank steak forgotten in a fridge. Or your lungs.
  • Portal of exit, or how the pathogen leaves the reservoir. Say, the open wound of a zombie, or an innocent recommendation, or an uncovered cough.
  • Mode of transmission tells how the pathogen gets from the portal of exit to the portal of entry. Real-world examples include mosquitos, fomites (you remember fomites from the beginning of this post, don’t you?), sex, or respiratory particles.
  • Portal of entry, how the pathogen infects a new host. Did you inhale that invisible cough droplet? Did you touch that light saber and then touch your gills? Now it’s in you like midichlorians.
  • Susceptible host is someone who can catch the disease if exposed.

A map of this chain of transmission would be a fine secondary-screen to a spreading pathogen map, illustrating how the pathogen is transmitted. After all, this will inform the containment strategies.

Variability: Once the chain of transmission is known, it would only change if the pathogen mutated.

Basic Rate of Reproduction = How contagious it is

A famous number that’s associated with contagiousness is the basic reproduction rate. If you saw Contagion you’ll recall this is written as R0, and pronounced “R-naught.” It describes, on average, how many people an infected person will infect before they stop being infectious.

  • If R0 is below 1, an infected person is unlikely to infect another person, and the pathogen will quickly die out.
  • If R0 is 1, an infected person is likely to infect one other, and the disease will continue through a population at a steady rate without intervention.
  • If R0 is higher than 1, a pathogen stands to explode through a population.

The CDC book tells me that R0 describes how the pathogen would reproduce through the population with no intervention, but other sources talk of lowering the R0 so I’m not certain if those other sources are using it less formally, or if my understanding is wrong. For now I’ll go with the CDC, and talk about R0 as a thing that is fixed.

It, too, is not an easy thing to calculate. It can depend on the duration of contagiousness after a person becomes infected, the likelihood of infection for each contact between a susceptible person and an infectious person or vector, and the contact rate.

Variability: It can change over time. When a novel pathogen first emerges, the data is too sparse and epidemiologists are scrambling to do the field work to confirm cases. As more data comes in and numbers get larger, the number will converge toward what will be its final number.

It can also differ based on geography, culture, geopolitical boundaries, and the season, but the literature (such as I’ve read) refers to R0 as a single number.

Range: For pathogens with R0 > 1, the value can be as high as 12–18, but that is measles morbillivirus, an infectious outlier. The average range of R0 in this sample, not including measles, is 2.5–5.2. MEV-1 from Contagion has a major dramatic moment when it mutates and its predicted R0 becomes 4, making it roughly as contagious as the now-eradicated killer smallpox.

Data from https://en.wikipedia.org/wiki/Basic_reproduction_number

Serial Interval = How fast it spreads

Serial interval is the average time between successive cases in a chain of transmission. This tells the epidemiologist how fast a pathogen stands to spread through a population.

Variability: Like the other numbers, SI is calculated and updated with new cases while an epidemic is underway, but it tends to converge toward a number. SI for some respiratory diseases is charted below. Influenza A moves very fast. Pertussis is much slower.

Range: As you can see in the chart, SI can be as fast as 2.2 days, or as slow as 22.8 days. The median in this set is 14 days and the average is 12.8. SARS-CoV-2 is currently estimated to be about 4 days, which is very fast.

Data from: https://academic.oup.com/aje/article/180/9/865/2739204

CFR = How deadly it is

The case fatality rate is the percentage of cases that prove fatal. It is very often shortened to CFR. This is not always easy to calculate.

Variability: Early in a pandemic it might be quite low because hospital treatment is still available. Later in a pandemic, as hospitals and emergency rooms are packed full, the CFR might rise quite high. Until a pathogen is eradicated, the precise CFR is changing with each new case. Updates can occur daily, or in real time with reports. In a sci-fi world, it could update in real time directly from ubiquitous sensors, and perhaps be predicted by a specialty A.I. or precognitive character.

Range: Case fatality rates range from 100% for the incurable, like kuru, to 0.001% for chickenpox affecting unvaccinated children. The CFR changes greatly at the start of a pandemic and slowly converges toward its final number.

So, if the spreading pathogen map is meant to convey to an epidemiologist the nature of the pathogen, it should display these four factors (a minimal projection sketch using them follows the list):

  1. Mode of Transmission: How it spreads
  2. R0: How contagious it is
  3. SI: How fast it spreads
  4. CFR: How deadly it is
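
As promised above, here is a minimal, deliberately naive projection sketch showing how those four numbers might drive a map’s data over time: generation-by-generation growth from R0 and SI, with deaths from CFR. It ignores susceptible depletion, interventions, and everything else a real epidemiological model handles, so treat it as an illustration of the inputs, not a usable model.

```python
# Naive spreading-pathogen projection from the key metrics.
# Real models (SIR/SEIR and beyond) account for susceptible depletion,
# interventions, and stochastic effects; this is only an illustration.

def project(initial_cases, r0, serial_interval_days, cfr, horizon_days):
    """Yield (day, cumulative_cases, cumulative_deaths) generation by generation."""
    cases = initial_cases
    total = initial_cases
    day = 0
    while day <= horizon_days:
        yield day, round(total), round(total * cfr)
        cases *= r0                    # each generation infects R0 times as many
        total += cases
        day += serial_interval_days    # generations are spaced by the serial interval

# R0 of 4 comes from the post's MEV-1 example; the other numbers are made up.
for day, cases, deaths in project(initial_cases=10, r0=4, serial_interval_days=4,
                                  cfr=0.25, horizon_days=30):
    print(f"day {day:>2}: ~{cases:>8} cases, ~{deaths:>7} deaths")
```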

Part 2: What do we do?

An epidemiologist during an outbreak has a number of important responsibilities beyond understanding the nature of the pathogen. I’ve taken a crack at listing those below. Note: this list is my interpretation of the CDC materials, rather than their list. As always, offer corrections in comments.

  • Surveil the current state of things
  • Prevent further infections
  • Communicate recommendations

Epidemiology has other, non-outbreak functions, but those routine, non-emergency responsibilities rarely make it to cinema. And since “communicate recommendations” is pretty much covered under “The Simple Case,” above, the rest of this post will be dedicated to health surveillance and prevention tools.

Surveil the current state of things

In movies the current state of things is often communicated via the spreading pathogen map in some command and control center. The key pieces of information on these maps are counts and rates.

Counts and Rates

The case definition (above) helps field epidemiologists know which cases to consider in the data set for a given outbreak. They routinely submit reports of their cases to central authorities like the CDC or WHO, who aggregate them into counts, which are tallies of known cases. (And though official sources in the real world are rightly cautious to do it, sci-fi could also include an additional layer of suspected or projected cases.) Counts, especially over time, are important for tracking the spread of a virus. Most movie goers have basic numeracy, so red number going up = bad is an easy read for an audience.

Counts can be broken down into many variables. Geopolitical regions make sense as governmental policies and cultural beliefs can make meaningful distinctions in how a pathogen spreads. In sci-fi a speculative pathogen might warrant different breakdowns, like frequency of teleportation, or time spent in FTL warp fields, or genetic distance from the all-mother.

In the screen cap of the Johns Hopkins COVID-19 tracker, you can see counts high in the visual hierarchy for total confirmed (in red), total deaths (in white), and total recovered (in green). The map plots the current status of the counts.

From the Johns Hopkins COVID-19 tracker, screen capped in the halcyon days of 23 MAR 2020.

Rates are the other numbers that epidemiologists are interested in, to help normalize the spread of a pathogen for different group sizes. (Colloquially, rate often implies change over time, but in the field of epidemiology, it is a static per capita measurement at a point in time.) For example, 100 cases is around a 0.00001% rate in China, with its population of 1.386 billion, but it would be a full 10% rate in Vatican City, so counts can be a poor comparison for understanding how much of a given population is affected. By representing the rates alongside the counts you can detect whether the pathogen is affecting a subgroup of the global population more or less than others of its kind, which may warrant investigation into causes, or provide a grim lesson to those who take the threat lightly.
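
The arithmetic behind that China/Vatican City comparison is trivial, but worth making concrete, because it is the whole reason a dashboard should show rates alongside counts. A minimal sketch, with rounded populations:

```python
# Count vs. rate: the same count of cases means very different things
# depending on the population it sits in. Populations are rounded.

def rate(count, population):
    """Per-capita rate, expressed as a percentage of the population."""
    return 100 * count / population

for place, population in [("China", 1_386_000_000), ("Vatican City", 1_000)]:
    print(f"{place}: 100 cases = {rate(100, population):.5f}% of the population")

# prints roughly 0.00001% for China and 10% for Vatican City
```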

Counts and rates over time

The trend line in the bottom right of the Johns Hopkins dashboard helps a viewer understand how the counts of cases are trending over time, and might be quite useful for helping telegraph the state of the pandemic to an audience, though having it tucked in a corner and in orange may not draw the attention it needs for instant understanding.

These two displays show different data, and one is more cinegenic than the other. Confirmed cases, on the left, is a total, and at best will only ever level off. If you know what you’re looking at, you know that older cases represented by the graph are…uh…resolved (i.e. recovery, disability, or death) and that a level-off is the thing we want to see there. But the chart on the right plots the daily increase, and will look something like a bell curve when the pandemic comes to an end. That is a more immediate read (bad thing was increasing, bad thing peaked, bad thing is on the decline) and so I think is better for cinema.

At a glance you can also tell that China appears to have its shit sorted. [Obviously this is an old screen grab.]

In the totals, sparklines would additionally help a viewer know whether things are getting better or getting worse in the individual geos, and would help sell the data via small multiples on a close-up.
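
The difference between the two charts is just a difference operation on the same series: daily new cases are the day-over-day change in the cumulative count. A minimal sketch, with invented numbers:

```python
# Cumulative confirmed cases vs. daily new cases: the second is just the
# day-over-day difference of the first. Numbers are invented.

cumulative = [10, 25, 60, 140, 290, 480, 640, 720, 760, 775]

daily_new = [cumulative[0]] + [
    today - yesterday for yesterday, today in zip(cumulative, cumulative[1:])
]

print(daily_new)  # the series that looks like a bell curve once the outbreak turns
```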

Plotting cases on maps

Counts and rates are mostly tables of numbers with a few visualizations. The most cinegenic thing you can show is cases on geopolitical maps. All of the examples, except the trainwreck that is The Andromeda Strain pathogen map, show this, even the extradiegetic ones. Real-world pathogens mostly spread through physical means, so physical counts of areas help you understand where the confirmed cases are.

Which projection?

But as we all remember from that one West Wing scene, projections have consequences. When wondering where in the world we should send much-needed resources, Mercator will lie to you, exaggerating land at the poles at the expense of equatorial regions. I am a longtime advocate for alternate projections, such as—from the West Wing scene—the Gall-Peters. I am an even bigger fan of the Dymaxion and Waterman projections. I think they look quite sci-fi because they are familiar-but-unfamiliar, and they have some advantages for showing things like abstract routes across the globe.

A Dymaxion or Fuller projection of the earth.

If any supergenre is here to help model the way things ought to be, it’s sci-fi. If you only have a second or less of time to show the map, then you may be locked to Mercator for its instant-recognizability, but if the camera lingers, or you have dialogue to address the unfamiliarity, or if the art direction is looking for uncanny-ness, I’d try for one of the others.

What is represented?

Of course you’re going to want to represent the cases on the map. That’s the core of it. And it may be enough if the simple takeaway is thing bad getting worse. But if the purpose of the map is to answer the question “what do we do,” the cases may not be enough. Recall that another primary goal of epidemiologists is to prevent further infections. And the map can help indicate this and inform strategy.

Take for instance, 06 APR 2020 of the COVID-19 epidemic in the United States. If you had just looked at a static map of cases, blue states had higher counts than red states. But blue states had been much more aggressive in adopting “flattening the curve” tactics, while red states had been listening to Trump and right wing media that had downplayed the risk for many weeks in many ways. (Read the Nate Silver post for more on this.) If you were an epidemiologist, seeing just the cases on that date might have led you to want to focus social persuasion resources on blue states. But those states have taken the science to heart. Red states on the other hand, needed a heavy blitz of media to convince them that it was necessary to adopt social distancing and shelter-in-place directives. With a map showing both cases and social acceptance of the pandemic, it might have helped an epidemiologist make the right resource allocation decision quickly.

Another example is travel routes. International travel played a huge role in spreading COVID-19, and visualizations of transportation routes can prove more informative in understanding its spread than geographic maps. Below is a screenshot of the New York Times’ beautiful COVID-19 MAR 2020 visualization How the Virus Got Out, which illustrates this point.

Other things that might be visualized depend, again, on the chain of transmission.

  • Is the pathogen airborne? Then you might need to show upcoming wind and weather forecasts.
  • Is the reservoir mosquitoes? Then you might want to show distance to bodies of still water.
  • Is the pathogen spread through the mycelial network? Then you might need to show an overlay of the cosmic mushroom threads.

Whatever your pathogen, use the map to show the epidemiologist ways to think about its future spread, and decide what to do. Give access to multiple views if needed.

How do you represent it?

When showing intensity-by-area, there are lots of ways you could do it. All of them have trade-offs. The Johns Hopkins dashboard uses a proportional symbol map, with a red dot, centered on the country or state, whose radius is larger for more confirmed cases. I don’t like this for pandemics, mostly because the red dots begin to overlap and make it difficult to see any detail without interacting with the map to get a better focus. It does make for an immediate read. In this 23 MAR 2020 screen cap, it’s pretty obvious that the US, Europe, and China are current hotspots, but to get more detail you have to zoom in, and the audience, if not the characters, doesn’t have that option. I suppose it also provides a tone-painting sense of unease when the symbols become larger than the area they are meant to represent. It looks and feels like the area is overwhelmed with the pathogen, which is an appropriate, if emotional and uninformative, read.

The Johns Hopkins dashboard uses a proportional symbol map. And I am distraught at how quaint those numbers seem now, much less what they will be in the future.
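
Part of why those red dots swallow the map is the choice of how a case count maps to circle size. A common cartographic fix is to scale the symbol’s area rather than its radius. The sketch below illustrates the difference; the counts and base size are made up.

```python
import math

# Proportional symbol sizing: scaling radius linearly with the count makes big
# hotspots swallow the map; scaling the area (radius ~ sqrt(count)) keeps the
# visual difference proportional. Counts and the base size are made up.

BASE_RADIUS_PX = 4

def radius_linear(count, per_case=0.01):
    return BASE_RADIUS_PX + count * per_case

def radius_by_area(count, per_case=0.01):
    return BASE_RADIUS_PX + math.sqrt(count * per_case)

for count in (100, 10_000, 1_000_000):
    print(count, round(radius_linear(count), 1), round(radius_by_area(count), 1))
```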

Most of the sci-fi maps we see are a variety of chorochromatic map, where color is applied to the discrete thing where it appears on the map. (This is as opposed to a choropleth map, where color fills in existing geopolitical regions.) The chorochromatic option is nice for sci-fi because the color makes a shape—a thing—that does not know of or respect geopolitical boundaries. See the example from Evolution below.

Governor Lewis watches the predicted spread of the Glen Canyon asteroid organisms out of Arizona and to the whole of North America. Evolution (2001)

It can be hard to know (or pointlessly detailed to show) exactly where a given thing is on a map, like, say, where infected people literally are. To overcome this you could use a dot-distribution map, as in the Outbreak example (repeated below so you don’t have to scroll that far back up).

Outbreak (1995), again.

Like many such maps, the dot-distribution becomes solid red to emphasize passing over some magnitude threshold. For my money, the dots are a little deceptive, as if each dot represented a person rather than part of a pattern that indicates magnitude, but a glance at the whole map gives the right impression.

For a real world example of dot-distribution for COVID-19, see this example posted to reddit.com by user Edward-EFHIII.

COVID-19 spread from January 23 through March 14th.

Oftentimes dot-distribution is reserved for low magnitudes, and once infections pass a threshold, the map becomes a choropleth. See this example from the world of gaming.

A screen grab of the game Plague, Inc., about 1/3 of the way through a game.
In Plague, Inc., you play the virus, hoping to win against humanity.

Here you can see that India and Australia have dots, while China, Kyrgyzstan, Tajikistan, Turkmenistan, and Afghanistan (I think) are “solid” red.

The other representation that might make sense is a cartogram, in which predefined areas (like country or state boundaries) are scaled to show the magnitude of a variable. Continuous-area cartograms can look hallucinogenic, and would need some explanation by dialogue, but can overcome the inherent bias that size = importance. It might be a nice secondary screen alongside a more traditional one.

A side-by-side comparison of a standard map projection and a continuous cartogram.
On the left, a choropleth map of the 2012 US presidential election, where it looks like red states should have won. On the right, a continuous cartogram with state sizes scaled to reflect their populations, making it more intuitive why blue states carried the day.

Another gorgeous projection dispenses with the geographic layout. Dirk Brockmann, professor at the Institute for Theoretical Biology, Humboldt University, Berlin, developed a visualization that places the epicenter of a disease at the center of a node graph, and plots every city around it based on its “effective distance” through the airline network, that is, how well-connected it is to the epicenter by flights. Plotting proportional symbols onto this graph makes the spread of the disease radiate in mostly-predictable waves. Pause the animation below and look at the red circles. You can easily predict where the next ones will likely be. That’s an incredibly useful display for the epidemiologist. And as a bonus, it’s gorgeous and a bit mysterious, so it would make a fine addition to a more traditional map in a sci-fi display. Read more about this innovative display on the CityLab blog. (And thanks, Mark Coleran, for the pointer.)
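If you wanted to prototype a Brockmann-style view for a sci-fi screen, the core computation is compact. Here is a minimal sketch in Python, assuming (as I understand the published formulation) an effective distance of 1 − ln(P) per route, where P is the fraction of outbound travelers taking that route; the cities and flows here are made up for illustration.

```python
# A minimal sketch of effective distance, assuming d = 1 - ln(P) per route,
# where P is the fraction of outbound travelers on that route.
# Cities and flows are invented for illustration.
import math
import networkx as nx

flows = {
    ("Epicenter", "Hub A"): 0.30,
    ("Epicenter", "Hub B"): 0.10,
    ("Hub A", "City C"): 0.05,
    ("Hub A", "City D"): 0.07,
    ("Hub B", "City D"): 0.08,
}

G = nx.DiGraph()
for (src, dst), p in flows.items():
    # Rare routes (small P) are "effectively" farther away
    G.add_edge(src, dst, weight=1 - math.log(p))

# Each city's radial position on the node graph would be proportional
# to its shortest effective distance from the epicenter.
eff = nx.shortest_path_length(G, source="Epicenter", weight="weight")
for city, d in sorted(eff.items(), key=lambda kv: kv[1]):
    print(f"{city:>10}: {d:5.2f}")
```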

How does it move?

First I should say I don’t know that it needs to move. We have information graphics that display predicted change-over-area without motion: hurricane forecast maps. These describe a thing’s current location and, simultaneously, the places it is likely to be over the next few days.

National Hurricane Center’s 5-day forecast for Hurricane Florence, 08 SEP 2018.
Image: NHC

If you are showing a chorochromatic map, then you can use “contour lines” or color regions to show the predicted future spread.

Not based on any real pathogen.

Another possibility is small multiples, where the stages are laid out across space instead of animated over time. This makes it harder to compare stages, but doesn’t leave the user searching for the view they want. You can mitigate this with small lines on each view representing the boundaries of other stages.

Not based on any real pathogen.

The side views could also represent scenarios. Instead of +1, +2, etc., the side views could show the modeled results for different choices. Perhaps those scenario side views and their projected counts could be animated.

To sing the praises of the static map: such a view, updated as data comes in, means a user does not have to wait for the right frame to pop up, or interact with a control to get the right piece of information, or miss some detail because they just happen to have the display paused on the wrong frame of an animation.

But, I realize that static maps are not as cinegenic as a moving map. Movement is critical to cinema, so a static map, updating only occasionally as new data comes in, could look pretty lifeless. Animation gives the audience more to feel as some red shape slowly spreads to encompass the whole world. So, sure. I think there are better things to animate than the primary map, but doing so puts us back into questions of style rather than usability, so I’ll leave off that chain of thought and instead show you the fourth example in this section, Contagion.

MEV-1 spreads from fomites! It’s fomites! Contagion (2011), designed by Cory Bramall of Decca Digital.

Prevent further transmissions: Containment strategies

The main tactic for epidemiological intervention is to deny pathogens the opportunity to jump to new hosts. The top-down way to do this is to persuade community leaders to issue broad instructions, like the ones around the world that have us keeping our distance from strangers, wearing masks and gloves, and sheltering-in-place. The bottom-up tactic is to identify those who have been infected or put at risk for contracting a pathogen from an infected person. This is done with contact tracing.

Contain Known Cases

When susceptible hosts simply do not know whether or not they are infected, some people will take their lack of symptoms to mean they are not infectious and do risky things. If these people are infectious but not yet showing symptoms, they spread the disease. For this reason, it’s critical to do contact tracing of known cases to inform and encourage people to get tested and adopt containment behaviors.

Contact tracing

There are lots of scenes in pathogen movies where scientists stand around whiteboards with hastily-written diagrams of who-came-into-contact-with-whom, as they hope to find and isolate cases, or to find “patient 0,” or to identify super-spreaders and isolate them.

An infographic from Wikimedia showing a flow chart of contact tracing. Its label reads “Contact tracing finds cases quickly so they can be isolated and reduce spread.”
Wikimedia file, CC BY-SA 4.0

These scenes seem ripe for improvement by technology and AI. There are opt-in self-reporting systems, like those that were used to contain COVID-19 in South Korea, or the proposed NextTrace system in the West. In sci-fi, this can go further.

Scenario: Imagine an epidemiologist talking to the WHO AI and asking it to review public footage, social media platforms, and cell phone records to identify all the people that a given case has been in contact with. It could even reach out and do field work, calling humans (think Google Duplex) who might be able to fill in its information gaps. Field epidemiologists could then focus on situations where the suspected cases don’t have phones or computers.

Or, for that matter, we should ask why the machine should wait to be asked. It should be set up as an agent, reviewing these data feeds continually, and reaching out in real time to manage an outbreak.

  • SCENE: Karen is walking down the sidewalk when her phone rings.
  • Computer voice:
  • Good afternoon, Karen. This is Florence, the AI working on behalf of the World Health Organization.
  • Karen:
  • Oh no. Am I sick?
  • Computer voice:
  • Public records indicate you were on a bus near a person who was just confirmed to be infected. Your phone tells me your heart rate has been elevated today. Can you hold the phone up to your face so I can check for a fever?
  • Karen does. As the phone does its scan, people on the sidewalk behind her can be seen reading texts on their phones and moving to the other side of the street. Karen sees that Florence is done, and puts the phone back to her ear.
  • Computer voice:
  • It looks as if you do have a fever. You should begin social distancing immediately, and improvise a mask. But we still need a formal test to be sure. Can you make it to the testing center on your own, or may I summon an ambulance? It is a ten minute walk away.
  • Karen:
  • I think I can make it, but I’ll need directions.
  • Computer voice:
  • Of course. I have also contacted your employer and spun up an AI which will be at work in your stead while you self-isolate. Thank you for taking care of yourself, Karen. We can beat this together.

Design challenge: In the case of an agentive contact tracer, the display would be a social graph displayed over time, showing confirmed cases as they connect to suspected cases (using evidence-of-proximity or evidence-of-transmission) as well as the agent’s ongoing work in contacting them and arranging testing. It would show isolation monitoring and predicted risks of breaking isolation. It would prioritize the cases that pose the greatest risk of spreading the pathogen, and reach out for human intervention when its contact attempts failed or met resistance. It could simultaneously trace contacts “forward” to minimize new infections and trace contacts backward to find a pathogen’s origins.
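To make that design challenge a bit more concrete, here is a minimal sketch of the underlying data structure such an agent might traverse: a time-stamped contact graph it can walk forward (who is newly at risk) and backward (toward the origin). All names, fields, and statuses are hypothetical, not a real system.

```python
# A minimal sketch of a time-stamped contact graph for an agentive tracer.
# Names, statuses, and evidence strings are hypothetical, not a real system.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Person:
    name: str
    status: str = "unknown"   # unknown | suspected | confirmed | cleared
    contacts: list = field(default_factory=list)   # (other, when, evidence)

def link(a: Person, b: Person, when: datetime, evidence: str):
    """Record evidence-of-proximity in both directions."""
    a.contacts.append((b, when, evidence))
    b.contacts.append((a, when, evidence))

def trace_forward(case: Person, since: datetime):
    """Contacts after `since`: people the agent should reach out to next."""
    return [(p, when, ev) for p, when, ev in case.contacts if when >= since]

def trace_backward(case: Person, until: datetime):
    """Contacts before `until`: candidates when hunting the pathogen's origin."""
    return [(p, when, ev) for p, when, ev in case.contacts if when < until]

karen = Person("Karen", status="suspected")
rider = Person("Bus rider 44", status="confirmed")
link(karen, rider, datetime(2020, 4, 6, 8, 30), "bus proximity, transit records")

for p, when, ev in trace_backward(karen, datetime(2020, 4, 7)):
    print(p.name, when, ev)
```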

Another consideration for such a display is extension beyond the human network. Most pathogens mutate much more freely in livestock and wild animal populations, making their way into humans occasionally. It happened this way for SARS (bats → civets → people), MERS (bats → camels → people), and COVID-19 (bats → pangolins → people). (Read more about bats as a reservoir.) It’s not always bats, by the way; livestock are also notorious breeding grounds for novel pathogens. Remember bird flu? Swine flu? This “zoonotic network” should be a part of any pathogen forensic or surveillance interface.

A photograph of an adorable pangolin, the most trafficked animal in the world. According to the International Union for Conservation of Nature (IUCN), more than a million pangolins were poached in the decade prior to 2014.
As far as SARS-CoV-2 is concerned, this is a passageway.
U.S. Fish and Wildlife Service Headquarters / CC BY (https://creativecommons.org/licenses/by/2.0)

Design idea: Even the notion of what it means to do contact tracing can be rethought in sci-fi. Have you seen the Mythbusters episode “Contamination”? In it Adam Savage has a tube latexed to his face, right near his nose, that drips a fluorescent dye at the same rate a person’s runny nose might drip. Then he attends a staged dinner party where, despite keeping a napkin on hand to dab at the fluid, the dye gets onto everything and everyone except the one germophobe. It brilliantly illustrates the notion of fomites and how quickly an individual can spread a pathogen socially.

Now imagine this same sort of tracing, but instead of dye, it is done with computation. A camera watches, say, grocery shelves, and notes who touched what where and records the digital “touch,” or touchprint, along with an ID for the individual and the area of contact. This touchprint could be exposed directly with augmented reality, appearing much like the dye under black light. The digital touch mark would only be removed from the digital record of the object if it is disinfected, or after the standard duration of surface stability expires. (Surface stability is how long a pathogen remains a threat on a given surface). The computer could further watch the object for who touches it next, and build an extended graph of the potential contact-through-fomites.

Ew, I got touchprint on me.

You could show the AR touchprint to the individual doing the touching; this would help remind them to wear protective gloves if the science calls for it, or prompt them to disinfect the object themselves. A digital touchprint could also be used by workers tasked with disinfecting the surfaces, or by disinfecting drones. Lastly, if an individual is confirmed to have the pathogen, the touchprint graph could immediately identify those who had touched an object at the same spot as the infected person. The system could provide field epidemiologists with an instant list of people to contact (and things to clean), or, if the Florence AI described above were active, the system could reach out to individuals directly. The amount of data in such a system would be massive, and the aforementioned privacy issues would be similarly massive, but in sci-fi you can bypass the technical constraints, and the privacy issues might just be a part of the diegesis.

In case you’re wondering how long that touch mark would last for SARS-CoV-2 (the virus that causes COVID-19), this study from the New England Journal of Medicine says it’s 4 hours for copper, 24 hours for paper and cardboard, and 72 hours on plastic and steel.
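For illustration, here is a minimal sketch of what a digital touchprint record might look like, using those surface-stability durations as the automatic expiry rule; the person and object IDs are invented.

```python
# A minimal sketch of a touchprint ledger, using the surface-stability hours
# quoted above as the automatic expiry rule. IDs and objects are invented.
from datetime import datetime, timedelta

SURFACE_STABILITY_HOURS = {"copper": 4, "cardboard": 24, "paper": 24,
                           "plastic": 72, "steel": 72}

touchprints = []   # the shared ledger a watching camera system appends to

def record_touch(person_id, object_id, surface, when):
    expires = when + timedelta(hours=SURFACE_STABILITY_HOURS[surface])
    touchprints.append({"person": person_id, "object": object_id,
                        "surface": surface, "at": when, "expires": expires})

def live_touches(object_id, now):
    """Marks still worth rendering in AR or flagging for disinfection."""
    return [t for t in touchprints
            if t["object"] == object_id and t["at"] <= now < t["expires"]]

record_touch("shopper-17", "soup-can-123", "steel", datetime(2020, 4, 6, 10, 0))
print(live_touches("soup-can-123", datetime(2020, 4, 8, 9, 0)))   # still "live"
print(live_touches("soup-can-123", datetime(2020, 4, 10, 9, 0)))  # expired: []
```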

Anyway, all of this is to say that the ongoing efforts by the agent to do the easy contact tracing would be an excellent, complicated, cinegenic side-display to a spreading pathogen map.

Destroying non-human reservoirs

Another way to reduce the risk of infection is to seal or destroy reservoirs. Communities encourage residents to search their properties and remove any standing water to eliminate breeding grounds for mosquitos, for example. There is the dark possibility that a pathogen is so lethal that a government might want to “nuke it from orbit” and kill even human reservoirs. Outbreak features an extended scene in which soldiers seek to secure a neighborhood known to be infected with the fictional Motaba virus, and threaten to murder a man trying to escape with his family. For this dark reason, in addition to distance-from-reservoir, the location of actual reservoirs may be important to your spreading pathogen map. Maybe also show counts of the Hail Mary tools that are available, their readiness, effects, etc.

To close out the topic of “What do we do?”, let me now point you to the excellent and widely-cited Medium article by Tomas Pueyo, “Act Today or People Will Die,” for thoughts on that real-world question.

The…winner(?)

At the time of publication, this is the longest post I’ve written on this blog. Partly that’s because I wanted to post it as a single thing, but also because it’s a deep subject that’s very important to the world, and there are lots and lots of variables to consider when designing one of these maps.

Which makes it not surprising that most of the examples in this mini survey are kind of weak, with only one true standout. That standout is the World War Z spreading disaster map, shown below.

World War Z (2013)

It goes by pretty quickly, but you can see more of the features discussed above in this clip than in any of the other examples.

A combination of chorochromatic marking for the zombie infection, and choropleth marking for countries. Note the signals showing countries where data is unavailable.
Along the bottom, rates (not cases) are expressed as “Population remaining.” That bar of people along the bottom would start slow and then just explode to red, but it’s a nice “things getting worse” moment. Maybe it’s a log scale?
A nice augmentation of the main graphic is down the right-hand side. A day count in the upper right (with its shout-out to zombie classic 28 Days Later), and what I’m guessing are resources, including nukes.

It doesn’t have that critical layer of forecasting data, but it got so much more right than its peers, I’m still happy to have it. Thanks to Mark Coleran for pointing me to it.


Let’s not forget that we are talking about fiction, and few people in the audience will be epidemiologists, standing up in the middle of the cinema (remember when we could go to cinemas?) to shout, “What’s with this R0 of 0.5? What is this, the LaCroix of viruses?” But c’mon, surely we can make something other than Andromeda Strain’s Pathogen Kaleidoscope, or Contagion’s Powerpoint wipe. Modern sci-fi interfaces are about spectacle, about overwhelming the users with information they can’t possibly process, and which they feel certain our heroes can—but they can still be grounded in reality.

Lastly, while I’ve enjoyed the escapism of talking about pandemics in fiction, COVID-19 is very much with us and very much a threat. Please take it seriously and adopt every containment behavior you can. Thank you for taking care of yourself. We can beat this together.

The Cloak of Levitation, Part 4: Improvements

In prior posts we looked at an overview of the cloak, pondered whether it could ever work in reality (mostly, in the far future), and whether or not the cloak could be considered agentive (mostly, yes). In this last post I want to look at what improvements I might make if I were designing something akin to this for the real world.

Given its wealth of capabilities, the main complaint might be its lack of language.

A mute sidekick

It has a working theory of mind, a grasp of abstract concepts, and intention, so why does it not use language as part of a toolkit to fulfill its duties? Let’s first admit that mute sidekicks are kind of a trope at this point. Think R2-D2, Silent Bob, BB-8, Aladdin’s Magic Carpet (Disney), Teller, Harpo, Bernardo / Paco (admittedly obscure), Mini-Me. They’re a thing.

tankerbell.gif

Yes, I know she could talk to other fairies, but not to Peter.

Despite being a trope, its muteness in a combat partner is a significant impediment. Imagine it being able to say, “Hey Steve, he’s immune to the halberd. But throw that ribcage-looking thing on the wall at him, and you’ll be good.” Strange finds himself in life-or-death situations pretty much constantly, so having to disambiguate vague gestures wastes precious time that might make the difference between life and death. For, like, everyone on Earth.

The Cloak of Levitation, Part 3: But is it agentive?

So I mentioned in the intro to this review that I was drawn to review Doctor Strange (with my buddy and co-reviewer Scout Addis) because the Cloak displays some interesting qualities in relation to the book I just published. Buy it, read it, review it on amazon.com, it’s awesome.

That sales pitch done, I can quickly cover the key concepts here.

  • A tool, like a hammer, is a familiar but comparatively-dumb category of thing that only responds to a user’s input. Tool has been the model of the thing we’re designing in interaction design for, oh, 60 years, but it is being mostly obviated by narrow artificial intelligence, which can be understood as automatic, assistive, or agentive.
  • Assistive technology helps its user with the task she is focused on: Drawing her attention, providing information, making suggestions, maybe helping augment her precision or force. If we think of a hammer again, an assistive might draw her attention to the best angle to strike the nail, or use an internal gyroscope to gently correct her off-angle strike.
  • Agentive technology does the task for its user. Again with the hammer, she could tell hammerbot (a physical agent, but there are virtual ones, too) what she wants hammered and how. Her instructions might be something like: Hammer a ha’penny nail every decimeter along the length of this plinth. As it begins to pound away, she can then turn her attention to mixing paint or whatever.

When I first introduce people to these distinctions, I step one rung up on Wittgenstein’s Ladder and talk about products that are purely agentive or purely assistive, as if agency were a quality of the technology. (Thanks to TU prof P.J. Stappers for distinguishing these as ontological and epistemological approaches.) The Roomba, for example, is almost wholly agentive as a vacuum. It has no handle for you to grab, because it does the steering and pushing and vacuuming.

roomba_r2_d2_1

Yes, it’s a real thing you can own.

Once you get these basic ideas in your head, we can take another step up the Ladder together and clarify that agency is not necessarily a quality of the thing in the world. It’s subtler than that. It’s a mode of relationship between user and agent, one which can change over time. Sophisticated products should be able to shift their agency mode (between tool, assistant, agent, and automation) according to the intentions and wishes of their user. Hammerbot is useful, but still kind of dumb compared to its human. If there’s a particularly tricky or delicate nail to be driven, our carpenter might ask hammerbot’s assistance, but really, she’ll want to handle that delicate hammering herself.

Which brings us back to the Cloak.

“Real-time,” Interplanetary Chat

While recording a podcast with the guys at DecipherSciFi about the twee(n) love story The Space Between Us, we spent some time kvetching about how silly it was that many of the scenes involved Gardner, on Mars, in a real-time text chat with a girl named Tulsa, on Earth. It’s partly bothersome because throughout the rest of the movie, the story tries for a Mohs sci-fi hardness of, like, 1.5, somewhere between Real Life and Speculative Science, so it can’t really excuse itself through the Applied Phlebotinum that, say, Star Wars might use. The rest of the film feels like it’s trying to have believable science, but during these scenes it just whistles, looks the other way, and hopes you don’t notice that the two lovebirds are breaking the laws of physics as they swap flirt emoji.

Hopefully unnecessary science brief: Mars and Earth are far away from each other. Even if the communications transmissions are sent at light speed between them, it takes much longer than the 1 second of response time required to feel “instant.” How much longer? It depends. The planets orbit the sun at different speeds, so aren’t a constant distance apart. At their closest, it takes light 3 minutes to travel between Mars and Earth, and at their farthest—while not being blocked by the sun—it takes about 21 minutes. A round-trip is double that. So nothing akin to real-time chat is going to happen.
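If you want to check those figures, the arithmetic is just distance divided by the speed of light. A quick sketch, assuming roughly 54.6 million km at closest approach and about 401 million km when the planets are on far sides of the sun:

```python
# A quick check of the Mars-Earth signal delay: distance divided by the speed
# of light. Distances are approximate figures for closest and farthest approach.
C_KM_PER_S = 299_792.458

for label, km in [("closest", 54.6e6), ("farthest", 401e6)]:
    one_way_min = km / C_KM_PER_S / 60
    print(f"{label}: one-way {one_way_min:.1f} min, round trip {2 * one_way_min:.1f} min")
# closest: one-way 3.0 min, round trip 6.1 min
# farthest: one-way 22.3 min, round trip 44.6 min
```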

But I’m a designer, a sci-fi apologist, and a fairly talented backworlder. I want to make it work. And perhaps because of my recent dive into narrow AI, I began to realize that, well, in a way, maybe it could. It just requires rethinking what’s happening in the chat.

Let’s first acknowledge that we solved long-distance communications a long time ago. Gardner and Tulsa could just, you know, swap letters or, like the characters in 2001: A Space Odyssey, recorded video messages. There. Problem solved. It’s not real-time interaction, but it gets the job done. But kids aren’t so much into pen pals anymore, and we have to acknowledge that Gardner doesn’t want to tip his hand that he’s on Mars (it’s a grave NASA secret, for plot reasons). So the question is how we could make it feel like a real-time chat to her. Let’s first solve it for the case where he’s trying to disguise his location, and then see how it might work when both participants are in the know.

Fooling Tulsa

Since 1984 (ping me, as always, if you can think of an earlier reference) sci-fi has had the notion of a digitally-replicated personality. Here I’m thinking of Gibson’s Neuromancer and the RAM boards on which Dixie Flatline “lives.” These RAM boards house an interactive digital personality of a person, built out of a lifetime of digital traces left behind: social media, emails, photos, video clips, connections, expressed interests, etc. Anyone in that story could hook the RAM board up to a computer and have conversations with the personality housed there that would closely approximate how that person would respond (or would have responded) in real life.

SBU_Tulsa.png
Listen to the podcast for a mini-rant on translucent screens, followed by apologetics.

Is this likely to actually happen? Well, it kind of already is. Here in the real world, we’re seeing early, crude “me bots” populate the net, which are taking baby steps toward the same thing. (See the MessinaBot, https://bottr.me/, https://sensay.it/, and the forthcoming http://bot.me/.) By the time we actually get a colony to Mars (plus the 16 years for Gardner to mature), mebot technology should be able to stand in for him convincingly enough in basic online conversations.

Training the bot

So in the story, he would look through cached social media feeds to find a young lady he wanted to strike up a conversation with, and then ask his bot-maker engine to look at her public social media to build a herBot with whom he could chat, to train it for conversations. During this training, the TulsaBot would chat about topics of interest gathered from her social media. He could pause the conversation to look up references or prepare convincing answers to the trickier questions TulsaBot asks. He could also add some topics to the conversation they might have in common, and questions he might want to ask her. By doing this, his GardnerBot isn’t just some generic thing he sends out to troll any young woman with. It’s a more genuine, interactive first “letter” sent directly to her. He sends this GardnerBot to servers on Earth.

Hey-mars-chat.gif
A demonstration of a chat with a short Martian delay. (Yes, it’s an animated gif.)

Launching the bot

GardnerBot would wait until it saw Tulsa online and strike up the conversation with her. It would send a signal back to Gardner that the chat has begun so he can sit on his end and read a space-delayed transcript of the chat. GardnerBot would try its best to manage the chat based on what it knows about awkward teen conversation, Turing test best practices, what it knows about Gardner, and how it has been trained specifically for Tulsa. Gardner would assuage some of his guilt by having it dodge and carefully frame the truth, but not outright lie.

Buying time

If during the conversation she raised a topic or asked a question for which GardnerBot was not trained, it could promise an answer later, and then deflect, knowing that it should pad the conversation in the meantime:

  • Ask her to answer the same question first, probing into details to understand rationale and buy more time
  • Dive down into a related subtopic in which the bot has confidence, and which promises to answer the initial question
  • Deflect conversation to another topic in which it has a high degree of confidence and lots of detail to share
  • Text a story that Gardner likes to tell that is known to take about as long as the current round-trip signal

Example

  • TULSA
  • OK, here’s one: If you had to live anywhere on Earth where they don’t speak English, where would you live?

GardnerBot has a low confidence that it knows Gardner’s answer. It could respond…

  1. (you first) “Oh wow. That is a tough one. Can I have a couple of minutes to think about it? I promise I’ll answer, but you tell me yours first.”
  2. (related subtopic) “I’m thinking about this foreign movie that I saw one time. There were a lot of animals in it and a waterfall. Does that sound familiar?”
  3. (new topic) “What? How am I supposed to answer that one? 🙂 Umm…While I think about it, tell me…what kind of animal would you want to be reincarnated as? And you have to say why.”
  4. (story delay) “Ha. Sure, but can I tell a story first? When I was a little kid, I used to be obsessed with this music that I would hear drifting into my room from somewhere around my house…”
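A minimal sketch of how GardnerBot might pick among those stall tactics: when confidence on a topic is low, choose a stall long enough to cover the current round-trip signal delay, and flag the topic for Gardner to train in the meantime. All thresholds, durations, and helper names here are hypothetical.

```python
# A minimal sketch of GardnerBot's stall logic. Thresholds, durations, and
# helper names are hypothetical; the tactics mirror the list above.
import random

ROUND_TRIP_MIN = 2 * 12.0          # say the planets are ~12 light-minutes apart
CONFIDENCE_THRESHOLD = 0.7

STALL_TACTICS = [                  # (tactic, minutes it typically buys)
    ("you-first", 4.0),
    ("related-subtopic", 6.0),
    ("new-topic", 8.0),
    ("story-delay", 25.0),
]

def flag_for_training(topic):
    """Ping the meta track on Gardner's console so he can train the bot."""
    print(f"[meta] low confidence on {topic!r}; awaiting Gardner")

def respond(topic, confidence):
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("answer", topic)
    # Prefer a stall that outlasts the round-trip delay; else take the longest.
    viable = [t for t in STALL_TACTICS if t[1] >= ROUND_TRIP_MIN] or STALL_TACTICS[-1:]
    tactic, _minutes = random.choice(viable)
    flag_for_training(topic)
    return ("stall", tactic)

print(respond("live anywhere non-English-speaking?", confidence=0.2))
```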

Lagged-realtime training

Each of those responses is a delay tactic that allows the chat transcript to travel to Mars for Gardner to do some bot training on the topic. He would be watching the time-delayed transcript of the chat, keeping an eye on an adjacent track of data containing the meta information about what the bot is doing, conversationally speaking. When he saw it hit a low-confidence or high-stakes topic and deflect, it would provide a chat window for him to tell GardnerBot what it should do or say.

  • To the stalling GARDNERBOT…
  • GARDNER
  • For now, I’m going to pick India, because it’s warm and I bet I would really like the spicy food and the rain. Whatever that colored powder festival is called. I’m also interested in their culture, Bollywood, and Hinduism.
  • As he types, the message travels back to Earth where GardnerBot begins to incorporate his answers to the chat…
SBU_Gardner.png
  • At a natural break in the conversation…
  • GARDNERBOT
  • OK. I think I finally have an answer to your earlier question. How about…India?
  • TULSA
  • India?
  • GARDNERBOT
  • Think about it! Running around in warm rain. Or trying some of the street food under an umbrella. Have you seen YouTube videos from that festival with the colored powder everywhere? It looks so cool. Do you know what it’s called?

Note that the bot could easily look it up and replace “that festival with the colored powder everywhere” with “Holi Festival of Color,” but it shouldn’t. Gardner doesn’t know that fact, so the bot shouldn’t pretend it knows it. Cyrano-de-Bergerac software—where it makes him sound more eloquent, intelligent, or charming than he really is to woo her—would be a worse kind of deception. Gardner wants to hide where he is, not who he is.

That said, Gardner should be able to direct the bot, to change its tactics. “OMG. GardnerBot! You’re getting too personal! Back off!” It might not be enough to cover a flub made 42 minutes ago, but of course the bot should know how to apologize on Gardner’s behalf and ask conversational forgiveness.

Gotta go

If the signal to Mars got interrupted or the bot got into too much trouble with pressure to talk about low confidence or high stakes topics, it could use a believable, pre-rolled excuse to end the conversation.

  • GARDNERBOT
  • Oh crap. Will you be online later? I’ve got chores I have to do.

Then, Gardner could chat with TulsaBot on his end without time pressure to refine GardnerBot per their most recent topics, which would be sent back to Earth servers to be ready for the next chat.

In this way he could have “chats” with Tulsa that are run by a bot but quite custom to the two of them. It’s really Gardner’s questions, topics, jokes, and interests, but a bot-managed delivery of these things.

So it could work, but does it fit the movie? I think so. It would be believable because he’s a nerd raised by scientists. He made his own robot, why not his own bot?

From the audience’s perspective, it might look like they’re chatting in real time, but subtle cues on Gardner’s interface reward the diligent with hints that he’s watching a time delay. Maybe the chat we see in the film is even just cleverly edited to remove the bots.

How he manages to hide this data stream from NASA to avoid detection is another question better handled by someone else.

SBU_whodis.png

An honest version: bot envoy

So that solves the logic from the movie’s perspective, but of course it’s still squickish. He is ultimately deceiving her. Once he returns to Mars and she is back on Earth, could they still use the same system, but with full knowledge of its botness? Would real-world astronauts use it?

Would it be too fake?

I don’t think it would be too fake. Sure, the bot is not the real person, but neither are the pictures, videos, and letters we fondly keep with us as we travel far from home. We know they’re just simulacra, souvenir likenesses of someone we love. We don’t throw these away in disgust for being fakes. They are precious because they are reminders of the real thing. So would the themBot.

  • GARDNER
  • Hey, TulsaBot. Remember when we were knee deep in the Pacific Ocean? I was thinking about that today.
  • TULSABOT
  • I do. It’s weird how it messes with your sense of balance, right? Did you end up dreaming about it later? I sometimes do after being in waves a long time.
  • GARDNER
  • I can’t remember, but someday I hope to come back to Earth and feel it again. OK. I have to go, but let me know how training is going. Have you been on the G machine yet?

Nicely, you wouldn’t need stall tactics in the honest version. Or maybe it uses them, but can be called out.

  • TULSA
  • GardnerBot, you don’t have to stall. Just tell Gardner to watch Mission to Mars and update you. Because it’s hilarious and we have to go check out the face when I’m there.

Sending your loved one the transcript will turn it into a kind of love letter. The transcript could even be appended with a letter that jokes about the bot. The example above was too short for any semi-realtime insertions in the text, but maybe that would encourage longer chats. Then the bot serves as charming filler, covering the delays between real contact.

Ultimately, yes, I think we can backworld what looks physics-breaking into something that makes sense, and might even be a new kind of interactive memento between interplanetary sweethearts, family, and friends.

Luke’s predictive HUD

When Luke is driving Kee and Theo to a boat on the coast, the car’s heads-up display shows him the car’s speed with a translucent red number and speed gauge. There are also two broken, blurry gauges showing unknown information.

Suddenly the road becomes blocked by a flaming car rolled onto the road by a then-unknown gang. In response, an IMPACT warning triangle zooms in several times to warn the driver of the danger, accompanied by a persistent dinging sound.

childrenofmen-impact-08

It commands attention effectively


Rebel videoscope

Talking to Luke

SWHS-rebelcomms-02

Hidden behind a bookshelf console is the family’s other comm device. When they first use it in the show, Malla and Itchy have a quick discussion, approach the console, and slide two panels aside. The device is small and rectangular, like an oscilloscope, sitting on a shelf at about eye level. It has a small, palm-sized color cathode ray tube on the left. On the right is an LED display strip and an array of red buttons over an array of yellow buttons. Along the bottom are two dials.

SWHS-rebelcomms-03

Without any other interaction, the screen goes from static to a direct connection to a hangar where Luke Skywalker is working with R2-D2 to repair some mechanical part. He simply looks up to the camera, sees Malla and Itchy, and starts talking. He does nothing to accept the call or end it. Neither do they.

The Mechanized Squire

Avengers-Iron-Man-Gear-Down06

Having completed the welding he did not need to do, Tony flies home to a ledge atop Stark Tower and lands. As he begins his strut to the interior, a complex, ring-shaped mechanism rises around him and follows along as he walks. From the ring, robotic arms extend to unharness each component of the suit from Tony in turn. After each arm precisely unscrews a component, it whisks it away for storage under the platform. It performs this task so smoothly and efficiently that Tony is able to maintain his walking stride throughout the 24-second walk up the ramp and carry on a conversation with JARVIS. His last steps on the ramp land on two plates that unharness his boots and lower them into the floor as Tony steps into his living room.

Yes, yes, a thousand times yes.

This is exactly how a mechanized squire should work. It is fast, efficient, supports Tony in his task of getting unharnessed quickly and easily, and—perhaps most importantly—is how he wants his transitions from superhero to playboy to feel: cool, effortless, and seamless. If there were a party happening inside, I would not be surprised to see a last robotic arm handing him a whiskey.

This is the Jetsons vision of coming home to one’s robotic castle writ beautifully.

There is a strategic question about removing the suit while still outside the protection of the building itself. If a flying villain popped up over the edge of the building at about 75% of the way through the unharnessing, Tony would be at a significant tactical disadvantage. But JARVIS is probably watching out for any threats to avoid this possibility.

Another improvement would be if it did not need a specific landing spot. If, say…

  • The suit could just open to let him step out like a human-shaped elevator (this happens in a later model of the suit seen in The Avengers 2)
  • The suit was composed of fully autonomous components and each could simply fly off of him to their storage (This kind of happens with Veronica later in The Avengers 2)
  • If it was composed of self-assembling nanoparticles that flowed off of him, or, perhaps, reassembled into a tuxedo (If I understand correctly, this is kind-of how the suit currently works in the comic books.)

These would allow him to enact this same transition anywhere.

Iron Welding

Avengers-Underwater_welding01

Cut to the bottom of the Hudson River, where some electrical “transmission lines” rest. Tony, in his Iron Man supersuit, has his palm-mounted repulsor rays configured such that they create a focused beam, capable of cutting through an iron pipe to reveal power cables within. Once the pipe casing is removed, he slides a circular cuff onto the cabling. The cuff automatically closes, screws itself tight, and expands to replace the section of casing. Dim white lights burn brighter as hospital-green rings glow around the cable’s circumference. His task done, he underwater-flies away, up past the southern tip of Manhattan to Stark Tower.

It’s a quick scene that sets up the fact that they’re using Tony’s arc reactor technology to liberate Stark Tower from the electrical grid (incidentally implying that the Avengers will never locate a satellite headquarters anywhere in Florida. Sorry, Jeb.) So, since it’s a quick scene, we can just skip the details and interaction design issues, right?

Of course not. You know better from this blog.

Avengers-Underwater_welding02