Editor’s Note: Today’s guest post is penned by Lonny Brooks. Be sure to read his introduction post if you missed it when it was published.
The Black Panther film represents one of the most ubiquitous statements of Afrofuturist fashion and fashionable digital wearables celebrating the Africana and Black imagination. The wearable criteria, under director Ryan Coogler’s lead and the formidable talent of costume designer Ruth E. Carter, took into account African tribal symbolism. The adinkra symbol for “cooperation,” emblazoned across the blanket of W’Kabi (played by Daniel Kaluuya), embodies the role of the Border Tribe, who live in a small village tucked into the mountainous borderlands of Wakanda, disguised as farmers and hunters.
Beautiful interaction
Chris’ blog looks at the interactions with speculative technology, and here the interactions are marvelously subtle. They do not have buttons or levers, which might give away their true nature. To activate them, a user does what would come naturally, which is to hold the fabric before them, like a shield. (There might be a mental command as well, but of course we can’t perceive that.) The shield-like gesture activates the shield technology. It’s quick. It fits the material of the technology. You barely even have to be trained to use it. We never see the use case for when a wearer is incapacitated and can’t lift the cape into position, but there’s enough evidence in the rest of the film to expect it might act like Dr. Strange’s cape and activate its shield automatically.
But, for me, the Capes are more powerful not as models of interaction, but for what they symbolize.
The Dual Role of the Capes
The role of the Border Tribe is to create the illusion of agrarian ruggedness as a deception for outsiders that only tells of a placid, developing nation rather than the secret technologically advanced splendor of Wakanda’s lands. The Border Tribe is the keeper of Wakanda’s cloaking technology that hides the vast utopian advancement of Wakandan advantage.
The Border Tribe’s role is built into the fabric of their illustrious and enviably fashionable capes. The adinkra symbol of cooperation embedded into the cape reveals, by the final scenes of the Black Panther film, how the Border Tribe defenders wield their capes into a force field wall of energy to repel enemies.
Ironically, we only see them at their most effective when Wakanda is undergoing a civil war between those loyal to Killmonger—who is determined to avenge his father’s murder and his own erasure from Wakandan collective memory—and those supporting King T’Challa. Whereas each Black Panther king has chosen to keep Wakanda’s presence hidden, literally, under the cooperative shields of the Border Tribe, Killmonger—an Oakland native and a potential heir to the Wakandan monarchy—was orphaned and left in the U.S.
If this sounds familiar, consider the film as a grand allusion to the millions of Africans kidnapped and ripped from their tribal lineages and taken across the Atlantic as slaves. Their cultural heritage was purposefully erased, languages and tribal customs, memories lost to the colonial thirst for their unpaid and forced labor.
Killmonger represents the Black Diaspora, descendants of African homelands similarly deprived of their birthrights. Killmonger wants the Black Diaspora to rise up in global rebellion with the assistance of Wakandan technical superiority. In opposition, King T’Challa aspires to a less vengeful solution. He wants Wakanda to come out to the world, and lead by example. We can empathize with both. T’Challa’s plan is fueled by virtue. Killmonger’s is fueled by justice—redeploy these shields to protect Black people against the onslaught of ongoing police and state violence.
Double Consciousness and the Big Metaphor
The cape shields, powered by the precious secret meteorite called vibranium, embody what the scholar W.E.B. Du Bois referred to as double consciousness, where members of the Black Diaspora inhabit two selves:
Their own identity as individuals
The external perception of themselves as members of an oppressed people incessantly facing potential erasure and brutality.
The cape shields and their cloaking technology cover the secret utopic algorithms that power Wakanda, while playing on the petty stereotypes of African nations as less-advanced collectives.
The final battle scene symbolizes this grand debate—between Killmonger’s claims on Wakanda and assertion of Africana power, and King T’Challa’s more cooperative and, indeed, compliant approach working with the CIA. Recall that in its subterfuge and cloaking tactics, the CIA has undermined and toppled numerous freely-elected African and Latin American governments for decades. In this final showdown, we see W’Kabi’s cloaked soldiers run down the hill towards King T’Challa and stop to raise their shields cooperatively into defensive formation to prevent his advance. King T’Challa jumps over the shields, and the force of his movement causes the soldiers’ shields to bounce away while simultaneously revealing their potent energy.
The flowing blue capes of the Border Tribe are deceptively enticing, while holding the key to Wakanda’s survival as metaphors for cloaking their entire civilization from being attacked, plundered, and erased. Wakanda and these capes represent an alternative history: What if African peoples had not experienced colonization or undergone the brutal Middle Passage to the Americas? What if the prosperous Black Greenwood neighborhood of Tulsa, Oklahoma had developed cape shield technology to defend themselves against a genocidal white mob in 1921? Or if the Black Panther Party had harnessed the power of invisible cloaking technology as part of their black beret ensemble?
Gallery Images: World Building with the Afrofuturist Podcast—Afro-Rithms From The Future game, Neuehouse, Hollywood, May 22, 2019 [Co-Game Designers, Eli Kosminsky and Lonny Avi Brooks, Afro-Rithms Librarian; Co-Game Designer and Seer Ahmed Best]
In the forecasting imagination game, Afro-Rithms From The Future, and the game event we played in 2019 in Los Angeles based on the future universe we created, we generated the question:
One participant responded: “I was thinking of the notion of the invisibility cloak but also to have it be reversed. It could make you invisible and also more visible, amplifying what you normally have” as strengths and recognizing their value. Or, as another player states, “what about a bodysuit that protects you from any kind of harm,” or, as the game facilitator adds, “how about a bodysuit that repels emotional damage?!” In our final analysis, the cape shields have steadfastly protected Wakanda against the emotional trauma of colonization and partial erasure.
In this way the cape shields guard against emotional damage as well. Imagine how it might feel to wear a fashionable cloak that displays images of your ancestral, ethnic, and gender memories, reminding you of your inherent lovability as a multi-dimensional human being—and that can technologically protect you and those you love as well.
Black Lives Matter
Chris: Each post in the Black Panther review is followed by actions that support black lives.
To thank Lonny for his guest post, I offered to donate money in his name to the charity of his choice. He has selected Museum of Children’s Arts in Oakland. The mission of MOCHA is to ensure that the arts are a fundamental part of our community and to create opportunities for all children to experience the arts to develop creativity, promote a sense of belonging, and to realize their potential.
And, since it’s important to show the receipts, the receipt:
Thank you, Lonny, for helping to celebrate Black Panther and your continued excellent work in speculative futures and Afrofuturism. Wakanda forever!
When I saw King T’Challa’s cousin pull his lip down to reveal his glowing blue, vibranium-powered Wakandan tattoo, the body modification evoked for me the palpable rush of ancestral memories and spiritual longing for a Black utopia, an uncolonized land and body that Black American spirituals have envisioned (what scholars call sonic utopias).
The lip tattoo is a brilliant bit of worldbuilding. The Wakandan diaspora is, at this point in the movie, a sort of secret society. Having a glowing tattoo shows that the mark is genuine (one presumes it could only be produced with vibranium and is therefore not easily forged). Placing it inside the lip means it is ordinarily concealed and, thanks to the natural interface of the body, easy to reveal. Lastly, it must be a painful spot to tattoo, so it shows, by inference, how badass Wakandan culture is. But it’s more than good worldbuilding to me.
The Black Panther film tattoo electrifies my imagination because it combines chemical augmentation with an amplification of the African identity of being a Wakandan in this story. I think the film could have had even more backstory around the tattoo as a rite of passage, and more development of it over the course of the film. Is it embedded at birth? Or is there a coming-of-age ceremony associated with it? It would have been cool to see the lip tattoo as a smart tattoo with powers to communicate with other devices, and even as a communication device to speak or subvocalize thoughts and desires.
How can we imagine the Wakandan tattoo for the future? I co-designed Afro-Rithms From The Future, an imagination game for creating a dynamic, engaging, and safe space for a community to imagine possible worlds using ordinary objects as inspirations to rethink existing organizational, institutional, and societal relationships. In our launch of the game at the Afrofutures Festival last year at the foresight consultancy Institute For The Future, the winner by declaration was Reina Robinson, a woman who imagined a tattoo that represented one’s history and could be scanned to receive reparation funds to redress and heal the trauma of slavery.
Doreen Garner is a tattoo artist in Brooklyn who acknowledges that tattooing is “a violent act,” but reframes it in her work as an act of healing. She guides her client-patients through this process. Garner began the Black Panther Tattoo Project in January 2019 on MLK Day. She views the Black Panther tattoo as reclaiming pride as solidarity through a shared image. It represents Black pride and “unapologetic energy that we all need to be expressing right now.” Tattooing is a meditative exercise for her as she makes “a lot of the same marks,” and fills in the same spaces for her Black Panther Tattoo project clientele. When folx are at a concert, party, or panel—and recognize their shared image—they can link up to share their experiences.
What if this were a smart tattoo where you could hear the tattoo as sound? Right now, the tech outfit Skin Motion can make your tattoo hearable “by pointing the camera on a mobile device at the tattoo,” where you’ll be able to hear the tattoo playback an audio recording.
Garner, speaking as a Black female tattoo artist, exhorts future artists, “don’t be held back” by thinking that it is a white, male-dominated profession. “White people did not invent tattooing as a practice, because it belongs to us.” They are not the masters. There are many masters of tattooing across cultures.
The Wakandan tattoo as an ancestral marker reflects a centuries-old tradition in African culture. In Black Panther we see the tattoo as a bold, embedded pillar of Wakandan unity, powerfully inviting us to imagine how tattoos may evolve in the future.
Black Futures Matter
Each post in the Black Panther review is followed by actions that you can take to support Black lives. For this post, support the Black Speculative Arts Movement (BSAM): Sign up for their updates. The organization sends email notifications about special launches, network actions, programs, and partnerships. Being connected to the network is one way to stay unified and support BSAM work. Look out for the launch of the California BSAM regional hub network soon. Listen to the Afrofuturist Podcast with host Ahmed Best as well where Black Futures Matter.
Upcoming BSAM event
On Aug. 17, join BSAM’s Look For Us in the Whirlwind event as it celebrates the Pan-African legacy of Marcus Garvey.
A Virtual Global Gathering of Afrofuturists and Pan-Afrikanists
This event is a global Pan-African virtual gathering to honour Marcus M. Garvey Jr.’s legacy. It will feature a keynote from Dr. Julius W. Garvey, the youngest son of Marcus and Amy Jacques Garvey.
Back to Blade Runner. I mean, the pandemic is still pandemicking, but maybe this will be a nice distraction while you shelter in place. Because you’re smart, sheltering in place as much as you can, and not injecting disinfectants. And, like so many other technologies in this film, this will take a while to deconstruct, critique, and reimagine.
Description
Doing his detective work, Deckard retrieves a set of snapshots from Leon’s hotel room, and he brings them home. Something in the one pictured above catches his eye, and he wants to investigate it in greater detail. He takes the photograph and inserts it in a black device he keeps in his living room.
Note: I’ll try to describe this interaction in text, but it is much easier to understand after viewing it. Owing to copyright restrictions, I cannot upload a clip of this length with the original audio, so I have added pre-rendered closed captions to it, below. All dialogue in the clip is Deckard’s.
He inserts the snapshot into a horizontal slit and turns the machine on. A thin, horizontal orange line glows on the left side of the front panel. A series of seemingly random-length orange lines begin to chase one another in a single-row space that stretches across the remainder of the panel and continue to do so throughout Deckard’s use of it. (Imagine a news ticker, running backwards, where the “headlines” are glowing amber lines.) This seems useless and an absolutely pointless distraction for Deckard, putting high-contrast motion in his peripheral vision, which fights for attention with the actual, interesting content down below.
After a second, the screen reveals a blue grid, behind which the scan of the snapshot appears. He stares at the image in the grid for a moment, and speaks a set of instructions, “Enhance 224 to 176.”
In response, three data points appear overlaying the image at the bottom of the screen. Each has a two-letter label and a four-digit number, e.g. “ZM 0000 NS 0000 EW 0000.” The NS and EW—presumably North-South and East-West coordinates, respectively—immediately update to read, “ZM 0000 NS 0197 EW 0334.” After updating the numbers, the screen displays a crosshairs, which target a single rectangle in the grid.
A new rectangle then zooms in from the edges to match the targeted rectangle, as the ZM number—presumably zoom, or magnification—increases. When the animated rectangle reaches the targeted rectangle, its outline blinks yellow a few times. Then the contents of the rectangle are enlarged to fill the screen, in a series of steps which are punctuated with sounds similar to a mechanical camera aperture. The enlargement is perfectly resolved. The overlay disappears until the next set of spoken commands. The system response between Deckard’s issuing the command and the device’s showing the final enlarged image is about 11 seconds.
Deckard studies the new image for a while before issuing another command. This time he says, “Enhance.” The image enlarges in similar clacking steps until he tells it, “Stop.”
Other instructions he is heard to give include “move in, pull out, track right, center in, pull back, center, and pan right.” Some include discrete parameters, such as “Track 45 right,” while others are relative commands that the system obeys until told to stop, such as “Go right.”
Using such commands he isolates part of the image that reveals an important clue, and he speaks the instruction, “Give me a hard copy right there.” The machine prints the image, which Deckard uses to help find the replicant pictured.
I’d like to point out one bit of sophistication before the critique. Deckard can issue a command with or without a parameter, and the inspector knows what to do. For example, “Track 45 right” versus “Track right.” Without the parameter, it will just do the thing repeatedly until told to stop. That helps Deckard issue the same basic command both when he knows exactly where he wants to look and when he doesn’t know exactly what he’s looking for. That’s a nice feature of the language design.
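To make that grammar concrete, here is a minimal sketch (in Python; the command names and the return shape are my own assumptions, not anything from the film) of a parser that treats a missing parameter as “repeat until told to stop”:

```python
# A minimal sketch of the inspector's command grammar.
# A command with a numeric parameter is a single discrete move;
# the same command without one repeats until told to "stop".
# The tuple shape (verb, amount, continuous) is an assumption.

def parse(utterance: str):
    """Parse a spoken command into (verb, amount, continuous)."""
    words = utterance.lower().split()
    verb = words[0]                              # e.g. "track", "enhance", "stop"
    numbers = [w for w in words[1:] if w.isdigit()]
    if numbers:
        return verb, int(numbers[0]), False      # "track 45 right": one discrete move
    return verb, None, True                      # "track right": repeat until "stop"

assert parse("track 45 right") == ("track", 45, False)
assert parse("track right") == ("track", None, True)
```

A dispatcher built on this would run a discrete command once, and run a continuous one on every frame until it hears “stop.”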
But still, asking him to provide step-by-step instructions in this clunky way feels like some high-tech Big Trak. (I tried to find a reference that was as old as the film.) And that’s not all…
Some critiques, as it is
Can I go back and mention that amber distracto-light? Because it’s distracting. And pointless. I’m not mad. I’m just disappointed.
It sure would be nice if any of the numbers on screen made sense, or had any bearing on the numbers Deckard speaks, at any time during the interaction. For instance, the initial zoom (I checked in Photoshop) is around 304%, which is neither the 224 nor the 176 that Deckard speaks.
It might be that each square has a number, and he simply has to name the two squares at the extents of the zoom he wants, letting the machine find the extents, but where is the labeling? Did he have to memorize an address for each pixel? How does that work at arbitrary levels of zoom?
And if he’s memorized it, why show the overlay at all?
Why the seizure-inducing flashing in the transition sequences? Sure, I get that lots of technologies have unfortunate effects when constrained by mechanics, but this is digital.
Why is the printed picture so unlike the still image where he asks for a hard copy?
Gaze at the reflection in Ford’s hazel, hazel eyes, and it’s clear he’s playing Missile Command, rather than paying attention to this interface at all. (OK, that’s the filmmaker’s issue, not a part of the interface, but still, come on.)
How might it be improved for 1982?
So if 1982 Ridley Scott was telling me in post that we couldn’t reshoot Harrison Ford, and we had to make it just work with what we had, here’s what I’d do…
Squash the grid so the cells match the 4:3 ratio of the NTSC screen. Overlay the address of each cell, while highlighting column and row identifiers at the edges. Have the first cell’s outline illuminate as he speaks it, and have the outline expand to encompass the second named cell. Then zoom, removing the cell labels during the transition. When at anything other than full view, display a map across four cells that shows the zoom visually in the context of the whole.
With this interface, the structure of the existing conversation makes more sense. When Deckard said, “Enhance 203 to 608” the thing would zoom in on the mirror, and the small map would confirm.
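As a sketch of how that cell-addressing scheme might resolve two spoken addresses into one zoom rectangle, here is a Python toy; the 40×30 row-major grid over a 640×480 frame is an assumption, chosen only to match the 4:3 ratio described above:

```python
# Sketch: resolve two spoken cell addresses into one zoom rectangle.
# The 40x30 row-major numbering over a 640x480 frame is an assumption.

COLS, ROWS = 40, 30
FRAME_W, FRAME_H = 640, 480
CELL_W, CELL_H = FRAME_W // COLS, FRAME_H // ROWS

def cell_rect(cell: int):
    """Pixel rect (x, y, w, h) of one cell, numbered row-major from 0."""
    row, col = divmod(cell, COLS)
    return col * CELL_W, row * CELL_H, CELL_W, CELL_H

def zoom_extent(first: int, second: int):
    """Bounding box that encompasses both named cells."""
    x1, y1, _, _ = cell_rect(first)
    x2, y2, _, _ = cell_rect(second)
    left, top = min(x1, x2), min(y1, y2)
    right = max(x1, x2) + CELL_W
    bottom = max(y1, y2) + CELL_H
    return left, top, right - left, bottom - top
```

Under this scheme, a command naming the top-left cell and its diagonal neighbor would illuminate the first cell, expand to the second, and zoom to the region covering both—and the numbers Deckard speaks would finally mean something.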
The numbers wouldn’t match up, but it’s pretty obvious from the final cut that Scott didn’t care about that (or, more charitably, ran out of time). Anyway I would be doing this under protest, because I would argue this interaction needs to be fixed in the script.
How might it be improved for 2020?
What’s really nifty about this technology is that it’s not just a photograph. Look close in the scene, and Deckard isn’t just doing CSI Enhance! commands (or, to be less mocking, AI upscaling). He’s using the photo inspector to look around corners and at objects that are reconstructed from the smallest reflections. So we can think of the interaction like he’s controlling a drone through a 3D still life, looking for a lead to help him further the case.
With that in mind, let’s talk about the display.
Display
To redesign it, we have to decide at a foundational level how we think this works, because it will color what the display looks like. Is this all data that’s captured from some crazy 3D camera and available in the image? Or is it being inferred from details in the 2 dimensional image? Let’s call the first the 3D capture, and the second the 3D inference.
If we decide this is a 3D capture, then all the data he observes through the machine has the same degree of confidence. If, however, we decide this is a 3D inference, Deckard needs to treat the inferred data with more skepticism than the data the camera directly captured. The 3D inference is the harder problem, and raises some issues that we must deal with in modern AI, so let’s just say that’s the way this speculative technology works.
The first thing the display should do is make it clear what is observed and what is inferred. How you do this is partly a matter of visual design and style, but partly a matter of diegetic logic. The first pass would be to render everything in the camera frustum photo-realistically, and then render everything outside of that in a way that signals its confidence level. The comp below illustrates one way this might be done.
In the comp, Deckard has turned the “drone” from the “actual photo,” seen off to the right, toward the inferred space on the left. The monochrome color treatment provides that first high-confidence signal.
In the scene, the primary inference would come from reading the reflections in the disco ball overhead lamp, maybe augmented with plans for the apartment that could be found online, or maybe purchase receipts for appliances, etc. Everything it can reconstruct from the reflection and high-confidence sources has solid black lines, a second-level signal.
The smaller knickknacks that are out of the reflection of the disco ball, and implied from other, less reflective surfaces, are rendered without the black lines and blurred. This provides a signal that the algorithm has a very low confidence in its inference.
This is just one (not very visually interesting) way to handle it, but should illustrate that, to be believable, the photo inspector shouldn’t have a single rendering style outside the frustum. It would need something akin to these levels to help Deckard instantly recognize how much he should trust what he’s seeing.
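Those tiers amount to a simple mapping from confidence to rendering treatment. A sketch, with an assumed tier boundary and made-up style names standing in for whatever the real confidence model would be:

```python
# Sketch: choose a rendering treatment from the inferrer's confidence.
# The tier boundary (0.8) and the style names are assumptions standing
# in for the three levels described above.

def render_style(confidence: float, observed: bool) -> str:
    if observed:
        return "photoreal"           # directly captured, inside the frustum
    if confidence >= 0.8:
        return "monochrome+outline"  # reconstructed from strong reflections
    return "monochrome+blur"         # weakly implied knickknacks
```

Whatever the styles end up being, the important part is that the function of confidence is legible at a glance, so Deckard never mistakes a guess for an observation.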
Flat screen or volumetric projection?
Modern CGI loves big volumetric projections. (e.g. it was the central novum of last year’s Fritz winner, Spider-Man: Far From Home.) And it would be a wonderful juxtaposition to see Deckard in a holodeck-like recreation of Leon’s apartment, with all the visual treatments described above.
But…
…that would kind of spoil the mood of the scene. This isn’t just about Deckard’s finding a clue, we also see a little about who he is and what his life is like. We see the smoky apartment. We see the drab couch. We see the stack of old detective machines. We see the neon lights and annoying advertising lights swinging back and forth across his windows. Immersing him in a big volumetric projection would lose all this atmospheric stuff, and so I’d recommend keeping it either a small contained VP, like we saw in Minority Report, or just keep it a small flat screen.
OK, so now that we have an idea about how the display should (and shouldn’t) look, let’s move on to talk about the inputs.
Inputs
To talk about inputs, then, we have to return to a favorite topic of mine, and that is the level of agency we want for the interaction. In short, we need to decide how much work the machine is doing. Is the machine just a manual tool that Deckard has to manipulate to get it to do anything? Or does it actively assist him? Or, lastly, can it even do the job while his attention is on something else—that is, can it act as an agent on his behalf? Sophisticated tools can be a blend of these modes, but for now, let’s look at them individually.
Manual Tool
This is how the photo inspector works in Blade Runner. It can do things, but Deckard has to tell it exactly what to do. But we can still improve it in this mode.
We could give him well-mapped physical controls, like a remote control for this conceptual drone. Flight controls wind up being a recurring topic on this blog (and even came up already in the Blade Runner reviews with the Spinners) so I could go on about how best to do that, but I think that a handheld controller would ruin the feel of this scene, like Deckard was sitting down to play a video game rather than do off-hours detective work.
Similarly, we could talk about a gestural interface, using some of the synecdochic techniques we’ve seen before in Ghost in the Shell. But again, this would spoil the feel of the scene, having him look more like John Anderton in front of a tiny-TV version of Minority Report’s famous crime scrubber.
One of the things that gives this scene its emotional texture is that Deckard is drinking a glass of whiskey while doing his detective homework. It shows how low he feels. Throwing one back is clearly part of his evening routine, so much a habit that he does it despite being preoccupied about Leon’s case. How can we keep him on the couch, with his hand on the lead crystal whiskey glass, and still investigating the photo? Can he use it to investigate the photo?
Here I recommend a bit of ad-hoc tangible user interface. I first backworlded this for The Star Wars Holiday Special, but I think it could work here, too. Imagine that the photo inspector has a high-resolution camera on it, and the interface allows Deckard to declare any object that he wants as a control object. After the declaration, the camera tracks the object against a surface, using the changes to that object to control the virtual camera.
In the scene, Deckard can declare the whiskey glass as his control object, and the arm of his couch as the control surface. Of course the virtual space he’s in is bigger than the couch arm, but it could work like a mouse and a mousepad. He can just pick it up and set it back down again to extend motion.
This scheme takes into account all movement except vertical lift and drop. This could be a gesture or a spoken command (see below).
Going with this interaction model means Deckard can use the whiskey glass, allowing the scene to keep its texture and feel. He can still drink and get his detective on.
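Here’s a rough sketch of that clutch logic, assuming the tracker reports the glass’s position while it is on the couch arm and nothing while it is lifted; the gain value and input format are arbitrary assumptions:

```python
# Sketch of the ad-hoc tangible controller: the virtual camera follows
# *relative* motion of the declared object, and lifting the object acts
# as a clutch, like picking up a mouse. Tracker input format and gain
# are assumptions.

class ObjectController:
    def __init__(self, gain: float = 10.0):
        self.gain = gain
        self.last = None          # last tracked (x, y) on the control surface
        self.camera = [0.0, 0.0]  # virtual camera position

    def update(self, observed):
        """observed is (x, y) while on the surface, or None while lifted."""
        if observed is None:      # clutch engaged: glass lifted off the arm
            self.last = None
            return
        if self.last is not None:
            self.camera[0] += (observed[0] - self.last[0]) * self.gain
            self.camera[1] += (observed[1] - self.last[1]) * self.gain
        self.last = observed
```

Setting the glass back down re-engages tracking without moving the camera, which is exactly what lets the small couch arm map onto the much larger virtual space.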
Assistant Tool
Indirect manipulation is helpful for when Deckard doesn’t know what he’s looking for. He can look around, and get close to things to inspect them. But when he knows what he’s looking for, he shouldn’t have to go find it. He should be able to just ask for it, and have the photo inspector show it to him. This requires that we presume some AI. And even though Blade Runner clearly includes General AI, let’s presume that that kind of AI has to be housed in a human-like replicant, and can’t be squeezed into this device. Instead, let’s just extend the capabilities of Narrow AI.
Some of this will be navigational and specific, “Zoom to that mirror in the background,” for instance, or, “Reset the orientation.” Some will be more abstract and content-specific, e.g. “Head to the kitchen” or “Get close to that red thing.” If it had gaze detection, he could even indicate a location by looking at it. “Get close to that red thing there,” for example, while looking at the red thing. Given the 3D inferrer nature of this speculative device, he might also want to trace the provenance of an inference, as in, “How do we know this chair is here?” This implies natural language generation as well as understanding.
There’s nothing stopping him from using the same general commands heard in the movie, but I doubt anyone would want to use those when they have commands like these and the object-in-hand controller available.
Ideally Deckard would have some general search capabilities as well, to ask questions and test ideas. “Where were these things purchased?” or subsequently, “Is there video footage from the stores where he purchased them?” or even, “What does that look like to you?” (The correct answer would be, “Well that looks like the mirror from the Arnolfini portrait, Ridley…I mean…Rick*”) It can do pattern recognition and provide as much extra information as it has access to, just like Google Lens or IBM Watson image recognition does.
Finally, he should be able to ask after simple facts to see if the inspector knows or can find it. For example, “How many people are in the scene?”
All of this still requires that Deckard initiate the action, and we can augment it further with a little agentive thinking.
Agentive Tool
To think in terms of agents is to ask, “What can the system do for the user, but not requiring the user’s attention?” (I wrote a book about it if you want to know more.) Here, the AI should be working alongside Deckard. Not just building the inferences and cataloguing observations, but doing anomaly detection on the whole scene as it goes. Some of it is going to be pointless, like “Be aware the butter knife is from IKEA, while the rest of the flatware is Christofle Lagerfeld. Something’s not right, here.” But some of it Deckard will find useful. It would probably be up to Deckard to review summaries and decide which were worth further investigation.
It should also be able to help him with his goals. For example, the police had Zhora’s picture on file. (And her portrait even rotates in the dossier we see at the beginning, so it knows what she looks like in 3D for very sophisticated pattern matching.) The moment the agent—while it was reverse ray tracing the scene and reconstructing the inferred space—detects any faces, it should run the face through a most wanted list, and specifically Deckard’s case files. It shouldn’t wait for him to find it. That again poses some challenges to the script. How do we keep Deckard the hero when the tech can and should have found Zhora seconds after being shown the image? It’s a new challenge for writers, but it’s becoming increasingly important for believability.
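That proactive face check is simple to express. A sketch, assuming a hypothetical match function and alert threshold (neither of which the film specifies):

```python
# Sketch of the agentive face check: every face the system reconstructs
# is matched against the detective's case files without being asked.
# The match function and the alert threshold are assumptions.

def review_faces(faces, case_files, match, threshold=0.66):
    """Return (face, suspect, score) alerts worth surfacing, best first."""
    alerts = [
        (face, suspect, match(face, suspect))
        for face in faces
        for suspect in case_files
    ]
    alerts = [a for a in alerts if a[2] >= threshold]
    return sorted(alerts, key=lambda a: a[2], reverse=True)
```

The design question isn’t the matching, it’s the threshold: set it too high and the agent stays silent about a 63% match the detective would have wanted to see.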
Scene
Interior. Deckard’s apartment. Night.
Deckard grabs a bottle of whiskey, a glass, and the photo from Leon’s apartment. He sits on his couch and places the photo on the coffee table.
Deckard
Photo inspector.
The machine on top of a cluttered end table comes to life.
Deckard
Let’s look at this.
He points to the photo. A thin line of light sweeps across the image. The scanned image appears on the screen, pulled in a bit from the edges. A label reads, “Extending scene,” and we see wireframe representations of the apartment outside the frame begin to take shape. A small list of anomalies begins to appear to the left. Deckard pours a few fingers of whiskey into the glass. He takes a drink before putting the glass on the arm of his couch. Small projected graphics appear on the arm facing the inspector.
Deckard
OK. Anyone hiding? Moving?
Photo inspector
No and no.
Deckard
Zoom to that arm and pin to the face.
He turns the glass on the couch arm counterclockwise, and the “drone” revolves around to show Leon’s face, with the shadowy parts rendered in blue.
Deckard
What’s the confidence?
Photo inspector
95.
On the side of the screen the inspector overlays Leon’s police profile.
Deckard
Unpin.
Deckard lifts his glass to take a drink. He moves from the couch to the floor to stare more intently and places his drink on the coffee table.
Deckard
New surface.
He turns the glass clockwise. The camera turns and he sees into a bedroom.
Deckard
How do we have this much inference?
Photo inspector
The convex mirror in the hall…
Deckard
Wait. Is that a foot? You said no one was hiding.
Photo inspector
The individual is not hiding. They appear to be sleeping.
Deckard rolls his eyes.
Deckard
Zoom to the face and pin.
The view zooms to the face, but the camera is level with her chin, making it hard to make out the face. Deckard tips the glass forward and the camera rises up to focus on a blue, wireframed face.
Deckard
That look like Zhora to you?
The inspector overlays her police file.
Photo inspector
63% of it does.
Deckard
Why didn’t you say so?
Photo inspector
My threshold is set to 66%.
Deckard
Give me a hard copy right there.
He raises his glass and finishes his drink.
This scene keeps the texture and tone of the original, and camps on the limitations of Narrow AI to let Deckard be the hero. And doesn’t have him programming a virtual Big Trak.
We’re actually done with all of the artifacts from Doctor Strange. But there’s one last kind-of interface that’s worth talking about, and that’s when Strange assists with surgery on his own body.
After being shot with a soul-arrow by the zealot, Strange is in bad shape. He needs medical attention. He recovers his sling ring and creates a portal to the emergency room where he once worked. Stumbling with the pain, he manages to find Dr. Palmer and tell her he has a cardiac tamponade. They head to the operating theater and get Strange on the table.
When Strange passes out, his “spirit” is ejected from his body as an astral projection. Once he realizes what’s happened, he gathers his wits and turns to observe the procedure.
When Dr. Palmer approaches his body with a pericardiocentesis needle, Strange manifests so she can sense him and recommends that she aim “just a little higher.” At first she is understandably scared, but once he explains what’s happening, she gets back to business, and he acts as a virtual coach.
So this is going to take a few posts. You see, the next interface that appears in The Avengers is a video conference between Tony Stark in his Iron Man supersuit and his partner in romance and business, Pepper Potts, about switching Stark Tower from the electrical grid to their independent power source. Here’s what a still from the scene looks like.
So on the surface of this scene, it’s a communications interface.
But that chat exists inside of an interface with a conceptual and interaction framework that has been laid down since the original Iron Man movie in 2008, and built upon with each sequel, one in 2010 and one in 2013. (With rumors aplenty for a fourth one…sometime.)
So to review the video chat, I first have to talk about the whole interface, and that has about 6 hours of prologue occurring across 4 years of cinema informing it. So let’s start, as I do with almost every interface, simply by describing it and its components.
When his battalion of thralls is up and working to stabilize the Tesseract (harvesting Vespene Gas, more or less), Loki sits down to check in with his boss’s two-thumbed assistant, an MCU-recurring weirdo who goes unnamed in the movie, but who the Marvel wiki assures me is called The Other.
To get into the teleconference, Loki sits down on the ground with the glaive in his right hand and the blue stone roughly in front of his heart. He closes his eyes, straightens his back, and as the stone glows, the walls around him seem to billow away and he sees the asteroidal meeting room where The Other has been on hold (listening to some annoying Chitauri Muzak no doubt).
When the camera first follows Klaatu into the interior of his spaceship, we witness the first gestural interface seen in the survey. To turn on the lights, Klaatu places his hands in the air before a double column of small lights embedded in the wall to the right of the door. He holds his hand up for a moment, and then smoothly brings it down before these lights. In response the lights on the wall extinguish and an overhead light illuminates. He repeats this gesture on a similar double column of lights to the left of the door.
The nice thing to note about this gesture is that it is simple and easy to execute. The mapping also has a nice physical referent: When the hand goes down like the sun, the lights dim. When the hand goes up like the sun, the lights illuminate.
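The sun-like referent amounts to a direct mapping from hand height to illumination. The film only shows on/off, but the mapping generalizes naturally to a dimmer; a hypothetical sketch:

```python
# Hypothetical gesture-to-light mapping with a sun-like physical
# referent: hand raised -> full illumination, hand lowered -> dark.
# The film shows only on/off; the continuous version is my extrapolation.
def light_level(hand_height: float) -> float:
    """Map normalized hand height (0.0 = lowered, 1.0 = raised) to brightness."""
    return max(0.0, min(1.0, hand_height))  # clamp to the valid range

assert light_level(1.0) == 1.0   # hand up like the sun: lights on
assert light_level(0.0) == 0.0   # hand down like the sun: lights out
```

The strength of the mapping is that the metaphor does the teaching: no labels needed for this particular control.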
He then approaches an instrument panel with an array of translucent controls, like a small keyboard with extended plastic keys. As before, he holds his hand a moment at the top of the controls before swiping his hand in the air toward the bottom of the controls. In response, the panels illuminate. He repeats this on a similar panel nearby.
Having activated all of these elements, he begins to speak in his alien tongue to a circular, strangely lit panel on the wall. (The film gives no indication as to the purpose of his speech, so no conclusions about its interface can be drawn.)
Gort also operates the translucent panels with a wave of his hand. To her credit, perhaps, Helen does not try to control the panels, but we can presume that, like the spaceship, some security mechanism prevents unauthorized control.
Missing affordances
Who knows how Klaatu perceives this panel? He’s an alien, after all. But for us mere humans, the interface is confounding. There are no labels to help us understand what controls what. The physical affordances of different parts of the panels imply sliding along the surface, touch, or turning, not gesture. Gestural affordances are tricky at best, but these translucent shapes actually signal something different altogether.
Overcomplicated workflow
And you have to wonder why he has to go through this rigmarole at all. Why must he turn on each section of the interface, one by one? Can’t they make just one “on” button? And isn’t he just doing one thing: Transmitting? He doesn’t even seem to select a recipient, so it’s tied to HQ. Seriously, can’t he just turn it on?
Why is this UI even here?
Or better yet, can’t the microphone just detect when he’s nearby, illuminate to let him know it’s ready, and subtly confirm when it’s “hearing” him? That would be the agentive solution.
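That agentive alternative is just a tiny state machine: dormant when no one is around, visibly ready when someone approaches, subtly confirming while it hears speech. A sketch (all names are illustrative, not from the film):

```python
# Hypothetical agentive transmitter: instead of a gestural unlock
# ritual, the device senses presence, shows it is ready, and
# confirms that it is hearing speech.
class AgentiveMic:
    def __init__(self) -> None:
        self.state = "idle"

    def sense(self, user_nearby: bool, speaking: bool) -> str:
        if not user_nearby:
            self.state = "idle"      # dark, dormant
        elif speaking:
            self.state = "hearing"   # subtle confirmation glow
        else:
            self.state = "ready"     # illuminated, awaiting speech
        return self.state

mic = AgentiveMic()
assert mic.sense(user_nearby=True, speaking=False) == "ready"
assert mic.sense(user_nearby=True, speaking=True) == "hearing"
assert mic.sense(user_nearby=False, speaking=False) == "idle"
```

Three states, zero training, and no rigmarole, which is what makes the security argument below the only plausible defense of the design we actually see.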
Maybe it needs some lockdown: Power
OK. Fine. If this transmission consumes a significant amount of power, then an even more deliberate activation is warranted, perhaps the turning of a key. And once on, you would expect to see some indication of the rate of power depletion and remaining power reserves, which we don’t see, so this is pretty doubtful.
Maybe it needs some lockdown: Security
This is the one concern that might warrant all the craziness. That the interface has no affordance means that Joe Human Schmo can’t just walk in and turn it on. (In fact the misleading bits help with a plausible diversion.) The “workflow” then is actually a gestural combination that unlocks the interface and starts it recording. Even if Helen accidentally discovered the gestural aspect, there’s little to no way she could figure out those particular gestures and place intergalactic calls for help. And remembering that Klaatu is, essentially, a space-ethics recon cop, this level of security might make sense.