Disclosure (1994)

Our next 3D file browsing system is from the 1994 film Disclosure. Thanks to site reader Patrick H Lauke for the suggestion.

Like Jurassic Park, Disclosure is based on a Michael Crichton novel, although this time without any dinosaurs. (Would-be scriptwriters should compare the relative success of these two films when planning a study program.) The plot of the film is corporate infighting within Digicom, manufacturer of high tech CD-ROM drives—it was the 1990s—and also virtual reality systems. Tom Sanders, executive in charge of the CD-ROM production line, is being set up to take the blame for manufacturing failures that are really the fault of cost-cutting measures by rival executive Meredith Johnson.

The Corridor: Hardware Interface

The virtual reality system is introduced at about 40 minutes, using the narrative device of a product demonstration within the company to explain to the attendees what it does. The scene is nicely done, conveying all the important points we need to know in two minutes. (To be clear, some of the images used here come from a later scene in the film, but it’s the same system in both.)

The process of entangling yourself with the necessary hardware and software is quite distinct from interacting with the VR itself, so let’s discuss these separately, starting with the physical interface.

Tom wearing VR headset and one glove, being scanned. Disclosure (1994)

In Disclosure the virtual reality user wears a headset and one glove, all connected by cables to the computer system. Like most virtual reality systems, the headset is responsible for visual display, audio, and head movement tracking; the glove for hand movement and gesture tracking. 

There are two “laser scanners” on the walls. These are the planar blue lights, which scan the user’s body at startup. After that they track body motion, although since the user still has to wear a glove, the scanners presumably just track approximate body movement and orientation without fine detail.

Lastly, the user stands on a concave hexagonal plate covered in embedded white balls, which allows the user to “walk” on the spot.

Closeup of user standing on curved surface of white balls. Disclosure (1994)

Searching for Evidence

The scene we’re most interested in takes place later in the film, the evening before a vital presentation which will determine Tom’s future. He needs to search the company computer files for evidence against Meredith, but discovers that his normal account has been blocked. He knows, though, that the virtual reality demonstrator is on display in a nearby hotel suite, and that the demonstrator account has unlimited access. He sneaks into the hotel suite to use The Corridor. Tom is under a certain amount of time pressure because a couple of company VIPs and their guests are downstairs in the hotel and might return at any time.

The first step for Tom is to launch the virtual reality system. This is done from an Indy workstation, using the regular Unix command line.

The command line to start the virtual reality system. Disclosure (1994)

Next he moves over to the VR space itself. He puts on the glove but not the headset, presses a key on the keyboard (of the VR computer, not the workstation), and stands still for a moment while he is scanned from top to bottom.

Real world Tom, wearing one VR glove, waits while the scanners map his body. Disclosure (1994)

On the left is the Indy workstation used to start the VR system. In the middle is the external monitor which will, in a moment, show the third person view of the VR user as seen earlier during the product demonstration.

Now that Tom has been scanned into the system, he puts on the headset and enters the virtual space.

The Corridor: Virtual Interface

“The Corridor,” as you’ve no doubt guessed, is a three dimensional file browsing program. It is so named because the user will walk down a corridor in a virtual building, the walls lined with “file cabinets” containing the actual computer files.

Three important aspects of The Corridor were mentioned during the product demonstration earlier in the film. They’ll help structure our tour of this interface, so let’s review them now.

  1. There is a voice-activated help system, which will summon a virtual “Angel” assistant.
  2. Since the computers themselves are part of a multi-user network with shared storage, there can be more than one user “inside” The Corridor at a time.
    Users who do not have access to the virtual reality system will appear as wireframe body shapes with a 2D photo where the head should be.
  3. There are no access controls and so the virtual reality user, despite being a guest or demo account, has unlimited access to all the company files. This is spectacularly bad design, but necessary for the plot.

With those bits of system exposition complete, now we can switch to Tom’s own first person view of the virtual reality environment.

Virtual world Tom watches his hands rezzing up, right hand with glove. Disclosure (1994)

There isn’t a real background yet, just abstract streaks. The avatar hands are rezzing up; note that the right hand, wearing the glove, has a different appearance from the left. This mimics the real world, which eases the transition for the user.

Overlaid on the virtual reality view is a Digicom label at the bottom and four corner brackets which are never explained, although they do resemble those used in cameras to indicate the preferred viewing area.

To the left is a small axis indicator, the three green lines labeled X, Y, and Z. These show up in many 3D applications because, silly though it sounds, it is easy in a 3D computer environment to lose track of directions or even which way is up. A common fix for the user being unable to see anything is just to turn 180 degrees around.

We then switch to a third person view of Tom’s avatar in the virtual world.

Tom is fully rezzed up, within cloud of visual static. Disclosure (1994)

This is an almost photographic-quality image. To remind the viewers that this is in the virtual world rather than real, the avatar follows the visual convention described in chapter 4 of Make It So for volumetric projections, with scan lines and occasional flickers. An interesting choice is that the avatar also wears a “headset”, but it is translucent so we can see the face.

Now that he’s in the virtual reality, Tom has one more action needed to enter The Corridor. He pushes a big button floating before him in space.

Tom presses one button on a floating control panel. Disclosure (1994)

This seems unnecessary, but we can assume that in the future of this platform, there will be more programs to choose from.

The Corridor rezzes up, the streaks assembling into wireframe components which then slide together as the surfaces are shaded. Tom doesn’t have to wait for the process to complete before he starts walking, which suggests that this is a Level Of Detail (LOD) implementation where parts of the building are not rendered in detail until the user is close enough for it to be worth doing.

Tom enters The Corridor. Nearby floor and walls are fully rendered, the more distant section is not complete. Disclosure (1994)
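To make the idea concrete, here is a minimal sketch of distance-based level-of-detail selection. The thresholds and representation names are invented for illustration; they are not taken from the film or from any particular engine.

    import math

    # Hypothetical LOD table: (maximum distance, representation to draw).
    LOD_LEVELS = [
        (10.0, "shaded_mesh"),       # close: fully shaded, textured surfaces
        (30.0, "wireframe_mesh"),    # mid distance: wireframe components
        (float("inf"), "streaks"),   # far away: abstract streaks, barely rendered
    ]

    def representation_for(user_pos, section_pos):
        """Pick how much detail to render for a section of the building,
        based on how far the user currently is from it."""
        distance = math.dist(user_pos, section_pos)
        for max_distance, representation in LOD_LEVELS:
            if distance <= max_distance:
                return representation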

The architecture is classical, rendered with the slightly artificial-looking computer shading that is common in 3D computer environments because it needs much less computation than trying for full photorealism.

Instead of a corridor this is an entire multi-storey building. It is large and empty, and as Tom is walking bits of architecture reshape themselves, rather like the interior of Hogwarts in Harry Potter.

Although there are paintings on some of the walls, there aren’t any signs, labels, or even room numbers. Tom has to wander around looking for the files, at one point nearly “falling” off the edge of the floor down an internal air well. Finally he steps into one archway room entrance and file cabinets appear in the walls.

Tom enters a room full of cabinets. Disclosure (1994)

Unlike the classical architecture around him, these cabinets are very modern looking with glowing blue light lines. Tom has found what he is looking for, so now begins to manipulate files rather than browsing.

Virtual Filing Cabinets

The four nearest cabinets, according to the titles above them, are:

  1. Communications
  2. Operations
  3. System Control
  4. Research Data

There are ten file drawers in each. The drawers are unmarked; a label only appears when the user looks directly at a drawer, so Tom has to move his head to centre each drawer in turn to find the one he wants.

Tom looks at one particular drawer to make the title appear. Disclosure (1994)
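As a rough sketch of that behaviour (the threshold and data structures are invented), the system only needs to measure the angle between the user’s gaze and the direction to each drawer, and reveal a label when that angle is small enough:

    import math

    def visible_label(gaze_origin, gaze_dir, drawers, max_angle_deg=5.0):
        """Return the label of the drawer the user is looking almost directly
        at, or None. 'drawers' is a list of (label, centre) tuples and
        gaze_dir is assumed to be a unit vector."""
        best = None
        for label, centre in drawers:
            to_drawer = [c - o for c, o in zip(centre, gaze_origin)]
            length = math.hypot(*to_drawer)
            if length == 0.0:
                continue
            cosine = sum(d * t for d, t in zip(gaze_dir, to_drawer)) / length
            angle = math.degrees(math.acos(max(-1.0, min(1.0, cosine))))
            if angle <= max_angle_deg and (best is None or angle < best[0]):
                best = (angle, label)
        return best[1] if best else None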

The fourth drawer Tom looks at is labeled “Malaysia”. He touches it with the gloved hand and it slides out from the wall.

Tom withdraws his hand as the drawer slides open. Disclosure (1994)

Inside are five “folders” which, again, are opened by touching. The folder slides up, and then three sheets, each looking like a printed document, slide up and fan out.

Axis indicator on left, pointing down. One document sliding up from a folder. Disclosure (1994)

Note the tilted axis indicator at the left. The Y axis, representing a line extending upwards from the top of Tom’s head, is now leaning towards the horizontal because Tom is looking down at the file drawer. In the shot below, both the folder and then the individual documents are moving up so Tom’s gaze is now back to more or less level.

Close up of three “pages” within a virtual document. Disclosure (1994)

At this point the film cuts away from Tom. Rival executive Meredith, having been foiled in her first attempt at discrediting Tom, has decided to cover her tracks by deleting all the incriminating files. Meredith enters her office and logs on to her Indy workstation. She is using a Command Line Interface (CLI) shell, not the standard SGI Unix shell but a custom Digicom program that also has a graphical menu. (Since it isn’t three dimensional it isn’t interesting enough to show here.)

Tom uses the gloved hand to push the sheets one by one to the side after scanning the content.

Tom scrolling through the pages of one folder by swiping with two fingers. Disclosure (1994)

Quick note: This is harder than it looks in virtual reality. In a 2D GUI, moving the mouse over an interface element is obvious. In three dimensions the user also has to move their hand forwards or backwards to get their hand (or finger) in the right place, and unless there is some kind of haptic feedback it isn’t obvious to the user that they’ve made contact.
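One common workaround, sketched here with invented names and tolerances, is to treat “touch” as proximity within a small sphere around the element, and to substitute visual or audio confirmation for the missing haptics:

    import math

    TOUCH_TOLERANCE = 0.03  # metres; generous, because depth is hard to judge by eye

    def check_touch(fingertip_pos, element_pos, on_touch):
        """Fire on_touch() when the tracked fingertip comes within tolerance of
        an interface element, e.g. highlight it or play a click sound."""
        if math.dist(fingertip_pos, element_pos) <= TOUCH_TOLERANCE:
            on_touch()   # feedback stands in for the missing sense of contact
            return True
        return False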

Tom now receives a nasty surprise.

The shot below shows Tom’s photorealistic avatar at the left, standing in front of the open file cabinet. The green shape on the right is the avatar of Meredith who is logged in to a regular workstation. Without the laser scanners and cameras her avatar is a generic wireframe female humanoid with a face photograph stuck on top. This is excellent design, making The Corridor usable across a range of different hardware capabilities.

Tom sees the Meredith avatar appear. Disclosure (1994)

Why does The Corridor system place her avatar here? A multiuser computer system, or even just a networked file server, obviously has to know who is logged on. Unix systems, and command line shells in particular, also track which directory the user is “in”: the current working directory. Meredith is using her CLI interface to delete files in a particular directory, so The Corridor can position her avatar in the corresponding virtual reality location. Or rather, the avatar glides into position rather than suddenly popping into existence: Tom is only surprised because the documents blocked his virtual view.
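As a sketch of how a Corridor-like system could gather that information on a Unix host today (it assumes the third-party psutil library, and leaves out the mapping from directory to virtual room):

    import psutil  # cross-platform process inspection, assumed to be installed

    SHELLS = {"sh", "csh", "ksh", "bash"}

    def shell_directories(username):
        """Current working directory of every shell the named user has open.
        Each one could anchor a wireframe avatar in the matching virtual room."""
        dirs = []
        for proc in psutil.process_iter(["username", "name"]):
            try:
                if proc.info["username"] == username and proc.info["name"] in SHELLS:
                    dirs.append(proc.cwd())
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                continue
        return dirs  # more than one entry raises exactly the question in the note below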

Quick note: While this is plausible, there are technical complications. Command line users often open more than one shell at a time in different directories. In such a case, what would The Corridor do? Duplicate the wireframe avatar in each location? In the real world we can’t be in more than one place at a time; would duplicate avatars contradict the virtual reality metaphor?

There is an asymmetry here in that Tom knows Meredith is “in the system” but not vice versa. Meredith could in theory use CLI commands to find out who else is logged on and whether anyone is running The Corridor, but she would need to actively seek out that information and has no reason to do so. It didn’t occur to Tom either, but he doesn’t need to think about it: the virtual reality environment conveys more information about the system by default.

We briefly cut away to Meredith confirming her CLI delete command. Tom sees this as the file drawer lid emitting beams of light which rotate down. These beams first erase the floating sheets, then the folders in the drawer. The drawer itself now has a red “DELETED” label and slides back into the wall.

Tom watches Meredith deleting the files in an open drawer. Disclosure (1994)

Tom steps further into the room. The same red labels appear on the other file drawers even though they are currently closed.

Tom watches Meredith deleting other, unopened, drawers. Disclosure (1994)

Talking to an Angel

Tom now switches to using the system voice interface, saying “Angel I need help” to bring up the virtual reality assistant. Like everything else we’ve seen in this VR system the “angel” rezzes up from a point cloud, although much more quickly than the architecture: people who need help tend to be more impatient and less interested in pausing to admire special effects.

The voice assistant as it appears within VR. Disclosure (1994)

Just in case the user is now looking in the wrong direction the angel also announces “Help is here” in a very natural sounding voice.

The angel is rendered with white robe, halo, harp, and rapidly beating wings. This is horribly clichéd, but a help system needs to be reassuring in appearance as well as function. An angel appearing as a winged flying serpent or wheel of fire would be more original and authentic (yes, really: Biblically Accurate Angels) but users fleeing in terror would seriously impact the customer satisfaction scores.

Now Tom has a short but interesting conversation with the angel, beginning with a question:

  • Tom
  • Is there any way to stop these files from being deleted?
  • Angel
  • I’m sorry, you are not level five.
  • Tom
  • Angel, you’re supposed to protect the files!
  • Angel
  • Access control is restricted to level five.

Tom has made the mistake, described in chapter 9, Anthropomorphism, of the book, of ascribing more agency to this software program than it actually has. He thinks he is having a conversation (chapter 6, Sonic Interfaces) with a fully autonomous system, which should therefore be interested in and care about the wellbeing of the entire system. It doesn’t, because this is just a limited-command voice interface to a guide.

Even though this is obviously scripted rather than a genuine error, I think it raises an interesting question for real world interface designers: do users expect that an interface with higher visual quality/fidelity will be more realistic in other aspects as well? If a voice assistant is rendered as a simple polyhedron with no attempt at photorealism (say, like Bit in Tron), or with zoomorphism (say, like the search bear in Until the End of the World), will users adjust their expectations for speech recognition downwards? I’m not aware of any research that might answer this question. Readers?

Despite Tom’s frustration, the angel has given an excellent answer – for a guide. A very simple help program would have recited the command(s) that could be used to protect files against deletion. Which would have frustrated Tom even more when he tried to use one and got some kind of permission denied error. This program has checked whether the user can actually use commands before responding.

This does contradict the earlier VR demonstration, where we were told that the user had unlimited access. I would explain it as “unlimited read access, not write access”, a distinction the presenter didn’t think worthwhile to explain to the mostly non-technical audience.

Tom is now aware that he is under even more time pressure as the Meredith avatar is still moving around the room. Realising his mistake, he uses the voice interface as a query language.

“Show me all communications with Malaysia.”
“Telephone or video?”
“Video.”

This brings up a more conventional looking GUI window because not everything in virtual reality needs to be three-dimensional. It’s always tempting for a 3D programmer to re-implement everything, but it’s also possible to embed 2D GUI applications into a virtual world.

Tom looks at a conventional 2D display of file icons inside VR. Disclosure (1994)

The window shows a thumbnail icon for each recorded video conference call. This isn’t very helpful, so Tom again decides that a voice query will be much faster than looking at each one in turn.

“Show me, uh, the last transmission involving Meredith.”

There’s a short 2D transition effect swapping the thumbnail icon display for the video call itself, which starts playing at just the right point for plot purposes.

Tom watches a previously recorded video call made by Meredith (right). Disclosure (1994)

While Tom is watching and listening, Meredith is still typing commands. The camera orbits around behind the video conference call window so we can see the Meredith avatar approach, which also shows us that this window is slightly three dimensional, the content floating a short distance in front of the frame. The film then cuts away briefly to show Meredith confirming her “kill all” command. The video conference recordings are deleted, including the one Tom is watching.

Tom is informed that Meredith (seen here in the background as a wireframe avatar) is deleting the video call. Disclosure (1994)

This is also the moment when the downstairs VIPs return to the hotel suite, so the scene ends with Tom managing to sneak out without being detected.

Virtual reality has saved the day for Tom. The documents and video conference calls have been deleted by Meredith, but he knows that they once existed and has a colleague retrieve the files he needs from the backup tapes. (Which is good writing: the majority of companies shown in film and TV never seem to have backups for files, no matter how vital.) Meredith doesn’t know that he knows, so he has the upper hand to expose her plot.

Analysis

How believable is the interface?

I won’t spend much time on the hardware, since our focus is on file browsing in three dimensions. From top to bottom, the virtual reality system starts as believable and becomes less so.

Hardware

The headset and glove look like real VR equipment, believable in 1994 and still so today. Having only one glove is unusual, and makes impossible some of the common gesture actions described in chapter 5 of Make It So, which require both hands.

The “laser scanners” that create the 3D geometry and texture maps for the 3D avatar and perform real time body tracking would more likely be cameras, but that would not sound as cool.

And lastly the walking platform apparently requires our user to stand on large marbles or ball bearings and stay balanced while wearing a headset. Uh…maybe…no. Apologetics fails me. To me it looks like it would be uncomfortable to walk on, almost like deterrent paving.

Software

The Corridor, unlike the 3D file browser used in Jurassic Park, is a special effect created for the film. It was a mostly-plausible, near future system in 1994, except for the photorealistic avatar. Usually this site doesn’t discuss historical context (the “new criticism” stance), but I think in this case it helps to explain how this interface would have appeared to audiences almost three decades ago.

I’ll start with the 3D graphics of the virtual building. My initial impression was that The Corridor could have been created as an interactive program in 1994, but that was my memory compressing the decade. During the 1990s 3D computer graphics, both interactive and CGI, improved at a phenomenal rate. The virtual building would not have been interactive in 1994, was possible on the most powerful systems six years later in 2000, and looks rather old-fashioned compared to what the game consoles of the 21st C can achieve.

For the voice interface I made the opposite mistake. Voice interfaces on phones and home computing appliances have become common in the second decade of the 21st C, but in reality are much older. Apple Macintosh computers in 1994 had text-to-speech synthesis with natural sounding voices and limited vocabulary voice command recognition. (And without needing an Internet connection!) So the voice interface in the scene is believable.

The multi-user aspects of The Corridor were possible in 1994. The wireframe avatars for users not in virtual reality are unflattering or perhaps creepy, but not technically difficult. As a first iteration of a prototype system it’s a good attempt to span a range of hardware capabilities.

The virtual reality avatar, though, is not believable for the 1990s and would be difficult today. Photographs of the body, made during the startup scan, could be used as a texture map for the VR avatar. But live video of the face would be much more difficult, especially when the face is partly obscured by a headset.

How well does the interface inform the narrative of the story?

The virtual reality system in itself is useful to the overall narrative because it makes the Digicom company seem high tech. Even in 1994 CD-ROM drives weren’t very interesting.

The Corridor is essential to the tension of the scene where Tom uses it to find the files, because otherwise the scene would be much shorter and really boring. If we ignore the virtual reality these are the interface actions:

  • Tom reads an email.
  • Meredith deletes the folder containing those emails.
  • Tom finds a folder full of recorded video calls.
  • Tom watches one recorded video call.
  • Meredith deletes the folder containing the video calls.

Imagine how this would have looked if both were using a conventional 2D GUI, such as the Macintosh Finder or MS Windows Explorer. Double click, press and drag, double click…done.

The Corridor slows down Tom’s actions and makes them far more visible and understandable. Thanks to the virtual reality avatar we don’t have to watch an actor push a mouse around. We see him moving and swiping, being surprised and reacting; and the voice interface adds extra emotion and some useful exposition. It also helps with the plot, giving Tom awareness of what Meredith is doing without having to actively spy on her, or look at some kind of logs or recordings later on.

Meredith, though, can’t use the VR system because then she’d be aware of Tom as well. Using a conventional workstation visually distinguishes and separates Meredith from Tom in the scene.

So overall, though the “action” is pretty mundane, it’s crucial to the plot, and the VR interface helps make this interesting and more engaging.

How well does the interface equip the character to achieve their goals?

As described in the film itself, The Corridor is a prototype for demonstrating virtual reality. As a file browser it’s awful, but since Tom has lost all his normal privileges this is the only system available, and he does manage to eventually find the files he needs.

At the start of the scene, Tom spends quite some time wandering around a vast multi-storey building without a map, room numbers, or even coordinates overlaid on his virtual view. Which seems rather pointless because all the files are in one room anyway. As previously discussed for Johnny Mnemonic, walking or flying everywhere in your file system seems like a good idea at first, but often becomes tedious over time. Many actual and some fictional 3D worlds give users the ability to teleport directly to any desired location.

Then the file drawers in each cabinet have no labels either, so Tom has to look carefully at each one in turn. There is so much more the interface could be doing to help him with his task, and even help the users of the VR demo learn and explore its technology as well.

Contrast this with Meredith, who uses her command line interface and 2D GUI to go through files like a chainsaw.

Tom becomes much more efficient with the voice interface. Which is just as well, because if he hadn’t, Meredith would have deleted the video conference recordings while he was still staring at virtual filing cabinets. However, neither the voice interface nor the corresponding file display needs three dimensional graphics.

There is hope for version 2.0 of The Corridor, even restricting ourselves to 1994 capabilities. The first and most obvious improvement is to copy 2D GUI file browsers, or the 3D file browser from Jurassic Park, and show the corresponding text name next to each graphical file or folder object. The voice interface is so good that it should be turned on by default without requiring the angel. And finally, add some kind of map overlay with a “you are here” moving dot, like the maps that players of 3D games such as Doom could display with a keystroke.
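The map overlay in particular is cheap to compute. Here is a sketch, with invented coordinate conventions, of projecting the user’s 3D position onto a 2D “you are here” minimap:

    def minimap_position(user_pos, floor_bounds, map_size=(128, 128)):
        """Drop the vertical axis and scale the user's (x, z) position into
        minimap pixel coordinates for a 'you are here' dot."""
        (min_x, min_z), (max_x, max_z) = floor_bounds
        x, _, z = user_pos
        u = (x - min_x) / (max_x - min_x) * map_size[0]
        v = (z - min_z) / (max_z - min_z) * map_size[1]
        return int(u), int(v)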

Film making challenge: VR on screen

Virtual reality (or augmented reality systems such as Hololens) provides a better viewing experience for 3D graphics by creating the illusion of real three dimensional space rather than a 2D monitor. But it is always a first person view, and unlike conventional 2D monitors, nobody else can see what the VR user is seeing without a deliberate mirroring/debugging display. This is an important difference from other advanced or speculative technologies that film makers might choose to include. Showing a character wielding a laser pistol instead of a revolver, or driving a hover car instead of a wheeled car, hardly changes how to stage a scene, but VR does.

So, how can we show virtual reality in film?

There’s the first-person view corresponding to what the virtual reality user is seeing themselves. (Well, half of what they see since it’s not stereographic, but it’s cinema VR, so close enough.) This is like watching a screencast of someone else playing a first person computer game, the original active experience of the user becoming passive viewing by the audience. Most people can imagine themselves in the driving seat of a car and thus make sense of the turns and changes of speed in a first person car chase, but the film audience probably won’t be familiar with the VR system depicted and will therefore have trouble understanding what is happening. There’s also the problem that viewing someone else’s first-person view, shifting and changing in response to their movements rather than your own, can make people disoriented or nauseated.

A third-person view is better for showing the audience the character and the context in which they act. But not the diegetic real-world third-person view, which would be the character wearing a geeky headset and poking at invisible objects. As seen in Disclosure, the third person view should be within the virtual reality.

But in doing that, now there is a new problem: the avatar in virtual reality representing the real character. If the avatar is too simple the audience may not identify it with the real world character and it will be difficult to show body language and emotion. More realistic CGI avatars are increasingly expensive and risk falling into the Uncanny Valley. Since these films are science fiction rather than factual, the easy solution is to declare that virtual reality has achieved the goal of being entirely photorealistic and just film real actors and sets. Adding the occasional ripple or blur to the real world footage to remind the audience that it’s meant to be virtual reality, again as seen in Disclosure, is relatively cheap and quick.
So, solving all these problems results in the cinematic trope we can call Extradiegetic Avatars, which are third-person, highly-lifelike “renderings” of characters, with a telltale Hologram Projection Imperfection for audience readability, that may or may not be possible within the world of the film itself.

Browsing Files in 3D

Be forewarned—massive spoilers ahead. (The graphic shows the Millennium Falcon sporting a massive spoiler.)

What’s this all about?

The origin story here is that I wanted to review Hackers, a film I enjoy and Chris describes as “awesome/ly bad”. However, Hackers isn’t science fiction. Well, I could argue that it is set in an alternate reality where computer hackers are all physically attractive with fashionable tastes in music and clothing, but that isn’t enough. The film was set firmly in the present day (of 1995) and while the possibilities of computer hacking may be exaggerated for dramatic purposes, all the computers themselves are quite conventional for the time. (And therefore appear quaint and outdated today.)

With the glorious exception of the three dimensional file storage system on the “Gibson” mainframe. This fantastic combination of hardware and software was clearly science fiction then, and remains so today. While one futuristic element is not enough to justify a full review of Hackers, it did start us thinking. The film Jurassic Park also has a 3D file system navigator, which wasn’t covered in depth by either the book or the online review. And when Chris reached out to the website readers, they provided more examples of 3D file systems being added to otherwise mundane computer systems.

So what we have here is a review of a particular interface trope rather than an entire film or TV show: the three dimensional file browsing and navigation interface.

Scope

This review is specifically limited to interfaces that are recognisably file systems, where people search for and manipulate individual documents or programs. Cyberspace, a 3D representation of the Internet or World Wide Web, is too broad a topic, and better covered in individual reviews such as those for Ghost in the Shell and Johnny Mnemonic.

I also originally intended to only include non-science fiction films and shows but Jurassic Park is an exception. Jurassic Park has been reviewed, both in the book and on the website, but the 3D file system was a comparatively minor element. It is included here as a well known example for comparison.

The SciFiInterfaces readership also provided examples of research papers for 3D file system browsing and navigation—rather more numerous than actual production systems, even today. These will inform the reviews but not be discussed individually.

Because we are reviewing a topic, not a particular film or TV show, the usual plot summaries will be shortened to just those aspects that involve the 3D file system. As a bonus, we can also compare and contrast the different interfaces and how they are used. The worlds of Ghost in the Shell and Johnny Mnemonic are so different that it would be unfair to judge individual interfaces against each other, but for this review we are considering 3D file systems that have been grafted onto otherwise contemporary computer systems, and used by unaugmented human beings to perform tasks familiar to most computer users.

Sources

Having decided on our topic and scope, the properties for review are three films and one episode of a TV show.

Jurassic Park, 1993

“I know this!” and Jurassic Park is so well known that I assume that you do too. We will look specifically at the 3D file system that is used by Lex in the control room to reactivate the park systems.

Disclosure, 1994

This film about corporate infighting includes a virtual reality system, complete with headset, glove, laser trackers, and walking surface, which is used solely to look for particular files.

Hackers, 1995

As mentioned in the introduction this film revolves around the hacking of a Gibson mainframe, which has a file system that is both physically three dimensional and represented on computer screens as a 3D browser.

A bar chart. The x-axis is every year between and including 1902 to 2022. The y-axis, somewhat humorously, shows 2-place decimal values up to 1. Three bars at 1.00 appear at 1993, 1994, and 1995. One also appears in 2016, but has an arrow pointing back to the prior three with a label, “(But really, this one is referencing those.)”

All three of these films date from the 1990s, which seems to have been the high point for 3D file systems, both fictional and in real world research.

Community, season 6 episode 2, “Lawnmower Maintenance and Postnatal Care” (2016)

In this 21st century example, the Dean of a community college buys an elaborate virtual reality system. He spends some of his time within VR looking for, and then deleting, a particular file. 

Clockwise from top left: Jurassic Park (1993), Disclosure (1994), Hackers (1995), Community (2016)

And one that almost made it

File browsing in two dimensions is so well established in general-purpose computer interfaces that the metaphor can be used in other contexts. In the first Iron Man film, at around 52 minutes, Tony Stark is designing his new suit with an advanced 3D CAD system that uses volumetric projection and full body gesture recognition and tracking. When Tony wants to delete a part (the head) from the design, he picks it up and throws it into a trashcan.

Tony deletes a component from the design by dropping it into a trashcan. Iron Man (2008)

I’m familiar with a number of 2D and 3D design and drawing applications and in all of them a deleted part quietly vanishes. There’s no more need for visual effects than when we press the delete key while typing.

In the film, though, it is not Tony who needs to know that the part has been deleted, but the audience. We’ve already seen the design rendering moved from one computer to another with an arm swing, so if the head disappeared in response to a gesture, that could have been interpreted as it being emailed or otherwise sent somewhere else. The trashcan image and action makes it clear to the audience what is happening.

So, that’s the set of examples we’ll be using across this series of posts. But before we get into the fiction, in the next post we need to talk about how this same thing is handled in the real world.

Vibranium-based Cape Shields

Editor’s Note: Today’s guest post is penned by Lonny Brooks. Be sure and read his introduction post if you missed it when it was published.

The Black Panther film represents one of the most ubiquitous statements of Afrofuturist fashion and fashionable digital wearables to celebrate the Africana and Black imagination. The wearable criteria, under Director Ryan Coogler’s lead and that of the formidable talent of costume designer Ruth E. Carter, took into account African tribal symbolism. The adinkra symbol for “cooperation,” emblazoned across the blanket of W’Kabi (played by Daniel Kaluuya), embodies the role of the Border Tribe, who live in a small village tucked into the mountainous borderlands of Wakanda, disguised as farmers and hunters.

Beautiful interaction

Chris’ blog looks at the interactions with speculative technology, and here the interactions are marvelously subtle. They do not have buttons or levers, which might give away their true nature. To activate them, a user does what would come naturally, which is to hold the fabric before them, like a shield. (There might be a mental command as well, but of course we can’t perceive that.) The shield-like gesture activates the shield technology. It’s quick. It fits the material of the technology. You barely even have to be trained to use it. We never see the use case for when a wearer is incapacitated and can’t lift the cape into position, but there’s enough evidence in the rest of the film to expect it might act like Dr. Strange’s cape and activate its shield automatically.

But, for me, the Capes are more powerful not as models of interaction, but for what they symbolize.

The Dual Role of the Capes

The role of the Border Tribe is to create the illusion of agrarian ruggedness, a deception that tells outsiders only of a placid, developing nation rather than the secret technologically advanced splendor of Wakanda’s lands. The Border Tribe is the keeper of Wakanda’s cloaking technology, which hides the vast utopian advancement of Wakanda.

The Border Tribe’s role is built into the fabric of their illustrious and enviably fashionable capes. The adinkra symbol of cooperation embedded into the cape reveals, by the final scenes of the Black Panther film, how the Border Tribe defenders wield their capes into a force field wall of energy to repel enemies. 

Ironically we only see them at their most effective when Wakanda is undergoing a civil war between those loyal to Killmonger, who is determined to avenge his father’s murder and his own erasure from Wakandan collective memory, and those supporting King T’Challa. Whereas each Black Panther king has chosen to keep Wakanda’s presence hidden, literally under the cooperative shields of the Border Tribe, Killmonger—an Oakland native and a potential heir to the Wakandan monarchy—was orphaned and left in the U.S.

If this sounds familiar, consider the film as a grand allusion to the millions of Africans kidnapped and ripped from their tribal lineages and taken across the Atlantic as slaves. Their cultural heritage was purposefully erased, languages and tribal customs, memories lost to the colonial thirst for their unpaid and forced labor. 

Killmonger represents the Black Diaspora, descendants of African homelands similarly deprived of their birthrights. Killmonger wants the Black Diaspora to rise up in global rebellion with the assistance of Wakandan technical superiority. In opposition, King T’Challa aspires to a less vengeful solution. He wants Wakanda to come out to the world, and lead by example. We can empathize with both. T’Challa’s plan is fueled by virtue. Killmonger’s is fueled by justice—redeploy these shields to protect Black people against the onslaught of ongoing police and state violence.

W. E. B. Du Bois in 1918
(image in the public domain)

Double Consciousness and the Big Metaphor

The cape shields, powered by the precious secret meteorite called Vibranium, embody what the scholar W. E. B. Du Bois referred to as a double consciousness, where members of the Black Diaspora inhabit two selves.

  1. Their own identity as individuals
  2. The external perception of themselves as members of an oppressed people incessantly facing potential erasure and brutality.

The cape shields and their cloaking technology cover the secret utopic algorithms that power Wakanda, while playing on the petty stereotypes of African nations as less-advanced collectives. 

The final battle scene symbolizes this grand debate—between Killmonger’s claims on Wakanda and assertion of Africana power, and King T’Challa’s more cooperative and, indeed, compliant approach of working with the CIA. Recall that in its subterfuge and cloaking tactics, the CIA has undermined and toppled numerous freely-elected African and Latin American governments for decades. In this final showdown, we see W’Kabi’s cloaked soldiers run down the hill towards King T’Challa and stop to raise their shields cooperatively into defensive formation to prevent his advance. King T’Challa jumps over the shields, and the force of his movement causes the soldiers’ shields to bounce away while simultaneously revealing their potent energy.

The flowing blue capes of the Border Tribe are deceptively enticing, while holding the key to Wakanda’s survival as metaphors for cloaking their entire civilization from being attacked, plundered, and erased. Wakanda and these capes represent an alternative history: What if African peoples had not experienced colonization or undergone the brutal Middle Passage to the Americas? What if the prosperous Black Greenwood neighborhood of Tulsa, Oklahoma had developed cape shield technology to defend themselves against a genocidal white mob in 1921? Or if the Black Panther Party had harnessed the power of invisible cloaking technology as part of their black beret ensemble? 

Gallery Images: World Building with the Afrofuturist Podcast—Afro-Rithms From The Future game, Neuehouse, Hollywood, May 22, 2019 [Co-Game Designers, Eli Kosminsky and Lonny Avi Brooks, Afro-Rithms Librarian; Co-Game Designer and Seer Ahmed Best]

In the forecasting imagination game Afro-Rithms From The Future, at a game event we played in 2019 in Los Angeles based on the future universe we created, we generated the question:

What would be an article of fashion that would give you more Black Feminist leadership and more social justice?

One participant responded: “I was thinking of the notion of the invisibility cloak but also to have it be reversed. It could make you invisible and also more visible, amplifying what you normally” have as strengths and recognizing their value. Or, as another player states, “what about a bodysuit that protects you from any kind of harm”, or as the game facilitator adds, “how about a bodysuit that repels emotional damage?!” In our final analysis, the cape shields have steadfastly protected Wakanda against the emotional trauma of colonization and partial erasure.

In this way the cape shields guard against emotional damage as well. Imagine how it might feel to wear a fashionable cloak that displays images of your ancestral, ethnic, and gender memories reminding you of your inherent lovability as a multi-dimensional human being—and that can technologically protect you and those you love as well.


Black Lives Matter

Chris: Each post in the Black Panther review is followed by actions that support black lives. 

To thank Lonny for his guest post, I offered to donate money in his name to the charity of his choice. He has selected Museum of Children’s Arts in Oakland. The mission of MOCHA is to ensure that the arts are a fundamental part of our community and to create opportunities for all children to experience the arts to develop creativity, promote a sense of belonging, and to realize their potential. 

And, since it’s important to show the receipts, the receipt:

Thank you, Lonny, for helping to celebrate Black Panther and your continued excellent work in speculative futures and Afrofuturism. Wakanda forever!

Wakandan Med Table

When Agent Ross is shot in the back during Klaue’s escape from the Busan field office, T’Challa stuffs a kimoyo bead into the wound to staunch the bleeding, but the wounds are still serious enough that the team must bring him back to Wakanda for healing. They float him to Shuri’s lab on a hover-stretcher.

Here Shuri gets to say the juicy line, “Great. Another white boy for us to fix. This is going to be fun.”
Sorry about the blurry screen shot, but this is the most complete view of the bay.

The hover-stretcher gets locked into place inside a bay. The bay is a small room in the center of Shuri’s lab, open on two sides. The walls are covered in a gray pattern suggesting a honeycomb. A bas-relief volumetric projection displays some medical information about the patient like vital signs and a subtle fundus image of the optic nerve.

Shuri holds her hand flat and raises it above the patient’s chest. A volumetric display of 9 of his thoracic vertebrae rises up in response. One of the vertebrae is highlighted in bright red. A section of the wall display shows the same information in 2D, cyan with orange highlights. That display section slides out from the wall to draw observers’ attention. Hexagonal tiles flip behind the display for some reason, but produce no change in the display.

Shuri reaches her hands up to the volumetric vertebrae, pinches her forefingers and thumbs together, and pulls them apart. In response, the space between the vertebrae expands, allowing her to see the top and bottom of the body of the vertebra.

She then turns to the wall display and, reading something there, tells the others that he’ll live. Her attention is pulled away with the arrival of W’Kabi, bringing news of Killmonger. We do not see her initiate a treatment in the scene. We have to presume that she did it between cuts. (There would have to be a LOT of confidence in an AI’s ability to diagnose and determine treatment before they would let Griot do that without human input.)

We’ll look more closely at the hover-stretcher display in a moment, but for now let’s pause and talk about the displays and the interaction of this beat.

A lab is not a recovery room

This doesn’t feel like a smart environment to hold a patient. We can bypass a lot of the usual hospital concerns of sterilization (it’s a clean room) or readily-available equipment (since they are surrounded by programmable vibranium dust controlled by an AGI) or even risk of contamination (something something AI). I’m mostly thinking about the patient having an environment that promotes healing: Natural light, quiet or soothing music, plants, furnishing, and serene interiors. Having him there certainly means that Shuri’s team can keep an eye on him, and provide some noise that may act as a stimulus, but don’t they have actual hospital rooms in Wakanda? 

Why does she need to lift it?

The VP starts in his chest, but why? If it had started out as a “translucent skin” illusion, like we saw in Lost in Space (1998, see below), then that might make sense. She would want to lift it to see it in isolation from the distracting details of the body. But it doesn’t start this way; it starts embedded within him?!

The “translucent skin” display from Lost in Space (1998)

It’s a good idea to have a representation close to the referent, to make for easy comparison between them. But to start the VP within his opaque chest just doesn’t make sense.

This is probably the wrong gesture

In the gestural interfaces chapter of  Make It So, I described a pidgin that has been emerging in sci-fi which consisted of 7 “words.” The last of these is “Pinch and Spread to Scale.” Now, there is nothing sacred about this gestural language, but it has echoes in the real world as well. For one example, Google’s VR painting app Tilt Brush uses “spread to scale.” So as an increasingly common norm, it should only be violated with good reason. In Black Panther, Shuri uses spread to mean “spread these out,” even though she starts the gesture near the center of the display and pulls out at a 45° angle. This speaks much more to scaling than to spreading. It’s a mismatch and I can’t see a good reason for it. Even if it’s “what works for her,” gestural idiolects hinder communities of practice, and so should be avoided.

Better would have been pinching on one end of the spine and hooking her other index finger to spread it apart without scaling. The pinch is quite literal for “hold” and the hook quite literal for “pull.” This would let scale be scale, and “hook-pull” to mean “spread components along an axis.”

Model from https://justsketch.me/
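To make the distinction concrete, here is a sketch of how a gesture recogniser might keep the two meanings separate. It assumes hand tracking that reports a per-hand pose (“pinch”, “hook”, or “open”) and the change in distance between the hands; the names are invented.

    def classify_two_hand_gesture(left_pose, right_pose, distance_change):
        """Distinguish 'scale' from 'spread along an axis', so the same
        pulling-apart motion is not overloaded with both meanings."""
        moving_apart = distance_change > 0
        if left_pose == "pinch" and right_pose == "pinch" and moving_apart:
            return "scale_up"           # the canonical spread-to-scale
        if {left_pose, right_pose} == {"pinch", "hook"} and moving_apart:
            return "spread_components"  # hold one end, pull the other
        return None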

If we were stuck with the footage of Shuri doing the scale gesture, then it would have made more sense to scale the display, and fade the white vertebrae away so she could focus on the enlarged, damaged one. She could then turn it with her hand to any arbitrary orientation to examine it.

An object highlight is insufficient

It’s quite helpful for an interface that can detect anomalies to help focus a user’s attention there. The red highlight for the damaged vertebra certainly helps draw attention. Where’s the problem? Ah, yes. There’s the problem. But it’s more helpful for the healthcare worker to know the nature of the damage, what the diagnosis is, to monitor the performance of the related systems, and to know how the intervention is going. (I covered these in the medical interfaces chapter of Make It So, if you want to read more.) So yes, we can see which vertebra is damaged, but what is the nature of that damage? A slipped disc should look different than a bone spur, which should look different than one that’s been cracked or shattered from a bullet. The this-thing-is-red highlight helps for an instant read in the scene, but fails on close inspection and would be insufficient in the real world.

This is not directly relevant to the critique, but interesting that spinal VPs have been around since 1992. Star Trek: The Next Generation, “Ethics” (Season 5, Episode 16).

Put critical information near the user’s locus of attention

Why does Shuri have to turn and look at the wall display at all? Why not augment the volumetric projection with the data that she needs? You might worry that it could obscure the patient (and thereby hinder direct observations) but with an AGI running the show, it could easily position those elements to not occlude her view.

Compare this display, which puts a waveform directly adjacent to the brain VP. Firefly, “Ariel” (Episode 9, 2002).

Note that Shuri is not the only person in the room interested in knowing the state of things, so a wall display isn’t bad, but it shouldn’t be the only augmentation.

Lastly, why does she need to tell the others that Ross will live? If there was significant risk of his death, there should be unavoidable environmental signals: klaxons or medical alerts. So unless we are to believe T’Challa has never encountered a single medical emergency before (even in media), this is a strange thing for her to have to say. Of course we understand she’s really telling us in the audience that we don’t need to wonder about this plot development any more, but it would be better, diegetically, if she had confirmed the time-to-heal, like, “He should be fine in a few hours.”

Alternatively, it would be hilarious turnabout if the AI Griot had simply not been “trained” on data that included white people, and “could not see him,” which is why she had to manually manage the diagnosis and intervention, but that would have massive impact on the remote piloting and other scenes, so isn’t worth it. Probably.

Thoughts toward a redesign

So, all told, this interface and interaction could be much better fit-to-purpose. Clarify the gestural language. Lose the pointless flipping hexagons. Simplify the wall display for observers to show vitals, diagnosis and intervention, as well as progress toward the goal. Augment the physician’s projection with detailed, contextual data. And though I didn’t mention it above, of course the bone isn’t the only thing damaged, so show some of the other damaged tissues, and some flowing, glowing patterns to show where healing is being done along with a predicted time-to-completion.

Stretcher display

Later, when Ross is fully healed and wakes up, we see a shot of the med table from above. Lots of cyan and orange, and *typography shudder* stacked type. Orange outlines seem to indicate controls, though they bear symbols rather than full labels, and we know labels are better for learnability and infrequent use. (Linguist nerds: Yes, Wakandan is alphabetic rather than logographic.)

These feel mostly like FUIgetry, with the exception of a subtle respiration monitor on Ross’ left. But it shows only the current state rather than values tracked over time, so it still isn’t as helpful as it could be.

Then when Ross lifts his head, the hexagons begin to flip over, disabling the display. What? Does this thing only work when the patient’s head is in the exact right space? What happens when they’re coughing, or convulsing? Wouldn’t a healthcare worker still be interested in the last-recorded state of things? This “instant-off” makes no sense. Better would have been just to let the displays fade to a gray to indicate that it is no longer live data, and to have delayed the fade until he’s actually sitting up.
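A sketch of that suggested behaviour, with invented names and timings: keep showing the last reading, and only fade it to gray once the patient has clearly been sitting up for a couple of seconds.

    SIT_UP_DELAY = 2.0  # seconds the patient must stay up before the fade starts

    class VitalsPanel:
        def __init__(self):
            self.last_reading = None
            self.seconds_up = 0.0

        def update(self, dt, reading, patient_lying_down):
            """Call once per frame; returns what to draw and how to style it."""
            if patient_lying_down:
                self.seconds_up = 0.0
                self.last_reading = reading               # live data, full colour
                return {"data": reading, "style": "live"}
            self.seconds_up += dt
            style = "live" if self.seconds_up < SIT_UP_DELAY else "stale_gray"
            return {"data": self.last_reading, "style": style}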

All told, the Wakandan medical interfaces are the worst of the ones seen in the film. Lovely, and good for quick narrative hit, but bad models for real-world design, or even close inspection within the world of Wakanda.


MLK Day Matters

Each post in the Black Panther review is followed by actions that you can take to support black lives.

Today is Martin Luther King Day. Normally there would be huge gatherings and public speeches about his legacy and the current state of civil rights. But the pandemic is still raging, and with the Capitol in Washington, D.C. having seen just last week an armed insurrection by supporters of outgoing and pouty loser Donald Trump (in case that WP article hasn’t been moved yet, here’s the post under its watered-down title), there are worries about additional racist terrorism and violence.

So today we celebrate virtually, by staying at home, re-experiencing his speeches and letters, and listening to the words of black leaders and prominent thinkers all around us, reminding us of the arc of the moral universe, and all the work it takes to bend it toward justice.

With the Biden team taking the reins on Wednesday, and Kamala Harris as our first female Vice President of color, things are looking brighter than they have in 4 long, terrible years. But Trump would have gotten nowhere if there hadn’t been a voting bloc and party willing to indulge his racist fascism. There’s still much more to do to dismantle systemic racism in the country and around the world. Let’s read, reflect, and, using whatever platforms and resources we are privileged to have, act.

Eye of Agamotto (1 of 5)

This is one of those sci-fi interactions that seems simple when you view it, but then on analysis turns out to be anything but. So set aside some time; this analysis will be one of the longer ones, even broken into several parts.

The Eye of Agamotto is a medallion that (spoiler) contains the emerald Time Infinity Stone, held on by a braided leather strap. It is made of brass, about a hand’s breadth across, in the shape of a stylized eye that is covered by the same mystical sigils seen on the rose window of the New York Sanctum, and the portal door from Kamar-Taj to the same.

The sigil on the Eye of Agamotto. Doctor Strange (2016)
World builders may rightly ask why this universe-altering artifact bears a sigil belonging to just one of the Sanctums.

We see the Eye used in three different places in the film, and in each place it works a little differently.

  • The Tibet Mode
  • The Hong Kong Modes
  • The Dark Dimension Mode

The Tibet Mode

When the film begins, the Eye is under the protection of the Masters of the Mystic Arts in Kamar-Taj, where there’s even a user manual. Unfortunately it’s in mysticalese (or is it Tibetan? See comments) so we can’t read it to understand what it says. But we do get a couple of full-screen shots. Are there any cryptanalysts in the readership who can decipher the text?

A full-screen shot of the Eye’s user manual. Doctor Strange (2016)
They really should put the warnings before the spells.

The power button

Strange opens the old tome and reads “First, open the eye of Agamotto.” The instructions show him how to finger-tut a diamond shape with both hands and spread them apart. In response the lid of the eye opens, revealing a bright green glow within. At the same time the components of the sigil rotate around the eye until they become an upper and lower lid. The green glow of this “on state” persists as long as Strange is in time manipulation mode.

The Eye of Agamotto opening. Doctor Strange (2016)

Once it’s turned on, he puts the heels of his palms together, fingers splayed out, and turns them clockwise to create a mystical green circle in the air before him. At the same time two other, softer green bands spin around his forearm and elbow. Thrusting his right hand toward the circle while withdrawing his left hand behind the other, he transfers control of the circle to just his right hand, where it follows the position of his palm and the rotation of his wrist as if it was a saucer mystically glued there.

Strange transfers the green control saucer to his right hand. Doctor Strange (2016)

Then he can twist his wrist clockwise while letting his fingers close to a fist, and the object on which he focuses ages. When he does this to an apple, we see it with progressively more chomps out of it until it is a core that dries and shrivels. When he twists his wrist counterclockwise, the focused object reverses aging, becoming younger in staggered increments. When he holds his middle finger upright, the object reverts to its “natural” age.

The apple aging and reverting as Strange twists his wrist. Doctor Strange (2016)

Pausing and playing

At one point he wants to stop practicing with the apple and try the Eye on the tome whose pages were ripped out. He relaxes his right hand and the green saucer disappears, allowing him to handle the apple and the tome without changing their ages. To reinstate the saucer, he extends his fingers out and gives his hand a shake, and it fades back into place.

Tibet Mode Analysis: The best control type

The Eye has a lot of goodness to it. Time has long been mapped to circles in sundials and clock faces, so the circle controls fit thematically quite well. The gestural components make similar sense. The direction of wrist twist coincides with the movement of clock hands, so it feels familiar. Also, we naturally look at and point at objects of focus, so using the extended arm gesture combined with gaze monitoring fits the sense of control. Lastly, those bands and saucers look really cool, both mystical in pattern and vaguely technological with the screen-green glow.

Readers of the blog know that a review rarely ends with just compliments. To discuss the more challenging aspects of this interaction with the Eye, it’s useful to think of it as a gestural video scrubber for security footage, with the hand twist working like a jog wheel. Not familiar with that type of control? It’s a specialized dial, often used by video editors to scroll back and forth over footage to find particular sequences or frames. Here’s a quick show-and-tell by YouTube user BrainEatingZombie.

Is this the right kind of control?

There are other dial types to consider for the Eye. What we see in the movie is a jog dial with hard stops, like you might use for an analogue volume control. The absolute position of the control maps to a point in a range of values. The wheel stops at the extents of the values: for volume controls, complete silence at one end and max volume at the other.

But another type is a shuttle wheel. This kind of dial has a resting position. You can turn it clockwise or counterclockwise, and when you let go, it will spring back to the resting position. While it is being turned, it enacts a change. The greater the turn, the faster the change. Like a variable fast-forward/reverse control. If we used this for a volume control: a small turn to the left means, “Keep lowering the volume a little bit as long as I hold the dial here.” A larger turn to the left means, “Get quieter faster.” In the case of the Eye, Strange could turn his hand a little to go back in time slowly, and fully to reverse quickly. This solves some mapping problems (discussed below) but raises new issues when the object just doesn’t change that much across time, like the tome. Rewinding the tome, Strange would start slow, see no change, then gradually increase speed (with no feedback from the tome to know how fast he was going) and suddenly he’d fly way past a point of interest. If he was looking for just the state change, then we’ve wasted his time by requiring him to scroll to find it. If he’s looking for details in the moment of change, the shuttle won’t help him zoom in on that detail, either.
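
To make the difference concrete, here is a minimal Python sketch of how the two dial types would interpret the same wrist twist. The function names, the one-minute-per-degree figure, and the maximum shuttle rate are all invented for illustration, not taken from the film.

    # Sketch: how a jog dial vs. a shuttle wheel would read the same wrist twist.
    # All names and constants here are hypothetical.

    def jog_position(twist_degrees, minutes_per_degree=1.0):
        """Jog dial with hard stops: absolute twist maps to an absolute point
        in time. 0 degrees = now; negative = past; positive = future."""
        return twist_degrees * minutes_per_degree  # offset from now, in minutes

    def shuttle_step(twist_degrees, dt_seconds, max_rate_minutes_per_sec=60.0):
        """Shuttle wheel: deflection from rest sets a *rate* of change.
        Holding a small twist keeps scrubbing slowly; a large twist scrubs fast."""
        rate = (twist_degrees / 180.0) * max_rate_minutes_per_sec
        return rate * dt_seconds  # how far the timeline moves during this frame

    # Jog: twisting to -45 degrees always lands 45 minutes in the past.
    print(jog_position(-45))                  # -45.0
    # Shuttle: holding -45 degrees for 2 seconds rewinds another 30 minutes.
    print(shuttle_step(-45, dt_seconds=2))    # -30.0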

jogdials.png

There are also free-spin jog wheels, which can specify absolute or relative values, but since Strange’s wrist is not free-spinning, this is a nonstarter. So I’ll make the call and say that what we see in the film, the jog dial, is the right kind of control.

So if a jog dial is the right type of dial, and you start thinking of the Eye in terms of it being a video scrubber, it’s tackling a common enough problem: scouring a variable range of data for things of interest. In fact, you can imagine that something like this is possible with sophisticated object recognition analyzing security footage.

The investigator scrubs the video back in time to when the Mona Lisa, which has since gone missing, reappears on the wall.

INVESTIGATOR
Show me what happened—across all cameras in Paris—to that priceless object…

She points at the painting in the video.

…there.
So, sure, we’re not going to be manipulating time any…uh…time soon, but this pattern can extend beyond magic items in a movie.
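
As a thought experiment only, here is a rough Python sketch of that security-footage idea: scrub backward through the frames until an object of interest reappears. The detector call and the frame archive are hypothetical stand-ins, not any real API.

    # Sketch: scan backward through recorded frames to find the last moment an
    # object of interest was present. Everything named here is hypothetical.

    def find_last_frame_with(frames, is_object_present):
        """Walk backward from the most recent frame and return the index of the
        last frame in which the object was still present, or None if it never was."""
        for i in range(len(frames) - 1, -1, -1):
            if is_object_present(frames[i]):
                return i
        return None

    # Usage sketch: the frames could be timestamps into an archive, and the
    # predicate a call to whatever object-recognition model is available.
    # idx = find_last_frame_with(archive_frames,
    #                            lambda f: detector.contains(f, "Mona Lisa"))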

The scrubber metaphor brings us nearly all the issues we have to consider.

  • What are the extents of the time frame?
  • How are they mapped to gestures?
  • What is the right display?
  • What about the probabilistic nature of the future?

What are the extents of the time frame?

Think about the mapping issues here. Time goes forever in each direction. But the human wrist can only twist about 270 degrees: 90° pronation (thumb down) and 180° supination (thumb away from the body, or palm up). So how do you map the limited degrees of twist to unlimited time, especially considering that the “upright” hand is anchored to now?

The conceptually simplest mapping would be something like minutes-to-degree, where full pronation of the right hand would go back 90 minutes and full supination 3 hours into the future. (Noting the weirdness that the left hand would be more past-oriented and the right hand more future-oriented.) Let’s call this controlled extents to distinguish it from auto-extents, discussed later.
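
A minimal sketch of that controlled-extents idea, assuming a literal one-minute-per-degree mapping over the wrist range described above; the function name and the clamping are mine, not anything shown in the film.

    # Sketch: controlled extents, one minute per degree of twist, clamped to the
    # wrist's physical range of -90 (full pronation) to +180 degrees (full supination).

    def controlled_extents_offset(twist_degrees):
        """Map wrist twist directly to a time offset in minutes from now."""
        clamped = max(-90.0, min(180.0, twist_degrees))
        return clamped  # 1 minute per degree

    print(controlled_extents_offset(-90))   # -90.0: an hour and a half back
    print(controlled_extents_offset(180))   # 180.0: three hours forward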

What if -90/+180 minutes is not enough time to cover the lifespan of the object at hand? Or what if that’s way too much time? The scale of those extents could be modified by a second gesture, such as the distance of the left hand from the right. So when the left hand was very far back, the extents might be -90/+180 years. When the left hand was touching the right, the extents might be -90/+180 milliseconds, to find detail in very fast-moving events. This kind-of backworlds the gestures seen in the film.
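
Here is one way that scale gesture might be sketched in code, assuming the left hand’s distance is normalized from 0.0 (touching the right hand) to 1.0 (fully drawn back); the breakpoints and units are invented purely for illustration.

    # Sketch: the left hand's distance selects the unit applied to the -90/+180 range.
    # The thresholds and units below are made up for the example.

    SCALES = [
        (0.0, "milliseconds"),
        (0.25, "minutes"),
        (0.5, "hours"),
        (0.75, "days"),
        (1.0, "years"),
    ]

    def extent_scale(left_hand_distance):
        """left_hand_distance: 0.0 (hands touching) to 1.0 (left hand fully back).
        Returns the unit applied to the -90/+180 twist range."""
        chosen = SCALES[0][1]
        for threshold, unit in SCALES:
            if left_hand_distance >= threshold:
                chosen = unit
        return chosen

    print(extent_scale(0.1))   # "milliseconds": fine detail on fast events
    print(extent_scale(0.9))   # "days": a coarser sweep for longer-lived objects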

Eye-of-Agamotto-scales.png

That’s simple and quite powerful, but doesn’t wholly fit the content for a couple of reasons. The first is that time scales can vary so much between objects. Even -90/+180 years might be insufficient. What if Strange was scrubbing the timeline of a Yareta plant (which can live to be 3,000 years old) or a meteorite? Things exist on greatly differing time scales. To solve that you might just say OK, let’s set the scale to accommodate geologic or astronomic time spans. But now, to select meaningfully between the apple and the tome, his hand must move mere nanometers, which would be hard for Strange to get right. Applying a logarithmic scale to that control might help, but it still only provides precision at the “now” end of the spectrum.
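
For the curious, a sketch of what a logarithmic mapping could look like; the base constant is arbitrary, chosen only to show how precision clusters near “now” while the far end of the twist reaches century-plus scales.

    # Sketch: a logarithmic twist-to-time mapping. Each extra degree multiplies the
    # offset rather than adding to it, so small twists stay precise near "now".

    def log_offset_minutes(twist_degrees, base=1.12):
        """Positive twist goes forward, negative back; magnitude grows exponentially."""
        if twist_degrees == 0:
            return 0.0
        sign = 1 if twist_degrees > 0 else -1
        return sign * (base ** abs(twist_degrees) - 1)

    print(round(log_offset_minutes(10)))    # ~2: a couple of minutes ahead
    print(round(log_offset_minutes(-90)))   # ~-27,000 minutes: roughly 19 days back
    print(round(log_offset_minutes(180)))   # ~7.2e8 minutes: on the order of 1,400 years ahead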

If you design a thing with arbitrary time mapping, you also have to decide what to do when the object simply didn’t exist at the requested time. If Strange tried to turn the apple back 50 years, what would be shown? How would you help him elegantly focus on the beginning point of the apple and at the same time understand that the apple didn’t exist 50 years ago?

So letting Strange control the extents arbitrarily is either very constrained or quite a bit more complicated than the movie shows.

Could the extents be automatically set per the focus?

Could the extents be set automatically at the beginning and end of the object in question? Those can be fuzzy concepts, but for the apple there are certainly points in time at which we say “definitely a bud and not a fruit” and “definitely inedible decayed biomass.” So those could be its extents.

The extents for the tome are fuzzier. Its beginning might be when its blank vellum pages were bound and its cover decorated. But the future doesn’t have as clean an endpoint. Pages can be torn out. The cover and binding could be removed for a while and the pages scattered, but then mostly brought together with other pages added and rebound. When does it stop being itself? What’s its endpoint? Suddenly the Eye has to have a powerful and philosophically advanced AI just to reconcile Theseus’ paradox for any object it was pointed at, to the satisfaction of the sorcerer using it and in the context in which it was being examined. Not simple and not in evidence.

ShipofTheseus.png

Auto-extents could also get into very weird mapping. If an object were created last week, each single degree of right-hand pronation would reverse time by about 2 hours; but if it was fated to last a millennium, each single degree of right-hand supination would advance time by about 5 years. And for the overwhelming bulk of that range, the book wouldn’t change much at all, so the difference in the time mapping between the two directions would not be apparent to the user and could cause great confusion.
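
The arithmetic behind that lopsidedness is easy to sketch; the numbers below simply restate the example above (created last week, fated to last a millennium) rather than anything shown in the film.

    # Sketch: with auto-extents, the per-degree step depends entirely on how much
    # past and future the focused object happens to have.

    def per_degree_steps(past_hours, future_hours):
        """Return hours traversed per degree of pronation (past) and supination (future)."""
        return past_hours / 90.0, future_hours / 180.0

    # A tome created last week but fated to survive a millennium:
    back_rate, forward_rate = per_degree_steps(past_hours=7 * 24,
                                               future_hours=1000 * 365.25 * 24)
    print(round(back_rate, 1))                      # ~1.9 hours per degree going back
    print(round(forward_rate / (365.25 * 24), 1))   # ~5.6 years per degree going forward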

So setting extents automatically is not a simple answer either. But between the two, starting with the extents automatically saves him the work of finding the interesting bits. (Presuming we can solve that tricky end-point problem. Ideas?) Which takes us to the question of the best display, which I’ll cover in the next post.

Cyberspace: Navigation

Cyberspace is usually considered to be a 3D spatial representation of the Internet, an expansion of the successful 2D desktop metaphor. The representation of cyberspace used in books such as Neuromancer and Snow Crash, and by the film Hackers, released in the same year, is an abstract cityscape where buildings represent organisations or individual computers, and this is what we see in Johnny Mnemonic. How does Johnny navigate through this virtual city?

Gestures and words for flying

Once everything is connected up, Johnny starts his journey with an unfolding gesture. He then points both fingers forward. From his point of view, he is flying through cyberspace. He then holds up both hands to stop.

jm-31-navigation-animated

Both these gestures were commonly used in the prototype VR systems of 1995. They do however conflict with the more common gestures for manipulating objects in volumetric projections that are described in Make It So chapter 5. It will be interesting to see which set of gestures is eventually adopted, or whether they can co-exist.

Later we will see Johnny turn and bank by moving his hands independently.

jm-31-navigation-f

Continue reading

Cyberspace: the hardware

And finally we come to the often-promised cyberspace search sequence, my favourite interface in the film. It starts at 36:30 and continues, with brief interruptions to the outside world, to 41:00. I’ll admit there are good reasons not to watch the entire film, but if you are interested in interface design, this will be five minutes well spent. Included here are the relevant clips, lightly edited to focus on the user interfaces.

Click to see video of The cyberspace search.

Click to see video of the board conversation, with Pharmakom tracker and virus.

First, what hardware is required?

Johnny and Jane have broken into a neighbourhood computer shop, which in 2021 will have virtual reality gear just as today even the smallest retailer has computer mice. Johnny clears miscellaneous parts off a table and then sits down, donning a headset and datagloves.

jm-30-hardware-a

Headset

Headsets haven’t really changed much since 1995 when this film was made. Barring some breakthrough in neural interfaces, they remain the best way to block off the real world and immerse a user in the virtual world of the computer. It’s mildly confusing to a current-day audience to hear Johnny ask for “eyephones”, which in 1995 was the name of a particular VR headset rather than the popular “iPhone” of today. Continue reading

Talking to a Puppet

As mentioned, Johnny in the last phone conversation in the van is not talking to the person he thinks he is. The film reveals Takahashi at his desk, using his hand as if he were a sock puppeteer—but there is no puppet. His desk is emitting a grid of green light to track the movement of his hand and arm.

jm-22-puppet-call-c

The Make It So chapter on gestural interfaces suggests Takahashi is using his hand to control the mouth movements of the avatar. I’d clarify this a bit. Lip synching by human animators is difficult even when not done in real time, and while it might be possible to control the upper lip with four fingers, one thumb is not enough to provide realistic motion of the lower lip. Continue reading

Brain Upload

Once Johnny has installed his motion detector on the door, the brain upload can begin.

3. Building it

Johnny starts by opening his briefcase and removing various components, which he connects together into the complete upload system. Some of the parts are disguised, and the whole sequence is similar to an assassin in a thriller film assembling a gun out of harmless looking pieces.

jm-6-uploader-kit-a

It looks strange today to see a computer system with so many external devices connected by cables. We’ve become accustomed to one piece computing devices with integrated functionality, and keyboards, mice, cameras, printers, and headphones that connect wirelessly.

Cables and other connections are not always considered as interfaces, but “all parts of a thing which enable its use” is the definition according to Chris. In the early to mid 1990s most computer users were well aware of the potential for confusion and frustration in such interfaces. A personal computer could have connections to a monitor, keyboard, mouse, modem, CD drive, and joystick – and every single device would use a different type of cable. USB, while not perfect, is one of the greatest ever improvements in user interfaces. Continue reading

Gestural Spheres

While working on some other material this weekend, I noticed two unusual but similar gestures from different movies in 2015: both are gestures made on the outside of spheres.

First, the Something control sphere from Tomorrowland.

sphere_gesture_tomorrowland_tight

And, the core memories in Inside Out.

sphere_gesture_insideout_tight

The gestures are subtly different (Tomorrowland is full palm, Inside Out is two fingers) and their meanings are different (Tomorrowland shifts the direction of travel of the time camera, Inside Out scrubs the time itself), but they are a nice gestural rhyme of each other.

The Inside Out image reminds me that I really, really need to do a full retrospective of interfaces in Pixar movies, because they are quite extraordinary in the aggregate.