Sci-fi Spacesuits: Protecting the Wearer from the Perils of Space

Space is incredibly inhospitable to life. It is a near-perfect vacuum, lacking air, pressure, and warmth. It is full of radiation that can poison us, light that can blind and burn us, and a darkness that can disorient us. If any hazardous chemicals such as rocket fuel have gotten loose, they need to be kept safely away. There are few of the ordinary spatial cues and tools that humans use to orient and control their position. There is free-floating debris, ranging from bullet-like micrometeorites to gas and rock planets that can pull us toward them to smash into their surfaces or burn up in their atmospheres. There are astronomical bodies such as stars and black holes that can boil us or crush us into a singularity. And perhaps most terrifyingly, there is the very real possibility of drifting off into the expanse of space to asphyxiate, starve (though biology will be covered in another post), freeze, and/or go mad.

The survey shows that sci-fi has addressed most of these perils at one time or another.

Alien (1979): Kane’s visor is melted by a facehugger’s acid.

Interfaces

Despite the acknowledgment of all of these problems, the survey reveals only two interfaces related to spacesuit protection.

Battlestar Galactica (2004) handled radiation exposure with a simple chemical output device. As CAG Lee Adama explains in “The Passage,” the badge, worn on the outside of the flight suit, slowly turns black with radiation exposure. When the badge turns completely black, a pilot is removed from duty for radiation treatment.

This is something of a stretch because it has little to do with the spacesuit itself, and it is strictly an output device. (Noting that proper interaction requires human input and state changes.) The badge is not permanently attached to the suit, and it is used inside a spaceship while wearing a flight suit. The flight suit is meant to act as a very short-term extravehicular mobility unit (EMU), but is not a spacesuit in the strict sense.

The other protection related interface is from 2001: A Space Odyssey. As Dr. Dave Bowman begins an extravehicular activity to inspect seemingly-faulty communications component AE-35, we see him touch one of the buttons on his left forearm panel. Moments later his visor changes from being transparent to being dark and protective.

We should expect to see few interfaces, but still…

As a quick and hopefully obvious critique, this function shouldn’t need an interface at all. It should be automatic (not even agentive), since the events it protects against can happen much faster than human response times. And, now that we’ve said that out loud, maybe it’s true that the protective features of a suit should all be automatic. Interfaces to pre-emptively switch them on or, for exceptional reasons, manually turn them off, should be the rarity.
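
To make the point concrete, here is a minimal, purely hypothetical sketch (in Python, with made-up sensor values and thresholds) of what an automatic visor-protection rule could look like, with manual control demoted to a rare override:

```python
# Hypothetical sketch: automatic visor dimming driven by a light sensor,
# with manual control reserved for rare, deliberate overrides.

DANGER_LUX = 100_000  # assumed threshold for harmful brightness (illustrative only)

def visor_opacity(lux, manual_override=None):
    """Return visor opacity from 0.0 (clear) to 1.0 (fully dark)."""
    if manual_override is not None:        # the exceptional, human choice
        return max(0.0, min(1.0, manual_override))
    if lux >= DANGER_LUX:                  # react within one sensor tick, no button press
        return 1.0
    return lux / DANGER_LUX                # proportional dimming below the danger level

# A sudden flare darkens the visor faster than Bowman could reach his forearm panel.
print(visor_opacity(250_000))                      # -> 1.0
print(visor_opacity(40_000))                       # -> 0.4
print(visor_opacity(40_000, manual_override=0.0))  # wearer deliberately forces it clear
```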

But it would be cool to see more protective features appear in sci-fi spacesuits. Imagine that an onboard AI detects an incoming micrometeorite storm. Does the HUD show how much time is left? What are the wearer’s options? Can she work through scenarios of action? Can she simply speak the course of action she wants the suit to take? If a wearer is kicked free of the spaceship, the suit should have a homing feature. Think Doctor Strange’s Cloak of Levitation, but for astronauts.
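
None of this appears in the survey, but as a thought experiment, a homing feature could be as simple as a capped thrust vector pointed back at the ship. A toy sketch, with assumed distances and thruster limits:

```python
# Hypothetical sketch: a suit "homing" behavior that thrusts the wearer
# back toward the ship once she drifts beyond a safe distance.
import math

SAFE_RADIUS_M = 30.0   # assumed distance at which homing kicks in
MAX_THRUST_N = 20.0    # assumed thruster limit

def homing_thrust(suit_pos, ship_pos):
    """Return an (x, y, z) thrust vector pointing the wearer home, or zeros."""
    dx, dy, dz = (s - p for s, p in zip(ship_pos, suit_pos))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist <= SAFE_RADIUS_M:
        return (0.0, 0.0, 0.0)              # close enough; do nothing
    scale = min(MAX_THRUST_N, dist) / dist  # thrust grows with distance, capped
    return (dx * scale, dy * scale, dz * scale)

# A wearer drifting 120 m out gets a gentle, capped push back toward the ship.
print(homing_thrust((120.0, 0.0, 0.0), (0.0, 0.0, 0.0)))  # -> (-20.0, 0.0, 0.0)
```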

As always, if you know of other examples not in the survey, please put them in the comments.

Section 6’s crappy sniper tech

Section 6 sends helicopters to assassinate Kusanagi and her team before they can learn the truth about Project 2501. We get a brief glimpse of the snipers, who wear full-immersion helmets with a large lens to the front of one side, connected by thick cables to ports in the roof of the helicopter. The snipers keep their hands on long-barreled rifles mounted to posts. Through these helmets they have full audio access to a command-and-control center that gives orders and receives confirmations.

The helmets feature fully immersive displays that can show abstract data, such as the profiles and portraits of their targets.

These helmets also provide the snipers an augmented-reality display that grants high-powered magnification views overlaid with complex reticles for targeting. The reticles feature a spiraling indicator of “gyroscopic stabilization” and a red dot that appears in the crosshairs when the target has been held there for a full second. The reticles do not provide any “layman” information in text, relying solely on simple shapes that a well-trained sniper can see rather than read. The whole system has the ability to suppress the cardiovascular interference of the snipers, though no details are given as to how.
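
The film shows no implementation, but the red-dot behavior as described is essentially a dwell timer. A small, speculative sketch of that logic:

```python
# Hypothetical sketch of the red-dot "lock" behavior described above:
# the dot appears only after the target has stayed in the crosshairs
# for a full second of continuous dwell.
DWELL_REQUIRED_S = 1.0  # per the reticle behavior seen in the film

class LockIndicator:
    def __init__(self):
        self.dwell = 0.0

    def update(self, on_target, dt):
        """Accumulate dwell time while on target; reset the moment it is lost."""
        self.dwell = self.dwell + dt if on_target else 0.0
        return self.dwell >= DWELL_REQUIRED_S  # True -> show the red dot

# Example: at 60 Hz reticle updates, the dot appears on the 60th consecutive
# on-target frame and disappears the instant the crosshairs slip off.
lock = LockIndicator()
for frame in range(90):
    show_dot = lock.update(on_target=True, dt=1 / 60)
print(show_dot)  # -> True
```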

These features seem provocative, and a pretty sweet setup for a sniper: heightened vision, suppression of interference, aiming guides, and signals indicating a key status. But then we see a camera on the bottom of the helicopter, mounted with actuators that allow it to move with a high (though not full) freedom of movement and precision. What is it there for? It wouldn’t make sense for the snipers to be using it to aim. Their eyes are in the direction of their weapons.

This could be used for general surveillance, of course, but the collection of technologies that we see here raises a question: If Section 6 has the technology to precisely control a camera, why doesn’t it apply that to the barrel of the weapon? And if it has the technology to know when the weapon is aimed at its target (showing a red dot), why does it let humans do the targeting?

Of course you want a human to make the choice to pull a trigger/activate a weapon, because we should not leave such a grave, ethical, and deadly decision to an algorithm. But the other activities of targeting could clearly be handled, and handled better, by technology.
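
If a designer wanted to draw that line explicitly, the human-in-the-loop gate is only a few lines of logic. A hypothetical sketch (the states and inputs here are invented for illustration, not from the film):

```python
# Hypothetical sketch: the machine does the targeting work, but the
# decision to fire always passes through an explicit human confirmation.

def fire_control_step(target_locked, human_trigger_pressed):
    """One tick of a fire-control loop with a human-in-the-loop gate."""
    if not target_locked:
        return "tracking"          # machine keeps aiming; nothing to decide yet
    if not human_trigger_pressed:
        return "awaiting human"    # machine never fires on its own
    return "fire"                  # only lock + deliberate human input fires

print(fire_control_step(target_locked=True, human_trigger_pressed=False))  # awaiting human
print(fire_control_step(target_locked=True, human_trigger_pressed=True))   # fire
```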

This again illustrates a problem that sci-fi has had with tech, one we saw in Section 6’s security details: How are heroes heroic if the machines can do the hard work? This interface retreats to simple augmentation rather than an agentive solution to bypass the conflict. Real-world designers will have to answer it more directly.

Helmet transition

The spherical helmet that Barbarella wears with her environmental suit can change from completely reflective to completely translucent. To do so, she reaches back with both hands and touches controls (never shown) on the back of the helmet to activate the transition. Over 40 seconds, the reflectivity withdraws into the base of the helmet, revealing Barbarella’s face through the glass.

Direct exposure to most of the electromagnetic spectrum is dangerous. To keep Barbarella from accidentally frying her own head in space, the suit must be designed against accidental activation. The strategy shown is called a two-hand trip, which requires two hands touching different controls at the same time to start the process. This is most often used in machines where you want hands out of the way of processes that could pinch or cut, but that aren’t dangerous after the process begins. In this case it’s less about mechanical danger than the risks of exposure.

Another strategy would be to use a two-hand control, which would require constant contact throughout the transition. But since this transition is so slow (and presuming there is some undo mechanism that we never see), having a two-hand trip is not disastrous. If something or someone accidentally tripped it, she has more than enough time to recover.
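
The difference between the two strategies is easy to see in code. A toy sketch, with invented left/right control inputs:

```python
# Toy sketch contrasting the two activation strategies discussed above.

def two_hand_trip(left_pressed, right_pressed, running):
    """Both controls must be touched at the same moment to START the
    transition; once started, it runs to completion on its own."""
    return running or (left_pressed and right_pressed)

def two_hand_control(left_pressed, right_pressed):
    """The transition advances ONLY while both controls stay held;
    release either hand and it stops immediately."""
    return left_pressed and right_pressed

# Barbarella's helmet behaves like two_hand_trip: she touches both controls
# once, then the 40-second transition continues even after she lets go.
state = two_hand_trip(True, True, running=False)    # trip it
state = two_hand_trip(False, False, running=state)  # hands off, still running
print(state)  # -> True
```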

On the other hand, 40 seconds is a long time for anyone to wait in the days of switchable glass. If your Barbarella were less dreamy-eyed and patient than this one, you might have to make a different tradeoff.

Neuro-Visor

The second interface David has for monitoring those in hypersleep is the Neuro-Visor, a helmet that lets him perceive their dreams. The helmet is round, solid, and white. The visor itself is yellow and back-lit. The yellow is the same greenish-yellow found underneath the hypersleep beds, which clearly establishes the connection between the devices for a new user. When we see David’s view from inside the visor, it is a cinematic, fully immersive 3D projection of events in the sleeper’s dreams, presented in the “spot elevations” style that predominates throughout the film (more on this display technique later).

Later in the movie we see David using this same helmet to communicate with Weyland, who is in a hypersleep chamber but is somehow conscious enough to have a back-and-forth dialogue with David. We don’t see either David’s or Weyland’s perspective in the scene.

David communicates with Weyland.

As an interface, the helmet seems straightforward. David has one Neuro-Visor for all the hypersleep chambers, and to pair the device to a particular one, he simply touches the surface of the chamber near the hypersleeper’s head. Cyan interface elements on that translucent surface confirm the touch and presumably allow some degree of control over the visuals. To turn the Neuro-Visor off, he simply removes it from his head. These are simple and intuitive gestures that make the Neuro-Visor one of the best and most elegantly designed interfaces in the movie.
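
To summarize that pairing model, here is a speculative sketch; the chamber identifiers and method names are invented for illustration, not taken from the film:

```python
# Hypothetical sketch of the Neuro-Visor's pairing model: touching a chamber
# pairs the visor to that sleeper; removing the helmet ends the session.

class NeuroVisor:
    def __init__(self):
        self.paired_chamber = None
        self.worn = False

    def don(self):
        self.worn = True

    def touch_chamber(self, chamber_id):
        """Touching the chamber surface near the sleeper's head selects it."""
        if self.worn:
            self.paired_chamber = chamber_id

    def doff(self):
        """Removing the helmet is the whole 'off' interaction."""
        self.worn = False
        self.paired_chamber = None

    @property
    def streaming(self):
        return self.worn and self.paired_chamber is not None

visor = NeuroVisor()
visor.don()
visor.touch_chamber("chamber_1")  # hypothetical chamber id
print(visor.streaming)            # -> True
visor.doff()
print(visor.streaming)            # -> False
```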