Deckard’s Photo Inspector

Back to Blade Runner. I mean, the pandemic is still pandemicking, but maybe this will be a nice distraction while you shelter in place. Because you’re smart, sheltering in place as much as you can, and not injecting disinfectants. And, like so many other technologies in this film, this will take a while to deconstruct, critique, and reimagine.

Description

Doing his detective work, Deckard retrieves a set of snapshots from Leon’s hotel room, and he brings them home. Something in the one pictured above catches his eye, and he wants to investigate it in greater detail. He takes the photograph and inserts it in a black device he keeps in his living room.

Note: I’ll try to describe this interaction in text, but it is much easier to conceptualize after viewing it. Owing to copyright restrictions, I cannot upload a clip of this length with the original audio, so I have added pre-rendered closed captions to it, below. All dialogue in the clip is Deckard’s.

Deckard does digital forensics, looking for a lead.

He inserts the snapshot into a horizontal slit and turns the machine on. A thin, horizontal orange line glows on the left side of the front panel. A series of seemingly random-length orange lines begin to chase one another in a single-row space that stretches across the remainder of the panel and continue to do so throughout Deckard’s use of it. (Imagine a news ticker, running backwards, where the “headlines” are glowing amber lines.) This seems useless and an absolutely pointless distraction for Deckard, putting high-contrast motion in his peripheral vision, which fights for attention with the actual, interesting content down below.

If this is distracting you from reading, YOU SEE MY POINT.

After a second, the screen reveals a blue grid, behind which the scan of the snapshot appears. He stares at the image in the grid for a moment, and speaks a set of instructions, “Enhance 224 to 176.”

In response, three data points appear overlaying the image at the bottom of the screen. Each has a two-letter label and a four-digit number, e.g. “ZM 0000 NS 0000 EW 0000.” The NS and EW—presumably North-South and East-West coordinates, respectively—immediately update to read, “ZM 0000 NS 0197 EW 0334.” After updating the numbers, the screen displays a crosshair, which targets a single rectangle in the grid.

A new rectangle then zooms in from the edges to match the targeted rectangle, as the ZM number—presumably zoom, or magnification—increases. When the animated rectangle reaches the targeted rectangle, its outline blinks yellow a few times. Then the contents of the rectangle are enlarged to fill the screen, in a series of steps punctuated with sounds similar to a mechanical camera aperture. The enlargement is perfectly resolved. The overlay disappears until the next set of spoken commands. The response time between Deckard’s issuing the command and the device’s showing the final enlarged image is about 11 seconds.

Deckard studies the new image for a while before issuing another command. This time he says, “Enhance.” The image enlarges in similar clacking steps until he tells it, “Stop.”

Other instructions he is heard to give include “move in, pull out, track right, center in, pull back, center, and pan right.” Some include discrete parameters, such as “Track 45 right,” while others are open-ended commands that the system obeys until told to stop, such as “Go right.”

Using such commands he isolates part of the image that reveals an important clue, and he speaks the instruction, “Give me a hard copy right there.” The machine prints the image, which Deckard uses to help find the replicant pictured.

This image helps lead him to Zhora.

I’d like to point out one bit of sophistication before the critique. Deckard can issue a command with or without a parameter, and the inspector knows what to do. For example, “Track 45 right” versus “Track right.” Without the parameter, it will just do the thing repeatedly until told to stop. That helps Deckard issue the same basic command both when he knows exactly where he wants to look and when he doesn’t know exactly what he’s looking for. That’s a nice feature of the language design.
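To make that concrete, here is a minimal sketch of how such a command grammar might be parsed, with an optional numeric parameter attached to any verb. The function name and token rules are hypothetical, not anything specified in the film.

```python
# Hypothetical sketch: the same spoken verb works with or without a parameter.
def parse_command(utterance):
    """Split a spoken command into a verb phrase and an optional number."""
    tokens = utterance.lower().split()
    verb_words, amount = [], None
    for token in tokens:
        if token.isdigit():
            amount = int(token)   # "45" in "track 45 right"
        else:
            verb_words.append(token)
    return " ".join(verb_words), amount

# ("track right", 45): a discrete, bounded move.
print(parse_command("Track 45 right"))
# ("track right", None): keep going until told to "stop".
print(parse_command("Track right"))
```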

But still, asking him to provide step-by-step instructions in this clunky way feels like some high-tech Big Trak. (I tried to find a reference that was as old as the film.) And that’s not all…

Some critiques, as it is

  • Can I go back and mention that amber distracto-light? Because it’s distracting. And pointless. I’m not mad. I’m just disappointed.
  • It sure would be nice if any of the numbers on screen made sense, and had any bearing on the numbers Deckard speaks, at any time during the interaction. For instance, the initial zoom (I checked in Photoshop) is around 304%, which is neither the 224 nor the 176 that Deckard speaks.
  • It might be that each square has a number, and he simply has to name the two squares at the extents of the zoom he wants, letting the machine find the extents, but where is the labeling? Did he have to memorize an address for each pixel? How does that work at arbitrary levels of zoom?
  • And if he’s memorized it, why show the overlay at all?
  • Why the seizure-inducing flashing in the transition sequences? Sure, I get that lots of technologies have unfortunate effects when constrained by mechanics, but this is digital.
  • Why is the printed picture so unlike the still image where he asks for a hard copy?
  • Gaze at the reflection in Ford’s hazel, hazel eyes, and it’s clear he’s playing Missile Command, rather than paying attention to this interface at all. (OK, that’s the filmmaker’s issue, not a part of the interface, but still, come on.)
The photo inspector: My interface is up HERE, Rick.

How might it be improved for 1982?

So if 1982 Ridley Scott was telling me in post that we couldn’t reshoot Harrison Ford, and we had to make it just work with what we had, here’s what I’d do…

Squash the grid so the cells match the 4:3 ratio of the NTSC screen. Overlay the address of each cell, while highlighting column and row identifiers at the edges. Have the first cell’s outline illuminate as he speaks it, and have the outline expand to encompass the second named cell. Then zoom, removing the cell labels during the transition. When at anything other than full view, display a map across four cells that shows the zoom visually in the context of the whole.

Rendered in glorious 4:3 NTSC dimensions.

With this interface, the structure of the existing conversation makes more sense. When Deckard said, “Enhance 203 to 608” the thing would zoom in on the mirror, and the small map would confirm.
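As a sketch of the arithmetic this implies: assuming cells are numbered row-major on a hypothetical 40×30 grid over NTSC-ish pixel dimensions (the film never shows the actual scheme), the two spoken numbers resolve to a bounding rectangle something like this.

```python
# A sketch of how "Enhance 203 to 608" might resolve to a zoom rectangle.
# Grid size, pixel dimensions, and cell numbering are all assumptions.

COLS, ROWS = 40, 30            # cells across and down the 4:3 screen
SCREEN_W, SCREEN_H = 640, 480  # NTSC-ish pixel dimensions

def cell_rect(cell):
    """Pixel rectangle (x, y, w, h) of a single numbered cell, row-major."""
    col, row = cell % COLS, cell // COLS
    cw, ch = SCREEN_W / COLS, SCREEN_H / ROWS
    return col * cw, row * ch, cw, ch

def enhance(cell_a, cell_b):
    """Bounding box spanning the two named cells: the region to zoom into."""
    x1, y1, w1, h1 = cell_rect(cell_a)
    x2, y2, w2, h2 = cell_rect(cell_b)
    left, top = min(x1, x2), min(y1, y2)
    right, bottom = max(x1 + w1, x2 + w2), max(y1 + h1, y2 + h2)
    return left, top, right - left, bottom - top

print(enhance(203, 608))
```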

The numbers wouldn’t match up, but it’s pretty obvious from the final cut that Scott didn’t care about that (or, more charitably, ran out of time). Anyway I would be doing this under protest, because I would argue this interaction needs to be fixed in the script.

How might it be improved for 2020?

What’s really nifty about this technology is that it’s not just a photograph. Look closely at the scene, and Deckard isn’t just doing CSI Enhance! commands (or, to be less mocking, AI upscaling). He’s using the photo inspector to look around corners and at objects that are reconstructed from the smallest reflections. So we can think of the interaction as though he’s controlling a drone through a 3D still life, looking for a lead to help him further the case.

With that in mind, let’s talk about the display.

Display

To redesign it, we have to decide at a foundational level how we think this works, because it will color what the display looks like. Is this all data that’s captured from some crazy 3D camera and available in the image? Or is it being inferred from details in the two-dimensional image? Let’s call the first the 3D capture, and the second the 3D inference.

If we decide this is a 3D capture, then all the data he observes through the machine has the same degree of confidence. If, however, we decide this is a 3D inferrer, Deckard needs to treat the inferred data with more skepticism than the data the camera directly captured. The 3D inferrer is the harder problem, and it raises some of the same issues we have to deal with in modern AI, so let’s just say that’s the way this speculative technology works.

The first thing the display should do is make it clear what is observed and what is inferred. How you do this is partly a matter of visual design and style, but partly a matter of diegetic logic. The first pass would be to render everything in the camera frustum photo-realistically, and then render everything outside of that in a way that signals its confidence level. The comp below illustrates one way this might be done.

Modification of a pair of images found on Evermotion
  • In the comp, Deckard has turned the “drone” away from the “actual photo,” seen off to the right, toward the inferred space on the left. The monochrome color treatment provides the first, top-level signal that this content is inferred rather than directly observed.
  • In the scene, the primary inference would come from reading the reflections in the disco ball overhead lamp, maybe augmented with plans for the apartment that could be found online, or maybe purchase receipts for appliances, etc. Everything it can reconstruct from the reflection and high-confidence sources has solid black lines, a second-level signal.
  • The smaller knickknacks that are out of the reflection of the disco ball, and implied from other, less reflective surfaces, are rendered without the black lines and blurred. This provides a signal that the algorithm has a very low confidence in its inference.

This is just one (not very visually interesting) way to handle it, but should illustrate that, to be believable, the photo inspector shouldn’t have a single rendering style outside the frustum. It would need something akin to these levels to help Deckard instantly recognize how much he should trust what he’s seeing.
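A minimal sketch of that tiering logic, with made-up thresholds and style names, might look like this.

```python
# Illustrative only: pick a visual treatment from how the geometry was
# obtained and how confident the inferrer is. Thresholds and style names
# are assumptions, not anything from the film.

from dataclasses import dataclass

@dataclass
class SceneElement:
    name: str
    observed: bool     # inside the camera frustum (directly captured)
    confidence: float  # 0.0-1.0 confidence of the inference, if not observed

def render_style(element):
    if element.observed:
        return "photorealistic"                 # directly captured: full trust
    if element.confidence >= 0.8:
        return "monochrome, solid outlines"     # reconstructed from reflections/plans
    return "monochrome, blurred, no outlines"   # weakly implied: treat with skepticism

for e in [SceneElement("bathroom mirror", True, 1.0),
          SceneElement("kitchen chair", False, 0.9),
          SceneElement("knickknack on shelf", False, 0.3)]:
    print(e.name, "->", render_style(e))
```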

Flat screen or volumetric projection?

Modern CGI loves big volumetric projections. (e.g. it was the central novum of last year’s Fritz winner, Spider-Man: Far From Home.) And it would be a wonderful juxtaposition to see Deckard in a holodeck-like recreation of Leon’s apartment, with all the visual treatments described above.

But…

Also seriously who wants a lamp embedded in a headrest?

…that would kind of spoil the mood of the scene. This isn’t just about Deckard’s finding a clue; we also see a little about who he is and what his life is like. We see the smoky apartment. We see the drab couch. We see the stack of old detective machines. We see the neon lights and annoying advertising lights swinging back and forth across his windows. Immersing him in a big volumetric projection would lose all this atmospheric stuff, so I’d recommend keeping it either a small, contained VP, like we saw in Minority Report, or just a small flat screen.


OK, so now we have an idea about how the display should (and shouldn’t) look. Let’s move on to talk about the inputs.

Inputs

To talk about inputs, then, we have to return to a favorite topic of mine, and that is the level of agency we want for the interaction. In short, we need to decide how much work the machine is doing. Is the machine just a manual tool that Deckard has to manipulate to get it to do anything? Or does it actively assist him? Or, lastly, can it even do the job while his attention is on something else—that is, can it act as an agent on his behalf? Sophisticated tools can be a blend of these modes, but for now, let’s look at them individually.

Manual Tool

This is how the photo inspector works in Blade Runner. It can do things, but Deckard has to tell it exactly what to do. But we can still improve it in this mode.

We could give him well-mapped physical controls, like a remote control for this conceptual drone. Flight controls wind up being a recurring topic on this blog (and even came up already in the Blade Runner reviews with the Spinners) so I could go on about how best to do that, but I think that a handheld controller would ruin the feel of this scene, like Deckard was sitting down to play a video game rather than do off-hours detective work.

Special edition made possible by our sponsor, Tom Nook.
(I hope we can pay this loan back.)

Similarly, we could talk about a gestural interface, using some of the synecdochic techniques we’ve seen before in Ghost in the Shell. But again, this would spoil the feel of the scene, having him look more like John Anderton in front of a tiny-TV version of Minority Report’s famous crime scrubber.

One of the things that gives this scene its emotional texture is that Deckard is drinking a glass of whiskey while doing his detective homework. It shows how low he feels. Throwing one back is clearly part of his evening routine, so much a habit that he does it despite being preoccupied about Leon’s case. How can we keep him on the couch, with his hand on the lead crystal whiskey glass, and still investigating the photo? Can he use it to investigate the photo?

Here I recommend a bit of ad-hoc tangible user interface. I first backworlded this for The Star Wars Holiday Special, but I think it could work here, too. Imagine that the photo inspector has a high-resolution camera on it, and the interface allows Deckard to declare any object that he wants as a control object. After the declaration, the camera tracks the object against a surface, using the changes to that object to control the virtual camera.

In the scene, Deckard can declare the whiskey glass as his control object, and the arm of his couch as the control surface. Of course the virtual space he’s in is bigger than the couch arm, but it could work like a mouse and a mousepad. He can just pick it up and set it back down again to extend motion.

This scheme takes into account all movement except vertical lift and drop. This could be a gesture or a spoken command (see below).
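Here is a rough sketch of that control mapping, assuming the tracker reports the glass’s position and rotation on the couch arm, plus whether it is touching the surface. The scales, names, and clutch behavior are illustrative assumptions.

```python
# A sketch of the ad-hoc tangible controller: deltas in the tracked object's
# pose drive the virtual "drone." Lifting the object off the surface acts as
# a clutch, like picking a mouse up off its pad.

PAN_SCALE = 4.0    # virtual metres per metre of couch-arm travel (assumed)
TURN_SCALE = 1.0   # virtual degrees per degree of glass rotation (assumed)

class VirtualCamera:
    def __init__(self):
        self.x = self.y = self.heading = 0.0

    def apply(self, dx, dy, dtheta):
        self.x += dx * PAN_SCALE
        self.y += dy * PAN_SCALE
        self.heading += dtheta * TURN_SCALE

def track(camera, samples):
    """samples: (x, y, rotation_deg, on_surface) tuples from the object tracker."""
    prev = None
    for x, y, rot, on_surface in samples:
        if not on_surface:       # clutch: object lifted, ignore motion
            prev = None
            continue
        if prev is not None:
            camera.apply(x - prev[0], y - prev[1], rot - prev[2])
        prev = (x, y, rot)

cam = VirtualCamera()
track(cam, [(0.0, 0.0, 0, True), (0.1, 0.0, 0, True),   # slide right
            (0.1, 0.0, 0, False),                        # lift = clutch
            (0.0, 0.0, 0, True), (0.1, 0.0, -30, True)]) # set down, slide + twist
print(round(cam.x, 2), round(cam.heading, 1))
```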

Going with this interaction model means Deckard can use the whiskey glass, allowing the scene to keep its texture and feel. He can still drink and get his detective on.

Tipping the virtual drone to the right.

Assistant Tool

Indirect manipulation is helpful for when Deckard doesn’t know what he’s looking for. He can look around, and get close to things to inspect them. But when he knows what he’s looking for, he shouldn’t have to go find it. He should be able to just ask for it, and have the photo inspector show it to him. This requires that we presume some AI. And even though Blade Runner clearly includes General AI, let’s presume that that kind of AI has to be housed in a human-like replicant, and can’t be squeezed into this device. Instead, let’s just extend the capabilities of Narrow AI.

Some of this will be navigational and specific, “Zoom to that mirror in the background,” for instance, or, “Reset the orientation.” Some will be more abstract and content-specific, e.g. “Head to the kitchen” or “Get close to that red thing.” If it had gaze detection, he could even indicate a location by looking at it. “Get close to that red thing there,” for example, while looking at the red thing. Given the 3D inferrer nature of this speculative device, he might also want to trace the provenance of an inference, as in, “How do we know this chair is here?” This implies natural language generation as well as understanding.
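A toy sketch of how such commands might be resolved, with gaze supplying the referent for deictic phrases like “that…there.” The command set and helper names are assumptions, not a spec.

```python
# Illustrative only: resolve a spoken command plus an optional gaze target.
def resolve_target(utterance, gaze_target):
    """Pick an object to act on; deictic words defer to wherever Deckard is looking."""
    if gaze_target and ("that" in utterance or "this" in utterance):
        return gaze_target
    if "mirror" in utterance:
        return "mirror"
    return None

def dispatch(utterance, gaze_target=None):
    utterance = utterance.lower()
    target = resolve_target(utterance, gaze_target)
    if utterance.startswith("how do we know"):
        return f"explain-provenance({target or 'scene'})"  # trace the inference
    if target:
        return f"navigate-to({target})"
    return "ask-for-clarification"

print(dispatch("Get close to that red thing there", gaze_target="red kettle"))
print(dispatch("How do we know this chair is here?", gaze_target="chair"))
```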

There’s nothing stopping him from using the same general commands heard in the movie, but I doubt anyone would want to use those when they have commands like these and the object-on-hand controller available.

Ideally Deckard would have some general search capabilities as well, to ask questions and test ideas. “Where were these things purchased?” or subsequently, “Is there video footage from the stores where he purchased them?” or even, “What does that look like to you?” (The correct answer would be, “Well that looks like the mirror from the Arnolfini portrait, Ridley…I mean…Rick*”) It can do pattern recognition and provide as much extra information as it has access to, just like Google Lens or IBM Watson image recognition does.

*Left: The convex mirror in Leon’s 21st-century apartment.
Right: The convex mirror in Arnolfini’s 15th-century apartment.

Finally, he should be able to ask after simple facts to see if the inspector knows or can find it. For example, “How many people are in the scene?”

All of this still requires that Deckard initiate the action, and we can augment it further with a little agentive thinking.

Agentive Tool

To think in terms of agents is to ask, “What can the system do for the user, but not requiring the user’s attention?” (I wrote a book about it if you want to know more.) Here, the AI should be working alongside Deckard. Not just building the inferences and cataloguing observations, but doing anomaly detection on the whole scene as it goes. Some of it is going to be pointless, like “Be aware the butter knife is from IKEA, while the rest of the flatware is Christofle Lagerfeld. Something’s not right, here.” But some of it Deckard will find useful. It would probably be up to Deckard to review summaries and decide which were worth further investigation.

It should also be able to help him with his goals. For example, the police had Zhora’s picture on file. (And her portrait even rotates in the dossier we see at the beginning, so it knows what she looks like in 3D for very sophisticated pattern matching.) The moment the agent—while it was reverse ray tracing the scene and reconstructing the inferred space—detects any faces, it should run the face through a most wanted list, and specifically Deckard’s case files. It shouldn’t wait for him to find it. That again poses some challenges to the script. How do we keep Deckard the hero when the tech can and should have found Zhora seconds after being shown the image? It’s a new challenge for writers, but it’s becoming increasingly important for believability.
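A sketch of that agentive loop follows, with a placeholder matcher and an arbitrary notification threshold (the 66% figure in the rewritten scene below is just as arbitrary).

```python
# Illustrative only: as the inferrer reconstructs the scene, any detected face
# is matched against the detective's case files in the background, and only
# surfaced above a notification threshold. The matcher is a placeholder.

NOTIFY_THRESHOLD = 0.66   # below this, log quietly; above it, interrupt Deckard

def match_score(face, dossier):
    """Placeholder for whatever face matching the inspector actually runs."""
    return 0.63 if dossier["name"] == "Zhora" else 0.05

def review_faces(detected_faces, case_files):
    findings, alerts = [], []
    for face in detected_faces:
        for dossier in case_files:
            score = match_score(face, dossier)
            findings.append((face, dossier["name"], score))
            if score >= NOTIFY_THRESHOLD:
                alerts.append(f"Possible match: {dossier['name']} ({score:.0%})")
    return findings, alerts

findings, alerts = review_faces(["face-in-mirror"],
                                [{"name": "Zhora"}, {"name": "Leon"}])
print(alerts or "No alerts; matches logged for review.")
```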

I’ve never figured out why she has a snake tattoo here (and it seems really important to the plot), but then when Deckard finally meets her, it has disappeared.

Scene

  • Interior. Deckard’s apartment. Night.
  • Deckard grabs a bottle of whiskey, a glass, and the photo from Leon’s apartment. He sits on his couch and places the photo on the coffee table.
  • Deckard
  • Photo inspector.
  • The machine on top of a cluttered end table comes to life.
  • Deckard
  • Let’s look at this.
  • He points to the photo. A thin line of light sweeps across the image. The scanned image appears on the screen, pulled in a bit from the edges. A label reads, “Extending scene,” and we see wireframe representations of the apartment outside the frame begin to take shape. A small list of anomalies begins to appear to the left. Deckard pours a few fingers of whiskey into the glass. He takes a drink before putting the glass on the arm of his couch. Small projected graphics appear on the arm facing the inspector.
  • Deckard
  • OK. Anyone hiding? Moving?
  • Photo inspector
  • No and no.
  • Deckard
  • Zoom to that arm and pin to the face.
  • He turns the glass on the couch arm counterclockwise, and the “drone” revolves around to show Leon’s face, with the shadowy parts rendered in blue.
  • Deckard
  • What’s the confidence?
  • Photo inspector
  • 95.
  • On the side of the screen the inspector overlays Leon’s police profile.
  • Deckard
  • Unpin.
  • Deckard lifts his glass to take a drink. He moves from the couch to the floor to stare more intently and places his drink on the coffee table.
  • Deckard
  • New surface.
  • He turns the glass clockwise. The camera turns and he sees into a bedroom.
  • Deckard
  • How do we have this much inference?
  • Photo inspector
  • The convex mirror in the hall…
  • Deckard
  • Wait. Is that a foot? You said no one was hiding.
  • Photo inspector
  • The individual is not hiding. They appear to be sleeping.
  • Deckard rolls his eyes.
  • Deckard
  • Zoom to the face and pin.
  • The view zooms to the face, but the camera is level with her chin, making it hard to make out the face. Deckard tips the glass forward and the camera rises up to focus on a blue, wireframed face.
  • Deckard
  • That look like Zhora to you?
  • The inspector overlays her police file.
  • Photo inspector
  • 63% of it does.
  • Deckard
  • Why didn’t you say so?
  • Photo inspector
  • My threshold is set to 66%.
  • Deckard
  • Give me a hard copy right there.
  • He raises his glass and finishes his drink.

This scene keeps the texture and tone of the original, and camps on the limitations of Narrow AI to let Deckard be the hero. And doesn’t have him programming a virtual Big Trak.

It’s the (drone) ethics

Today, a post touching on some of the ethical issues at hand with drone technology.


Much of the debate today, and in science fiction, about drone technologies rightly focuses on ethics. To start, it is valuable to remember that drones are merely a technology like any other. While the technology’s roots lie in military research and military applications (like, say, the internet’s), the examples in the prior posts demonstrate that the technology can be so much more. But of course it’s not that simple.

Hang on, it’s going to get nerdy. But that’s why you come to this blog, innit?

Where drones become particularly challenging to assess in an ethical context is in their blurring of the lines of agency and accountability. As such, we must consider the ethics of the relationship between user/creator and the technology/device itself. A gun owner doesn’t worry about the gun…itself…turning on him or her. However, in Oblivion, for instance, Tom Cruise’s character flies and repairs the Predator-esque drones but then has them turn on him when their sensors see him as a threat.


While obviously not an autonomous strong artificial intelligence, a real-world drone alternates between being operated manually and operating autonomously. Even a hobbyist with a commercial quadcopter can feel this awkward transition when they switch from flying their drone like an RC plane to preprogramming it to fly a pattern or take photos. This transition in agency has serious implications.

Anonymity

When you see someone standing in a park, looking at a handheld control or staring up into the sky at their quadcopter buzzing around, the actions of that device are attributed to them. But seeing a drone flying without a clear operator in sight gives any observer a bit of the creeps. Science fiction’s focus on military drones has meant that the depictions can bypass questions about who owns and operates them—it is always the state or the military. But as consumer drones become increasingly available, it will become unclear to whom or what we can attribute the actions of a drone. Just this year there was serious public concern when drones were spotted flying around historical landmarks in Paris because their control was entirely anonymous. Should drones have physical or digital identification to serve as a “license plate” of sorts that could link any actions of the drone to its owner? Most probably. But the bad guys will just yank them off.

The author (in blue) and Chris Noessel (in black), at InfoCamp.

Gap between agent and outcome

Many researchers have explored the difference between online behavior and in-person behavior. (There’s a huge body of research here. Contact me if you want more information.) People have a lot less trouble typing a vitriolic tweet when they don’t have to face the actual impact of that tweet on the receiver. Will drones have a similar effect? Unlike a car, where the driver is physically within the device and operating it by hand and foot, a drone might be operated from a distance of hundreds of feet (or even thousands of miles, for military drones). Will this physical distance, mediated by a handheld controller, a computer interface, or a smartphone application, change how the operator behaves? For instance, research has found that these operators, thousands of miles away from combat and working with a semi-autonomous technology, are in fact at risk for PTSD. They still feel connected to and in enough control of the drone that its effects have a significant impact on their mental health.

However, as drones become more ubiquitous, their applications become more diverse (and mundane), and their agency increases, will this connection remain as strong? If Amazon has tens of thousands of drones flying from house to house, do the technicians managing their flight paths feel the ethical implications of one crashing through a window the same as if they had accidentally knocked a baseball through it? If I own a drone and program it to pick up milk from the store, do I feel fully responsible for its behavior as part of that flight pattern? Or does the physical distance, the intangible interface, and the mediating technology (the software and hardware purchased from a drone company) disassociate the agent from the effect of the technology? Just imagine the court cases.

From Breaking Defense: As drone operations increase, the military is researching effective interfaces to support human operators

What are the ethical consequences of the different levels of agency an operator can provide to the drone? What are the moral consequences of increased anonymity in the use of these drone technologies? The U.S. military designs computer interfaces to help its drone pilots make effective decisions to achieve mission targets. Can science fiction propose designs, interfaces and experiences that help users of drone technologies achieve ethical missions?

Come on, sci-fi. Show the way.

People don’t know what to make of this technology. It’s currently the domain of the military, big technology companies, a few startups, and a hobbyist community generally ignored outside of beautiful, drone-filmed YouTube videos. Science fiction is a valuable (and enjoyable) tool for understanding technology and envisioning implications of new technologies. Science fiction should be pushing the limits of how drone technologies will change our world, not just exaggerating today’s worst applications.

As journalist and robotics researcher P.W. Singer puts it in his TED talk, “As exciting as a drone for search-and-rescue is, or environmental monitoring or filming your kids playing soccer, that’s still doing surveillance. It’s what comes next, using it in ways that no one’s yet imagined, that’s where the boom will be.”

A definitive list of sci-fi drones (in progress)

In chats with friends and followers of the blog about sci-fi Drone Week, folks seem interested in coming up with a definitive list of sci-fi drones in movies and TV shows. While we might get there eventually by reviewing the movies and TV shows in which they appear, let’s beat that to the punch by creating the list FROM OUR MINDS.

Criteria:

  • In a sci-fi movie or television show
  • Is not simply a representation of a real-world drone
  • Is mobile (in the air, on the ground, or in space)
  • Appears (or is defined as) technological, not biological
  • Does not have a sentient controller/pilot aboard
  • Does not possess strong/general artificial intelligence
  • Is either autonomous or remotely controlled

Here’s what I’ve collected so far. Add more in the comments if you think of them.

Aerial (UAVs)

  • Star Wars: Episode IV – A New Hope (1977)
    • The lightsaber training orb
    • The mouse-bot that Chewbacca scares
  • Flash Gordon (1980) the bot in Ming’s chamber
  • Viper Probe Droid seen on Hoth in Star Wars: Episode V – The Empire Strikes Back (1980)
  • Terminator diegesis (1984–)
    • Moto-Terminators (Salvation)
    • Aerostats (Salvation)
    • Hydrobots (Salvation)
  • Star Trek: The Next Generation (1987)
  • They Live (1988) but only when you wore the glasses
  • Back to the Future II (1989) had USA Today drones (unclear if they’re AI, but benefit of the doubt?)
  • Babylon 5’s (1994) camera drones
  • Stargate diegesis (1997–)
    • (early versions of the) Replicators
    • Kinos
    • World-testing UAVs
    • S4E2 The Other Side was all about drone warfare
  • Star Wars: Episode I – The Phantom Menace (1999) had “holocameras” following the podraces
  • Farscape (1999) has adorable little DRDs
  • Dark Angel (2000) had Police Hover Drones that the titular character got to surf.
  • The Incredibles (2004) Syndrome controls a few drones to do his bidding
  • Stealth (2005) (prior to the lightning strike that gives it strong general intelligence)
  • Sleep Dealer (2008)
  • Wall•E (2008) SO many, though the level of their AI might disqualify some
  • Skyline (2010) (has both alien drones and real world human drones)
  • The topography “pups” of Prometheus (2012)
  • Robocop (2014) (has both aerial and ground)
  • Agents of S.H.I.E.L.D. (2013–) (Seriously, this show has a thing for them)
  • Star Trek: Insurrection’s (1998) transporter/transponder drones
  • Battleship (2012) battle bots
  • Hunger Games (2012) delivery droids
  • Elysium (2013)
  • Drone (2013)
  • Oblivion’s numbered, Tet-tech drones (2013)
  • Captain America: The Winter Soldier’s city-sized drones (2014)
  • Chappie (2015) (aerial-capable, but mostly ground)

Ground

  • The floor-sweeping robots from The Fifth Element (1995)
  • The Robot from the Lost in Space movie (1998), which Will could remotely control
  • The Spyders of Minority Report (2002)
  • The remotely-controlled robots in Surrogates (2009)
  • Microbots from Big Hero 6 (2014) (a swarm of drones)

Thanks to the following suggestors for the initial list: Kelley Strang, John Danuil, Devin Hartnett, Derek Eclavea, Lane Bourn, Wally Pfingsten & kedamono.

Almost but not quite

Some suggestions seem like they would be perfect candidates, but for some reason skirt the definition.

  • Bit from Tron (1982) may have been limited to yes or no answers, but was an AI
  • Machine gun drones from the deleted scenes of Aliens (1986) can only swivel, not move
  • Dreadnought from Star Trek: Voyager (1995; VOY, “Dreadnought”), which is an AI
  • Jarvis from the cinematic Iron Man/Avengers diegesis, also strong AI

Keep them coming in the comments. What did we forget?

Future uses of drones

Chris: Oh my drone it’s DRONE WEEK! Wait…what’s drone week?

Recently I was invited to the InfoCamp unConference at Berkeley where among the awesome and inspiring presentations, I sat in on Peter Swigert’s workshop on drones. Since the blog was deep in Oblivion, Pete and I agreed to coauthor a series of posts on this phenomenon, and also to set the record a little more straight for sci-fi fans and authors on the real-world state of drones.

Today, a post on some cool and totally not evil speculative uses of drones.


While drones are being used for positive purposes already, there will undoubtedly be myriad new applications as the technology develops and more people engage with its possibilities and implications. A few options include:

Voting drones

While purely online voting seems a long way off in the United States, drones could provide a physical link to voters while removing the logistical challenge of getting time off to travel to a polling station (a barrier that suppresses voter turnout and wastes time and resources). Drones could go from house to house, authenticate voters by taking a photo of their face, their ID, and even their thumbprint, and a citizen could place their vote directly into the drone.

…much to the suppressors’ night terrors.

Weather management drones

With climate change likely to cause increasingly challenging weather, drones could be used as safe, effective tools for weather management. For instance, drones could seed clouds to promote rain. Drones equipped with weather sensors, searching for exactly the right place to release their payload, could be more accurate and effective than current rocket-based solutions. A cloud of drones with small parabolic lenses could cool an area by reflecting light away or warm one by concentrating it.

Biological replacement drones

Could drones replace certain species in the ecosystem? The collapse of bee communities in many parts of the world has been a major threat to agriculture and, if it needs to be said, most of human life on the planet. Should the worst happen to our bee friends in the future, could micro-drones serve the same pollination function as bees? In places where keystone species have gone extinct or can’t be maintained, could a drone be developed to automatically serve the same function? For instance, elephants knock down trees and create clearings in certain patterns, facilitating the transition from jungle to grassland. A drone could fly continuously, looking for patterns in the landscape or specific trees that an elephant would normally knock down, and either mark the trees for human removal or be constructed to damage the tree itself. Could these patterns and behaviors be used in terraforming new planets as well?

Little Johnny, it’s time you knew about the birds and the drones.

Avalanche prevention drones

Drones could scout avalanche prone areas and use computer vision and snow sampling to identify possible avalanches, and bring their own explosive payload to detonate preventative avalanches. (A practice done—dangerously—by humans today.) Backcountry skiers could rent time on a resort or park management’s drone to get a first person view of the terrain in real time before hitting the slopes.

Amber alert drones

Drones could be trained in facial recognition or even smell tracking (wouldn’t a bloodhound be more effective if it could fly and smell from the air?) to search for missing children. They could also serve as a notification system as they search for the child, broadcasting real time information to both investigators and citizens in the area.

Musical performance drones

Drones could provide on demand musical performances. No bandshell in a park? Just fly in some drones with speakers; the drones could align themselves appropriately to get the best acoustics for the setting.

“Well, mayhaps something a little less…dubsteppy?”

While some ideas may seem absurd (and Chris’ comps up there turn that up to 11), science fiction can provide an interesting testing ground for what these systems might look like if implemented. What are the implications of these ideas? What impact would they have on society? On our last Drone Week post tomorrow we’ll discuss some of the ethical considerations underneath all of this.

Actual drones for not-evil

Chris: Day 2 of mighty mighty DRONE WEEK! Wait…what’s drone week?

Recently I was invited to the InfoCamp unConference at Berkeley where among the awesome and inspiring presentations, I sat in on Peter Swigert’s workshop on drones. Since the blog was deep in Oblivion, Pete and I agreed to coauthor a series of posts on this phenomenon, and also to set the record a little more straight for sci-fi fans and authors on the real-world state of drones.

Today Drone Week continues with the not-so-scary world of actual drones.


Delivery drones

While capitalism is a neutral force at best, the speculative drones that Amazon and Google are working on stand to make delivery faster and more direct. While these projects are undoubtedly driven by possible profits, Google suggests that rapid delivery by drone will also have significant social benefits. Astro Teller, director at Google X, suggests that on-demand drop-off and pick-up of goods will let us own less. “It would help move us from an ownership society to an access society. We would have more of a community feel to the things in our lives. And what if we could do that and lower the noise pollution and lower the carbon footprint, while we improve the safety of having these things come to you?”

Parcel delivery by drone is reminiscent of early proposals for mail delivery by parachute, seen here in a 1921 edition of Popular Mechanics.

Agricultural drones

Drones are already being used to help farmers monitor their crops. Companies like PrecisionHawk or senseFly offer aerial imagery capture and analysis of crop growth and health. Drones can cover much larger areas than on-the-ground monitoring and require minimal upfront costs or investments. With a rising population to feed and climate change and soil degradation to combat, drones can be a valuable tool in increasing agricultural yields.

An example of image processing from aerial imagery taken by drone from HoneyComb, one of many companies offering drone services for agriculture

Medical drones

An emergency drone that carries a defibrillator has been developed and is currently in testing. The drone could be dispatched by emergency services and arrive at the site of a cardiac arrest faster than any ambulance, and “includes a webcam and loudspeaker and allows remote doctors to walk people on the scene through the process of attaching the electrodes and preparing the defibrillator.”

Similarly, Doctors Without Borders is experimenting with drones to rapidly transport patient samples to fight tuberculosis epidemics in parts of Papua New Guinea where road transport is too slow.

Image from FastCo.Exist.

Archaeological drones

Drones are also being used for a variety of archaeological projects. They are a cheap method of capturing images to build 3D models of ruins. “In remote northwestern New Mexico, archaeologists are using drones outfitted with thermal-imaging cameras to track the walls and passages of a 1,000-year-old Chaco Canyon settlement, now buried beneath the dirt. In the Middle East, researchers have employed them to guard against looting.” And in the Yucatan peninsula, drones provided a cost effective solution to flying over dense, remote jungles, and identified previously undiscovered Mayan ruins.

Peruvian archaeologists command a drone to search for architectural ruins. (New York Times)

Of course these are all cool and useful models of non-military uses of drones. But it can get cooler. In the next post, we’ll look at some speculative future uses of drones. Hollywood, get out your pens, or whatever it is you write with these days.

Military Drones

Chris: Oh my drone it’s DRONE WEEK! Wait…what’s drone week?

Recently I was invited to the InfoCamp unConference at Berkeley where among the awesome and inspiring presentations, I sat in on Peter Swigert’s workshop on drones. Since the blog was deep in Oblivion, Pete and I agreed to coauthor a series of posts on this phenomenon, and also to set the record a little more straight for sci-fi fans and authors on the real-world state of drones.

Today, a first post on the scary, scary world of sci-fi drones.


Unmanned (either remotely piloted or automated) aerial vehicles, or drones, have become increasingly common in science fiction, likely a reflection of their increasing role in today’s society. While the future of drone technologies and their role in society are yet to be determined, science fiction has been conservative in its speculation. Most depictions of drones tend to suppose an expansion of the current military usage of drone technologies.

Sci-fi: Drones are scary, m’kay

The 2014 remake of Paul Verhoeven’s Robocop has drones that are clearly extensions of modern military drones: wicked Stealth-shaped things with perfect maneuverability for gunning down citizens.

Being welcomed as liberators.

Oblivion takes a similar approach, as drones are fully autonomous, big, scary technospheres used primarily for surveillance, monitoring, and firepower.

Jack facing a drone in bondage.

In Captain America: The Winter Soldier, the drones are gargantuan floating things, each capable of monitoring thousands of square miles at a time, but the concept is the same: drones are a military technology used for surveillance and violence.

The Falcon, for scale.

More recently, Chappie features a very scary military drone who ends up being driven by a psychotic operator against the eponymous, peaceful robot. [Nobody has responded to emails for a screener, so no lovely screen caps for us. -Ed.]

That all you got?

These representations of drones (and many others in the genre) are failures of the creativity of science fiction. While they embody well-founded concerns about drones, they’re monotone, and don’t match the creativity of drone makers right here in the real world.

Hey, I get it. It’s hard to know what drones will mean to future people. Drones combine a whole set of complex technologies that people didn’t know what to do with when they first came out as individual technologies: planes, cameras, GPS, and computers. Sometimes even the military applications aren’t clear. For instance, when the Wright Brothers discussed patenting their approach to the plane in Great Britain and helping the government develop military uses, they were rebuffed, as “Their Lordships are of the opinion that they [airplanes] would not be of any practical use to the Naval Service.”

Additionally, there is an understandable psychological horror at the flying thing that either houses an inhuman machine intelligence or, arguably worse, houses some distant human’s eyes and ears without their stake in the locale or its consequences. Blasted earth, collateral damage, and horrible mistakes don’t seem to mean as much when the perpetrator can just turn off the monitor and not think about it. So…yes. That military part is pretty scary.

But science fiction films seem particularly confused about how to represent this technology and limited by this military thinking. This was true even before modern military drones were in use; as XKCD notes, The Terminator would have been a much shorter film if it had been developed after Predator drones were around.

From XKCD.com: Our modern military can more effectively abstract the purposes of a drone soldier and design it like a plane, whereas old sci-fi depictions like The Terminator envisioned robotic humans.

But even as science fiction has tackled modern drones, it still builds on military models rather than considering civilian contexts. For instance, the “nanobots” of the 2008 remake of The Day the Earth Stood Still or 2009’s G.I. Joe: The Rise of Cobra suggest what might happen as drone technologies become miniaturized. OK, yes, the military is currently working on this. After all, why send a multimillion-dollar Predator that can be shot down when thousands of small drones could more effectively infiltrate enemy territory and conduct operations? That said, this technology doesn’t have to be used in a military context. A micro-sized drone that could identify and poison an enemy’s water supply might be adapted to unclog an artery or kill a cancer cell just as easily.

From Gizmag: British soldiers have tested these Black Hornet Nano UAVs [Image: © Crown copyright]

But even as sci-fi catches up to modern military designs, that’s still only one set of uses to which drones have been and might be put. In the next post we’ll take a pass at painting the rest of the non-military picture.

Drone Programmer


One notable hybrid interface device, with both physical and digital aspects, is the Drone Programmer. It is used to encode key tasks or functions into the drone. Note that it is seen only briefly—so we’re going off very little information. It facilitates a crucial low-level reprogramming of Drone 172.

This device is a handheld item, grasped on the left, approximately 3 times as wide as it is tall. Several physical buttons are present, but are unused in the film: aside from grasping, all interaction is done through use of a small touchscreen with enough sensitivity to capture fingertip taps on very small elements.

Jack uses the Programmer while the drone is disabled. When he pulls the cord out of the drone, the drone restarts and immediately begins trying to move and understand its surroundings.


When Drone 172 is released from the Programmer cable, it is in a docile and inert state…


The Drone


Each drone is a semi-autonomous flying robot armed with large cannons, heavy armor, and a wide array of sensor systems. When in flight mode, the weapon arms retract. The arms extend when the drone senses a threat.


Each drone is identical in make and temperament, distinguishable only by large white numbers on its “face”. The armored shell is about a meter in diameter (just smaller than Jack). Internal power is supplied by a small battery-like device that contains enough energy to start a nuclear explosion inside a skyscraper-sized hydrogen distiller. It is not obvious whether the weapons are energy- or projectile-based.

The HUD

The Drone Interface is a HUD that shows the drone’s vision and secondary information about its decision making process. The HUD appears on all video from the Drone’s primary camera. Labels appear in legible human English.

Video feeds from the drone can be in one of several modes that vary according to what kind of searching the drone is doing. We never see the drone use more than one mode at once. These modes include visual spectrum, thermal imaging, and a special ‘tracking’ mode used to follow Jack’s bio signature.

Occasionally, we also see the Drone’s primary objective on the HUD. These include an overlay on the main view that says “TERMINATE” or “CLEAR”.
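For what it’s worth, the HUD behavior described above fits a very small state model. The sketch below is a reconstruction from what’s visible on screen; the class and enum names are my own, not anything canonical.

```python
# A minimal model of the HUD states described above: one sensing mode at a
# time, plus an objective overlay. Member values reflect what's seen on
# screen; the structure itself is an illustrative assumption.

from enum import Enum

class SensorMode(Enum):
    VISUAL = "visual spectrum"
    THERMAL = "thermal imaging"
    TRACKING = "bio-signature tracking"

class Objective(Enum):
    CLEAR = "CLEAR"
    TERMINATE = "TERMINATE"

class DroneHUD:
    def __init__(self):
        self.mode = SensorMode.VISUAL      # only one mode shown at once
        self.objective = Objective.CLEAR

    def overlay(self):
        return f"[{self.mode.value}] {self.objective.value}"

hud = DroneHUD()
hud.mode, hud.objective = SensorMode.TRACKING, Objective.TERMINATE
print(hud.overlay())
```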


A Deadly Pattern

The Drones’ primary task is to patrol the surface for threats, then eliminate those threats. The drones are always on guard, responding swiftly and violently against anything they perceive as a threat.


During his day-to-day maintenance, Jack often encounters active drones. Initially, the drones always regard him as a threat, and offer him a brief window of time to speak his name and tech number (for example, “Jack, Tech 49”) to authenticate. The drone then compares this speech against some database, shown on its HUD as a zoomed-in image of Jack’s mouth and a vocal frequency.


Occasionally, we see that Jack’s identification doesn’t immediately work. In those cases, he’s given a second chance by the drone to confirm his identity.
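That challenge-and-retry loop is simple enough to sketch. The voiceprint check here is a placeholder for the mouth-image and vocal-frequency comparison, and the names and attempt limit are assumptions drawn from what the film shows.

```python
# Illustrative sketch of the drone's challenge/authenticate behavior.
KNOWN_TECHS = {("jack", "tech 49")}
MAX_ATTEMPTS = 2   # the film shows one second chance

def voiceprint_matches(utterance):
    """Placeholder for the mouth-image / vocal-frequency comparison."""
    parts = utterance.lower().split(",")
    if len(parts) != 2:
        return False
    return (parts[0].strip(), parts[1].strip()) in KNOWN_TECHS

def challenge(responses):
    for attempt, utterance in enumerate(responses[:MAX_ATTEMPTS], start=1):
        if voiceprint_matches(utterance):
            return f"CLEAR (attempt {attempt})"
    return "TERMINATE"

print(challenge(["Jack, Tech 49"]))              # cleared immediately
print(challenge(["(static)", "Jack, Tech 49"]))  # second chance works
print(challenge(["(static)", "(garbled)"]))      # out of chances
```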

Topography “Pups”

The “pups,” as low-grade sociopath and geologist Fifield calls them, are a set of spheres that float around and spatially map the surface contours of a given space in real-time.


To activate them, Fifield twists their hemispheres 90 degrees along their equator, and they begin to glow along two red rings.

When held up for a few seconds, they rise to the vertical center of the space they are in, and begin to fly in different directions, shining lasers in a coronal ring as they go.


In this way they scan the space and report what they detect of the internal topography back to the ship, where it is reconstructed in 3D in real time. The resulting volumetric map features not just the topography, but icons (yellow rotating diamonds with last initials above them) to represent the locations of individual scientists and of course the pups themselves.


The pups continue forward along the axis of a space until they find a door, at which they will wait until they are let inside. How they recognize doors in alien architecture is a mystery. But they must, or the first simple dead end or burrow would render them inert.

The pups are simple, and for that they’re pretty cool. Activation by twist-and-lift is easy through the constraints of the environment suits, easy to remember, and quick to execute, but deliberate enough not to be performed accidentally. Unfortunately we never see how they are retrieved, but that raises some interesting interaction design challenges.