In Starship Troopers, after Ibanez explains that the new course she plotted for the Rodger Young (without oversight, explicit approval, or notification to superiors) is “more efficient this way,” Barcalow walks to the navigator’s chair, presses a few buttons, and the computer responds with a blinking-red Big Text Label reading “COURSE OPTIMAL” and a spinning graphic of two intersecting grids.
Yep, that’s enough for a screed, one addressed first to sci-fi writers.
A plea to sci-fi screenwriters: Change your mental model
Think about this for a minute. In the Starship Troopers universe, Barcalow can press a button to ask the computer to run some function to determine whether a course is good (I’ll discuss “good” vs. “optimal” below). But if it can do that, why would it wait for the navigator to ask it about each and every possible course? Computers are built for exactly this kind of repetition. It should not wait to be asked. It should just do it. This interaction raises the difference between two mental models of interacting with a computer: the Stoic Guru and the Active Academy.
![A-writer](https://i0.wp.com/scifiinterfaces.com/wp-content/uploads/2014/06/a-writer.png?resize=545%2C422)
Stoic Guru vs. Active Academy
This movie was written when computation cycles may have seemed to be a scarce resource. (Around 1997, only IBM could afford the computer-and-program combination it took to outthink Kasparov.) But even if computation cycles were scarce, navigating the ship safely would be the second most important non-combat function the computer could possibly perform, losing out only to keeping its inhabitants safe. So I can’t see an excuse for the stoic-guru-on-the-hill model of interaction here. In this model, the guru speaks great truth, but only when asked a direct question. Otherwise it sits silently, contemplating whatever it is gurus contemplate, stoically. Computers might have started that way in the early part of the last century, but there’s no reason they should work that way today, much less by the time we’re battling space bugs between galaxies.
A better model for thinking about interaction with these kinds of problems is as an active academy, where a group of learned professors is continually working on difficult questions. Given a new problem, like “which of the infinite number of possible courses from point A to point B is optimal?”, they would first discuss it among themselves and provide an educated guess with caveats, then continue to work on the problem, contacting the querent whenever they found a better answer or when new information arrived that changed the answer. (As a metaphor for agentive technologies, the active academy has some conceptual problems, but it’s good enough for purposes of this article.)
![guruacademy](https://i0.wp.com/scifiinterfaces.com/wp-content/uploads/2014/06/guruacademy.png?resize=545%2C238)
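To make the active-academy pattern concrete, here’s a minimal sketch in Python. Everything in it is hypothetical (the course names, the random scoring, the notification callback are stand-ins, not real trajectory math); the point is the shape of the interaction: answer immediately with the best known result, keep searching in the background, and volunteer any improvement without being asked.

```python
import random
import threading
import time

class ActiveAcademyNavigator:
    """An 'active academy': answers immediately with its best-known course,
    keeps searching in the background, and volunteers improvements unprompted.
    Course names and scores here are illustrative stand-ins."""

    def __init__(self, notify):
        self.notify = notify              # called whenever a better course turns up
        self.best_course = None
        self.best_score = float("-inf")
        self._lock = threading.Lock()

    def current_best(self):
        """The 'educated guess with caveats': the best answer known right now."""
        with self._lock:
            return self.best_course, self.best_score

    def consider(self, course, score):
        """Evaluate any course, whether generated internally or supplied by a navigator."""
        with self._lock:
            if score > self.best_score:
                self.best_course, self.best_score = course, score
                self.notify(course, score)

    def keep_working(self):
        """The academy never stops working the problem."""
        n = 0
        while True:
            n += 1
            self.consider(f"course-{n}", random.random())  # stand-in for real scoring
            time.sleep(0.1)

def announce(course, score):
    print(f"Better course found: {course} (efficiency {score:.3f})")

nav = ActiveAcademyNavigator(notify=announce)
threading.Thread(target=nav.keep_working, daemon=True).start()
time.sleep(1)                             # let the academy work for a moment
print("Best so far:", nav.current_best())
```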
Consider this model as you write scenes. Nowadays computation is rarely a scarce resource in your audience’s lives. Most processors sit idle, bored, not living up to their full potential. Pretending computation is scarce breaks believability. If eBay can keep looking continuously on my behalf for a great deal on a Ted Baker shirt, the ship’s computer can keep looking for optimal courses on the mission’s behalf.
In this particular scene, the stoic guru has for some reason neglected up to this point to provide a crucial piece of information: the optimal path. Why was it holding that information back if it knew it? And how does it know it now? “Well,” I imagine Barcalow saying as he slaps the side of the monitor, “why didn’t you tell me that the first time I asked you to navigate?” I suspect that, had the scene been written with the active academy in mind, it would not have ended up in the stupid COURSE OPTIMAL zone.
Optimal vs. more optimal than
Part of the believability problem in this particular case may come from the word “optimal,” since that word implies the best of all possible choices.
But if it’s a stoic guru, it wouldn’t know from optimal. It would just know what you’d asked of it or provided to it in the past. It would only know relative optimality among the set of courses it had access to. If the system worked that way, the screen text should read something like “34% more optimal than previous course” or “Most optimal of supplied courses.” Either text could sit atop some fuigetry below the Big Text Label that conveys a comparison of the parameters in question. But of course either text conveys how embarrassingly limited this would be for a computer. It shouldn’t wait for supplied courses.
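Just to show how little the honest version asks of the computer, here’s the arithmetic behind that label as a tiny Python sketch (the function name and the efficiency scores are mine, invented for illustration, not anything from the film):

```python
def stoic_label(previous_score, new_score):
    """Honest label text for a stoic guru: it can only compare courses
    it has actually been handed. Scores are hypothetical efficiency
    numbers; higher is better."""
    if new_score > previous_score:
        pct = (new_score - previous_score) / previous_score * 100
        return f"{pct:.0f}% more optimal than previous course"
    return "Previous course remains most optimal of supplied courses"

# Ibanez's course scoring 0.67 against a standard course's 0.50:
print(stoic_label(0.50, 0.67))  # -> "34% more optimal than previous course"
```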
If it’s an active academy model, this scene would work differently. It would either have shown him the optimal course long ago, or have shown him that it’s still working on the problem and that Ibanez’s is the “most optimal found.” Neither is entirely satisfying for purposes of the story.
![Hang-on-idea](https://i0.wp.com/scifiinterfaces.com/wp-content/uploads/2014/06/hang-on-idea.png?resize=545%2C346)
How could this scene have gone?
We need a quick beat here to show that, in fact, Ibanez is not just some cocky upstart. She really knows what’s up. An appeal to authority is a quick way to do it, but then you have to provide some reason the authority (in this case the computer) hasn’t provided that answer already.
A bigger problem than Starship Troopers
This is a perennial problem for sci-fi, and one that’s becoming more pressing as technology gets more and more powerful. Heroes need to be heroic. But how can they be heroic if computers can do, and are doing, the heroic things for them? What’s the hero doing? Being a heroic babysitter to a vastly powerful force? This tension will ultimately culminate once we get to the questions raised in Her about actual artificial intelligence.
Fortunately the navigator is not a full-blown artificial intelligence. It’s something less than A.I.: an agentive interface. And that gives us our answer. Agentive algorithms can only process what they know, and Ibanez could have been working with an algorithm the computer didn’t know about. She’s just wrapped up school, so maybe it’s something she developed or co-developed there:
- Barcalow turns to the nav computer and sees a label: “Custom Course: 34% more efficient than models.”
- BARCALOW
- Um…OK…How did you find a better course than the computer could?
- IBANEZ
- My grad project nailed the formula for gravity assist through trinary star systems. It hasn’t been published yet.
BAM. She sounds like a badass and the computer doesn’t sound like a character in a cheap sitcom.
So, writers, hopefully that model will help you avoid the mistake of penning your computers as stoic gurus. Next up, we’ll discuss this same short scene with more of a focus on interaction designers.