Think of the most complicated thing you’ve written. Maybe it was a report for your employer, or an essay in college. It could even be a computer program. Whatever it was, think of all the stuff you packed into it. Now pause for a moment to imagine creating all that without a word processor, or pen and paper, or really anything at all to externalize thought beyond your own head. It seems impossible. What we get with this technology, ancient as it is, is an amplification of our brain power. Besides their gorgeous techy looks, do interactive holographics like those shown in Iron Man 2, reminiscent of the interfaces in Minority Report, offer up some of the same brain amping?
While I was still a doctoral student, I had the opportunity to work with a relative of interactive holographics: the 3D virtual reality data CAVE. This particular one, at the National Center for Supercomputing Applications (NCSA) in Urbana, Illinois (the birthplace of HAL), circa 1999, was a cube with back projection on five of its six walls. You wore a headset that tracked your head position and orientation, along with goggles whose LCD shutters blocked your right eye while the projectors rendered images for your left eye, and vice versa. As you walked through the space or moved your head, what you saw in the virtual space changed just as you would expect it to.
The problem that had pushed me to use this system was analyzing 3D motion data of a fish I was doing research on. I’d developed a motion capture system for the fish, which gave fantastic 3D data of the fish moving while it attacked its prey, but looking at this 3D data on 2D computer monitors turned out to be quite difficult. Even replaying the motion from several different views didn’t quite do the trick. So Stuart Levy at NCSA put my data set into a system called “Virtual Director,” and I was able to play back the data in the CAVE. It was something of an unbelievable experience the first time I tried it: suddenly I could walk around the animal as it engaged in its behavior, manipulate it to get any view, and rotate the wand I held to wind the behavior forward or back at different speeds. Visitors particularly enjoyed my “Book of Jonah” demo, where I positioned them so that they ended up going into the mouth of the fish during a capture sequence.
For my technical problem, the VR CAVE was appropriate technology: 3D display and interaction for an inherently 3D data set. It helped me see patterns in the data that I had not clearly seen before, which were incorporated into some of my subsequent publications that analyzed the movement data. It was worth the effort, and the physicality of it was fine since I didn’t need to spend multiple days working through the data.
Other uses of these kinds of “direct manipulation” interfaces, which mix 3D data and real-world interaction, have not found such a receptive audience; people complain that it is tiring to make sweeping (if dramatic) gestures to flip through photos that could just as easily be navigated with an arrow key. As someone who still uses “vi” to edit text, I can relate to criticisms of interfaces that offer more than is needed.
The important question, for any given interface, is whether it simplifies difficult problems of control or analysis, or gets in the way. My former colleague Don Norman at Northwestern University has contributed a great deal to our understanding of this question, in books like The Design of Everyday Things. One of my favorite examples from that book considers two different interfaces for adjusting the position of a car seat. In one interface, on a luxury American car, there is a panel of knobs and buttons almost hidden below the left side of the dashboard. To go from a state of discomfort to a new seat position, you must translate your discomfort into a series of knob pulls and twists on a console of many controls with tiny labels below each. In contrast, a German luxury car had a miniature version of the driver’s seat mounted in the dashboard. To lower the back of your seat, you tilted the back of the miniature seat down; to move your seat forward, you slid the miniature seat in the direction it was facing, and so on. One interface placed a large cognitive load on the user to solve the discomfort problem, while the other placed minimal demands.
Another favorite example is the “speed bug,” a tab that a pilot places on the rim of an airspeed indicator to mark the velocities at which critical changes to the shape of the wing must be made. Were it not for those bugs, the pilot would have to remember those velocities, which isn’t easy, because they change with things like the weight of the plane.
The virtual fish, miniature car seat adjuster, and speed bug are all examples of interfaces that make problems easier, and in this sense, amplify our brain power. Interactive holographic interfaces can do the same for problems where space is a convenient or needed basis for navigating the information. This isn’t always apparent in sci-fi depictions of these interfaces, but their use speaks to our hope that such 3D holographic wizardry will help us cope with the flood of data we contend with on a daily basis.