Amplifying Our Brain Power Through Better Interactive Holographics

By Malcolm MacIver | August 17, 2010 8:23 pm

Think of the most complicated thing you've written. Maybe it was a report for your employer, or an essay in college. It could even be a computer program. Whatever it was, think of all the stuff you packed into it. Now pause for a moment to imagine creating all of that without a word processor, or paper and pen, or really anything at all to externalize thought to something outside of your head. It seems impossible. What we get from this technology, ancient as it is, is an amplification of our brain power. Besides their gorgeous techy looks, do interactive holographics like those shown in Iron Man 2, reminiscent of the interfaces in Minority Report, offer up some of the same brain amping?

While I was still a doctoral student, I had the opportunity to work with a relative of interactive holographics: the 3D virtual-reality data CAVE. This particular one, at the National Center for Supercomputing Applications (NCSA) in Urbana, Illinois (the birthplace of HAL), circa 1999, was a cube with back projection on five of its six walls. You wore a headset that tracked your head position and orientation, along with goggles whose LCD shutters blocked your right eye while the projectors rendered images for your left eye, and vice versa. As you walked through the space or moved your head, what you saw in the virtual space changed just as you would expect it to.
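For the curious, here is a minimal sketch of that active-stereo idea; this is illustrative Python, not the NCSA system's actual code, and the head pose, interpupillary distance, and render stub are all assumptions for the example. Each frame renders the scene from one eye's tracked position while the goggles black out the other eye, alternating every frame:

    import numpy as np

    IPD = 0.064  # assumed interpupillary distance in meters (a typical adult value)

    def eye_positions(head_pos, head_right):
        """Offset the tracked head position along the head's right axis to get both eyes."""
        half = 0.5 * IPD * head_right / np.linalg.norm(head_right)
        return head_pos - half, head_pos + half  # (left eye, right eye)

    def render_from(eye_pos, wall):
        """Stub: a real CAVE computes an off-axis projection from the eye through each wall."""
        print(f"render wall {wall} from eye at {np.round(eye_pos, 3)}")

    def frame(frame_count, head_pos, head_right, walls=range(5)):
        left, right = eye_positions(head_pos, head_right)
        # Even frames show the left-eye image (goggles occlude the right eye); odd frames the reverse.
        eye = left if frame_count % 2 == 0 else right
        for wall in walls:
            render_from(eye, wall)

    # Two frames with the head near the center of the cube, facing forward:
    frame(0, np.array([0.0, 1.7, 0.0]), np.array([1.0, 0.0, 0.0]))
    frame(1, np.array([0.0, 1.7, 0.0]), np.array([1.0, 0.0, 0.0]))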

The problem that had pushed me to use this system was analyzing 3D motion data from a fish I was studying. I'd developed a motion-capture system for the fish, which gave fantastic 3D data of the animal moving as it attacked its prey, but looking at this 3D data on 2D computer monitors turned out to be quite difficult. Even replaying the motion from several different views didn't quite do the trick. So Stuart Levy at NCSA put my data set into a system called "Virtual Director," and I was able to play back the data in the CAVE. It was something of an unbelievable experience the first time I tried it: suddenly I could walk around the animal as it engaged in its behavior, manipulate it to get any view, and rotate the wand I held to wind the behavior forward or back at different speeds. Visitors particularly enjoyed my "Book of Jonah" demo, where I positioned them so that they ended up going into the mouth of the fish during a capture sequence.
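The wand control is easy to picture in code. Here is an illustrative sketch, not Virtual Director's actual interface logic; the dead zone, maximum rate, and capture frame rate are assumptions. Twisting the wand one way winds the behavior forward, the other way winds it back, with more twist giving more speed:

    def playback_rate(twist_deg, dead_zone_deg=5.0, max_rate=8.0):
        """Map wand twist (degrees from neutral) to a signed playback-speed multiplier."""
        if abs(twist_deg) < dead_zone_deg:
            return 0.0  # a roughly level wand keeps the data paused
        sign = 1.0 if twist_deg > 0 else -1.0
        span = (abs(twist_deg) - dead_zone_deg) / (90.0 - dead_zone_deg)
        return sign * max_rate * min(span, 1.0)

    def step(frame_index, twist_deg, dt, capture_fps=200.0):
        """Advance the motion-capture frame index by the scrubbed rate."""
        return frame_index + playback_rate(twist_deg) * capture_fps * dt

    # A 45-degree forward twist scrubs ahead at a few times real time:
    print(step(0.0, 45.0, dt=1 / 60))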

For my technical problem, the VR CAVE was appropriate technology: 3D display and interaction for an inherently 3D data set. It helped me see patterns I had not clearly seen before, patterns that made their way into some of my subsequent publications analyzing the movement data. It was worth the effort, and the physicality of it was fine, since I didn't need to spend multiple days working through the data.

Other uses of these kinds of "direct manipulation" interfaces, which mix 3D data and real-world interaction, have not found such a receptive audience; people complain that it is tiring to make sweeping (if dramatic) gestures to flip through photos that could just as easily be navigated with an arrow key. As someone who still edits text with "vi," I can relate to criticisms of interfaces that offer more than is needed.

The important question, for any given interface, is whether it simplifies difficult problems of control or analysis, or gets in the way. My former colleague Don Norman at Northwestern University has contributed a great deal to our understanding of this question, in books like The Design of Everyday Things. One of my favorite examples from that book considers two different interfaces for adjusting the position of a car seat. In one, on a luxury American car, a panel of knobs and buttons sits almost hidden below the left side of the dashboard. To go from a state of discomfort to a new seat position, you must translate your discomfort into a series of knob pulls and twists on a console of many controls with tiny labels beneath each one. In contrast, a German luxury car had a small replica of the driver's seat in the dashboard. To move the back of your seat down, you tilted the back of the miniature seat down; to move the seat forward, you slid the miniature in the direction it was facing, and so on. One interface placed a large cognitive load on the user to solve the discomfort problem, while the other placed minimal demands.
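The difference between the two is easy to state in code. In a sketch like the one below (purely illustrative; this is neither car's actual control logic, and the axis names are made up), the miniature seat gives a one-to-one mapping from each gesture on the model to the matching motion of the real seat, so there is nothing to translate:

    from dataclasses import dataclass

    @dataclass
    class Seat:
        forward_cm: float = 0.0        # fore-aft position of the real seat
        back_angle_deg: float = 100.0  # recline angle of the real seat back

    def apply_model_gesture(seat, axis, amount):
        """Direct mapping: each motion of the miniature becomes the same motion of the seat."""
        if axis == "slide":      # push the model seat forward or back
            seat.forward_cm += amount
        elif axis == "recline":  # tilt the model seat's back
            seat.back_angle_deg += amount
        return seat

    # Sliding the miniature forward 3 cm slides the real seat forward 3 cm:
    print(apply_model_gesture(Seat(), "slide", 3.0))

The knob panel, by contrast, makes the user translate "I'm uncomfortable" into the panel's own vocabulary before anything moves.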

Another favorite example is the "speed bug," a tab that a pilot places on the rim of the airspeed indicator to mark the speeds at which critical changes to the shape of the wing must be made. Were it not for those bugs, the pilot would have to remember those speeds, and that's not easy, because they change with things like the weight of the plane.
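The reason the bug moves is simple physics: the speeds in question scale roughly with the square root of the aircraft's weight. A back-of-the-envelope sketch, with purely illustrative numbers rather than any real aircraft's data:

    import math

    def stall_speed(weight_n, wing_area_m2, cl_max, air_density=1.225):
        """Stall speed in m/s from the lift equation L = 0.5 * rho * V^2 * S * CL."""
        return math.sqrt(2 * weight_n / (air_density * wing_area_m2 * cl_max))

    def bug_speed(weight_n, wing_area_m2=125.0, cl_max=1.6, margin=1.3):
        """Place the bug at stall speed plus a safety margin (all values assumed)."""
        return margin * stall_speed(weight_n, wing_area_m2, cl_max)

    # The same airplane, light versus heavy; the shift is what the tab remembers for the pilot:
    print(f"light: {bug_speed(500_000):.0f} m/s   heavy: {bug_speed(700_000):.0f} m/s")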

The virtual fish, the miniature car-seat adjuster, and the speed bug are all examples of interfaces that make problems easier, and in this sense amplify our brain power. Interactive holographic interfaces can do the same for problems where space is a convenient or necessary basis for navigating the information. This isn't always apparent in sci-fi depictions of these interfaces, but their presence there speaks to our hope that such 3D holographic wizardry will help us cope with the flood of data we contend with daily.

Comments

  1. I know all about the car thing. Mercedes does that, although I don't think they're the only ones. I often find that simple tweaks on a cell phone require convolutions that don't make sense, even on really expensive phones. Often this is because interfaces are designed to be one-size-fits-all, but my regular simple tasks aren't your regular simple tasks. A high degree of interface adaptability is key. 98% of the time I pull my phone out of my pocket, the task I want to perform could be made instantly quicker and easier based on my personal habits, but I have no easy way of tweaking the UI. Interfaces that attempt to guess preferences are even worse, since they take away the most available tools a user has at their disposal: their intelligence and actual knowledge of how they plan to use the device.

  2. Captain Slog

    Check out the high-tech gear in the TV series "THIS IS NOT MY LIFE". Do a search on YAHOO! You'll find it. It's amazing!
    Here's a sample: imagine a translucent card phone the size of a credit card. It opens like a folded page and a screen appears, with full-colour icons and a loud, clear voice asking what you'd like to do. It's a touch screen, and. . .
    Check it out! It's, I'm sorry to say, and I'M a Trekker, BETTER than STAR TREK's technology. Except for the cars, though. They're BMW "SMART Cars" and they run on "carbon credits."

  3. I hope holographics will work on mobile devices too.

  4. Brian Too

    Good interfaces are hard to do, but generally worth the effort. One problem is that the interface needs to change with the problem set.

    Also, a problem with a large number of dimensions and a large data set usually implies great computing power and the simultaneous manipulation of the entire data set, or at least large portions of it. In short, give the user the power and they are going to try to use it. Woe to the interface designer if the system cannot keep up!

    If I remember correctly, that was a significant limitation of most of the VR systems. They were laggy in terms of response times and could not keep up with the user.

  5. Malcolm MacIver

    The Chemist: I know what you mean. Tweakability is sometimes left out of a UI for fear of burdening less tweak-inclined users. What would be nice, as a midway compromise, is some form of subtle adaptation to use. For example, a launcher I use on Mac OS X, Quicksilver, initially has no ranking of options in its progressive search; as it learns which programs you use more often, it will offer the right one after you've typed only the first one or two characters of the application's name. (A toy sketch of this kind of adaptation is below.)

    Brian: I don’t recall lag time being an issue for my application or for some others that involved looking around the 3D structure of molecules. But it was established early on that spending CPU power on getting real time responsiveness rather than on high fidelity graphics is much preferred by users. Even a stick figure, if it walks like a human, is much more realistic looking than an accurately rendered figure whose responsiveness is laggy.


  6. Johnny

    That’s pretty cool
