Coming Soon: Nothing Between You and Your Machine
By JOHN MARKOFF
Menlo Park, Calif.
IT has been more than two decades since Scotty tried to use a computer mouse as a microphone to control a Macintosh in “Star Trek IV.”
Since then, personal computer users have continued to live under the tyranny of the mouse, windows, icons and pull-down menus originally invented at the Xerox Palo Alto Research Center in the 1970s and popularized by Apple and Microsoft in the next decade.
Last year, however, the arrival of the Nintendo Wii and the Apple iPhone began to break down the logjam in technological innovation for the way humans interact with computers.
Both devices extend the idea of directly controlling objects on the screen and blending that ability with visually compelling physics software that brings computer screens to life in new, immersive ways. With a Wii, a wave of the hand can slam a tennis ball in cyberspace; with the iPhone, a flick of a finger can slide a photograph across the screen like paper on a table.
The idea of directly manipulating information on a computer screen is almost as old as computer graphics terminals themselves, going back at least to 1963 and the Sketchpad drawing system that Ivan Sutherland created at M.I.T. for his Ph.D. thesis. Since then, a thriving scientific and engineering discipline has sprung up around systems that bridge what was originally called the man-machine interface. There has been a broad exploration of pointing devices, alternatives to keyboards for entering information, voice-recognition technologies, and even sensors that capture and interact with human brain waves.
What is new is a convergence of more powerful and less expensive computer hardware and an inspired set of mostly younger software designers who came of age well past the advent of the original graphical user interface paradigm of the 1970s and ’80s.
This new generation is “mostly under 25,” said Joy Mountford, who until last month was vice president for design innovation at an advanced development group at Yahoo. “They come from a world of fluid media, and they multitask at an extraordinary level.”
One intriguing example of this new immersive approach to Web navigation is the PicLens software from Cooliris, a 10-person start-up based here.
This software plug-in for Web browsers tries to make it possible to navigate, find and share information by directly browsing the images, video and other digital media that are increasingly common on the Web.
PicLens currently offers a small icon cue inset in each Web photo that lets users know they are at a site like Facebook, Google or Flickr that can be browsed with the software. Clicking on the icon transports the user away from the conventional page-oriented Web into an immersive browsing environment.
The software does away with the browser frame and gives the user the effect of flying through a three-dimensional space that feels like an unending hallway of images. In the future, the Cooliris designers plan to make it possible to browse text and video as well.
“I’ve wondered for a long time why the computer interface hasn’t changed from 20 years ago,” said Austin Shoemaker, a former Apple Computer software engineer and now chief technology officer of Cooliris. “People should think of a computer interface less as a tool and more as an extension of themselves or of their minds.”
Some of these ideas can be traced back to the 1990s, to work done at the M.I.T. Media Lab. In 2002, a former student there, John Underkoffler, brought the idea of direct manipulation to life in “Minority Report,” the science-fiction movie. (In the movie, Tom Cruise interacts with a wall-size transparent computer display directly with his hands.) More recently, the idea of a multitouch display, where images could be moved or scaled by direct touch, was brought to life both by Jeff Han, a computer science researcher at the Courant Institute of Mathematical Sciences at New York University and by W. Daniel Hillis and Bran Ferren, researchers at the consulting firm Applied Minds, who developed a “touch table” world map.
The transition to more immersive displays is happening in part because of more powerful computer hardware, but also because of an explosion of more powerful programming tools. These tools put visual effects that were once within the grasp of only the most skillful programmers into the hands of a wide audience with only basic skills.
“The old paradigm is breaking down,” said Paul Mercer, senior director of software at Palm Inc. “It used to be that you needed to be a visionary and technologist like Michelangelo, but we’re turning that corner.”
INDEED, the more powerful graphics-oriented software has spilled over into the creation of palettes for a new generation of software-oriented artists. One new programming language, Processing, is an extension of Sun’s Java designed specifically for students, artists, designers, researchers and hobbyists who are interested in programming images, animations and interactions. It has been used extensively at “Design and the Elastic Mind,” a digital art exhibition now at the Museum of Modern Art in New York.
Voice, too, is finally beginning to play a significant role as an interface tool in a new generation of consumer-oriented wireless handsets. Many technologists now believe that hunting and pecking on the tiny keyboards of cellphones and P.D.A.’s will quickly give way to voice commands that return maps, text and other data displayed visually on small screens.
“We’re on the verge of creating something as compelling as touch, except with voice,” said Mike McCue, general manager of the Tellme subsidiary of Microsoft.
The common theme of all of the technologies will be a new kind of immersive experience.
“If you’re looking for what’s next after the Web browser, this is it,” said Bill Joy, a partner at Kleiner Perkins Caufield & Byers, the venture firm that is funding Cooliris.
Copyright 2008 The New York Times Company