Augmented Reality at Columbia University

Over the coming decade, advances in hardware and wireless networking will make it possible to radically change the way we interact with computers. Transcending the one-user, one-display metaphor typical of today's machines, future systems will support mobile, interacting users as they move about and use large numbers of wall-mounted, desk-mounted, hand-held, and head-worn displays.

To help accomplish this change, Steven Feiner, associate professor of computer science, and his group are developing user interfaces that synergistically combine see-through, head-worn displays with stationary and hand-held displays. A see-through, head-worn display adds graphics and sound to a person's normal vision and hearing, creating what is called "augmented reality." By tracking the position and orientation of the user's head and generating the graphics based on these measurements, augmented reality enhances the real world by superimposing on it a virtual world of additional information.
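As a rough illustration of this registration step, the following Python sketch turns a tracked head pose into the display position of a world-anchored virtual label. The pinhole-camera model, the yaw-only orientation, and all numbers are simplifying assumptions made for this example, not the group's actual software.

```python
# A rough sketch of augmented reality registration: turn a tracked head
# pose into the display position of a world-anchored virtual label.
# Everything here is a simplifying assumption, not the group's code.
import numpy as np

def view_matrix(head_position, yaw):
    """Build a world-to-head transform from tracker measurements.
    (Real trackers report full 3-DOF orientation; yaw alone keeps
    this example short.)"""
    c, s = np.cos(yaw), np.sin(yaw)
    # Inverse of a rotation by `yaw` about the vertical (y) axis.
    rotation = np.array([[c, 0.0, -s],
                         [0.0, 1.0, 0.0],
                         [s, 0.0, c]])
    view = np.eye(4)
    view[:3, :3] = rotation
    view[:3, 3] = -rotation @ np.asarray(head_position, dtype=float)
    return view

def project(world_point, view, focal_length=1.0):
    """Project a 3D world point onto the 2D see-through display plane.
    Returns None if the point is behind the viewer."""
    x, y, z = (view @ np.append(world_point, 1.0))[:3]
    if z >= 0.0:  # the virtual camera looks down -z
        return None
    return (focal_length * x / -z, focal_length * y / -z)

# Every time the tracker reports a new pose, the overlay is redrawn so
# the label stays registered with the real object it annotates.
v = view_matrix(head_position=[0.0, 1.7, 0.0], yaw=0.1)
print(project(np.array([0.5, 1.5, -3.0]), v))
```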

Such an approach could be tremendously useful in equipment maintenance, building construction, and other complex tasks in which people traditionally must switch back and forth between consulting a plan or blueprint and actually working on the problem at hand. Instead, using a see-through, head-worn display, people could see instructions, including text and three-dimensional graphics, overlaid directly on the work area. For example, a computer linked to the head-worn display could provide details of how to replace parts in an assembly and in what order the work should be done. Feiner's group is developing software for rapidly constructing prototypes of these applications that can run on interconnected computers.
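One can imagine such step-by-step instructions being represented and sequenced along the following lines. The step fields, the procedure data, and the confirmation loop in this Python sketch are invented for illustration, not taken from the group's software.

```python
# A hypothetical sketch of sequencing overlaid maintenance instructions.
# The Step fields and the procedure data are invented for illustration.
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # instruction text shown on the head-worn display
    anchor: str   # real-world object the 3D graphics attach to

PROCEDURE = [
    Step("Unlatch the housing cover", anchor="housing"),
    Step("Lift out the old filter", anchor="filter slot"),
    Step("Seat the new filter, arrow facing up", anchor="filter slot"),
]

def run_procedure(steps):
    """Overlay one step at a time on the work area, advancing only
    when the worker confirms completion."""
    for number, step in enumerate(steps, start=1):
        print(f"[{number}/{len(steps)}] {step.action} "
              f"(graphics anchored to: {step.anchor})")
        input("  press Enter when the step is done... ")

if __name__ == "__main__":
    run_procedure(PROCEDURE)
```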

Architectural Anatomy

One prototype augmented reality project, developed jointly with Anthony Webster, associate professor at Columbia's School of Architecture, Planning and Preservation, guides users through the assembly of an actual spaceframe structure. (Spaceframe buildings, such as New York's Javits Convention Center, are constructed from components of similar size and shape, often cylindrical struts and spherical nodes.) The user is instructed to verify the identity of the next component to install with a hand-held scanner; the head-worn display then shows where that component is to be placed relative to the components already assembled.
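That verify-then-place loop might look something like the sketch below. The component IDs, the scanner stand-in, and the display call are all assumptions made for illustration.

```python
# A minimal sketch of the verify-then-place assembly loop described
# above. Component IDs and device stand-ins are illustrative only.
EXPECTED_SEQUENCE = ["strut-A1", "node-N1", "strut-A2"]  # hypothetical IDs

def read_scanner():
    """Stand-in for reading an ID from the hand-held scanner."""
    return input("scan next component: ").strip()

def show_placement(component_id):
    """Stand-in for drawing the placement graphics on the see-through,
    head-worn display."""
    print(f"overlay: highlight mounting point for {component_id}")

for expected in EXPECTED_SEQUENCE:
    # Refuse to proceed until the correct component has been scanned.
    while (scanned := read_scanner()) != expected:
        print(f"wrong component: got {scanned}, expected {expected}")
    show_placement(expected)
```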

In a companion wearable computing project, the computer is worn in a backpack that also contains a global positioning system (GPS) receiver that determines the user's position outdoors. In addition to a see-through, head-worn display with built-in headphones, the user holds a small pen-based computer whose color display presents complementary information. Feiner refers to such a combination of devices as a "hybrid user interface" because it combines several very different kinds of displays and interaction devices to benefit from the advantages of each.
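In code, the routing idea behind a hybrid user interface might be sketched as follows. The device names and the rules are illustrative guesses at the principle, not the project's actual design.

```python
# A sketch of hybrid-user-interface routing: send each piece of content
# to whichever device suits it best. Names and rules are illustrative.
DEVICES = {
    "head_worn": "see-through display (overlaid labels, 3D graphics)",
    "hand_held": "pen computer (detailed text, Web pages)",
    "headphones": "audio (narration)",
}

RULES = {
    "label": "head_worn",      # must stay registered with the world
    "web_page": "hand_held",   # needs legible text and pen input
    "narration": "headphones",
}

def route(content_type):
    """Pick the device whose strengths match the content."""
    return RULES.get(content_type, "hand_held")  # default: hand-held

for kind in ("label", "web_page", "narration"):
    print(f"{kind:9} -> {DEVICES[route(kind)]}")
```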

One application being prototyped for this wearable hybrid user interface is a guided campus tour that uses the head-worn display to overlay labels on surrounding buildings. The user can select a building to see its departments, and then select a department to call up its Web page on the hand-held display. Another hybrid user interface application, created in cooperation with John Pavlik, professor of journalism, and his students at Columbia's Center for New Media, presents a multimedia news story in the context of the places where it occurred. Sections of the story appear as labels on the head-worn display, which, when selected, present narration, movies, still pictures, and text on a combination of the two displays and the headphones.
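The campus-tour drill-down could be modeled roughly as in the following sketch, where labels on the head-worn display lead from buildings to departments, and a department selection sends its Web page to the hand-held display. The building, department, and URL data are invented for the example.

```python
# A hypothetical sketch of the campus-tour drill-down interaction.
# All building, department, and URL data are invented.
CAMPUS = {
    "Example Hall": {
        "Computer Science": "http://www.cs.example.edu/",
        "Journalism": "http://www.journalism.example.edu/",
    },
}

def select_building(name):
    """Overlay the building's departments as selectable labels."""
    return list(CAMPUS[name])

def select_department(building, department):
    """Call up the department's Web page on the hand-held display."""
    print(f"hand-held display <- {CAMPUS[building][department]}")

labels = select_building("Example Hall")
print("head-worn overlay:", labels)
select_department("Example Hall", labels[0])
```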

Creating effective multimedia presentations of this sort requires considerable skill, effort, and time. Furthermore, much of the material should ideally be customized to the needs of the individual user and situation, which is prohibitively expensive to do by hand. To address these problems, Feiner's group has been developing techniques for automatically designing animated 3D graphics for use in a wide variety of applications, such as explaining maintenance procedures, showing the transactions in a computer network, and visualizing the value of financial instruments.

This work uses artificial intelligence techniques to plan the tasks needed to create effective pictures. These tasks include determining which objects to depict in order to communicate information, such as an action being illustrated; which illustrative methods to use to depict those objects, such as cut-away views that reveal internal details otherwise hidden; how to set up the virtual lights and cameras needed to create the pictures; and how to segue from one scene to another in an animation. Feiner's group is working with the group headed by Kathleen McKeown, professor and chair of the computer science department, to explore how to coordinate these generated graphics with generated speech and written text to produce customized multimedia presentations on the fly.
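In the spirit of such rule-based planning, a toy sketch might look like the following; the rules and the scene model are greatly simplified inventions, not the group's planner.

```python
# A toy sketch of rule-based illustration planning: decide what to
# depict and how, producing an ordered task list. The rules and scene
# model are simplified inventions, not the group's actual planner.
def plan_illustration(goal_object, scene):
    """Return an ordered list of depiction tasks for one communicative
    goal: show `goal_object` clearly."""
    tasks = [f"depict {goal_object}"]
    # Rule: if the object of interest is hidden inside another object,
    # use a cut-away view to reveal it.
    container = scene.get(goal_object, {}).get("inside")
    if container:
        tasks.append(f"cut away {container} to reveal {goal_object}")
    # Rule: aim a virtual camera at the object and light it so the
    # viewer's attention lands where the presentation intends.
    tasks.append(f"aim camera at {goal_object}")
    tasks.append(f"place key light on {goal_object}")
    return tasks

scene = {"fuel pump": {"inside": "engine housing"}}
for task in plan_illustration("fuel pump", scene):
    print(task)
```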

Feiner's research has been supported by the Office of Naval Research, the Defense Advanced Research Projects Agency, NYNEX Science & Technology, the New York State Science and Technology Foundation Center for Advanced Technology, the Columbia Center for Telecommunications Research, and the National Science Foundation.