Seybold Seminars Boston Year 2000: Next-Generation Display Technology and Devices

Here's an interesting presentation from five years back. The 'Claymore' device mentioned here was sold at an MSRP of $100,000 in 1999. With that additional data point plugged into my post from yesterday, it is easy to see that we are already through the knee in the curve of exponential increases in performance per dollar for scanned beam display technology -- and the $150 see-through MVIS-inside device may be closer than we think...
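For a rough sense of what that implies, here's a quick back-of-the-envelope sketch (my assumptions, not Microvision's numbers): if the price fell at a constant exponential rate from the $100,000 Claymore of 1999 to a $150 device, the required annual decline and halving time would look like this.

```python
# Back-of-the-envelope sketch: assuming a constant exponential price
# decline from the $100,000 Claymore (1999) to a hypothetical $150
# see-through device, what annual decline rate does that require?
import math

p_start = 100_000.0   # Claymore MSRP in 1999 (from the post)
p_target = 150.0      # hypothetical consumer price point (assumption)

for years in (5, 8, 10):
    # Solve p_target = p_start * (1 - r) ** years for the annual decline r.
    r = 1 - (p_target / p_start) ** (1 / years)
    halving_time = math.log(2) / -math.log(1 - r)
    print(f"{years} years: ~{r:.0%} decline per year, "
          f"price halves every ~{halving_time:.1f} years")
```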

Seybold Seminars Boston Year 2000: Next-Generation Display Technology and Devices (Word doc)

Mr. Roberts: Thank you, Matt. Our next speaker is Dr. Thomas Lippert, who's the chief scientist at Microvision, Inc. He supports research and engineering tasks with a range of government and commercial contracts. He's going to talk today about laser scan display.

Thomas Lippert: (Have we switched over? There we are.) Thanks, John. Thanks for inviting us this morning. Typically, I find myself speaking to military aviation, an industrial shop floor group, or medical practitioners. We're going to talk now for a little while about that other kind of display: the microdisplay, the virtual display, the head-worn display. I notice no one here is wearing a head-worn display. Is that right? How many of you own one and use one? And that's because they're no good. They've been inadequate. Their colors are poor, the contrast is poor, the ergonomics, the fit, the comfort, the usability, all of it is poor. We're trying to do something about that.

I'll tell you a little bit about what we're doing at Microvision. We're in Seattle, and we took some technology developed at the University of Washington. We're trying to commercialize it. We call it virtual retinal display. Somebody came up with this idea of calling it the retinal scanning display. I'm going to harp on color this morning.

Let's take a moment to walk through how the display works. What you see here is a photonics unit, that is, photon-generating electronics, where we have lasers. We're going to be using laser beams to generate a visual display. A laser beam is something you have to be careful with. I walk in and talk to pilots about shining a laser beam in their eye and they say, "What?" Naturally, we're talking about very low-power lasers. We're talking about a scanning display that creates a raster pattern like a television and modulates the beam of that laser, or of three lasers (red, green, and blue), to create a meaningful image.

Here we have a red, a green, and a blue laser with what we call acousto-optic modulators. They're little crystals. We take the red, green, and blue video information and deflect the red, green, and blue beams coming through these attenuators (these AOMs), so that we impress the beams with the video information. We combine them optically into a single full-color pixel beam. Now we've got the color image not as an electrical signal but as a light signal. We focus that down into an optical fiber. This whole section can then be remoted: placed in an equipment bay, placed under a table in an operating room, or, as we do now, worn on a belt. The fiber runs up to where the light is air-propagated onto two mirrors. That's the way we've done it in the past. The first is a fast horizontal, or line-rate, mirror. The second is a vertical refresh mirror. These two mirrors then generate the raster pattern, the full image. This is simply projected out through optics and diverted so that it's combined with the viewing axis of the user.
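[Editor's aside, not part of the talk: a minimal sketch of the raster geometry Lippert is describing, where a fast horizontal mirror and a slow vertical mirror point the modulated RGB beam at each pixel in turn. The frame size and scan angles below are illustrative assumptions, not Microvision specs.]

```python
# Minimal sketch (not Microvision's code): walk a video frame the way a
# fast horizontal (line-rate) mirror and a slow vertical (refresh) mirror
# trace out a raster, with the RGB value modulating the combined beam at
# each position. Frame size and total scan angles are assumptions.
H_PIXELS, V_LINES = 640, 480          # illustrative VGA raster
H_FOV_DEG, V_FOV_DEG = 30.0, 22.5     # assumed total scan angles

def mirror_angles(x, y):
    """Return (horizontal, vertical) mirror angles in degrees for pixel (x, y)."""
    h = (x / (H_PIXELS - 1) - 0.5) * H_FOV_DEG   # fast axis sweeps each line
    v = (y / (V_LINES - 1) - 0.5) * V_FOV_DEG    # slow axis steps line by line
    return h, v

def paint_frame(rgb_frame):
    """Yield (h, v, r, g, b): where the mirrors point and how hard the
    three modulators drive the beam for every pixel of one frame."""
    for y in range(V_LINES):
        for x in range(H_PIXELS):
            h, v = mirror_angles(x, y)
            r, g, b = rgb_frame[y][x]
            yield h, v, r, g, b

# A blank frame; real video would come from the RGB inputs Lippert describes.
frame = [[(0, 0, 0)] * H_PIXELS for _ in range(V_LINES)]
print(next(paint_frame(frame)))   # beam direction and drive for pixel (0, 0)
```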

As I said, we've started using lasers, but they're rather expensive and they're rather large. We're moving to laser diodes just like what you see in the pointers. Here we have a small, sugar-cube-sized combiner that has red, green, and blue laser diodes embedded in it to give us a full-color image. This then becomes part of a very small package, the size of an acorn, that can be mounted on the head.

Here we see the green, blue, and red extent of the color gamut of a liquid crystal laptop display, the typical gamut of a color shadow-mask CRT-type display, and what we're presently delivering in terms of the color gamut of the retinal display. The reason, of course, is that we are using lasers, and they're spectrally pure. Any position on this spectrum locus (we'll see it again at the end of the presentation) corresponds to a single wavelength only. Because of that, we start with the largest color gamut; we create the most vibrant color image possible.
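[Editor's aside: a back-of-the-envelope comparison of gamut areas in CIE 1931 (x, y) chromaticity space, using textbook-approximate primaries for a shadow-mask CRT and for monochromatic laser lines. None of these numbers come from the talk.]

```python
# Rough gamut comparison (illustrative numbers, not the talk's):
# area of the RGB triangle in CIE 1931 (x, y) chromaticity space for a
# typical shadow-mask CRT versus spectrally pure laser primaries.
def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a triangle in (x, y) space."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Approximate chromaticities: SMPTE-C-like CRT phosphors
crt = [(0.630, 0.340), (0.310, 0.595), (0.155, 0.070)]
# Monochromatic laser lines near 640 nm, 532 nm, 460 nm (approximate)
laser = [(0.719, 0.281), (0.170, 0.797), (0.144, 0.030)]

a_crt, a_laser = triangle_area(*crt), triangle_area(*laser)
print(f"CRT gamut area:   {a_crt:.3f}")
print(f"laser gamut area: {a_laser:.3f}  (~{a_laser / a_crt:.1f}x larger)")
```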

A word on resolution: in 1995 (we've got a timeline here running from 1994 through 2004), we hit our first color VGA demonstration. We sold these to [Swabb] Erickson for military simulators and to the Boeing Company in Seattle. We demonstrated SVGA in 1998 and we've delivered full-color SXGA in 2000. For a company that was incorporated in 1996, this is a rapid move. The reason is that (as you saw from the earlier picture) we're not developing a material system for a matrix display; we have no yield problem associated with that. We work in bulk silicon and spring steel. At the end of this year, I'll be delivering full HDTV, 1920 x 1080, and then we can consider going up from there. We can make this rapid advance in resolution because instead of writing the screen one scan line at a time, we can write it two or four lines at a time. In other words, we can take a tip from the printing industry and write a whole bunch of lines at one time. So the same mirror, at the same scan rate, has now been multiplied in terms of its effective line rate.
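[Editor's aside: quick arithmetic on why writing several lines per mirror sweep multiplies resolution at a fixed mirror rate. The line rate and refresh rate below are assumed round numbers, not Microvision specs.]

```python
# Quick arithmetic sketch (assumed numbers, not Microvision's specs):
# at a fixed horizontal-mirror rate, writing 2 or 4 scan lines per sweep
# multiplies the lines you can paint per frame, the trick Lippert
# compares to multi-beam printing.
mirror_line_rate_hz = 32_000    # assumed sweeps per second of the fast mirror
refresh_hz = 60                 # assumed frame refresh rate

for beams in (1, 2, 4):
    lines_per_frame = mirror_line_rate_hz * beams // refresh_hz
    print(f"{beams} line(s) per sweep -> ~{lines_per_frame} addressable lines per frame")
# 1 -> ~533 lines (VGA-ish), 2 -> ~1066 (SXGA-ish), 4 -> ~2133 (beyond HDTV)
```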

Let's take a look at applications. Here's a man working on a car, and he's got a data display he takes right with him: rather important for a flight line, rather important for the shop floor, where you've got a ton of manuals that you want to be able to access quickly. The first little display that we made that allowed people to do that is shown here. We've called it Claymore. That's not a mine that blows up; we had a Scottish product manager who liked the name. What we had was a see-through combiner and the original, rather large and heavy scanning engine here. Because it's a see-through combiner, what we have is a thousand-foot-lambert monochrome display. It's VGA, SVGA. And it's readable against the daylight environment. That's an important distinction. If you've used the Sony Glasstron, you've seen that it's an interesting device, but it's dim, the contrast is poor, and the colors are desaturated. You wouldn't want to use it in a daylight environment.

Here's one of our vice presidents showing off. This is the first rendition of the monocular system with the see-through display. Here we see the surgeon's-eye view of some real-time data, some MRI or other type of stereotaxic data, in a surgical environment, where he's provided with imagery that permits the body to be seen through as a transparent or translucent medium. The requirement here, of course, is color with extreme resolution. It doesn't have to be daylight readable, but it has to be see-through.

This is a completely different sense of information display than paper. We're talking about a system that has to meld with the ambient environment and still be visible, legible, and meaningful in that context. We've got one of these delivered. This is a biocular unit: a single engine split out to two eyes, so we call it biocular. It's full color, it's XGA resolution, and it's being used at the Kettering Facility in Ohio for neurosurgery. If you go to our Web site (which I will show you on the last slide), it will tell you about that development and how excited we are about it and how excited the surgeons are to be using it.

The most demanding application is one like this: military aviation, where we know that the pilot is going to do the best job by keeping his eyes on the windscreen. That's the way he learns how to fly. That's the way he can best see what's going on, instead of looking down at instrumentation and then having to calculate and project from those calculations what he ought to be doing with his hands and his feet. Our problem of daylight readability in a see-through virtual head-mounted display is extreme.

You're talking about thousands of foot-lamberts of background. We're talking about typically a couple of thousand foot-lamberts of displayed luminance required at the eye just to have modest contrast and legibility. It's also true that we're talking about monochrome green systems, because those are what have traditionally been doable. These aircraft don't even have red and blue information to display. They simply have intensity information displayed in green.
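[Editor's aside: a small sketch of the see-through contrast problem, using the simple ratio of symbol-plus-background luminance to background luminance. The luminance values are illustrative, not from the talk.]

```python
# Sketch of the see-through contrast problem (illustrative numbers only):
# against a bright ambient scene, a displayed symbol is seen at
# L_ambient + L_display, the surrounding scene at L_ambient, so
#   contrast ratio = (L_ambient + L_display) / L_ambient.
def see_through_contrast(ambient_fl, display_fl):
    return (ambient_fl + display_fl) / ambient_fl

ambient = 3_000.0    # assumed daylight background, foot-lamberts
for display in (100.0, 500.0, 2_000.0):
    cr = see_through_contrast(ambient, display)
    print(f"{display:>6.0f} fL on {ambient:.0f} fL background -> contrast {cr:.2f}:1")
# Only the couple-of-thousand-fL case gives the "modest contrast"
# Lippert mentions; a dim display all but disappears.
```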

We spent several million of your tax dollars for the U.S. Army to deliver this SXGA (it's almost HDTV) binocular helmet-mounted display for helicopter pilots. This is in association with, but not part of, the Comanche helicopter program. What we have here is optical fiber coming from the photonics unit, which is in an equipment bay. The fiber comes up to a scanner unit, an optical projection unit, and the light is free-space projected into this ocular, which is held in front of the eye. The pilot sees right through this and sees the image superimposed on the real world. He sees it at optical infinity, verged out to infinity, so that it appears to be sitting on a mountaintop.

This is where we want to get to. We have a demonstration of one of these back at the ranch in Seattle. This is an SVGA monochrome data display for the Net, since we know that the two big trends are biotech and wireless. In the next few years, we're going to see a lot of competition in this area. Finally, we're talking about getting down to the point where we get rid of all the claptrap and get to eyewear, with a scanning engine, audio, and combiner optics in the form of wraparound glasses. In the last three months, I've designed a system that looks a lot like this for the U.S. Army. It seems like everybody is working with the military on these displays.

Obviously, this system is lightweight and easy to use. We use the nose bridge to maintain alignment and to minimize the obtrusiveness of the device. Once again, a word on color. Remember, if we're talking about reflective systems, the purer the color, the less light returned from the illumination source. If you want extremely pure colors reflectively, you're going to be talking about dim returns. If you want bright returns from a reflective display, you're talking about desaturated colors, relatively speaking.

We select available colors from around the pure spectral region here and we combine those as RGB. What does that surgeon need? The surgeon wants to see the difference between oxygenation levels in tissues, so we want to blow up this area. We want to give him the greatest discrimination of reds possible. Our system already tends to do that because we have a larger color gamut, but we can select even deeper reds as a primary. We can select a number of greens as a primary for satellite reconnaissance: foliage delineation, visual analysis, things like that. No other display technology we're aware of has anything like this color flexibility. You will be able to choose your color gamut at will.

What's the weakness? The weakness, of course, is that the state of the art in these light sources is immature. We've had lasers; they're expensive, as I said. We're moving to diodes, but we only have red right now. I've got a green pointer and I've got a blue pointer, but they're expensive and they don't last very long. We want these things to be throwaway. This will evolve over the next few years. It's moving faster than we thought it would three years ago. With that, I'd invite you to visit us at our Web site. Thank you very much.