I know it's been a pretty light couple of months for updates here on MVIS Blog, but I wanted to say thanks to each of you for continuing to come by and for all your interest and support.
Here's to a happy and healthy new year and best wishes to all in 2008!
Computing: Superimposing computer graphics on the real world, instead of displaying them on screens, has many potential uses
FIRST, catch your cockroach. The clinical psychology department at Universitat Jaume I in Castelló, Spain, paid its cleaners to capture live specimens. A team of computer-imaging specialists filmed the creatures, digitised images of their scurrying and teeming, and displayed the images—not on a computer monitor, but on see-through goggles. To the wearer, the virtual roaches then look as though they are really in the room. Next, university psychologists set about therapeutically frightening patients who have a fear of insects. “They put a foot on the ground and the cockroaches start climbing over it,” says Cristina Botella, who led the researchers. “The computer just pumps them out.” In November Ms Botella presented her team's findings at the Association for Behavioural and Cognitive Therapies conference in Philadelphia. The treatment worked very well.
For some things, it turns out, computer graphics can be much more effective when viewed not on screens, but superimposed on the real world. The technique is known as “augmented reality” (AR) or, less frequently, as “augmented vision”, because the real world is augmented with virtual text or graphics. Much AR technology remains in labs, but research funding in both the private and public sectors is increasing, and all kinds of eclectic and ingenious applications are emerging in fields as diverse as medicine, warfare, manufacturing and entertainment.
Consider the task of locating veins, a crucial step in procedures such as inserting intravenous drips, injecting medicines and drawing blood. Last year Luminetx, a medical-equipment firm in Memphis, Tennessee, began selling an AR machine called the VeinViewer. It shines near-infra-red light at the patient's skin, and because blood vessels absorb such light, a digital video camera that captures the reflected light can work out the precise location of veins to a depth of almost 1cm. A projector then shines a map of the vein network directly on the skin. The process takes place in real time, so the luminous map changes as the patient moves. “It's like Superman vision—you can see under the skin,” says Kasuo Miyake of Clínica Miyake, a clinic that performs vein-related procedures in São Paulo, Brazil. Dr Miyake says his VeinViewer boosted referrals by 20%, cut costs by 30% and reduced the need for anaesthesia.
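The principle Luminetx exploits is simple enough to sketch: veins absorb near-infra-red light, so they show up as darker pixels in the reflected image, and picking them out is at heart a thresholding problem. Here is a minimal, hypothetical illustration of that idea (real systems add calibration, noise filtering and real-time projection, none of which is shown):

```python
def vein_map(reflected_ir, threshold=0.5):
    """Return a boolean mask marking likely vein pixels.

    reflected_ir is a 2-D grid of reflected near-IR intensity in [0, 1].
    Veins absorb the light, so low reflected intensity suggests a vein.
    """
    return [[pixel < threshold for pixel in row] for row in reflected_ir]

# Toy frame: bright skin (0.9) with a darker vertical "vein" in column 2.
frame = [[0.9, 0.9, 0.2, 0.9, 0.9] for _ in range(5)]
mask = vein_map(frame)
```

The resulting mask is what a device like the VeinViewer would, in effect, project back onto the skin, frame after frame, so the luminous map tracks the patient's movements.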
AR can also be used in surgery. A big benefit is that surgeons do not need to keep looking up and down, switching their gaze between the patient and displays on nearby equipment; instead, the information can be directly overlaid, in effect giving the surgeon X-ray vision and other superhuman powers. Even so, there are very few AR operating rooms, says Henry Fuchs, an expert in AR medicine at the University of North Carolina at Chapel Hill. Dr Fuchs says that although early (and sparse) evidence suggests that AR surgery is more accurate, uptake is slow for several reasons. The equipment is expensive, training is time-consuming and involves ditching some skills surgeons have worked hard to attain, and new surgical techniques are more vulnerable to malpractice lawsuits.
The technology also has less serious uses, however. YDreams, a marketing and digital-media firm in Lisbon, Portugal, has developed an AR sightseeing viewer called VSS. The first such machine, bolted atop a battlement on the 12th-century Pinhel Castle in north-eastern Portugal, delights tourists who tilt it up, down and around for an augmented view of the castle and its surroundings. Place names and explanatory text are superimposed over objects seen through the viewer's screen, and animated graphics show how some structures were built or destroyed. The number of visitors has doubled since the viewer was installed in July 2006, says Isabel Almeida, who manages the castle.
In France the Parc du Futuroscope, an amusement park near Poitiers, is building a €7m ($10m) safari attraction that will be devoid of animals. Instead, passengers riding on a small train will look through hand-held AR binoculars that will superimpose frolicking 3-D virtual animals over the real decor. “It's pretty close to being a magic show,” says Bruno Uzzan, the boss of Total Immersion, the company that is developing the attraction. Video games would also seem a perfect fit for AR technology, and many game studios are investing in development. But very little AR gaming technology has hit the market. Gamers demand extremely fast and rich graphics, and so far the hardware is too expensive to make AR a mass-market proposition. If the past is any guide, however, prices will fall once AR is adopted by more serious users—such as the armed forces.
Virtual reality has proved to be immensely useful in military training. But virtual worlds—those created entirely by computer, and viewed on screens—cannot be used to improve live-fire training, long a priority of America's Marine Corps, says Bob Armstrong, until recently deputy director of the Marine Corps' Training and Education Technology Division. “We end up shooting at piles of tyres or old vehicles,” he says. “We wanted to inject a thinking, moving target into the live-fire environment.” His team, working closely with organisations including the Office of Naval Research and various defence contractors, managed to do just that using AR technology.
During battle, forward observers identify targets and direct artillery, mortar and aviation fire. With the Marine Corps training system, forward observers wear a head-mounted display with a see-through visor; objects displayed on the visor appear to be part of the real world. Mr Armstrong, now director of technology at the Virginia Modelling, Analysis and Simulation Centre at Old Dominion University in Norfolk, Virginia, says the Marine Corps' AR live-fire training system works exceptionally well. Instructors use tablet PCs to move the virtual targets while trainees shoot at them, and the virtual targets can even hide behind real terrain or buildings.
On the battlefield, AR could have an important role in disseminating tactical intelligence. Soldiers with head-mounted displays might, for example, read street names superimposed on the ground, follow colour-coded arrows for patrols or retreats, and see symbols indicating known or potential sniper nests, weapons caches and hiding places for booby-traps. The displays could also show the locations of friendly forces and levels of ammunition and other supplies, as in a video game.
Mark Livingston, head AR researcher at the Naval Research Laboratory in Washington, DC, says his team is developing “3-D ink” writing methods that will allow soldiers to paint virtual symbols or text onto the real world, so that other soldiers who arrive at the same spot later can see them. He remarks, only half jokingly, that young soldiers who are used to video games are better equipped to handle this visual “information overload”.
Industrial engineers call them discrepancies, deviations, clashes or conflicts. Variations between a structure's digital architectural models—the virtual renderings produced by computer-assisted design (CAD) software—and the structure itself are common. Finding and mapping them is important, because accurate CAD models are needed to operate, maintain, repair and insure buildings. And some discrepancies require rebuilding, so the sooner they are found the better. But current checking methods, generally involving laser scanning, are expensive and require lengthy set-up. A new method, using AR, is on the way.
Siemens, working with the Technical University of Munich, has prototyped AR discrepancy-checking software for industrial plants. Engineers superimpose the original CAD models over actual buildings to determine which bits of the model need to be updated, or which parts of the building need to be rebuilt. The process is less complicated than laser scanning, so discrepancy checks can be more frequent, making rebuilding less expensive. Mirko Appel, an engineer and senior project manager at Siemens, estimates that the software will reduce the cost of constructing a typical medium-sized coal-fired power plant by more than $1m. Areva, a French nuclear giant, used the system to check a European plant for discrepancies in September. And OMV, an Austrian energy company, has tested a similar system developed by the Upper Austria University of Applied Sciences.
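The comparison step at the heart of discrepancy checking is conceptually straightforward, even if the surrounding AR machinery is not. A toy sketch, with hypothetical point names and tolerance, and deliberately ignoring the hard part (registering the CAD model to the camera's viewpoint):

```python
def find_discrepancies(cad_points, measured_points, tolerance=0.05):
    # Flag every named point where the as-built position deviates from
    # the CAD model by more than `tolerance` (in metres, say).
    flagged = []
    for name, (cx, cy, cz) in cad_points.items():
        mx, my, mz = measured_points[name]
        deviation = ((cx - mx) ** 2 + (cy - my) ** 2 + (cz - mz) ** 2) ** 0.5
        if deviation > tolerance:
            flagged.append(name)
    return flagged

cad = {"pipe_flange": (1.00, 2.00, 0.50), "valve_stem": (3.00, 1.50, 0.75)}
built = {"pipe_flange": (1.01, 2.00, 0.50), "valve_stem": (3.20, 1.50, 0.75)}
# Only the valve stem is off by more than 5 cm.
```

Each flagged point tells the engineers either that the model needs updating or that the structure needs rework, which is exactly the decision the Siemens tool supports visually.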
Volkswagen (VW) uses AR discrepancy-checking software to verify the conformity of components supplied by subcontractors. Christoph Kohnen, a spokesman, says the technique is “tremendously faster and cheaper” than previous measuring methods. The carmaker, which has teamed up with Kuka, a robot-maker, to improve the system, is introducing the technology in some 40 factories worldwide. And it has devised other uses for AR. To study crash tests, VW superimposes an image of an intact car over the wreckage of a crashed vehicle. To speed up prototype construction, it uses AR to superimpose luminous instructions directly on the tools and prototype components in front of workers. AR can also help to design production lines. Metaio, an AR developer based in Munich, does brisk business applying AR to “interfering-edge analysis”. Its systems move virtual models of prototype machinery and products around manufacturing plants to determine how existing equipment would have to be moved or modified.
AR is still an immature field compared with virtual reality, which has now entered the mainstream in the form of video games, online virtual worlds and computer-animated films and special effects. The additional technologies required to take virtual images and integrate them into the real world still have a long way to go: most AR technology is still expensive, fragile and unwieldy, though researchers are doing their best to change that. But given a few more years, it is not hard to imagine where all this might lead: imagine satellite-navigation systems that appear to paint the road yellow to show a driver which way to go, mirrors that let you try on different outfits or haircuts, or glasses that turn the whole world into a backdrop for a video game. Why settle for reality when you can augment it?
Q&A: Author Nicholas Carr on the Terrifying Future of Computing
[Editor's Note: 'Terrifying' is a little dramatic, don't you agree?]
Nicholas Carr is high tech's Captain Buzzkill — the go-to guy for bad news. A former executive editor of Harvard Business Review, he tossed a grenade under big-budget corporate computing with his 2004 polemic Does IT Matter? (Answer: Not really, because all companies have it in spades.) Carr's new book, The Big Switch, targets the emerging "World Wide Computer" — dummy PCs tied to massive server farms way up in the data cloud. We asked Carr why he finds the future of computing so scary.
Wired: IBM founder Thomas J. Watson is quoted — possibly misquoted — as saying the world needs only five computers. Is it true?
Carr: The World Wide Web is becoming one vast, programmable machine. As NYU's Clay Shirky likes to say, Watson was off by four.
Wired: When does the big switch from the desktop to the data cloud happen?
Carr: Most people are already there. Young people in particular spend way more time using so-called cloud apps — MySpace, Flickr, Gmail — than running old-fashioned programs on their hard drives. What's amazing is that this shift from private to public software has happened without us even noticing it.
Wired: What happened to privacy worries?
Carr: People say they're nervous about storing personal info online, but they do it all the time, sacrificing privacy to save time and money. Companies are no different. The two most popular Web-based business applications right now are for managing payroll and customer accounts — some of the most sensitive information companies have.
Wired: What's left for PCs?
Carr: They're turning into network terminals.
Wired: Just like Sun Microsystems' old mantra, "The network is the computer"?
Carr: It's no coincidence that Google CEO Eric Schmidt cut his teeth there. Google is fulfilling the destiny that Sun sketched out.
Wired: But a single global system?
Carr: I used to think we'd end up with something dynamic and heterogeneous — many companies loosely joined. But we're already seeing a great deal of consolidation by companies like Google and Microsoft. We'll probably see some kind of oligopoly, with standards that allow the movement of data among the utilities similar to the way current moves through the electric grid.
Wired: What happened to the Web undermining institutions and empowering individuals?
Carr: Computers are technologies of liberation, but they're also technologies of control. It's great that everyone is empowered to write blogs, upload videos to YouTube, and promote themselves on Facebook. But as systems become more centralized — as personal data becomes more exposed and data-mining software grows in sophistication — the interests of control will gain the upper hand. If you're looking to monitor and manipulate people, you couldn't design a better machine.
Wired: So it's Google über alles?
Carr: Yeah. Welcome to Google Earth. A bunch of bright computer scientists and AI experts in Silicon Valley are not only rewiring our computers — they're dictating the future terms of our culture. It's terrifying.
Wired: Back to the future — HAL lives!
Carr: The scariest thing about Stanley Kubrick's vision wasn't that computers started to act like people but that people had started to act like computers. We're beginning to process information as if we're nodes; it's all about the speed of locating and reading data. [Editor's Note: This is my take as well...] We're transferring our intelligence into the machine, and the machine is transferring its way of thinking into us.
Tuesday December 11, 6:00 am ET
REDMOND, Wash.--(BUSINESS WIRE)--Microvision (NASDAQ:MVIS - News), a leader in light scanning technologies for display and imaging products, announced that it has signed a development agreement with a leading European supplier of automotive and industrial technologies. Under the agreement, Microvision will deliver prototype samples for the automotive Tier 1 partner to evaluate Microvision’s PicoP™ technology for a variety of automotive display applications, including Head Up Displays (HUD).
Microvision has pioneered the development of an ultra-miniature laser projection technology called PicoP. The PicoP display is based on Microvision’s proprietary MEMS scanning micro mirror technology that offers important mobile application advantages over existing flat panel technologies: exceptional resolution, contrast and color, smaller packaging, and lower power consumption.
“We are now in development with leading Tier 1 integrators in all three major automotive economies: North America, Europe and Asia,” said Alexander Tokman, President and CEO of Microvision. “We believe this further affirms our PicoP automotive strategy and allows us to broaden opportunities with leading automakers.”
The name of the automotive tier 1 supplier was withheld at its request for confidentiality reasons.
About Microvision: http://www.microvision.com
Headquartered in Redmond, WA, Microvision, Inc., is the world leader in the development of high-resolution displays and imaging systems based on the company's proprietary silicon micro-mirror technology. The company's technology has applications in a broad range of industrial, consumer, military, and professional products.
Certain statements contained in this release, including those relating to future product development, market opportunities and statements using words such as "believe" are forward-looking statements that involve a number of risks and uncertainties. Factors that could cause actual results to differ materially from those projected in the company's forward-looking statements include the following: capital market risks, our ability to raise additional capital when needed; market acceptance of our technologies and products; our financial and technical resources relative to those of our competitors; our ability to keep up with rapid technological change; our dependence on the defense industry and a limited number of government development contracts; government regulation of our technologies; our ability to enforce our intellectual property rights and protect our proprietary technologies; the ability to obtain additional contract awards; the timing of commercial product launches and delays in product development; the ability to achieve key technical milestones in key products; dependence on third parties to develop, manufacture, sell and market our products; potential product liability claims, risks related to Lumera's business and the market for its equity and other risk factors identified from time to time in the company's SEC reports and other filings, including the Company's Annual Report on Form 10-K filed with the SEC. Except as expressly required by the federal securities laws, we undertake no obligation to publicly update or revise any forward-looking statements, whether as a result of new information, future events, changes in circumstances or any other reason.
Bi-stable displays, touch screens and miniature projectors are gaining momentum
By Dennis P. Barker
Digital TV Designline
(12/06/2007 5:06 PM EST)
LCD is the dominant display technology for most electronic products, including televisions, computer monitors, notebook PCs, Ultra Mobile PCs (UMPCs), MP3/Portable Media Players (PMPs) and mobile phones. However, there still is room and a need for emerging display technologies, according to iSuppli Corp.
According to Jennifer Colegrove, senior analyst for display technology and strategy for iSuppli, "Alternative technologies are still required because they can overcome some of the disadvantages of LCDs, and have some special capabilities that LCDs cannot match. These technologies include touchscreen, bi-stable, near-eye, Head-Up Display (HUD) and miniature projection displays."
Examples of the strong market prospects for such technologies include:
Near-eye display revenue is expected to grow to $724 million by 2012, rising from $209 million in 2007.
The global HUD module market is expected to reach $107 million in revenue by 2012, up from $26 million in 2006.
Consumers love tiny handheld electronic devices, but don't love diminutive displays that can show only infinitesimal images. Because of this, makers of handhelds—including Portable Media Players (PMPs), DVD players and mobile TVs—hope to improve the viewing experience by offering products with pocket/embedded projectors and near-eye displays, also called Head-Mounted Displays (HMDs). Such display solutions not only offer a larger viewing area, but also lower costs, less power consumption and reduced weight and size.
As its name suggests, the near-eye display is designed to be placed on a helmet or visor close to the user's eye, providing a virtual image that is larger than the physical dimensions of the display. HMDs can display a virtual image ranging in size from 20 inches to 100 inches, providing a much more comfortable and compelling viewing experience than the 2-inch-class displays typically used on mobile phones.
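Why a "virtual 60-inch screen" beats a real 2-inch one comes down to visual angle: what matters is how much of the viewer's field of view the image fills, not its physical size. A back-of-the-envelope comparison, with viewing distances that are purely illustrative assumptions:

```python
import math

def visual_angle_deg(diagonal_inches, distance_inches):
    # Angle subtended by a display's diagonal at the viewer's eye.
    return math.degrees(2 * math.atan((diagonal_inches / 2) / distance_inches))

# Assumed, illustrative distances: a phone held about 12 inches away,
# versus a virtual 60-inch image that appears to float about 80 inches away.
phone_angle = visual_angle_deg(2, 12)     # roughly 9.5 degrees
virtual_angle = visual_angle_deg(60, 80)  # roughly 41 degrees
```

Under these assumptions the virtual image fills more than four times the visual angle of the phone screen, which is the whole appeal of a near-eye display.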
The pocket projector market is growing due to the high demand for portable presentation equipment. iSuppli defines pocket projectors as those that weigh less than 2 pounds (about 0.9 kilograms) and are smaller than 60 cubic inches (about 983 cubic centimeters), without a battery.
Travelers favor pocket projectors because they can deliver presentations to small groups of people instantly, at any time and in any place. Most of these projectors can run on batteries.
Commercially available pocket projectors mostly now weigh between 1 and 2 pounds, or 0.45 to 0.9 kilograms. A pocket projector that weighs less than 1 pound is set to come to the market in the fourth quarter.
Displays have been used in automobiles for decades, as they can provide information for drivers and entertainment for passengers. Head-Up Displays (HUDs) enhance safety by keeping drivers' eyes on the road. Currently, there are many vehicle manufacturers offering HUDs including General Motors, BMW, Toyota, Nissan, Ford and Honda.
There are big growth opportunities for miniature projectors. And with the rear-projection television market losing momentum, microdisplay manufacturers should view this market as an opportunity for growth.
Can't believe it's been nearly a month since I've updated MVIS Blog...! Just wanted to take a second and let you know I'm still here, still pumped, and have been basically too busy to post for the last few weeks.
Thanks for sticking with me here, and I hope to get back into the swing of regular posts before too long. Hope you're doing great out there.
REDMOND, Wash. (Business Wire EON) November 14, 2007 -- Microvision, Inc. (Nasdaq:MVIS), a leading developer of scanned light beam technology, announced today that its pocket-sized ROV™ Laser Barcode Scanner is now shipping to customers. ROV Scanner is a hand-held, Bluetooth®-enabled laser barcode scanner for mobile workers who need an affordable data collection solution that easily connects via a wireless link to a wide variety of mobile computing platforms.
“We are pleased to announce to the business mobility marketplace the availability of the ROV Scanner,” said Ian Brown, Vice President of Sales and Marketing for Microvision. “The small hand-held ROV Scanner will provide customers with a data collection device with outstanding reliability and performance at a very attractive price.”
About ROV Scanner
The Microvision ROV Scanner is specifically designed to read and collect barcode data in both simple business environments and more demanding mobility environments, including construction, field services, transportation, professional services, hospitality, government, retail, manufacturing, and healthcare. According to Venture Development Corporation, the business mobility market comprises 10.9 million organizations and 69 million workers. Of this total available market, Microvision estimates that there are more than 13 million mobile devices that could benefit from incorporating ROV-enabled barcode scanner applications.
The ROV Scanner provides users with simple-to-use “point and scan” capability, offers a broad range of barcode symbol decodes, boasts onboard memory to hold more than 4,000 scans, and runs on inexpensive AAA batteries. To further support mobile workers, the scanner can be operated using rechargeable batteries and has rubberized grip points to keep the barcode scanner securely fitted in the user’s hand.
Through its standard Bluetooth connection, coupled with Microvision’s Scanner Wedge software, the ROV Scanner seamlessly delivers scanned barcode data directly into business applications on users’ laptops, mobile phones, and PDAs. The ROV Scanner is compatible with major mobile computing platforms including Windows®, Windows Mobile®, BlackBerry®, Symbian® and Palm®, to enable complete solutions for mobile data capture.
For developers who desire advanced barcode data control, Microvision provides Software Developer Kits (SDKs) containing Application Programming Interfaces (APIs) and other integration tools. The simple, affordable and connected ROV Scanner with Bluetooth has a suggested manufacturer’s retail price of only $299.95 per unit.
For more information on the ROV Scanner, visit www.microvision.com/barcode. The product is available from Microvision authorized resellers, as well as from Microvision’s on-line store at www.microvision.com/store.
Microvision provides a display technology platform to enable next-generation display and imaging products for pico projectors, vehicle displays, and wearable displays that interface to mobile devices. The company also manufactures and sells its barcode scanner product line, which features the company’s proprietary MEMS technology.
November 11, 2007 (Computerworld) -- Ray Kurzweil is a futurist and author whose book The Singularity Is Near: When Humans Transcend Biology (Viking Adult, 2005) predicts advances in computing technologies and biological research over the next four decades, culminating in the merger of biological and nonbiological intelligence. Kurzweil is also a prolific inventor who has developed hardware and software for optical character recognition, speech recognition and electronic music.
At the heart of your book is the idea that technology advances exponentially. Can you explain? Technology, particularly if we can measure the information content, proceeds exponentially, not linearly. And a lot of people don’t realize that, and that’s one of the reasons long-term forecasts generally fall substantially short of the ultimate reality.
If we look at information technology, we see this reflected in an exponential growth in the power of those technologies. The price/performance for computing is literally doubling every year. Information processes are revolutionizing every industry, every area of technology. And so [areas] like health and medicine, which used to be hit-or-miss, are now becoming information technologies and will be subject to what I call this law of accelerating returns.
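Kurzweil's point about linear forecasts falling short is easy to make concrete. If price/performance doubles every year, a forecaster who measures the first year's gain and extrapolates it linearly is off by orders of magnitude within two decades:

```python
def exponential_growth(start, years, doubling_time=1.0):
    # Doubling every `doubling_time` years.
    return start * 2 ** (years / doubling_time)

def linear_forecast(start, years, annual_gain):
    # Extrapolating a fixed yearly gain.
    return start + annual_gain * years

base = 1.0
first_year_gain = exponential_growth(base, 1) - base   # the first doubling adds 1x
naive = linear_forecast(base, 20, first_year_gain)     # 21x after 20 years
actual = exponential_growth(base, 20)                  # 2**20, over a million-fold
```

The linear forecaster predicts a 21-fold improvement over 20 years; compounded doubling delivers more than a million-fold, which is the gap Kurzweil says most long-term forecasts fall into.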
How will hardware technologies evolve over the next 10 years? If you go out 10 years, computers are not going to be these rectangular objects we carry around. They’re going to be extremely tiny. They’re going to be everywhere. There’s going to be pervasive computing. It’s going to be embedded in the environment, in our clothing. It’s going to be self-organizing.
We’re going to solve this dilemma we have now with displays. On the one hand, people like 50-inch screens, and they’ll spend thousands of dollars on them. On the other hand, they like watching movies on a 1- or 2-inch screen, but that’s really not a satisfactory experience. We are going to solve that by putting the displays in our glasses, which will beam images to our retinas. This will create very high-resolution virtual displays that can hover in the air. And it can also completely overtake your visual field of view in three dimensions, creating full-immersion visual/auditory virtual reality. [Editor's Note: That's Microvision, gang.]
We’ll also have augmented real reality. The computers will be watching what you watch, listening to what you’re saying, and they’ll be helping. So if you look at someone, little pop-ups will appear in your field of view, reminding you of who that is, giving you information about them, reminding you that it’s their birthday next Tuesday. If you look at buildings, it will give you information, it will help you walk around. If it hears you stumbling over some information that you can’t quite think of, it will just pop up without you having to ask.
What’s your definition of artificial intelligence? Artificial intelligence is the ability to perform a task that is normally performed by natural intelligence, particularly human natural intelligence. We have in fact artificial intelligence that can perform many tasks that used to require — and could only be done by — human intelligence. There are hundreds of examples today, and they are deeply embedded in our economic infrastructure.
All communication is governed by intelligent algorithms that route and connect the information. Programs are embedded into computer-assisted design systems. AI flies and lands airplanes, guides intelligent weapons systems, places billions of dollars of financial transactions each day.
These examples are narrow AI, in that they are performing specific tasks, very often sophisticated tasks that required human experts to perform.
What could slow down the arrival of strong AI, or of the “smarter than human” technologies you call the Singularity? There are really two areas to think about. One is hardware and one is software. There’s a strong consensus that the hardware will be available. So, the key issue is how long it will take to get the software and science. I make the case that a 20-year horizon is a conservative estimate, based on the exponential progress we’re making in reverse-engineering the human brain.
In one of your earlier books, The Age of Spiritual Machines, you have a chapter titled “2009.” And you nailed quite a few technologies pretty well. But one technology that didn’t seem to fulfill the promise that you anticipated was speech recognition. Well, first of all, this isn’t 2009 yet. We need exponential progress in computation to get linear gains in speech-recognition accuracy, and we are making exponential gains in computing. And a lot of people’s impressions of speech recognition are based on having tried it three, four, five years ago. It’s actually improved a great deal.
Language translation is quite good, particularly now that we have these large Rosetta Stones of matching text in different languages. The statistical approach, which trains pattern-recognition techniques on these very large parallel texts, gets excellent results.
You have also discussed an intriguing invention that you call the “Document Image and Storage Invention,” for long-term storage of computer files. But you have concluded that it really wouldn’t work. Why? Software formats are constantly changing. Try resuscitating some information on some PDP-1 magnetic tapes. Even if you could get the hardware to work, the software formats are completely alien, and nobody is there to support these formats anymore.
I think this is fundamentally a philosophical issue. I don’t think there’s any technical solution to it. Information actually will die if you don’t continually update it.
— Interview by Ian Lamont
Name: Ray Kurzweil
Title: Founder and CEO, Kurzweil Technologies Inc.
Invention He's Most Proud Of: "The Kurzweil Reading Machine for the blind. What's exciting for an inventor is to have your inventions be used and actually have a benefit for their users. So the kind of feedback I've gotten from blind students and blind people, who say they couldn't have held their jobs without the Kurzweil Reading Machine, has been the most gratifying."
Web Sites Visited Every Day: Slashdot.org, Foresight.org and Singinst.org
Favorite Musical Work: "I like artists from many genres, ranging from Carrie Underwood and Alanis Morissette to Eminem. For classic rock, I like the Beatles and Jefferson Airplane. My favorite classical composer is Beethoven."
Labels: Ray Kurzweil
R. Colin Johnson
(11/12/2007 9:00 AM EST)
Microelectromechanical systems (MEMS) have revolutionized every industry that has adopted them, according to presenters at the MEMS Executive Conference earlier this month in San Diego. For instance, the MEMS accelerometer has greatly enhanced the safety of automobiles with airbags. Likewise, the Nintendo Wii's motion-based controller has changed the gaming landscape, while Apple's iPhone has set a new standard for cell phones. Now, MEMS chips, combined with the smart software that utilizes them, are being designed into cell phones at a pace reminiscent of camera phone adoption, enabling a new breed of consumer-pleasing electronic devices.
"I predict that there will be 10 billion MEMS chips in mobile phones by 2010," said keynote speaker Philippe Kahn, chairman of Fullpower Technologies Inc. (Santa Cruz, Calif.), to the 150 MEMS executives at the conference. "The Wii and iPhone are just the beginning. Motion detection with MEMS accelerometers will soon enable all kinds of functions, such as shaking your cell phone to pick up a call: no buttons, no fingers, just simple, natural gestures."
Kahn invented the camera phone and founded Borland Software Corp., Starfish Software (acquired by Motorola in 1998) and Lightsurf (acquired by VeriSign in 2005). His latest startup, Fullpower Technologies, provides a multitasking, preemptive-priority operating environment for consumer-device designers trying to utilize MEMS accelerometers, proximity sensors, ambient light detectors, pressure sensors, magnetometers (compasses) and global positioning system (GPS) chips, as well as MEMS pressure and flow-rate sensors for measuring heart rate, blood glucose and other health parameters.
"Everyone has experienced having to go through menu after menu until you get to the function you want; so if those can be done by shaking or tilting the phone, those are features that consumers will really want," said Russell Hannigan, director of product management at Microvision Inc. (Redmond, Wash.). Microvision makes a MEMS projector-display chip that enables cell phones to project images on a wall.
According to presenter Jean-Christophe Eloy, the founder and managing director of Yole Development (Lyon, France), the global MEMS market in 2006 was about $7 billion and is expected to grow to more than $11 billion by 2011. In 2007, about 400 million MEMS chip units were shipped, or about 5 percent of the total foundry market.
"We see the total MEMS market going to $20 billion by 2016, with a 13 percent annual growth rate and about 70 percent coming from semiconductor companies," said Eloy. "Venture capitalists are heavily investing in MEMS, too, putting about $443 million into MEMS companies in 2006, with 12 companies raising more than $15 million each."
"We also expect to see new kinds of MEMS devices, such as tiny speakers for earphones and specialized battery replacement chips by 2008," said Eloy.
Besides startups, established players such as General Electric are ramping up their MEMS manufacturing capabilities. For instance, GE already claims a $1 billion in-house MEMS operation, and is developing a wide variety of new MEMS devices, according to Brian Wirth, GE's global product manager for MEMS, microstructures and nanotechnologies.
The number for 2006 U.S. Patents is a proxy for relative patent prowess worldwide. The Pipeline Power score is derived by multiplying the company’s patent count by the product of four other variables. Pipeline Growth (not shown here) represents the firm’s 2006 patent activity, relative to its average performance in the five previous years. For the other three variables, a score above 1.00 indicates that the company performed better than average in its technology class; below 1.00 indicates worse than average performance. Pipeline Impact indicates how frequently all 2006 patents cited a company’s patents from the previous five years. Pipeline Generality is a measure of the variety of technologies drawing on a company’s patents. Pipeline Originality measures the variety of the technologies upon which an organization’s patents build. Adjusted Pipeline Impact eliminates self-citation. The final score, Adjusted Pipeline Power, is an estimate of a company’s overall patent power. For the complete data, which include all of the top 20 companies in each category, as well as the Pipeline Growth and percentage of self-citation numbers, see http://spectrum.ieee.org/nov07/scorecard.
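The scorecard's stated formula, the patent count multiplied by the product of the four pipeline variables, can be illustrated with hypothetical numbers:

```python
def pipeline_power(patents, growth, adj_impact, generality, originality):
    """Pipeline Power as described above: the 2006 patent count multiplied
    by the product of the four normalized pipeline variables (a value
    above 1.00 means better than the technology-class average)."""
    return patents * growth * adj_impact * generality * originality

# Hypothetical company: 500 patents, modestly above average on most measures.
score = pipeline_power(500, growth=1.10, adj_impact=1.20,
                       generality=1.05, originality=0.95)
print(round(score))  # 500 * 1.10 * 1.20 * 1.05 * 0.95, roughly 658
```

Because the variables multiply, a company that is slightly below average on even one measure (originality at 0.95 here) gives back some of the gains from the others.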
Q3 2007 Conference Call Transcript
Big thanks to TIGRE!
Labels: Alex Tokman
Company Progresses on Key Strategic Milestones, Reduces Quarterly Cash Burn by 17%
REDMOND, Wash.--(BUSINESS WIRE)--Microvision, Inc. (NASDAQ: MVIS), a global leader in light scanning technologies, today reported operating and financial results for the third quarter and first nine months of 2007.
"Our operating results for the third quarter and first nine months of the year include reaching several important business, development and financial milestones that we believe will move us closer to commercialization of high volume consumer and automotive products based on Microvision’s proprietary projection display PicoPTM technology,” said Alexander Tokman, President and CEO of Microvision.
“The latest contract with a world leading Asian consumer electronics manufacturer is another example of progress on our strategy to bring PicoP enabled solutions to market. This agreement should allow us to leverage the extensive integration and manufacturing capabilities of one of the world’s largest suppliers of mobile phones, digital cameras, and personal media players.”
PicoP for Mobile Projection Applications. PicoP™ is an ultra-miniature projection module being designed to produce full color, high-resolution images while being small and low-power enough to be embedded directly into a mobile device.
Announced an agreement with Motorola to develop pico projector display solutions for mobile applications using Microvision’s PicoP display technology. The companies are initially working together to integrate the PicoP projector inside a functioning mobile device for demonstration purposes.
Signed an agreement with an Asian Consumer Electronics Manufacturer to integrate Microvision’s PicoP display engine into fully functional stand-alone projector prototypes. The prototypes are expected to be marketed to leading consumer electronics companies for private labeling and distribution for mobile applications.
PicoP for Automotive Applications. Subsequent to the end of the quarter, delivered the first advanced PicoP based projection module for an Automotive Head-Up Display (HUD) to Visteon, a Tier 1 automotive supply partner of Microvision. Visteon plans to use the advanced HUD prototype samples to demonstrate the unique performance characteristics of Microvision’s platform technology in order to secure automotive OEM customers.
PicoP for Eyewear Applications. Delivered a demonstrator unit of an innovative eyewear optical system to the U.S. Air Force under the contract awarded by the United States Air Force Research Laboratory in 2006. This initial eyewear optical system is expected to serve as the foundation for a new generation of see-through, full color eyewear display products from Microvision.
ROV™ Laser Bar Code Scanner. Subsequent to the end of third quarter, began commercial shipments of the ROV scanner. ROV, with its rich feature set, wide array of accessories, and powerful software, has been designed for a variety of mobility applications to address both simple business environments, and more demanding mobility environments such as construction, field services, transportation, professional services, hospitality, government, retail, manufacturing, and healthcare.
Funding. Completed the call of the company’s publicly traded warrants, raising $34.1 million to fund operations without an increase in the fully diluted common shares outstanding.
Industry Awards. Received 2007 North American Frost & Sullivan Award for Technology Innovation for business and design advancements of a bi-directional MEMS scanning mirror, a key component of the company’s PicoP display engine.
“In addition to the accomplishments listed above, we continued to mature the PicoP technology and strengthen the supply chain for PicoP components while lowering the cash burn from operating activities by 17% to $5.1 million for the quarter. We also added key business development and strategic sourcing resources to increase the capacity and depth of both functions. The entire Microvision team remains focused on achieving the milestones we have communicated to our customers, partners, and shareholders,” concluded Tokman.
For the nine months ended September 30, 2007, the company reported revenue of $7.5 million, compared to $5.2 million for the same period in 2006. Revenue for the three months ended September 30, 2007 was $2.6 million, compared to $823,000 for the same period in 2006. As of September 30, 2007, the backlog totaled $5.7 million, compared to $6.9 million at September 30, 2006.
The company reported an operating loss for the nine months ended September 30, 2007 of $18.8 million compared to $21.1 million for the same period in 2006 and $6.5 million for the three months ended September 30, 2007 compared to $6.7 million for the same period in 2006.
The company reported a net loss available to common shareholders of $13.8 million for the nine months ended September 30, 2007 compared to $18.6 million for the same period in 2006 and $4.7 million for the three months ended September 30, 2007 compared to $7.7 million for the same period in 2006. The net loss per share was $0.29 for the nine months ended September 30, 2007 compared to $0.60 for the same period in 2006 and $0.08 for the three months ended September 30, 2007 compared to $0.20 for the same period in 2006.
Net cash used in operating activities was $5.1 million for the three months ended September 30, 2007 compared to $6.1 million for the same period in 2006. The company ended the quarter with $40.4 million in cash, cash equivalents and investment securities.
Microvision will host a conference call to discuss its third quarter 2007 financial and operating results at 4:30 p.m. ET on November 1, 2007. Participants may join the conference call by dialing 866-203-3206 (for U.S. participants) or 617-213-8848 (for International participants) ten minutes prior to the start of the conference. The conference pass-code number is 68375632. Additionally, the call will be broadcast over the Internet and can be accessed from the Company’s web site at www.microvision.com. The web cast and information needed to access the telephone replay will be available through the same link following the conference call.
About Microvision: www.microvision.com
Microvision provides a display technology platform designed to enable next generation display and imaging products for pico projectors, vehicle displays, and wearable displays that interface to mobile devices. The company also manufactures and sells its bar code scanner product line, which features the company's proprietary MEMS technology.
Forward-Looking Statements Disclaimer
Certain statements contained in this release, including those relating to commercialization and future products, future product form factor, product applications, as well as statements containing words like “could,” “should,” “believe,” “expects” and other similar expressions, are forward-looking statements that involve a number of risks and uncertainties. Factors that could cause actual results to differ materially from those projected in the Company's forward-looking statements include the following: our ability to raise additional capital when needed; our financial and technical resources relative to those of our competitors; our ability to keep up with rapid technological change; our dependence on the defense industry and a limited number of government development contracts; government regulation of our technologies; our ability to enforce our intellectual property rights and protect our proprietary technologies; the ability to obtain additional contract awards; the timing of commercial product launches and delays in product development; the ability to achieve key technical milestones in key products; dependence on third parties to develop, manufacture, sell and market our products; potential product liability claims and other risk factors identified from time to time in the Company's SEC reports, including the Company’s Annual Report on Form 10-K filed with the SEC. Except as expressly required by the federal securities laws, we undertake no obligation to publicly update or revise any forward-looking statements, whether as a result of new information, future events, changes in circumstances or any other reason.
Labels: Alex Tokman
SAN FRANCISCO, Oct. 24 /PRNewswire/ -- Alcatel-Lucent, (Euronext Paris and NYSE: ALU), and Georgia Institute of Technology today announced, here at the CTIA Wireless IT & Entertainment trade show and exhibition in San Francisco, that they are in negotiations to establish the Alcatel-Lucent Center of Excellence for ultra-high bandwidth services to jointly develop augmented reality applications and massive multiplayer online games for mobile devices.
Under the proposed strategic agreement, Alcatel-Lucent and Georgia Tech would form a team on the university campus, funded by Alcatel-Lucent, dedicated to developing high-bandwidth applications, identifying the network challenges that accompany such applications and creating solutions to those challenges.
The joint work would focus on augmented reality, which is the addition of computer-generated graphics to a real scene, as well as multiplayer online gaming and television-gaming convergence.
The Alcatel-Lucent and Georgia Tech partnership would be one of the first under the new University Innovations Program that Alcatel-Lucent also announced today.
"Combining Alcatel-Lucent's leadership in wireless and convergence technology, with Georgia Tech, one of America's top research universities, will spark innovation leading to more exciting and robust applications that carriers can offer their customers no matter how they connect," said Jessica Stanley-Yurkovic, Vice President, North America Marketing, Alcatel-Lucent. "Our goal is to create scenarios in Georgia Tech's research labs that challenge the maximum capability of current wireless systems so that we can develop services and technologies that go beyond today's capabilities, toward a next-generation high-bandwidth user experience."
Alcatel-Lucent and Georgia Tech intend to have prototype applications and a "test bed" in place by the end of 2008.
Labels: Augmented Reality
Microvision Enters Development Agreement with Asian Consumer Electronics Manufacturer to Create Accessory Pico Projector for Mobile Phones
Posted by Ben at 10:36 AM
Tuesday October 23, 6:30 am ET
REDMOND, Wash.--(BUSINESS WIRE)--Microvision (NASDAQ: MVIS), the leader in light scanning technologies for display and imaging products, announced today it has entered into a development agreement with one of the world’s largest consumer electronics manufacturers of mobile phones, digital cameras and personal media players.
Under the agreement, the manufacturing partner will integrate Microvision’s proprietary PicoP™ display engine into fully functional stand-alone projector prototypes. The projector prototypes are expected to be marketed to leading consumer electronics companies for private labeling and distribution for mobile applications including mobile phones and other devices. The accessory prototypes will incorporate the PicoP display engine that is being developed by Microvision’s other strategic supply chain partners. For confidentiality reasons and at the request of the manufacturing partner, Microvision is not releasing the partner’s name or details regarding the expected timing of the product launch.
"In our pursuit of high volume consumer and automotive applications we are partnering with world leading manufacturers to bring the PicoP, our ultra-miniature, low power projection display technology to market," stated Alexander Tokman, Microvision President and CEO. "This agreement is another example of our commitment to accelerate PicoP’s path to market by leveraging the extensive integration and manufacturing capabilities of one of the world’s largest suppliers of mobile phones, digital cameras and personal media players."
The PicoP accessory projector is expected to be the size of today's feature-rich cell phones and will provide consumers with a 'large screen viewing experience' by projecting multimedia content from their mobile phones and other devices. The PicoP accessory projector is designed to deliver a large, full-color, WVGA display that is always in focus even when projecting onto curved surfaces. Applications for the accessory projector could include watching mobile television, movies, personal videos, photographs, web surfing, gaming and various business applications.
About Microvision www.microvision.com
Microvision provides the PicoP display technology platform designed to enable next generation display and imaging products for pico projectors, vehicle displays, and wearable displays that interface to mobile devices. The company also manufactures and sells its bar code scanner product line, which features the company's proprietary MEMS technology.
Forward-Looking Statements Disclaimer
Certain statements contained in this release, including those relating to future products and product applications, as well as statements containing words like "expects," "could," and other similar expressions, are forward-looking statements that involve a number of risks and uncertainties. Factors that could cause actual results to differ materially from those projected in the Company's forward-looking statements include the following: our ability to raise additional capital when needed; our financial and technical resources relative to those of our competitors; our ability to keep up with rapid technological change; our dependence on the defense industry and a limited number of government development contracts; government regulation of our technologies; our ability to enforce our intellectual property rights and protect our proprietary technologies; the ability to obtain additional contract awards; the timing of commercial product launches and delays in product development; the ability to achieve key technical milestones in key products; dependence on third parties to develop, manufacture, sell and market our products; potential product liability claims and other risk factors identified from time to time in the Company's SEC reports, including the Company's Annual Report on Form 10-K filed with the SEC. Except as expressly required by the federal securities laws, we undertake no obligation to publicly update or revise any forward-looking statements, whether as a result of new information, future events, changes in circumstances or any other reason.
You attended the SID Mobile Display Conference recently. How was it?
The conference was great. The entire venue was informative in all areas of displays for mobile devices. For us, the interest and progress around pico projectors was especially exciting. At last year's conference, pico projection wasn’t even on the agenda; this year an entire morning session was devoted to the subject. Microvision was invited to participate and present on the “Projection Technology for Mobile Devices” panel along with Explay, Light Blue Optics, Texas Instruments, and the market research firm Insight Media.
Tell us about what you saw regarding Mobile Projection?
We demonstrated an earlier prototype of our PicoP™ Display Engine. We briefly saw Texas Instruments' latest prototype and a demonstration of Explay’s prototype; Light Blue Optics did not provide a demonstration. We are encouraged by two things from what we saw. First, there is an overwhelming belief that consumers want a larger display experience from small mobile devices. Second, the trends in mobile devices are placing drastic demands on mobile displays in resolution, power, and size. It is clear the future of mobile devices centers on instant broadband access to all types of multimedia-rich content: movies, TV, user-generated content, web pages, gaming, and even certain business applications.
What are Microvision’s advantages over other Pico Projectors?
The advantages of our PicoP display engine can best be described by what cell phone OEMs have communicated to us. It is all about size, power, and resolution.
To be considered an embedded product, the display engine must fit volumetrically inside the device while minimally impacting its overall thinness. Through extensive consultations with cell phone companies, we believe the maximum size of any projector must be about 5 cubic centimeters with a 7 mm thickness. Our first prototypes are already very close to these requirements. The main reason we can achieve such a small package is that PicoP uses just one tiny mirror to “paint” the image, pixel by pixel, on the screen. By contrast, most of our competitors require a panel that has one element for each and every pixel, leading to a larger device. Additionally, we don’t require a large projection lens. These and other factors lead to a very compact and thin package.
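The single-mirror "painting" described above is a raster scan: the beam visits one pixel position at a time instead of lighting a whole panel at once. A simplified software sketch (hypothetical helper names; the real engine sweeps a biaxial MEMS mirror in hardware):

```python
# A minimal sketch of the scanned-beam idea: one mirror sweeps the beam
# across every pixel position in turn, so the engine needs a single
# reflective element rather than one element per pixel.
WIDTH, HEIGHT = 848, 480  # WVGA raster

def scan_frame(intensity_at):
    """Visit each pixel position the way the mirror sweep would,
    sampling the laser drive level for that instant of the scan."""
    frame = []
    for y in range(HEIGHT):  # slow (vertical) axis
        # fast (horizontal) axis sweep across one scan line
        row = [intensity_at(x, y) for x in range(WIDTH)]
        frame.append(row)
    return frame

# A diagonal test pattern: the beam is driven bright only where x == y.
frame = scan_frame(lambda x, y: 1.0 if x == y else 0.0)
print(len(frame), len(frame[0]))  # 480 848
```

The point of the sketch is structural: resolution lives in the timing of the sweep, not in a grid of physical elements, which is why the engine can stay small as resolution grows.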
Fitting inside the device is only the first hurdle; the PicoP must also be compatible with the typical batteries used in cell phones. Our PicoP display engine efficiently manages every milliwatt of power by adjusting the light intensity of each laser source as and when it is needed for each pixel. The added benefit of managing the light source this way is that very little light is wasted as heat, significantly reducing thermal issues for other components inside the mobile device. By contrast, most competing systems require the light source to be on continuously regardless of the content being displayed.
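The power advantage of modulating the source per pixel can be illustrated with a toy calculation (the power figure and the linear scaling are assumptions for illustration; real engines also carry fixed electronics overheads):

```python
# Hypothetical comparison of light-source energy use for one frame.
# A scanned-laser engine drives each pixel only as bright as the content
# needs; an always-on source burns full power regardless of content.

FULL_POWER_MW = 100.0  # assumed full-brightness source power, milliwatts

def modulated_power(frame):
    """Average source power when intensity tracks content (0.0-1.0 per pixel)."""
    return FULL_POWER_MW * sum(frame) / len(frame)

def always_on_power(frame):
    """Average source power when the source stays at full brightness."""
    return FULL_POWER_MW

# A mostly dark movie scene: 10% average pixel brightness.
dark_scene = [0.1] * 100
print(round(modulated_power(dark_scene), 1))  # 10.0 mW
print(always_on_power(dark_scene))            # 100.0 mW
```

On dark content the content-tracking source draws a tenth of the power, and the light that isn't emitted is also heat that never has to be dissipated inside the phone.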
While size and power describe the ability of a projector to be embedded and operated inside a mobile device, the display characteristics are what consumers actually judge the display on. Here we must consider advantages in terms of resolution, depth of focus, and color.
Consider resolution: most of our competitors are focused on developing projectors that will offer about QVGA resolution, which is 320 x 240 pixels, or one quarter the pixels of a VGA display. QVGA is also the resolution used on higher-end cell phones today, such as the Motorola Q or the T-Mobile Dash. However, while a QVGA display projected to a 36-inch image, for example, may be acceptable for some photographs and lower-quality video, we don’t believe it is sufficient for viewing higher-quality photos and video, web surfing, or viewing documents and presentations, all of which customers have shown they want.
Our PicoP display engine is currently at WVGA resolution (848 x 480 pixels), which is near DVD quality and can support numerous devices like the Apple iPod®, iPhone®, Sony PSP Slim, cell phones with a TV out such as the Nokia N95, digital cameras and camcorders, gaming consoles and laptops. We believe the demand for increased resolution will continue to grow as 3.5G and 4G broadband mobile networks flourish. One key advantage of our technology is that it offers a path to increased resolution over time without a large impact on the engine size. In fact, the size may decrease. We see this as a distinct advantage over any fixed panel display: greater resolution for those displays means more elements, leading to a larger engine.
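The raw pixel arithmetic behind the resolutions discussed above:

```python
# Pixel counts for the two resolutions under discussion.
qvga = 320 * 240  # 76,800 pixels
wvga = 848 * 480  # 407,040 pixels

print(qvga, wvga)             # 76800 407040
print(round(wvga / qvga, 1))  # WVGA carries ~5.3x the pixels of QVGA
```

That roughly fivefold pixel count is the gap between "acceptable for snapshots" and "readable web pages and documents."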
In our understanding, use of a projector inside a mobile device is all about spontaneity and instant enjoyment. The infinite depth of focus of the PicoP display engine allows any projected image to be in focus on any surface at any distance. This means a 10-inch image in a dark room can instantly become a 40-plus-inch image just by moving the device away from the wall.
As for color, because we use lasers we can achieve a gamut well beyond the NTSC standard, the color standard set for televisions. Human eyes are capable of seeing many more colors than current displays can reproduce. Children and teens in particular like the eye-popping colors of animated and colorful content, while for other viewers flesh tones are accurately presented.
What other advantages are there for using laser light sources?
In our opinion, laser light is the only viable technology that enables an incredibly small, thin, and low power projector that can be embedded inside cell phones. Lasers are much more efficient light sources than LEDs; as a result, LED-powered devices require more optics to collect the light, leading to a thicker display engine.
Lasers produce a display artifact called speckle. What is that?
Any type of projector using laser light will exhibit some level of noise in the image, referred to as “speckle.” This appears to some viewers as a glistening effect, typically noticeable in light-colored areas. However, as a result of our continuous progress in maturing the technology, the majority of people we have shown the PicoP to either don’t notice it or say it is very acceptable given the application space of mobile devices. Indeed, at the SID conference, a number of people looked at the PicoP image and asked, “How did you get rid of the speckle?”
Are there any product and safety certifications required for PicoP?
Yes, as there are with any consumer electronics product. Prior to commercial production, we, our supply chain partners, and our global OEM customers will meet all the requirements to deliver high-quality, safe products for the consumer. For example, the PicoP display engine is designed not to exceed a Class 2 laser product as defined by the International Electrotechnical Commission (IEC), the global body that sets laser safety standards. Additionally, as part of our product certification requirements, we would also comply with the laser-product regulations and other standards of individual countries.
Tell us about the status of Microvision’s current work on Pico Projectors?
Today, we are in the advanced development phases for PicoP enabled consumer products, starting with an accessory projector which would be a stand-alone device about the size and thickness of a PDA. The consumer could easily plug this accessory projector into various devices with video-out capabilities, such as cell phones, media players, laptops, camcorders, and more.
How are things going with Motorola?
Due to confidentiality reasons we are unable to discuss any of the terms. However, what I can say is that we are very pleased with the progress so far and excited about the future.
Any other developments that are emerging?
Yes, we’re incredibly busy—it seems that everyone in the value chain is interested in mobile projection. We expect to continue to secure new customers and partners along the way toward commercialization of the PicoP-based accessory and embedded products.
What does the future look like for Microvision five years down the road?
It is very conceivable that pico projectors could follow the same deployment growth trajectory as the color LCD display or the camera in cell phones. In 1999, there were virtually no cell phones with color LCD displays. The same is true for the tiny camera in cell phones. But look where we are today: both are in about 80% of the more than 1 billion* cell phones that will be shipped this year alone! Given the global move to broadband wireless, we think the PicoP could be an enabler of a very valuable and compelling mobile display experience.
*According to Gartner, in 2007 1.18 billion cell phones will be sold worldwide.
Published: October 16, 2007
By Bill Gates, Chairman, Microsoft Corporation
If you’ve been in the work force for 20 years or more, you can remember a time when the pace of business—and life in general—was quite a bit slower than it is today. Back then we read newspapers and magazines and watched the network news to stay informed. Faxes were just becoming a common way to share written business information. A phone call might elicit a busy signal or no one would answer at all. In those days, no one expected to send documents to coworkers on the other side of the globe instantly, collaborate in real-time with colleagues in distant cities, or share photographs the very day they were taken.
These and similar advances have delivered remarkable results. The ability to access and share information instantly and communicate in ways that transcend the boundaries of time and distance has given rise to an era of unprecedented productivity and innovation that has created new economic opportunities for hundreds of millions of people around the world and paved the way for global economic growth that is unparalleled in human history.
But few people would argue that there is no room for improvement. Although we have once-unimaginable access to people and information, we struggle today to keep track of emails and phone calls across multiple inboxes, devices, and phone numbers; to remember a growing number of passwords; and to synchronize contacts, appointments, and data between desktop PCs and mobile devices. The fact is that the proliferation of communications options has become a burden that often makes it more difficult to reach people than it used to be, rather than easier.
In 2006, I wrote about how unified communications innovations were already beginning to transform the way we communicate at work. Today, I want to provide an update on the progress we’re making toward achieving our vision for unified communications. I also want to share my thoughts on how rapid advances in hardware, networks, and the software that powers them are laying the foundation for groundbreaking innovations in communications technology. These innovations will revolutionize the way we share information and experiences with the people who are important to us at work and at home, and help make it possible to put the power of digital technology in the hands of billions of people around the globe who have yet to reap the benefits of the knowledge economy.
Moving Beyond Disconnected Communications
A fundamental reason that communicating is still so complex is the fact that the way we communicate is still bound by devices. In the office, we use a work phone with one number. Then we ask people to call us back on a mobile device using another number when we are on the go, or reach us on our home phone with yet another number. And we have different identities and passwords for our work and home email accounts, and for instant messaging.
This will change in the very near future. As more and more of our communications and entertainment is transmitted over the Internet thanks to email, instant messaging, video conferencing, and the emergence of Voice over Internet Protocol (VoIP), Internet Protocol Television (IPTV), and other protocols, a new wave of software-driven innovations will eliminate the boundaries between the various modes of communications we use throughout the day. Soon, you’ll have a single identity that spans all of the ways people can reach you, and you’ll be able to move a conversation seamlessly between voice, text, and video and from one device to another as your location and information sharing needs change. You’ll also have more control over how you can be reached and by whom: when you are busy, the software on the device at hand will know whether you can be interrupted, based on what you are doing and who is trying to reach you.
The communications expectations that young people—and anybody else who has adopted the latest digital communications tools—bring to the workplace are already changing how we do business. To them, the desk phone is an anachronism that lacks the flexibility and range of capabilities that their mobile device can provide. A generation that grew up on text messaging is driving the rapid adoption of instant messaging as a standard business communications tool. Accustomed to forming ad hoc virtual communities, they want tools that facilitate the creation of virtual workgroups. Used to collecting and storing information online, they look for team Web sites, Wikis, and other digital ways to create and share information.
A Foundation for Future Innovation
It would be hard to overstate the magnitude of the changes that are coming. Standardized, software-powered communications technologies will be the catalyst for the convergence of voice, video, text, applications, information, and transactions, making it possible to create a seamless communications continuum that extends across people’s work and home lives. This will provide the foundation for new products, services, and capabilities that will change the world in profound and often unexpected ways.
This will happen not only in developed countries where access to digital technology is the norm, but also in emerging economies around the world. Currently, about 1 billion of us have a PC, just a fraction of the world’s 6 billion people. As we make technology more accessible and simpler to use—often in the form of affordable mobile devices—we can extend new social and economic opportunities to hundreds of millions of people who have never been able to participate in the global knowledge economy. And as more and more of the world’s people are empowered to use their ideas, talents, and hard work to the fullest, the results will be new innovations that make everyone’s lives richer, more productive, and more fulfilling.
Forget that tiny screen. The PicoP turns your phone into a projector.
By Michal Lev-Ram, Business 2.0 Magazine
October 16, 2007: 12:36 PM EDT
(Business 2.0 Magazine) -- Many phones can play just about any video you want as long as you're willing to watch it on a 3-inch LCD. But a supplier of wearable displays for the U.S. Army wants to change that.
In July, Redmond-based Microvision inked a deal with Motorola to develop a built-in projector for mobile devices. The result: the PicoP, a gadget about the size of a Thin Mint that can project a high-resolution, television-screen-size image on any surface - flat or curved - from your phone.
The plug-in version, for smartphones, will arrive at the end of . Phones with embedded PicoPs should be available by 2009.
"Teens will be able to project movies and pictures," says Microvision CEO Alexander Tokman. "Business users can give the ultimate elevator-pitch presentation with their phone."
25 million phones now have the technology to support a projector. By the end of 2008, that number will double to 50 million.
Well, it was a great show, but it's great to be back in WA. The theme of our booth at the show was 'the ultimate in soldier situational awareness', and this can be illustrated most clearly by the interface design in popular warfighting simulation games, such as Ghost Recon. The soldier has access to battlefield augmented reality systems: this means that mission critical information, video feeds and real-time tactical data are superimposed on the soldier's field of view in an unobtrusive and soldier-centric user interface.
Applications such as Blue Force Tracking already integrate data from various systems to create a holistic picture of the battlefield. Disseminating this information to soldiers in real-time, in chaotic battlefield environments, requires a new kind of wearable display.
Microvision is developing see-through color eyewear to enable exactly this scenario and application space. We believe this is an exciting market opportunity, and we're committed to delivering breakthrough display products that meet the needs of our men and women in harm's way.
Hey all, I’m off to Washington D.C. for the annual AUSA Conference and Exhibition. This is the Army’s largest event of the year, and many of the senior military officers, program office officials and leading military product OEMs will be in attendance.
Microvision will be demonstrating our current wearable display initiatives, and seeking support from the military for next generation wearable displays. Every major future soldier program defines wearable displays within their future needs; it's our goal to develop the wearable display of choice for these programs. And we believe we're well positioned: our combination of a small, lightweight form factor, high brightness, rich color, and clear see-through performance is unique in the industry.
I’m excited to be going to AUSA and to represent Microvision’s ongoing efforts to provide our cutting edge wearable display technology to the men and women in the U.S. military.
Much of the enthusiasm and drive we have for our efforts comes from the knowledge and experience we have gained from evaluation of our wearable display development initiatives. These development programs have been supported over the years to address the direct visualization and electronic display needs of our troops in the field. Feedback from the initial deployment of our technology back in 2004 suggested that commanders in the field found the technology valuable to their operations and believed it made our troops safer.
Our soldiers risk their lives every day. We think we can help them perform their difficult duties while providing increased safety – and we appreciate the opportunity to work with the U.S. military to do just that.
Photo: David Stuart; Retouching: Smalldog Imageworks
Is It Live or Is It AR?
By Jay David Bolter and Blair MacIntyre
By blending digital creations with our view of the world, augmented reality is set to transform the way we entertain and educate ourselves
There are two ways to tell the tale of one Sarah K. Dye, who lived through the Union Army's siege of Atlanta in the summer of 1864. One is to set up a plaque that narrates how she lost her infant son to disease and carried his body through Union lines during an artillery exchange, to reach Oakland Cemetery and bury him there.
The other is to show her doing it.
You'd be in the cemetery, just as it is today, but it would be overlaid with the sounds and sights of long ago. A headset as comfortable and fashionable as sunglasses would use tiny lasers to paint high-definition images on your retina—virtual images that would blend seamlessly with those from your surroundings. [Editor's Note: That's Microvision.] If you timed things perfectly by coming at twilight, you'd see flashes from the Union artillery on the horizon and a moment later hear shells flying overhead. Dye's shadowy figure would steal across the cemetery in perfect alignment with the ground, because the headset's differential GPS, combined with inertial and optical systems, would determine your position to within millimeters and the angle of your view to within arc seconds.
That absorbing way of telling a story is called augmented reality, or AR. It promises to transform the way we perceive our world, much as hyperlinks and browsers have already begun to change the way we read. Today we can click on hyperlinks in text to open new vistas of print, audio, and video media. A decade from now—if the technical problems can be solved—we will be able to use marked objects in our physical environment to guide us through rich, vivid, and gripping worlds of historical information and experience.
The technology is not yet able to show Dye in action. Even so, there is quite a lot we can do with the tools at our disposal. As with any new medium, there are ways not only of covering weaknesses but even of turning them into strengths—motion pictures can break free of linear narration with flashbacks; radio can use background noises, such as the sound of the whistling wind, to rivet the listener's attention.
Along with our students, we are now trying to pull off such tricks in our project at the Oakland Cemetery in Atlanta. For the past six years, we have held classes in AR design at the Georgia Institute of Technology, and for the past three we have asked our students to explore the history and drama of the site. We have distilled many ideas generated in our classes to create a prototype called the Voices of Oakland, an audio-only tour in which the visitor walks among the graves and meets three figures in Atlanta's history. By using professional actors to play the ghosts and by integrating some dramatic sound effects (gunshots and explosions during the Civil War vignettes), we made the tour engaging while keeping the visitors' attention focused on the surrounding physical space.
We hope to be able to enhance the tour, not only by adding visual effects but also by extending its range to neighboring sites, indoors and out. After you've relived scenes of departed characters in the cemetery, you might stroll along Auburn Avenue and enter the former site of the Ebenezer Baptist Church. Inside, embedded GPS transceivers would allow the system to continue tracking you, even as you viewed a virtual Reverend Martin Luther King Jr. delivering a sermon to a virtual congregation, re-creating what actually happened on that spot in the 1960s. Whole chapters of the history of Atlanta, from the Civil War to the civil rights era, could be presented that way, as interactive tours and virtual dramas. Even the most fidgety student probably would not get bored.
By telling the story in situ, AR can build on the aura of the cemetery—its importance as a place and its role in the Civil War. The technology could be used to stage dramatic experiences in historic sites and homes in cities throughout the world. Tourists could visit the beaches at Normandy and watch the Allies invade France. One might even observe Alexander Graham Bell spilling battery acid and making the world's first telephone call: “Mr. Watson, come here.”
The first, relatively rudimentary forms of AR technology are already being used in a few prosaic but important practical applications. Airline and auto mechanics have tested prototypes that give visual guidance as they assemble complex wiring or make engine repairs, and doctors have used it to perform surgery on patients in other cities.
But those applications are just the beginning. AR will soon combine with various mobile devices to redefine how we approach the vast and growing repository of digital information now buzzing through the Internet. The shift is coming about in part because of the development of technologies that free us from our desks and allow us to interact with digital information without a keyboard. But it is also the result of a change in attitude, broadening the sense of what computers are and what they can do.
We are already seeing how computers integrate artificially manipulated data into a variety of workaday activities, splicing the human sensory system into abstract representations of such specialized and time-critical tasks as air traffic control. We have also seen computers become a medium for art and entertainment. Now we will use them to knit together Web art, entertainment, work, and daily life.
Think of digitally modified reality as a piece of a continuum that begins on one end with the naked perception of the world around us. From there it extends through two stages of "mixed reality" (MR). In the first one, the physical world is like the main course and the virtual world the condiment—as in our AR enhancement of the Oakland Cemetery. In the other stage of MR, the virtual imagery takes the spotlight. Finally, at the far end of the continuum lies nothing but digitally produced images and sounds, the world of virtual reality.
Any AR system needs three components: computer-modeled sights and sounds, a display system that melds them with physical reality, and a method for determining the user's viewpoint. Each of the three presents problems. Here we will consider only the visual elements, as they are by far the most challenging to coordinate with real objects.
The ability to model graphics objects rapidly in three dimensions continues to improve because the consumer market for games—a US $30-billion-a-year industry worldwide—demands it. The challenge that remains is to deliver the graphics to the user's eyes in perfect harmony with images of the real world. It's no mean feat.
The best-known solution uses a laser to draw images on the user's retina. [Editor's Note: That's Microvision.] There is increasing evidence that such a virtual retinal display can be done safely [see "In the Eye of the Beholder," IEEE Spectrum, May 2004]. However, the technology is not yet capable of delivering the realistically merged imagery described here. In the meantime, other kinds of visual systems are being developed and refined.
Most AR systems use head-worn displays that allow the wearer to look around and see the augmentations everywhere. In one approach, the graphics are projected onto a small transparent screen through which the viewer sees the physical world. This technology is called an optical see-through display. In another approach, the system integrates digital graphics with real-world images from a video camera, then presents the composite image to the user's eyes; it's known as a video-mixed display. The latter approach is basically the same one used to augment live television broadcasts—for example, to point out the first-down line on the field during a football game [see "All in the Game," Spectrum, November 2003].
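The video-mixed approach described above comes down to per-pixel blending: each camera frame is combined with a rendered graphics layer before the composite reaches the user's eyes. The sketch below illustrates the arithmetic on a single row of grayscale pixels; a real system would composite full-color frames on a GPU, and all names here are illustrative.

```python
# Alpha compositing, the core of a video-mixed display: where the virtual
# layer is opaque (alpha = 1.0) the viewer sees graphics; where it is
# transparent (alpha = 0.0) the camera image shows through unchanged.

def composite(camera_row, graphics_row, alpha_row):
    """Blend a graphics layer over one row of camera pixels."""
    return [
        round(a * g + (1 - a) * c)
        for c, g, a in zip(camera_row, graphics_row, alpha_row)
    ]

camera   = [100, 100, 100, 100]   # the real-world image
graphics = [255, 255, 255, 255]   # a bright virtual object
alpha    = [0.0, 0.5, 1.0, 0.0]   # the object covers the middle pixels

print(composite(camera, graphics, alpha))  # [100, 178, 255, 100]
```

An optical see-through display skips this step entirely: the blending happens in the viewer's eye, which is why such displays struggle to render dark or opaque virtual objects.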
PARIS, ENHANCED: Nokia's prototype mobile AR system couples a camera, a cellphone, GPS, accelerometers, and a compass to follow the user through a city and point out all the sights.
Some of the most compelling work uses mobile phones to combine Internet-based applications with the physical and social spaces of cities. Many such projects exploit the phone's GPS capabilities to let the device act as a navigational beacon. The positional information might let the phone's holder be tracked in cyberspace, or it might be used to let the person see, on the phone's little screen, imagery relevant to the location.
Meanwhile, new phones are coming along with processors and graphics chips as powerful as those in the personal computers that created the first AR prototypes a decade ago. Such phones will be able to blend images from their cameras with sophisticated 3-D graphics and display them on their small screens at rates approaching 30 frames per second. That's good enough to offer a portal into a world overlaid with media. A visitor to the Oakland Cemetery could point the phone's video camera at a grave (affixed with a marker, called a fiducial) and, on the phone's screen, see a ghost standing at the appropriate position next to the grave.
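The fiducial workflow just described can be sketched as a simple lookup: the camera pipeline detects a marker's ID and position in the frame, and the application decides which virtual figure to draw there. Marker detection itself (handled in practice by libraries such as ARToolKit) is stubbed out here, and the IDs, names, and offsets are invented for illustration.

```python
# Hypothetical mapping from fiducial marker IDs to the virtual figures
# that should appear beside the graves they mark.
GHOSTS = {
    17: "Sarah K. Dye",
    42: "Confederate soldier",
}

def place_overlay(marker_id, marker_x, marker_y):
    """Return (figure, screen_x, screen_y) for the overlay to draw,
    or None if the detected marker is not one of ours."""
    figure = GHOSTS.get(marker_id)
    if figure is None:
        return None
    # Draw the ghost standing just above the marker on the grave.
    return (figure, marker_x, marker_y - 40)

print(place_overlay(17, 120, 300))  # ('Sarah K. Dye', 120, 260)
```

A full system would also recover the marker's orientation, so the 3-D figure could be rendered in correct perspective as the phone moves, but the ID-to-content lookup is the same.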
Video and computer games have been the leading digital entertainment technology for many years. Until recently, however, the games were entirely screen-based. Now they, too, are moving onto mobile devices and out into the physical environment around us, as in an AR fishing game called Bragfish, which our students have created in the past year. Players peer into the handheld screens of game devices and work the controls, steering their boats and casting their lines to catch virtual fish that appear to float just above the tabletop. They see a shared pond, and each other's boats, but they see only the fish that are near enough to their own boats for their characters to detect.
We can imagine all sorts of casual games for children and even for adults in which virtual figures and objects interact with surfaces and spaces of our physical environment. Such games will leave no lasting marks on the places they are played. But people will be able to use AR technology to record and recall moments of social and personal engagement. Just as they now go to Google Maps to mark the positions of their homes, their offices, their vacations, and other important places in their lives, people will one day be able to annotate their AR experience at the Oakland Cemetery and then post the files on something akin to Flickr and other social-networking sites. One can imagine how people will produce AR home movies based on visits to historic sites.
Ever more sophisticated games, historic tours, and AR social experiences will come as the technology advances. We represent the possibilities in the form of a pyramid, with the simplest mobile systems at its base and fully immersive AR on top. Each successive level of technology enables more ambitious designs, but with a smaller potential population of users. In the future, however, advanced mobile phones will become increasingly widespread, the pyramid will flatten out, and more users will have access to richer augmented experiences.
Fully immersive AR, the goal with which we began, may one day be an expected feature of visits to historic sites, museums, and theme parks, just as human-guided tours are today. AR glasses and tracking devices will one day be rugged enough and inexpensive enough to be lent to visitors, as CD players are today. But it seems unlikely that the majority of visitors will buy AR glasses for general use as they buy cellphones today; fully immersive AR will long remain a niche technology. [Editor's Note: I'll disagree with this last contention; there's no reason why cool AR glasses could not be a mass market phenomenon, akin to Bluetooth earbuds, etc -- most especially given all these cool applications described in this article.]
On the other hand, increasingly ubiquitous mobile technology will usher in an era of mixed reality in which people look at an augmented version of the world through a handheld screen. [Editor's Note: Again, why use a handheld screen if you have cool AR glasses?] You may well pull information off the Web while walking through the Oakland Cemetery or along Auburn Avenue, sharing your thoughts as well as the ambient sounds and views with friends anywhere in the world.
At the beginning of the 20th century, when Kodak first sold personal cameras in the tens of thousands, the idea was to build a sort of mixed reality that blended the personal with the historic (“Here I am at the Eiffel Tower”) or to record personal history (“Here's the bride cutting the cake”). AR will put us in a kind of alternative history in which we can live through a historic moment—the Battle of Gettysburg, say, or the “I have a dream” speech—in a sense making it part of our personal histories.
Mobile mixed reality will call forth new media forms that skillfully combine the present and the past, historical fact and its interpretation, entertainment and learning. AR and mobile technology have the potential to make the world into a stage on which we can be the actors, participating in history as drama or simply playing a game in the space before us.
Excerpts from a Transcript of Remarks by Bill Gates, Chairman, Microsoft Corporation
Microsoft CEO Summit 2007
May 16, 2007
BILL GATES: Well, good morning. I have the honor of getting to talk about where technology is going, and not just the breakthroughs in technology, but also the changes in how it's being used. If you look at things like how people think about the Internet and video, over the last couple of years it's dramatically different. It's really become the mainstream of how people think about creating, distributing, and getting video, and that has some implications that are pretty profound.
Now, part of the reason that things keep changing is that the pace of innovation is very, very rapid. We see that at the chip level where we still have the ability to double the number of transistors on these chips every two years or so, and it looks like there's another decade where that will continue; so no limitation in terms of the kind of power that's coming out of these devices.
Now, the size of the hardware also makes a very big difference. We continue to reduce the number of components, reduce the size, and so we're getting even some new form factors. You can think about, where do PCs stop if you take smaller and smaller PCs, and how far up do phones go, and is there a gap there in the middle? Media devices like the iPod or navigation devices or dedicated reading devices, will there be specialized things in between or will these two more horizontal platforms that run many applications come down and even overlap each other? I tend to believe that the phone will move up and the PC will move down and there won't be any special device categories, because the power of being able to run any application, whether it's media, reading, navigation, is very strong.
And so as we get down into these form factors and you think about some phones that are coming out with bigger screens and motion video, you can see that they really are meeting and then just taking on the applications like still photography, motion video, incredible maps where you can see data about commercial establishments or traffic or even your friends that are nearby showing up on that map.
One new thing is taking the phone that you carry around and putting some of the business information there. E-mail has become pretty standard, as has calendar sharing, but also documents about customers that you might be going to visit, or metrics that you might need to be updated on because there might be something urgent to be done about them; having the software platform expand information empowerment out even to that small screen device is becoming far more typical, and the environments are making that far easier to do.
The last thing here I wanted to mention is that with video arriving on the Internet, it's possible for a company to create essentially their own TV channel. And it's actually better than a TV channel in that people can come in and watch at any time video on demand.
Now, communications, when we say it's changing, you might ask, well, why? What's so unique about this timeframe? Well, historically the networks limited what could be done. The quality of the voice was quite limited. You were working with phone numbers. You had no idea who was where. And so that hardware piece limited the creativity of what you could do.
I want to talk now a little bit about media and how that's changing. Historically, only very high volume media had a simple distribution channel. So, if you had something that was interesting only to a few hundred people, and those people were spread out all over the world, it just wasn't economic to print something or get a channel, to be able to carry on that type of communication.
Well, the Internet has really changed that. No matter what device you're connecting up with, the ability to find people with a common interest, the ease and low cost of authoring means that even these small groups now can effectively have shows and newsletters or any type of thing that only worked for mass media in the past.
And as we drive this forward, it's interesting how it's changing behaviors. This is an article that was in Variety just this last week, and it talks about how these young writers and producers actually meet at night and play videogames. They don't go to the same house and play poker, they don't meet on the golf course; they're sitting there in these games talking to each other with the headset, and it's actually considered you have a strong relationship with somebody if they are willing to give you their gamertag, which is the thing that lets you connect up and play with other people.
The phone itself is changing quite rapidly. We have a big investment in doing software that runs on the platform. We work with a wide variety of hardware manufacturers, including people like Samsung, Motorola, HTC, doing different devices. We're way beyond this just being a voice communication device. I talked a little bit about how maps and photos are coming into this. Media storage will be one of the things that works very well, particularly as the capacities go up there.
Some of the things are in the early stages. The idea of how you do payment with this device, it's actually interesting given the pervasiveness of mobile phones to think about this being a device that can actually bring banking and financial services to people where it wasn't economic to do it. So, actually even my foundation is looking at this as an empowering tool. As the technology allows you to see your savings account and transfer money and those types of things, it's going to make a very big difference.
The innovation in terms of form factors is pretty incredible. You know, every year I see a lot of new devices out there, certain ones of them catch on, but richer and richer in terms of what they're doing and very much a software driven device.
Now, mobility and the Internet are changing media in quite a dramatic way. Historically a typeset document was a sign that a big company had made the document or the brochure. Well, personal computers changed that some time ago. With the printers that you have, you can use those rich fonts, and everybody can make great documents.
Well, it still wasn't the case for things like video. The kind of editing tools and bringing things in, sound editing really required pretty large budgets. But now what you can do on a standard PC is actually almost as good as the most expensive system. So creating is different, distribution is different, and consumption is changing.
One fun example of this is that advertising is moving to be more embedded in certain ways. A little bit of that is because of the ad skipping that people are seeing, but it's also just creative ideas. We bought a company called Massive that's the leader in putting ads into a videogame experience, and so those ads come up when you're watching the baseball game or when you're racing around the track. We've also got what we call Virtual Earth, where you can go and see a city; the photo on the right there is a view of the 3-D buildings, and that's actually downtown Seattle. As you navigate around, we actually put up these virtual billboards, and you can see that little orange ad there in the distance. That doesn't really exist, but we can put it up without interfering with your walking around and seeing the buildings, clicking a building you want to go into. And so there's advertising inventory in that Virtual Earth experience.
Well, it's going to take some time before people understand whether that's really valuable real estate, whether it causes people to do something different, but our guess is that if you come in and say, you know, I want to find a restaurant, I'm going to a movie nearby, then the idea that that billboard can come up and suggest something to you, that's a context of great value.
And so the types of advertising are quite different, and we have the flexibility to experiment with a lot of things. We can put video onto that virtual billboard. It's not limited to just seeing a static display like a typical real world advertisement.
So, what do we see looking ahead? No slowdown in the rate of innovation. Even if the innovation stopped, of course, there would be a lot that would change as people were taking advantage of what we have today, but that innovation curve is creating new opportunities all the time.
Some of the change agents here are the younger people who are so immersed in these digital activities, in their entertainment, in the way they think about college and their courses, in connecting up with their friends, that they come into the workplace expecting to use these tools in a rich way, and expecting that the companies they do business with will be state of the art in how these things are done. So, year by year, things will move to the place where that productivity and information empowerment will simply be expected.
We've got a long way to go on these things, but the reason we've got our R&D up at record levels, and the industry as a whole is investing in record levels is there are so many opportunities to take the hardware platform that has gotten so much better, and build software on top of that that can create far more natural experiences. And so that's why it's a very exciting thing to be involved in, and I'm sure it will create opportunities for all of you.