Geniuses Show They Care at TED
MONTEREY, California -- Saving the world trumped profit margins for a few days last week, as millionaires, billionaires and other assorted luminaries convened here to mull the future of the planet at the exclusive Technology Entertainment and Design conference.

I have thought a whole lot about Jeff Han's incredible Multi-Touch Interaction demonstration since I found it and posted it to this blog. In the words of one reader, "this changes everything." And I really think that is absolutely true. We don't need to use a clumsy puck and a little pointer anymore. We can use gestures and language and interact with computing devices on our terms, in ways that are natural for us. Sitting hunched over a desk inputting data (as I am at this moment) is really an archaic way to interact with supposedly cutting-edge modern technology.
Among this year's invitees were Google founders Larry Page and Sergey Brin, venture capitalist John Doerr, actress Meg Ryan, blogging guru Arianna Huffington and the heads of numerous car manufacturers, movie studios and advertising agencies, as well as technology and design firms.
But one of the biggest conference hits was Jeff Han, a consultant with New York University’s computer science department, who wowed the audience with his multi-touch computerized table display. Forget mouse clicks and keystrokes: Han's system resembles a photographer's light box and allows users to view and manipulate data with multiple fingers.
Conference-goers likened the system to the one Tom Cruise used in Minority Report, in which he donned special gloves to call up and manipulate digital data in mid-air. Han's system doesn't conjure data out of thin air, but it also doesn't require gloves. A projector casts the desktop image onto an acrylic tabletop, and embedded sensors read input from any place on the screen simultaneously. A user can move items around with many fingers and stretch, shape or resize them more precisely than point-and-click interfaces allow.
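The key claim here is "simultaneously" — the surface reports every finger each frame, and the software has to keep track of which finger is which from one frame to the next. Here is a hypothetical sketch of that tracking step, assuming the hardware simply delivers a list of (x, y) blob coordinates per frame; the function name and the greedy nearest-neighbor matching strategy are my own illustration, not Han's actual implementation:

```python
import math


def match_touches(tracked, detected, max_dist=50.0):
    """Greedily match newly detected blob positions to existing touch IDs.

    tracked:  dict mapping touch id -> (x, y) from the previous frame
    detected: list of (x, y) blob positions from the current frame
    Returns an updated dict; detections with no nearby prior touch
    get fresh IDs (a new finger landing on the surface).
    """
    updated = {}
    remaining = list(detected)
    for tid, pos in tracked.items():
        if not remaining:
            break
        # Pair each existing touch with its closest new detection.
        nearest = min(remaining,
                      key=lambda p: math.hypot(p[0] - pos[0], p[1] - pos[1]))
        if math.hypot(nearest[0] - pos[0], nearest[1] - pos[1]) <= max_dist:
            updated[tid] = nearest
            remaining.remove(nearest)
    # Anything left over is a brand-new touch point.
    next_id = max(tracked, default=-1) + 1
    for p in remaining:
        updated[next_id] = p
        next_id += 1
    return updated


# Two fingers drift slightly; a third finger touches down.
touches = match_touches({0: (10, 10), 1: (100, 100)},
                        [(102, 98), (12, 11), (300, 300)])
```

Real systems use more robust assignment (and must handle fingers lifting off), but this captures why multi-point sensing is the hard part the hardware solves.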
Han manipulated NASA satellite imagery maps to zoom swiftly in and out of mountain ranges and crevasses, and tipped the map with his fingers to view it from any angle. Scattering digital-photo images onto the display, he pushed and pulled photos to resize them, then called up a keyboard image, super-sized it for easy viewing and typed captions. With a quick touch, he reduced the size of the captions, slid them into place beneath the pictures, then tapped the keyboard to make it all disappear.
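The stretch-and-resize gesture described above comes down to simple geometry: the scale factor is the ratio of the new to the old distance between two fingers, and any rotation is the change in the angle of the line joining them. A minimal sketch of that calculation, under those assumptions (the function name is mine, not from Han's system):

```python
import math


def pinch_transform(p1_old, p2_old, p1_new, p2_new):
    """Derive the scale and rotation implied by two moving touch points.

    Each point is an (x, y) tuple. Returns (scale, rotation_radians):
    scale > 1 means the fingers spread apart (zoom in / enlarge).
    """
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    scale = dist(p1_new, p2_new) / dist(p1_old, p2_old)
    rotation = angle(p1_new, p2_new) - angle(p1_old, p2_old)
    return scale, rotation


# Spreading two fingers from 100 px apart to 200 px apart doubles the photo.
scale, rotation = pinch_transform((0, 0), (100, 0), (0, 0), (200, 0))
```

Applying the resulting scale and rotation to the on-screen object each frame is what makes the manipulation feel direct and continuous rather than click-driven.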
Han hoped to pick conference attendees' brains on ways to develop applications for his system, and pique the interest of companies that might manufacture it.
"Google could really use one of these in their lobby," he joked hopefully.
When I see what can only be described as a huge breakthrough like this, I feel really energized and inspired. The title of this article is apt. What we are witnessing is true genius. To MVIS, I say this: Hire this man, Jeff Han.
If we are really serious about "participating in both sides of the interface" (display and data input) then we need this guy. He's already leading the way, showing the world a real device that allows computers to become so much more than they are, to be capable of such greater levels of interaction, more intuitive control, more freedom. A dumb terminal that accepts input from a keyboard and mouse will one day (maybe soon?) look as arcane and ridiculous as old vacuum tube "supercomputers."
The future is going to be built and visionaries are going to lead the way there. There's no question about that. I'd sure like to have this guy working for me. This thing is un-freaking-believable.
Have a great weekend everybody!