Posted by Rob Knies

KinectFusion collage

A scintillating portion of Microsoft Research Cambridge’s Sept. 27 event marking the 20th anniversary of Microsoft Research came during a late-afternoon panel discussion titled Old World, NUI World: The Future of Digital Interaction.

The panel included Shahram Izadi, Andrew Fitzgibbon, and Jamie Shotton of Microsoft Research Cambridge, along with Tom Rodden of Nottingham University. A key part of the discussion was a demo of KinectFusion, a system for real-time 3-D reconstruction that is quickly gaining acclaim as a dazzling extension of the capabilities of Kinect for Xbox 360.

The demo came toward the end of an afternoon that featured introductory remarks by Andrew Blake, Microsoft distinguished scientist and managing director of Microsoft Research Cambridge.

He then introduced a panel discussion called The War on Error: Failure, Fun, and the Future, featuring Byron Cook, Andy Gordon, and Jasmin Fisher of the Cambridge facility, along with Peter O’Hearn of Queen Mary University.

Following that came a series of presentations from Microsoft Research Cambridge personnel, bracketing a collection of 12 demos that attendees got a chance to view during a mid-afternoon break. The talks included:

In the demo of KinectFusion during the panel discussion on natural user interfaces at the end of the event, the audience saw how the system takes live depth data from a moving Kinect camera and creates high-quality, geometrically accurate 3-D models in real time. KinectFusion enables a user holding the camera to move quickly through an indoor space and create a fused 3-D model of a room and its contents—within seconds, and down to the millimeter level.

That’s a succinct explanation, but if you’d like to be wowed as the attendees of the Cambridge symposium were, or as was the writer at Engadget who termed the technology “nothing short of jaw-dropping,” take a look at this explanatory video. Many already have; it has been seen on YouTube more than 100,000 times.

“We are really excited that there has been so much interest in our research project,” said Izadi. “The Kinect device has enabled a whole raft of exciting new possibilities beyond gaming. For example, imagine that you want to see whether that new piece of furniture you saw in the store earlier that day will fit in your living room, and what it might look like in situ. This would be easy to view in accurate 3-D using KinectFusion.”

KinectFusion offers a tantalizing glimpse into a new generation of usage, extending from gaming to augmented reality—and beyond.

It tracks the six-degree-of-freedom pose of the camera while building a representation of the geometry of arbitrary surfaces. New GPU-based implementations of camera tracking and surface reconstruction, built as parallelized versions of two well-known graphics algorithms tailored to general-purpose GPU hardware, enable the system to run at interactive rates.
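The article does not name the two algorithms, but camera tracking of this kind is commonly done with iterative closest point (ICP) alignment between the incoming depth frame and the current model. As an illustrative sketch only, not the published KinectFusion implementation, the code below shows the closed-form rigid-alignment step (the Kabsch/Procrustes solution) at the heart of a point-to-point ICP iteration; the function name and the use of exact correspondences are assumptions for the example.

```python
import numpy as np

def estimate_rigid_pose(src, dst):
    """Least-squares rigid alignment: find rotation R and translation t
    minimizing ||(src @ R.T + t) - dst|| over corresponding 3-D points.
    This is the inner step of one point-to-point ICP iteration."""
    # Center both point clouds on their centroids.
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # Cross-covariance and its SVD give the optimal rotation.
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against a reflection (det = -1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Usage: recover a known rotation/translation from matched points.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
dst = src @ R_true.T + t_true
R, t = estimate_rigid_pose(src, dst)
```

In a full tracker this step would repeat, re-matching points between the depth frame and the reconstructed surface until the pose converges; running the matching and error minimization per-pixel is what makes a GPU implementation attractive.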

KinectFusion refines its models as it is used. Motions—even those from camera shake—provide new viewpoints that improve the models. As the camera moves closer to objects, more detail is captured and added to the model.
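This kind of continual refinement is typically achieved by fusing each new depth frame into a voxel grid of truncated signed-distance values with a per-voxel weighted running average, so noise cancels and detail accumulates over time. The sketch below illustrates that fusion rule under those assumptions; the function name and the weight cap are hypothetical choices for the example, not details from the article.

```python
import numpy as np

def fuse_tsdf(tsdf, weight, new_tsdf, new_weight, max_weight=64.0):
    """Per-voxel weighted running average of signed-distance values.

    Each new depth frame contributes (new_tsdf, new_weight); averaging
    smooths sensor noise, while capping the accumulated weight keeps the
    model responsive to genuinely new observations."""
    fused = (tsdf * weight + new_tsdf * new_weight) / (weight + new_weight)
    fused_weight = np.minimum(weight + new_weight, max_weight)
    return fused, fused_weight

# Usage: fuse one new observation into two voxels.
tsdf = np.array([1.0, -0.5])        # current signed distances
weight = np.array([2.0, 2.0])       # how many observations so far
new_tsdf = np.array([0.0, 0.5])     # distances from the latest frame
new_weight = np.array([1.0, 1.0])
fused, fused_weight = fuse_tsdf(tsdf, weight, new_tsdf, new_weight)
```

Moving the camera closer effectively adds higher-resolution observations for the voxels near the object, which is why the model gains detail as described above.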

The demonstration provided further evidence of the kind of computer-science breakthroughs Microsoft Research has been contributing for the past 20 years. The mind boggles at what might be forthcoming over the next two decades.

As you might have guessed, there are anniversary events on tap in the United States, too. Next stop: Silicon Valley.