Microsoft’s been awfully busy at this year’s SIGGRAPH conference: members of the company’s research division have already shown how they can interpret speech from the vibrations of a potato chip bag and turn shaky camera footage into an experience that feels like flying. Look at the list of projects Microsofties have been working on long enough, though, and a theme emerges: These folks are really into capturing motion, depth and object deformation with the help of slightly specialized hardware.
Just about everybody carries a camera nowadays by virtue of owning a cell phone, but few of these devices capture the three-dimensional contours of objects the way a depth camera can. Depth cameras are quickly gaining prominence in pocket-sized devices: if our phones can capture the contours of everything from street corners to the arrangement of your living room, developers can create applications ranging from better interactive games to helpful guides for the visually impaired. Yet while efforts like Google’s Project Tango are building depth cameras into mobile gadgets, new research from Microsoft shows that with some simple modifications and machine-learning techniques, an ordinary smartphone camera or webcam can be used as a 3-D depth camera. The goal is to make 3-D application development more accessible by lowering the cost and technical barriers to entry, and to make the depth cameras themselves much smaller and less power-hungry.
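The article doesn’t detail Microsoft’s pipeline, but the broad framing — learning to predict per-pixel depth from an ordinary camera’s intensities — can be sketched in miniature. The snippet below is a hypothetical illustration, not the researchers’ actual method: it fits a simple least-squares model mapping small intensity patches to depth on synthetic data where brighter (IR-lit) pixels are assumed to be nearer the camera.

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_features(img, k=1):
    """Stack the (2k+1)x(2k+1) intensity patch around every interior pixel."""
    h, w = img.shape
    feats = []
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            feats.append(img[k + dy:h - k + dy, k + dx:w - k + dx].ravel())
    return np.stack(feats, axis=1)  # (n_interior_pixels, patch_size)

# Synthetic "training scene": assume depth falls off linearly with brightness,
# a toy stand-in for the reflectance cues a real learned model would exploit.
img = rng.random((32, 32))
true_depth = 2.0 - 1.5 * img

X = patch_features(img)                       # (900, 9) patch features
y = true_depth[1:-1, 1:-1].ravel()            # depth at each interior pixel

# Least-squares fit: depth ≈ X @ w + b (bias folded in as a constant column).
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Predict a depth map for a new frame drawn from the same toy scene family.
test_img = rng.random((32, 32))
Xt = np.hstack([patch_features(test_img), np.ones(((32 - 2) ** 2, 1))])
pred = (Xt @ w).reshape(30, 30)
err = np.abs(pred - (2.0 - 1.5 * test_img)[1:-1, 1:-1]).max()
```

A real system would replace the linear model with a far richer learned regressor and train on genuine camera/depth pairs; the sketch only shows the regression framing, not the hardware modifications the research also involves.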