Posted by Rob Knies
The term “natural user interfaces” has been in vogue in recent months, generally invoked to describe different ways that humans can interact with computing devices beyond the longtime pairing of keyboard and mouse. Surface computing is one example with its roots in Microsoft Research. Kinect functionality also benefited from work in Microsoft Research labs. Now, scientists at Microsoft Research Asia are examining ways that you can interact with computers using … your face. Qiufeng Yin, a software-development engineer at that Beijing-based lab, explains.

“We envision a world in which mobile devices—phones, tablets, sensors—become more and more ubiquitous,” Yin says. “We hope to make such devices more human-friendly. They can be personalized to a user, and the face is another important, though underutilized, area for interaction, in addition to voice and touch.”
Posted by Jennifer Chayes, Microsoft distinguished scientist and managing director of Microsoft Research New England
We’re thrilled to announce that three leading researchers will be joining danah boyd and the social-media research team at Microsoft Research New England, based in Cambridge, Mass.
Microsoft Research produces some of the strongest computer-science research in the world. As the world changes and our business expands, there’s a much broader range of research questions we need to address beyond technology itself, including how we use that technology, why we want to use that technology, and how different cultural norms within the United States and other countries affect how we approach future technology development.
Last year, David Rothschild of Microsoft Research New York City used a versatile, data-driven model to predict correctly the results of the U.S. presidential election in 50 of 51 jurisdictions—the nation’s 50 states and the District of Columbia.

Given the overwhelming accuracy—better than 98 percent—of those predictions, it’s no wonder that the work of Rothschild and a few other individuals trying to learn how to harness the value of big data gained the attention of the news media. “Some things,” wrote Steven Cherry in IEEE Spectrum, “are predictable—if you go to the people who rely on data and not their gut.”

People, in other words, like Rothschild, who readily admits that his role is to “push the boundaries of information aggregation.”

Now, as the next effort in his quest to make use of big data to reinvent how we think about predictions and forecasting—and, coincidentally, to make potential contributions to enable Microsoft to build better products and services—Rothschild has turned his predictive attention toward another major media event of global proportions: the Academy of Motion Picture Arts and Sciences' 85th annual Academy Awards.
Setting an objective is often the first step to achieving it. Case in point: Image Watch.

Image Watch is a Visual Studio 2012 plug-in from the Interactive Visual Media (IVM) group at Microsoft Research Redmond. The tool enables anyone building image-processing applications to visualize images just as they would any other variable within the Visual Studio integrated development environment.

“The tool—which works for Windows Phone, Windows Store, or desktop apps—began with a straightforward objective,” says Wolf Kienzle, a senior research software-design engineer with the IVM team.
One of the featured technologies on display on April 23, the first day of Microsoft Research Machine Learning Summit 2013, was Infer.NET, a powerful, compelling .NET library from Microsoft Research Cambridge.

Infer.NET is an example of model-based machine learning, as explained by Tom Minka from the Cambridge lab during a morning talk.

“It’s about trying to get more people to try machine learning,” said Minka, a senior researcher. “The traditional approach to this is that experts build prepackaged learners that are very generic and apply in a robust way to different data sets. But the problem with that approach is that it doesn’t account for domain knowledge. In lots of areas where we want to use machine learning, such as vision or speech or ecology, there is very strong domain knowledge.”
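To make the model-based idea concrete, here is a minimal sketch in Python, not Infer.NET itself, of how domain knowledge can enter a learner as a prior that inference then updates with data. The Beta-Bernoulli model, prior values, and observations below are illustrative assumptions, not from the talk.

```python
def beta_bernoulli_posterior(alpha, beta, observations):
    """Update a Beta(alpha, beta) prior with a list of 0/1 observations.

    Conjugacy makes the posterior another Beta distribution:
    each success adds 1 to alpha, each failure adds 1 to beta.
    """
    successes = sum(observations)
    failures = len(observations) - successes
    return alpha + successes, beta + failures

# Domain knowledge encoded as a prior: we believe the event is rare
# (prior mean alpha / (alpha + beta) = 1 / 10 = 0.1).
prior_alpha, prior_beta = 1.0, 9.0

# Hypothetical observations (1 = event occurred, 0 = it did not).
data = [0, 0, 1, 0, 0, 0, 1, 0]

post_alpha, post_beta = beta_bernoulli_posterior(prior_alpha, prior_beta, data)
posterior_mean = post_alpha / (post_alpha + post_beta)
print(post_alpha, post_beta, posterior_mean)
```

The point of the sketch is Minka’s contrast: instead of handing the data to a generic prepackaged learner, the modeler states what is known about the domain (here, the rarity of the event) and lets inference combine that knowledge with the observations.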