Posted by Rob Knies
The term “natural user interfaces” has been in vogue in recent months, generally invoked to describe ways that humans can interact with computing devices beyond the longtime pairing of keyboard and mouse. Surface computing, with its roots in Microsoft Research, is one example. Kinect functionality also benefited from work in Microsoft Research labs.

Now, scientists at Microsoft Research Asia are examining ways that you can interact with computers using … your face. Qiufeng Yin, a software-development engineer at that Beijing-based lab, explains.

“We envision a world in which mobile devices—phones, tablets, sensors—become more and more ubiquitous,” Yin says. “We hope to make such devices more human-friendly. They can be personalized to a user, and the face is another important, though underutilized, channel for interaction, in addition to voice and touch.”
Last August, my colleague Janie Chang wrote a feature story titled Speech Recognition Leaps Forward, published on the Microsoft Research website. The article outlined how Dong Yu, of Microsoft Research Redmond, and Frank Seide, of Microsoft Research Asia, had extended the state of the art in real-time, speaker-independent automatic speech recognition.

Now, that improvement has been deployed to the world. Microsoft is updating the Microsoft Audio Video Indexing Service with new algorithms that enable customers to take advantage of the improved accuracy detailed in a paper Yu, Seide, and Gang Li, also of Microsoft Research Asia, delivered in Florence, Italy, during Interspeech 2011, the 12th annual conference of the International Speech Communication Association.

The release marks the first time a company has shipped a deep-neural-network (DNN)-based speech-recognition algorithm in a commercial product. It’s a big deal. The benefits, says Behrooz Chitsaz, director of Intellectual Property Strategy for Microsoft Research, are improved accuracy and faster processing.
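At the heart of a DNN-based recognizer, an acoustic model maps each frame of audio features to a probability distribution over speech units. The sketch below is only an illustration of that forward pass, not the actual Microsoft model: the layer sizes, weights, and the three-class “senone” output are all made-up toy values.

```python
import math, random

random.seed(0)  # deterministic toy weights

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid nonlinearity."""
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

def softmax(xs):
    """Turn raw scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy sizes: 4 acoustic features -> 8 hidden units -> 3 output classes
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
b1 = [0.0] * 8
W2 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(3)]
b2 = [0.0] * 3

frame = [0.2, -0.5, 0.1, 0.9]  # one frame of acoustic features (made up)
hidden = layer(frame, W1, b1)
posteriors = softmax([sum(w * h for w, h in zip(ws, hidden)) + b
                      for ws, b in zip(W2, b2)])
# posteriors is a probability distribution over the output classes
```

Real systems stack many such layers and feed the per-frame posteriors into a decoder that searches for the most likely word sequence.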
Posted by Tony Hoare, winner of the A.M. Turing Award in 1980
Can computers understand their own programs?
From my earliest days as a student of philosophy and classics at Merton College, Oxford, I was attracted into computing by the prospect that it would shed light on some of the age-old problems of philosophy. These include investigation of the scope and limits of human understanding, intelligence, and reasoning.
The early computer scientist Alan Turing, more famous for his work as a mathematician and cryptanalyst, conceived the stored-program universal digital computer in 1936, with the specific purpose of defining clear limits to the understanding a computer can have of its own programs. He proved that a computer cannot always reliably answer a most important question: will its own program ever terminate? This impossibility result, the halting problem, has lain at the foundation of computer science ever since.
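Turing’s theorem says no general procedure can decide, for every program, whether it terminates. What a computer can do is semi-decide the question: run the program for a bounded number of steps and report success only if it finishes in time. A minimal sketch of that idea (the helper names and toy programs are mine, not from any real tool):

```python
def halts_within(gen, fuel):
    """Semi-decision procedure: step a generator for at most `fuel` steps.
    True means it definitely halted; False means only 'not yet' —
    it can never mean 'never halts'."""
    for _ in range(fuel):
        try:
            next(gen)
        except StopIteration:
            return True  # the program finished within the budget
    return False  # budget exhausted; verdict is genuinely unknown

def terminating():
    """A toy program that halts after three steps."""
    for i in range(3):
        yield i

def looping():
    """A toy program that never halts."""
    while True:
        yield 0

halts_within(terminating(), 10)  # True: observed to halt
halts_within(looping(), 10)      # False: but no finite fuel can prove a loop
```

The asymmetry in the comments is exactly Turing’s point: halting can be confirmed by observation, but non-halting can never be confirmed by any uniform finite procedure.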
You’re looking for a photo of a flower. Not just any photo—it needs to be horizontal in shape. And not just any flower—it needs to be a purple flower.

What do you do? You could perform a conventional image search on the web. There are lots of flowers out there—lots of shapes, lots of colors. Poke around for a while, and you just might find what you need.

Alternatively, you can use the filter bar in Bing Image Search, which has been augmented by work from Microsoft Research Asia. You type in a textual query, “flower,” filter for “purple,” “photograph,” and “wide,” and voilà, a collection of horizontal shots of purple flowers pops up.

The color filter is thanks, in large part, to research by Jingdong Wang and Shipeng Li. They are in Providence, R.I., from June 16 to 21, attending the Institute of Electrical and Electronics Engineers’ 2012 Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2012), during which they are presenting their paper Salient Object Detection for Searched Web Images via Global Saliency, written in collaboration with Peng Wang, Gang Zeng, Jie Feng, and Hongbin Zha of the Key Laboratory on Machine Perception at Peking University.
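Conceptually, once the research has attached attributes such as dominant color to each indexed image, the filter bar becomes a conjunction of metadata predicates. The sketch below is a toy illustration of that filtering step only; the records, field names, and thresholds are invented, and the hard part—detecting the salient object and its color, the subject of the paper—is assumed to have happened already.

```python
# Hypothetical pre-computed image metadata; a real index is far richer.
images = [
    {"id": 1, "tags": {"flower", "purple"}, "kind": "photograph",
     "width": 1600, "height": 900},
    {"id": 2, "tags": {"flower", "red"}, "kind": "photograph",
     "width": 800, "height": 1200},
    {"id": 3, "tags": {"flower", "purple"}, "kind": "clipart",
     "width": 1024, "height": 576},
]

def search(records, query, color, kind, wide=True):
    """Keep records matching the text query, dominant color, media kind,
    and a 'wide' (landscape) aspect ratio."""
    return [r for r in records
            if query in r["tags"]
            and color in r["tags"]
            and r["kind"] == kind
            and (r["width"] > r["height"]) == wide]

hits = search(images, "flower", "purple", "photograph")
# only record 1 survives: a wide, purple flower photograph
```

Each filter simply prunes the candidate set, which is why combining “purple,” “photograph,” and “wide” narrows thousands of flower images down to exactly the shots you described.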
You’re in a hurry. You’ve rushed to the nearest shopping mall during your lunch hour, looking for one item, one item only. It’s a five-minute task, except for finding the store with the right item—and you’re not familiar with the location of the store. Uh-oh.

You’re shopping for groceries. The shopping list is on your mobile phone, and you visit this store all the time, so you know where everything is—except for that one special request your spouse made. Where in the world could that be? Uh-oh.

GPS technology has altered forever the way people navigate their outdoor surroundings. With GPS, the above scenarios would be easily overcome. If only we had GPS available indoors … but we don’t. Some have tried Wi-Fi-based approaches to indoor localization, but those efforts have encountered challenges.

That could change, though, in the wake of FM-based Indoor Localization, a paper to be presented during The Tenth International Conference on Mobile Systems, Applications, and Services (MobiSys 2012), to be held June 25 to 29 along the shores of Lake Windermere in the U.K. Lake District. The paper—written by Dimitrios Lymberopoulos, Jie Liu, and Bodhi Priyantha of Microsoft Research Redmond, along with Yin Chen of Johns Hopkins University, then a Microsoft Research intern—offers an alternative technique.
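Indoor localization schemes of this kind commonly work by fingerprinting: survey the building once, recording ambient signal strengths at known spots, then locate a device by finding the stored fingerprint closest to what it currently hears. The following is a minimal nearest-neighbor sketch of that general idea, not the method in the paper; every location name and RSSI value is hypothetical.

```python
import math

# Hypothetical survey: location -> signal strengths (dBm) from 3 stations
fingerprints = {
    "lobby":   [-60, -72, -80],
    "aisle_3": [-55, -70, -90],
    "cafe":    [-75, -65, -62],
}

def locate(observed):
    """Return the surveyed location whose fingerprint is nearest to the
    observed readings, using Euclidean distance in signal space."""
    return min(fingerprints,
               key=lambda loc: math.dist(fingerprints[loc], observed))

locate([-56, -69, -88])  # closest to the "aisle_3" fingerprint
```

The appeal of broadcast FM for this role is that its signals penetrate buildings well and the transmitter infrastructure already exists; the engineering challenge the authors tackle is making such fingerprints distinctive and stable enough to be useful.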