Posted by Rob Knies
If you are feeling hungry, you go to the kitchen. If you’d like to take a swim, you head to a swimming pool. If you want to catch a movie, you’re bound for a theater. And, Danyel Fisher says, if you’re interested in data, you open Excel.

“Excel is where data lives,” says Fisher, a researcher with the Visualization and Interaction for Business and Entertainment (VIBE) team at Microsoft Research Redmond. “When people have data to organize, in any form, it usually passes through Excel at some point—sometimes, just as a quick way to look at it, and sometimes, with tools like Flash Fill and charting and sorting—that’s where it stays.

“Data visualizations are incredibly powerful and fun ways for users to understand their data.”
On Nov. 26, 2011, the Mars rover Curiosity was launched from Cape Canaveral, a trip that will have taken more than eight months before Curiosity lands on the surface of the Red Planet. With excitement peaking in the days before the landing, Microsoft and NASA are using the event as an opportunity to enable youngsters to learn computational skills and explore the Martian terrain by using Kodu: Mars Edition.

Developed in cooperation with NASA’s Mars Public Engagement Program, led by the Jet Propulsion Laboratory (JPL), and Microsoft Research’s FUSE Labs, Kodu: Mars Edition lets children create games for the PC or Xbox using a simple, visual programming language. The aim of the collaboration is to create compelling learning experiences that develop students’ competency in science, technology, engineering, and mathematics (STEM), along with 21st-century skills.
Over the last few months, I’ve published a series of feature stories to outline the contributions Microsoft Research has made to the groundbreaking Kinect for Xbox 360 product, which Guinness World Records has dubbed the fastest-selling consumer electronic device ever. This week, the Kinect team is marking the one-year anniversary of Kinect. With that in mind, I offer this video, which provides a valuable overview of the research behind Kinect, featuring research personnel from around the world: Ivan Tashev from Microsoft Research Redmond, Baining Guo from Microsoft Research Asia, Jamie Shotton and Andrew Fitzgibbon from Microsoft Research Cambridge, and Oliver Williams from Microsoft Research Silicon Valley. And Alex Kipman, general manager for Incubation for Microsoft’s Interactive Entertainment Business, provides a product-group perspective on the contributions.

For more detail, you can take a look at our feature stories, the latest of which was published just yesterday:
Posted by Jennifer Chayes, Microsoft distinguished scientist and managing director of Microsoft Research New England
We’re thrilled to announce that three leading researchers will be joining danah boyd and the social-media research team at Microsoft Research New England, based in Cambridge, Mass.
Microsoft Research produces some of the strongest computer-science research in the world. As the world changes and our business expands, there’s a much broader range of research questions we need to address beyond technology itself: how we use that technology, why we want to use it, and how different cultural norms within the United States and other countries affect how we approach future technology development.
The term “natural user interfaces” has been in vogue in recent months, generally invoked to describe different ways that humans can interact with computing devices beyond the longtime pairing of keyboard and mouse. Surface computing is one example with its roots in Microsoft Research. Kinect functionalities also benefited from work in Microsoft Research labs. Now, scientists at Microsoft Research Asia are examining ways that you can interact with computers using … your face. Qiufeng Yin, a software-development engineer at that Beijing-based lab, explains.

“We envision a world in which mobile devices—phones, tablets, sensors—become more and more ubiquitous,” Yin says. “We hope to make such devices more human-friendly. They can be personalized to a user, and the face is another important, though underutilized, area for interaction, in addition to voice and touch.”