Posted by Rob Knies
Now, here’s an interesting one: The latest video in Channel 9’s Microsoft Research Luminaries series features John Platt (@johnplattml) and explores his work in the resurgent research area of artificial intelligence (AI), its close cousin, machine learning, and the impact of deep learning on those fields.

Platt, a Microsoft distinguished scientist and deputy managing director of Microsoft Research Redmond, tells interviewer Larry Larsen that he has been with Microsoft for 17 years, but that he has spent no fewer than 32 years in the AI domain. At Larsen’s prompting, Platt then attempts to define and differentiate what is meant by the terms “AI” and “machine learning.”

“They’re very intertwined,” he begins. “I would define artificial intelligence as software that’s trying to emulate the human mind. That’s often specific to a domain, like computers that can see—that’s computer vision—or computers that can listen, which is speech recognition, or computers that can read text, which would be natural language processing or text mining. So that’s AI: emulating the human mind.

“Machine learning is a set of techniques that turn a data set into a set of software. It’s an alternative way of programming. Instead of writing a specification and then trying to hand-build a piece of software that matches the specification, you have a data set and perhaps some desired output or behaviors that you’d like when you see elements of that data set. Machine learning will generate a piece of software that will match your goals on that data set.”

That bit of instructive clarification complete, Platt then outlines the history of AI, going all the way back to Alan Turing and the dawn of computing.
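Platt’s description of machine learning as “an alternative way of programming” can be sketched in a few lines of Python. This is purely a toy illustration of the idea, not anything from Microsoft’s systems: rather than hand-writing a classification rule, we give the program labeled examples and let it generate the rule from the data.

```python
# Toy illustration of "data set in, software out":
# learn a one-dimensional threshold classifier from labeled examples
# instead of hard-coding the threshold by hand.

def learn_threshold(examples):
    """Given (value, label) pairs with labels 0 or 1, pick the
    threshold that classifies the most training examples correctly.
    The returned threshold is the 'generated software'."""
    values = sorted(v for v, _ in examples)
    # Candidate thresholds: midpoints between consecutive values.
    candidates = [(a + b) / 2 for a, b in zip(values, values[1:])]
    best_t, best_correct = None, -1
    for t in candidates:
        correct = sum((v > t) == bool(label) for v, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# A data set with "desired outputs": values labeled 1 or 0.
training = [(2, 0), (3, 0), (4, 0), (10, 1), (12, 1), (15, 1)]
threshold = learn_threshold(training)

def classify(value, t=threshold):
    """The learned program: predict 1 if the value exceeds the threshold."""
    return int(value > t)
```

Here the “specification” is implicit in the six training pairs; the learner searches for the decision rule that best matches them, which is the essence of the contrast Platt draws with hand-built software.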
He then pivots to discuss how AI can be enhanced via deep learning, and then to the Cortana dialog system for Windows Phone 8.1, before turning his conversational focus to one of Microsoft’s recent research successes, Project Adam.

That work demonstrates that large-scale, commodity distributed systems can train huge deep neural networks effectively, as evidenced by the world’s best photograph classifier, spanning 22,000 categories and 14 million images. Project Adam, from a small team headed by researcher Trishul Chilimbi, is 50 times faster than, and more than twice as accurate as, the best previous effort in this direction.

“Trishul and I are speculating about what kinds of amazing things we can build with Project Adam,” Platt says. “What sort of large-scale systems can we build, and how much amazing AI stuff can we do? I’m very excited.”

His enthusiasm is contagious, as you’ll see when you watch the video. Learn more at the Machine Learning blog.