Posted by George Thomas Jr.
"Selma" director Ava DuVernay wasn't nominated, but she still could affect who wins the Oscar. (Photo by Atsushi Nishijima. ©2014 Paramount Pictures)
(Editor's note: Researcher David Rothschild is receiving significant press coverage the day after the Oscars, as he correctly predicted 20 of 24 awards.)
While it may be difficult to name a year in which the Oscars were not embroiled in some controversy over the nominees, this year's seems particularly pronounced: criticism has spanned numerous categories, centering on the lack of nominations for “Selma” and, on a lighter note, incredulity that “The Lego Movie” did not receive an Animated Feature Film nomination.
"Birdman" director and co-writer Alejandro González Iñárritu with cinematographer Emmanuel Lubezki. (Photo by Alison Rosa. ©2014 Fox Searchlight)
To mis-paraphrase a song from that excluded film, everything is not awesome in Oscar-nomination land.
Yet, as Microsoft researcher David Rothschild (@DavMicRot) notes, that doesn’t mean the un-nominated can’t affect the Oscar winners in those “disputed” categories. “Absence in the category makes a difference in the distribution of votes for the remaining choices,” he says.
Rothschild, an economist with Microsoft’s New York City research lab, specializes in predictive analytics. His model correctly predicted 21 of 24 Oscar winners last year and 19 of 24 winners in 2013. But he says the “Selma” controversy is having a “particularly interesting” effect on this year’s predictions.
“As this controversy ebbed and flowed over the last few weeks, we had to make sure to take into account how it may affect the final votes,” he says, noting that several key factors affect the prediction model.
So how can a prediction model successfully account for voters’ shifting sentiment?
“Whenever I create predictions I focus on several attributes beyond accuracy, including flexibility, scalability, and timeliness,” Rothschild says, noting timeliness -- ensuring the prediction is constantly updated with the latest information -- as particularly problematic.
But what among the “latest information” is reliably predictive? “One way to take this into account is to follow the public opinion on the topic and assume that public opinion will translate into pressure on the voters,” Rothschild says. However, he notes, “public opinion is only loosely related to winners in the Oscars, because the sentiment and expectations of the general public are closely tied to name recognition and popularity.”
Microsoft researcher David Rothschild says prediction markets are reliable indicators of how people will vote.
Rothschild says it’s also difficult to match public opinion to a specific category. Further complicating matters, “there are not even many alternatives to express dissatisfaction in the nomination process, in that there is no clear movie for people to support in lieu of Selma’s lack of nominations.”
Which is why, once the Oscar voting starts, his focus shifts to prediction markets.
“Prediction markets follow a select group of people who have high levels of information on what voters will do and are willing to wager real money on the outcomes,” he says. “And prediction-market-based forecasts have been incredibly accurate,” hence the aforementioned successful predictions of previous years’ Oscar winners.
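To make the idea concrete, here is a minimal sketch of how prediction-market contract prices can be read as win probabilities. This is not Rothschild's actual model; the nominees and prices below are hypothetical. A contract that pays $1 if a nominee wins typically trades near its implied probability, but prices across a category rarely sum exactly to 1, so a common first step is to normalize them:

```python
# A minimal sketch (not Rothschild's model) of turning raw prediction-market
# contract prices into win probabilities for one award category.

def market_prices_to_probabilities(prices):
    """Normalize contract prices (dollars per $1 payout) into probabilities."""
    total = sum(prices.values())
    return {nominee: price / total for nominee, price in prices.items()}

# Hypothetical prices for $1-if-it-wins contracts in a single category.
prices = {
    "Nominee A": 0.55,
    "Nominee B": 0.40,
    "Other nominees": 0.15,
}

probabilities = market_prices_to_probabilities(prices)
for nominee, p in sorted(probabilities.items(), key=lambda x: -x[1]):
    print(f"{nominee}: {p:.0%}")  # prices sum to 1.10, so each is scaled down
```

Because the forecast is just a function of current prices, re-running it whenever the market moves gives the constant updating that Rothschild describes as "timeliness."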
So, aside from what is relatively unpredictable, what is the model predicting now?
Of course, that’s as of the time this post was published, with timeliness not factored into the equation. "Had you asked me about the likely winners in the top six categories last week, two of them would have been different," Rothschild says. Visit predictwise.com to see the up-to-date full list of the Oscar nominees most likely to win, as well as predictions about politics, sports, and economic and financial topics.