University team competition focuses on crowdsourcing


A handful of universities are studying how crowdsourcing can be used.

Maybe you’ve got a hunch Kim Jong Il’s regime in North Korea has seen its final days, or that the Ebola virus will re-emerge somewhere in the world in the next year.

Your educated guess may be just as good as an expert’s opinion. Statistics have long shown that large crowds of average people frequently make better predictions about unknown events, when their disparate guesses are averaged out, than any individual scholar—a phenomenon known as the wisdom of crowds.
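The statistical effect described here can be sketched with a small simulation (the numbers are illustrative, not drawn from the study): individual errors tend to cancel out when averaged, so the crowd's mean estimate lands far closer to the truth than a typical lone guesser.

```python
import random

random.seed(42)  # reproducible illustration

true_value = 100.0   # the unknown quantity the crowd is estimating
num_people = 500     # crowd size, mirroring the article's forecaster pool

# Each person's guess is the truth plus independent random error.
guesses = [true_value + random.gauss(0, 20) for _ in range(num_people)]

# The crowd's estimate is the simple average of all guesses.
crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - true_value)

# For comparison: the average error of a single individual.
individual_error = sum(abs(g - true_value) for g in guesses) / len(guesses)

print(f"crowd error:              {crowd_error:.2f}")
print(f"typical individual error: {individual_error:.2f}")
```

Because the guesses' errors are independent, the error of the average shrinks roughly with the square root of the crowd size, which is why large crowds routinely beat individual experts on this kind of estimation task.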

Now the nation’s intelligence community, with the help of university researchers and regular folks around the country, is studying ways to harness and improve the wisdom of crowds. The research could one day arm policy makers with information gathered by some of the same methods that power Wikipedia and social media.

In a project that is part competition and part research study, George Mason professors Charles Twardy and Kathryn Laskey are assembling an online team of more than 500 forecasters who make educated guesses about a series of world events, on everything from disease outbreaks to agricultural trends to political patterns.

They are competing with four other teams led by professors at several universities. Each differs in its approach, but all are studying how crowdsourcing can be used.

At stake is grant money provided by the Intelligence Advanced Research Projects Activity (IARPA), part of the Office of the Director of National Intelligence, which heads up the nation’s intelligence community.

Put simply, crowdsourcing occurs when a task is assigned to a wide audience rather than a specific expert or group of experts. The online encyclopedia Wikipedia is one of the most prominent examples—anyone can write or edit an entry. Over time, the crowds refine and improve the product. Crowdsourcing can range from a simple question blasted to a person’s Twitter followers to amateur programmers fine-tuning open-source software.

IARPA spokeswoman Cherreka Montgomery said her project’s goal is to develop methods to refine and improve on crowdsourcing in a way that would be useful to intelligence analysts.

“It’s all about strengthening the capabilities of our intelligence analysts,” Montgomery said.

And if analysts can use crowdsourcing to better determine the likelihood of seemingly unpredictable world events, those analysts can help policy makers be prepared and develop smarter responses. In a hypothetical example, a crowd-powered prediction about the breakout of popular uprisings in the Middle East could influence what goes in a dossier given to decision-makers at the highest levels.

The program at George Mason is called DAGGRE, short for Decomposition-based Aggregation. The researchers have used blog postings, Twitter, and other means to get the word out about their project to potential participants. No specialized background is required, though a college degree is preferred.

The project seeks to break down various world events into their component parts. The stability of Kim Jong Il’s regime in North Korea provides an example. One forecaster might base his prediction solely on political factors. But what if the political experts could be guided by health experts, who might observe that Kim’s medical condition is flagging?
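The article doesn’t publish DAGGRE’s actual model, but decomposition in this sense can be illustrated with the law of total probability, combining a health expert’s input with a political expert’s. All the probabilities below are hypothetical.

```python
# Hypothetical decomposition: the chance of regime change is built
# from two experts' component estimates rather than one person's gut call.

p_health_crisis = 0.4           # health expert: chance of a serious health crisis
p_change_given_crisis = 0.7     # political expert: chance of regime change if crisis
p_change_given_no_crisis = 0.1  # political expert: chance of regime change if no crisis

# Law of total probability: weight each conditional estimate by how
# likely its condition is, then sum.
p_regime_change = (p_change_given_crisis * p_health_crisis
                   + p_change_given_no_crisis * (1 - p_health_crisis))

print(round(p_regime_change, 2))  # 0.7*0.4 + 0.1*0.6 = 0.34
```

The point of the decomposition is that neither expert has to estimate the headline question directly; each contributes only the component they actually know something about.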

The DAGGRE participants key their answers into forms on the project’s website and, at the outset, supply information about their education and areas of expertise. The scholars overseeing the project will then seek to break down the variables that influence a forecaster’s prediction and use the data so that people with disparate knowledge bases can help guide one another to the most accurate forecast.

Military and intelligence researchers have long studied ways to improve the ability to predict the future. In 2003, the Defense Advanced Research Projects Agency (DARPA) launched research to see whether a terrorist attack could be predicted by allowing speculative trading in a financial market, in which people would make money on a futures contract if they bet on a terrorist attack occurring within a designated time frame. The theory was that a spike in the market could serve as a trip wire that an attack was under way. But some found the idea ghoulish, and others objected to the notion that a terrorist could conceivably profit by carrying out an attack, and the research was halted.

Laskey said George Mason’s research bears some fundamental similarities with the discontinued DARPA research, with the crucial difference that nobody participating in George Mason’s project can profit from making accurate predictions. But participants who make accurate predictions are rewarded with a point system, and there is a leaderboard of sorts for participants to measure their success. Some can also choose to receive a small stipend for their time, but it’s not tied to how they answer questions.
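The article doesn’t specify how DAGGRE’s points are computed, but a standard way to score probability forecasts in this kind of research is the Brier score, the squared error between the stated probability and the eventual 0/1 outcome. The sketch below is illustrative, not DAGGRE’s actual formula.

```python
def brier_score(forecast_prob: float, outcome_occurred: bool) -> float:
    """Squared error between a probability forecast and the 0/1 outcome.

    Lower is better: 0.0 is a perfect forecast, 1.0 the worst possible.
    """
    actual = 1.0 if outcome_occurred else 0.0
    return (forecast_prob - actual) ** 2

# A confident, correct forecast scores better (lower) than a hedged one,
# and a confident, wrong forecast is penalized heavily.
print(round(brier_score(0.9, True), 4))   # confident and right
print(round(brier_score(0.6, True), 4))   # hedged and right
print(round(brier_score(0.9, False), 4))  # confident and wrong
```

A scoring rule like this rewards both accuracy and honest calibration: a forecaster gains nothing by claiming more certainty than they actually have.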

Another team, led by psychologists at the University of California, Berkeley, and the University of Pennsylvania, is focused on asking questions in ways that minimize experts’ overconfidence and misjudgment, said Don Moore, a professor at Berkeley.

“Small wording changes in a question can have a huge effect” on how a person answers, Moore said.

Twardy said the George Mason study has already drawn more than 500 participants, though only about half are active. The study continues to recruit as some participants drop out over its four-year course.

Participants come from all walks of life. While Twardy said he’d love to have, say, agronomists on his team to help forecast European policies and responses to mad cow disease and the cattle trade, the overriding principle is that people from various backgrounds can contribute to the crowd’s collective wisdom, so participation is not restricted by field of expertise.

George Mason received a $2.2 million grant from IARPA to conduct the study. If the team remains in the competition for the full four years—weaker teams are at risk of being discontinued—the grant will be increased to $8.2 million.

Twardy expects to publish the results of his research and hopes it will ultimately help world leaders make more informed choices when they confront global crises.

“At some level, you cannot predict the future,” Twardy said. “But you can do a lot better than just asking an expert.”
