7. Adding the Computer Crowd to the Human Crowd

Written by Patrick Meier

Investigative journalists and human rights practitioners have for decades used a mix of strategies to verify information in emergency and breaking news situations. This expertise is even more in demand with the growth of user-generated content.

But many are increasingly looking to “advanced computing” to accelerate, and possibly automate, the process of verification. As with any other technique, using advanced computing to verify social media content in near real time carries both promise and pitfalls.

Advanced computing consists of two elements: machine computing and human computing. The former uses techniques from natural language processing (NLP) and machine learning (ML), while the latter draws on crowdsourcing and microtasking methods.

The application of advanced computing to verify user-generated content is limited right now because the field of research is still new; the verification platforms and techniques described below are still being developed and tested. As a result, exactly how much value they will add to the verification process remains to be seen. Still, continued advances in technology are likely to bring new ways to automate elements of that process.

This is an important moment in the application of advanced computing to verify user-generated content: Three new projects in this field are being developed. This chapter provides an overview of them, along with background on how human and machine computing are being used (and combined) in the verification process. As we dive in, let me add a disclaimer: I spearheaded the digital humanitarian response efforts described below - for Haiti, the Philippines and Pakistan. I am also engaged in the Verily project and the creation of the Twitter Credibility Plugin, both of which are mentioned below.

Human computing

In human computing, also referred to as crowd computing, a machine outsources certain tasks to a human or crowd. The machine then collects and analyzes the processed tasks.
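
A minimal Python sketch may help make this pattern concrete: a machine splits a job into microtasks, sends each one to several human workers, and aggregates the answers. The majority-vote aggregation and the simulated workers are illustrative assumptions, not the workings of any particular platform.

```python
# A toy human-computing loop: the machine farms out microtasks to people
# and aggregates their answers. Workers are simulated with functions here;
# on a real platform they would be human volunteers.
from collections import Counter

def microtask(items, workers, redundancy=3):
    """Send each item to several workers and keep the majority answer."""
    results = {}
    for item in items:
        answers = [worker(item) for worker in workers[:redundancy]]
        results[item] = Counter(answers).most_common(1)[0][0]
    return results

# Hypothetical workers with varying diligence.
workers = [
    lambda tweet: "damage" if "collapsed" in tweet else "no damage",
    lambda tweet: "damage" if any(w in tweet for w in ("collapsed", "down"))
                  else "no damage",
    lambda tweet: "no damage",  # a worker who misses the reference
]
print(microtask(["bridge collapsed in the storm"], workers))
# -> {'bridge collapsed in the storm': 'damage'}
```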

An early use of human computing in an emergency was after the Haiti earthquake in 2010. Ushahidi Inc. set up a Web-based human computing platform to microtask the translation of urgent text messages from Haitian Creole into English. These messages came from disaster-affected communities in and around Port-au-Prince. The translated texts were subsequently triaged and mapped to the Ushahidi Haiti Crisis Map. While the translation of the texts was the first and only time that Ushahidi used a human computing platform to microtask crisis information, the success of this computer science technique highlighted the value it added in disaster response.

Human computing was next used in 2012 in response to Typhoon Pablo in the Philippines. At the request of the United Nations, the Digital Humanitarian Network (DHN) collected and analyzed all tweets posted during the first 48 hours after the typhoon made landfall. More specifically, DHN volunteers were asked to identify all the pictures and videos posted on Twitter that revealed damage caused by the strong winds and rain. To carry out this operation, the DHN used the free and open-source microtasking platform CrowdCrafting to tag individual tweets and images. The processed data was then used to create a crisis map of disaster damage.

The successful human computing response to Typhoon Pablo prompted the launch of a new, streamlined microtasking platform called MicroMappers. Developed using CrowdCrafting software, MicroMappers was first used in September 2013 to tag tweets and images posted online following the Baluchistan earthquake. This operation was carried out by the DHN in response to a request by the U.N. in Pakistan.

In sum, human computing is just starting to gain traction in the humanitarian community. But it has thus far not been used to verify social media content.

Verily platform

The Verily platform that I am helping to develop uses human computing to rapidly crowdsource evidence that corroborates or discredits information posted on social media. We expect Verily to be used to help sort out conflicting reports of disaster damage, which often emerge during and after a major disaster. Of course, the platform could be used to verify images and video footage as well.

Verily was inspired by the Red Balloon Challenge, which was launched in 2009 by the Defense Advanced Research Projects Agency (DARPA). The challenge required participants to correctly identify the location of 10 red weather balloons planted across the United States.

The winning team, from MIT, found all 10 balloons in less than nine hours without ever leaving their computers. Indeed, they turned to social media, and Twitter in particular, to mobilize the public. At the beginning of the competition, the team announced that rather than keeping the $40,000 cash prize if they won, they would share the winnings with members of the public who assisted in the search for the balloons. Notably, they incentivized people to invite members of their social network to join the hunt, writing: “We’re giving $2000 per balloon to the first person to send us the correct coordinates, but that’s not all - we’re also giving $1000 to the person who invited them. Then we’re giving $500 to whoever invited the inviter, and $250 to whoever invited them, and so on.”
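
To see how the recursive incentive works in practice, the short Python sketch below computes the payouts along one hypothetical referral chain. The halving rule is taken from the team’s announcement; the names and the chain itself are invented.

```python
# Payouts for one balloon under the MIT team's recursive incentive scheme.
def balloon_payouts(referral_chain, finder_reward=2000.0):
    """Return {person: payout}, halving the reward at each step up the chain.

    referral_chain runs from the finder back through successive inviters.
    """
    payouts = {}
    reward = finder_reward
    for person in referral_chain:
        payouts[person] = reward
        reward /= 2  # $2,000, $1,000, $500, $250, ...
    return payouts

# Hypothetical chain: dana found the balloon; carl invited dana; and so on.
print(balloon_payouts(["dana", "carl", "bea", "adam"]))
# -> {'dana': 2000.0, 'carl': 1000.0, 'bea': 500.0, 'adam': 250.0}
```

Because each payout is half the one below it, the total per balloon is bounded by twice the finder’s reward ($4,000), so ten balloons could never exceed the $40,000 prize no matter how long the invitation chains grew.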

The Verily platform uses the same incentive mechanism in the form of points. Instead of looking for balloons across an entire country, however, the platform facilitates the verification of social media reports posted during disasters in order to cover a far smaller geographical area - typically a city.

Think of Verily as a Pinterest board with pinned items that consist of yes or no questions. For example: “Is the Brooklyn Bridge shut down because of Hurricane Sandy?” Users of Verily can share this verification request on Twitter or Facebook and also email people they know who live nearby.

Those who have evidence to answer the question post to the Verily board, which has two sections: One is for evidence that answers the verification question affirmatively; the other is for evidence that provides a negative answer.

The type of evidence that can be posted includes text, pictures and videos. Each piece of evidence posted to the Verily board must be accompanied by an explanation from the person posting as to why that evidence is relevant and credible.

As such, a parallel goal of the Verily project is to crowdsource critical thinking. The Verily platform is expected to launch at www.Veri.ly in early 2014.
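
To make the board’s structure concrete, here is a minimal Python sketch of a Verily-style data model. The class and field names are hypothetical rather than Verily’s actual schema, but they capture the two-sided board and the required explanation.

```python
# A toy model of a verification board with "yes" and "no" evidence columns.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    author: str
    kind: str          # "text", "picture" or "video"
    content: str       # the text itself, or a URL to the media
    explanation: str   # why this evidence is relevant and credible

@dataclass
class VerificationBoard:
    question: str  # a yes/no question pinned to the board
    yes_evidence: List[Evidence] = field(default_factory=list)
    no_evidence: List[Evidence] = field(default_factory=list)

    def post(self, evidence: Evidence, supports: bool) -> None:
        # Every post must carry the explanation that drives critical thinking.
        if not evidence.explanation:
            raise ValueError("evidence must explain its relevance and credibility")
        (self.yes_evidence if supports else self.no_evidence).append(evidence)

board = VerificationBoard("Is the Brooklyn Bridge shut down because of Hurricane Sandy?")
board.post(Evidence("jdoe", "picture", "http://example.com/bridge.jpg",
                    "Photo taken from the Manhattan side this afternoon."),
           supports=True)
```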

Machine computing

The 8.8 magnitude earthquake that struck Chile in 2010 was widely reported on Twitter. As is almost always the case, along with this surge of crisis tweets came a swell of rumors and false information.

One such rumor warned of a tsunami in Valparaiso. Another reported looting in some districts of Santiago. Though these types of rumors do spread, recent empirical research has demonstrated that Twitter has a self-correcting mechanism. A study of tweets posted in the aftermath of the Chilean earthquake found that Twitter users typically push back against noncredible tweets by questioning their credibility.

By analyzing this pushback, researchers have shown that the credibility of tweets can be predicted. Related data-driven analysis has also revealed that tweets with certain features are often false. For example, the length of a tweet, the sentiment of the words used, and the number of hashtags and emoticons all provide indicators of the likely credibility of the tweet’s message. The same goes for tweets that include links to images and videos: The language of a tweet that links to multimedia content can be used to predict whether that content is credible.

Taken together, these data provide machines with the parameters and intelligence they need to begin predicting the accuracy of tweets and other social media content. This opens the door to a bigger role for automation in the verification process during disasters and other breaking news and emergency situations.
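
As a rough illustration, the Python sketch below pulls the kinds of surface features named above out of a tweet’s text. The exact feature definitions, and the emoticon pattern in particular, are simplified assumptions rather than the feature set used in the research.

```python
# Extract simple surface features that studies have linked to credibility.
import re

EMOTICONS = re.compile(r"[:;=][-']?[()DPpOo]")  # a small illustrative set

def credibility_features(text: str) -> dict:
    return {
        "length": len(text),
        "num_hashtags": text.count("#"),
        "num_emoticons": len(EMOTICONS.findall(text)),
        "has_link": "http" in text,  # links to images/videos are a signal
    }

print(credibility_features("Tsunami warning for Valparaiso!! #chile http://t.co/x"))
# -> {'length': 53, 'num_hashtags': 1, 'num_emoticons': 0, 'has_link': True}
```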

In terms of practical applications, these findings are being used to develop a “Credibility Plugin” for Twitter. This involves my team at the Qatar Computing Research Institute working in partnership with the Indraprastha Institute of Information Technology in Delhi, India.

This plugin would rate individual tweets on a scale from 0 to 100 based on the probability that the content of a given tweet is credible. The plugin is expected to launch in early 2014. The main advantage of this machine computing solution is that it is fully automated, and thus more scalable than the human computing platform Verily.
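
Here is a minimal sketch of how such a 0-to-100 score could be produced: train a classifier on labeled examples and map its predicted probability of credibility onto the scale. The tiny training set and the scikit-learn model are illustrative assumptions, not the plugin’s actual implementation.

```python
# Map a classifier's probability of credibility onto a 0-100 score.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled tweets: 1 = judged credible, 0 = not credible.
train = [
    {"length": 118, "num_hashtags": 1, "num_emoticons": 0, "has_link": True},
    {"length": 96,  "num_hashtags": 0, "num_emoticons": 0, "has_link": True},
    {"length": 40,  "num_hashtags": 5, "num_emoticons": 3, "has_link": False},
    {"length": 28,  "num_hashtags": 4, "num_emoticons": 2, "has_link": False},
]
labels = [1, 1, 0, 0]

vec = DictVectorizer()
model = LogisticRegression().fit(vec.fit_transform(train), labels)

def credibility_score(features: dict) -> int:
    """Return the predicted probability of credibility as a 0-100 score."""
    prob = model.predict_proba(vec.transform([features]))[0, 1]
    return round(prob * 100)

print(credibility_score({"length": 110, "num_hashtags": 1,
                         "num_emoticons": 0, "has_link": True}))
```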

Hybrid computing

The Artificial Intelligence for Disaster Response (AIDR) platform is a hybrid of the human and machine computing models.

The platform combines human computing (microtasking) with machine computing (machine learning). Microtasking is taking a large task and splitting it into a series of smaller tasks. Machine learning involves training a computer, using examples, to perform a specified task.

AIDR enables users to teach an algorithm to find information of interest on Twitter. The teaching process is done using microtasking. For example, if the Red Cross were interested in monitoring Twitter for references to infrastructure damage following a disaster, then Red Cross staff would use AIDR’s microtasking interface to tag (select) individual tweets that refer to damage. The algorithm then would learn from this process and automatically find additional tweets that refer to damage.
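
The Python sketch below mimics that tag-then-learn loop, with a small scikit-learn text classifier standing in for AIDR’s learning component. The seed tweets and the damage tags are invented for illustration.

```python
# Step 1 (human computing): volunteers tag a seed set of tweets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tagged = [
    ("bridge collapsed on the main road", "damage"),
    ("power lines down across the district", "damage"),
    ("sending thoughts and prayers to everyone", "no damage"),
    ("streets here are clear, all fine", "no damage"),
]
texts, tags = zip(*tagged)

# Step 2 (machine computing): learn from the human-tagged examples.
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(texts, tags)

# Step 3: the trained model tags the incoming stream automatically;
# in a real deployment, uncertain tweets could be routed back to volunteers.
for tweet in ["hospital roof torn off by the wind", "stay safe everyone"]:
    print(classifier.predict([tweet])[0], "->", tweet)
```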

This hybrid computing approach can be used to automatically identify rumors based on an initial set of tweets referring to those rumors. Rapidly identifying rumors and their source is an important component of verifying user-generated content. It enables journalists and humanitarian professionals to track information back to its source, and to know whom to contact to take the next essential step in verifying the information.

To be sure, the goal should not only be to identify false or misleading information on social media but to counter and correct this information in near real time. A first version of AIDR was released in November 2013.

Accelerating the verification process

As noted earlier, verification platforms powered by advanced computing are still nascent, so their ultimate value for verifying user-generated content remains to be seen. Even if these platforms bear fruit, their early iterations will face important constraints. But this early work is essential to moving toward meaningful applications of advanced computing in the verification process.

One current limitation is that AIDR and the upcoming Credibility Plugin described above are wholly dependent on just one source: Twitter. Cross-media verification platforms are needed to triangulate reports across sources, media and language. While Veri.ly comes close to fulfilling this need, it relies entirely on human input, which does not scale easily.

In any event, these solutions are far from being the silver bullet of verification that many seek. Like other information platforms, they too can be gamed and sabotaged with sufficient time and effort. Still, these tools hold the possibility of accelerating the verification process and are likely to only advance as more effort and investment are made in the field.

