Our goal at Expert Labs is to give policy makers the tools and techniques that enable them to use crowdsourcing to make better decisions. The first big experiment in this effort was our support for the White House's Office of Science and Technology Policy, which sought to identify the Grand Challenges in science and technology which could yield significant breakthroughs in the future.
Today, we're excited to share some of the first results of that effort, including the full datasets of collected replies from the call for responses, an evaluation of the technical performance of the ThinkTank application that our community created to support the effort, and an assessment of areas that we can improve on in future experiments. We will also be sharing some of these findings today at the AAAS Forum on Science & Technology Policy in Washington, DC and asking for our community's help in our next steps.
- Over 2,000 replies to the request for innovation were collected across Twitter and Facebook in just over 48 hours.
- The initial prompts from the White House's social network accounts (such as @whitehouse on Twitter) were forwarded by enough individuals to nearly double the number of people who saw the prompts.
- The ThinkTank platform succeeded in capturing all of the replies submitted, on both Twitter and Facebook. You can view the captured data right now.
- AAAS was also able to increase the number of replies to the request for innovation that were sent via email, by prompting its members and affiliates to respond as well.
- Today, we'll be asking our community to analyze and present the data collected thus far, and to help turn those analysis and visualization tools into parts of the ThinkTank platform.
**Off-topic and on-topic responses can coexist on the network.** One of the biggest concerns with large-scale crowdsourcing efforts is "noise": the perception that off-topic or low-value responses will crowd out high-quality answers. Expert Labs is explicitly not qualifying which responses policy makers should use in their processes; instead, we are working to create a set of tools that makes it more efficient for them to do so themselves. There are obviously responses that are frivolous, or that advance political concerns unrelated to science or technology, but the presence of these submissions did not prevent interested parties from also suggesting ideas of real value. This may seem like an obvious conclusion, but it is an important one to highlight: a typical first reaction to hearing about crowdsourcing experiments is the fear that noise on the network will scare away participants who might have beneficial contributions.
**It is possible to reliably capture a sizable volume of responses across multiple social networks.** A significant portion of Expert Labs' efforts goes toward supporting the open source community, led by our Project Director Gina Trapani, that has created the ThinkTank software platform for gathering social networking responses to prompts. Even by a very conservative reckoning, ThinkTank has been a decided success in its first outing. The application was able to connect to the appropriate networks, collect all of the replies submitted by people connected to the White House's social network accounts, and store them in its database with perfect fidelity almost immediately after they were submitted. Given early concerns about the reliability of the various networks' programming interfaces, the potential for undiscovered bugs when working with such a large network of connections, and the relative newness of the open source community creating the platform, it's clear that the system performed incredibly well. Going forward, the challenge for ThinkTank is to evolve from its present, relatively simple set of capabilities into a more robust system that allows for intelligent filtering and analysis of responses to a prompt. The ThinkTank community has already begun work on these tasks.
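ThinkTank itself is an open source application, and the details of its collection code live in that project. As a rough illustration only, here is a minimal Python sketch of the general poll-collect-store pattern described above, with a stubbed-out fetch function standing in for the real Twitter and Facebook APIs; the function and table names are invented for this example.

```python
# Illustrative sketch of a collect-and-store loop: poll for replies
# newer than the last stored ID, then store each one verbatim.
# fetch_replies_since() is a stand-in for a real social network API call.
import sqlite3

def fetch_replies_since(last_id):
    """Stub for a network API call; returns (id, author, text) tuples."""
    sample = [
        (101, "@alice", "A cure for malaria within a decade."),
        (102, "@bob", "Affordable carbon capture at power-plant scale."),
    ]
    return [reply for reply in sample if reply[0] > last_id]

def collect(db):
    db.execute("CREATE TABLE IF NOT EXISTS replies "
               "(id INTEGER PRIMARY KEY, author TEXT, text TEXT)")
    # Resume from the highest ID already stored (0 for an empty table).
    (last_id,) = db.execute("SELECT COALESCE(MAX(id), 0) FROM replies").fetchone()
    for reply in fetch_replies_since(last_id):
        # Store exactly as received; INSERT OR IGNORE keeps repeated
        # polls idempotent, so no reply is duplicated or rewritten.
        db.execute("INSERT OR IGNORE INTO replies VALUES (?, ?, ?)", reply)
    db.commit()

db = sqlite3.connect(":memory:")
collect(db)
print(db.execute("SELECT COUNT(*) FROM replies").fetchone()[0])  # → 2
```

Running the collector repeatedly is safe here because each pass only asks for replies newer than what is already stored, which is one simple way to get the "capture everything, exactly once" behavior the experiment needed.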
**The prompts used to drive a crowdsourcing initiative are perhaps the most important part of the effort.** Across the few days that ThinkTank was monitoring responses to the Grand Challenges initiative, the White House prompted its social network connections to respond in several different ways. Messages varied in tone, timing, and in how much background expository information was provided. The Grand Challenges initiative is an ambitious one: by addressing it to a large swath of users on networks like Twitter and Facebook, the prompt asked many people who don't usually focus on science and technology to reckon with a fairly complicated question. And even those in the science and technology communities who might have ready responses had to acclimate to the new idea of being asked for their feedback at all, as well as the idea that they could give that feedback using common social networking tools. If there is an area for improvement in our efforts, this is clearly an important one to focus on. Even relatively minor variables, like the time of day when a social networking prompt is sent, can have a significant impact on results, both in the quality of responses and in the speed with which they are submitted. More significantly, the terse wording and distracted attention environment of social networks can amplify ambiguities in a prompt. For example, the initial White House prompt on Twitter about the Grand Challenges initiative was designed to use past scientific successes, such as the moon landing or the sequencing of the human genome, as models for the kinds of suggestions respondents could make. The initial wording began, "The next Apollo program or human genome project?" As a result, a significant number of respondents took the question literally and answered by choosing one program over the other.
This is not a tremendously negative result; in fact, it showed that a number of members of the network were very passionate about one or both of those efforts. But it was an effective lesson in just how important the phrasing of a prompt is.
**Amplifying a crowdsourcing prompt is as common a behavior as actually replying to it.** Even a cursory examination of the replies sent to the White House's prompts about Grand Challenges shows a significant number of "retweets" (forwards) or simple restatements of the original prompt. While some of this is to be expected, as organizations use the networks to relay the message to their followers or members, it also appears that some network members see amplification of the message as a form of participation in itself. This suggests it may be valuable to reward amplification of a prompt as well as direct responses to it.
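Separating amplification from direct responses is one of the filtering tasks a future ThinkTank could take on. As a toy sketch, not ThinkTank's actual logic, a first-pass heuristic might treat messages that begin with "RT @" or that merely restate the prompt as amplification; the sample messages below are invented.

```python
# Toy heuristic for splitting collected replies into amplification
# (retweets and restatements) versus direct responses.
def classify(message, prompt):
    text = message.strip()
    if text.lower().startswith("rt @"):
        return "amplification"          # conventional retweet syntax
    if prompt.lower() in text.lower():
        return "amplification"          # simple restatement of the prompt
    return "response"

prompt = "The next Apollo program or human genome project?"
messages = [
    "RT @whitehouse: The next Apollo program or human genome project?",
    "Please respond! The next Apollo program or human genome project?",
    "Universal early detection of cancer.",
]
counts = {}
for message in messages:
    label = classify(message, prompt)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # → {'amplification': 2, 'response': 1}
```

A real classifier would need fuzzier matching (restatements are rarely verbatim), but even a crude split like this would let a tool report amplification reach separately from the pool of substantive answers.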
With the publication of our first learnings, we're also eager to distribute the actual responses to the Grand Challenges prompt, which are viewable here as a Google Docs spreadsheet, from which they can be downloaded in any common format. We're eager for our community to analyze this data, create visualizations from it, and in general provide any insights into the responses that can be gleaned.
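For anyone who wants a starting point for that analysis, here is a small sketch of loading a CSV export of the spreadsheet and tallying replies per network. The column names here are assumptions for illustration; check them against the actual export before adapting this.

```python
# Minimal starting point for analyzing a CSV export of the replies,
# run against a tiny inline sample. Real column names may differ.
import csv
import io
from collections import Counter

sample_csv = """network,author,text
twitter,@alice,A cure for malaria within a decade.
facebook,Bob,Affordable carbon capture at scale.
twitter,@carol,RT @whitehouse: The next Apollo program...
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))
by_network = Counter(row["network"] for row in rows)
print(by_network)  # → Counter({'twitter': 2, 'facebook': 1})
```

Swapping the inline sample for `open("grand_challenges.csv")` (or whatever the downloaded file is named) is all it takes to run the same tally on the full dataset.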
Just as importantly, we hope our community can suggest future capabilities for the ThinkTank platform which would increase the value of the data collected. We're beginning our next round of explorations with policy makers shortly, and your feedback will help us determine exactly what questions to raise, what technologies to advance, and how we can best encourage the creation of better policy.
Finally, a sincere thanks to the open source community that has created ThinkTank, our colleagues at AAAS who have supported the outreach effort around Grand Challenges, the team at Twitter which helped amplify this campaign, the White House Office of Science and Technology Policy for pursuing an open process around policy creation, and most importantly the scientists and public who responded to this first experiment with such enthusiasm. We'll try to honor all of these contributions by redoubling our efforts going forward.