crowdsourcing science by tricking gamers into doing work

Picture this: You’ve just come home from a long day at work, with the promise of a free evening before you. It’s been a stressful day, and you want to relax. Do you a) fire up your favourite video game or b) do a bunch of extra work categorising petri dishes for someone else’s science experiment, for which you get no credit? If you’re like most people, you’re more likely to choose option a), and this is precisely the problem with ongoing online projects for crowdsourcing scientific research.

Zooniverse’s project “Decoding the Civil War” has volunteers transcribe and categorise documents from the Civil War.

Many “citizen science” projects, which rely on volunteers to parse large amounts of available data in a variety of areas, from discovering planets to transcribing Civil War telegrams, can be found on Zooniverse. At first glance, it seems like a great idea to get ordinary people to do work a computer can’t, at no cost to the researchers, and yet these efforts have not been the runaway success that other crowdsourcing projects, such as Wikipedia, have been. A 2015 study reported that the top 10% of contributors on Zooniverse made 79% of the total contributions, and that less than one third of users returned to a project after their first visit1.

Why? I think it boils down to our motivation to do anything. You may think of motivation as a two-step process: you get motivated to do something for whatever reason, and then you do it. That is to say, motivation → action. Rinse and repeat. The problem with this model is that you need to rebuild motivation from scratch every time you’ve completed the action.

The above logic is missing a key third element. Motivation is in fact a three-step cycle: you get motivated to do something, then you do it, and if you see positive results, these in turn motivate you to perform the action again. For example, if you’re learning to play guitar and you practise every day, you may find yourself able to play more chords, notes and songs, and are thus motivated to continue practising and improving your skills. If, however, you are initially motivated to learn guitar, but despite practising every day you feel that you’re not making progress and continue to be terrible at music, you will not feel motivated to keep playing, because there are no visible improvements to your skills. So, motivation → action → visible results → motivation. That we need visible results to be motivated again is the important distinction.
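As a playful illustration (mine, not drawn from any study – the decay and boost numbers are made up), the cycle above can be sketched as a toy simulation: each action drains some motivation, and visible results replenish it, closing the loop.

```python
def simulate(sessions, visible_results, motivation=1.0):
    """Toy model of motivation -> action -> visible results -> motivation.

    Returns the motivation level after each of `sessions` practice sessions.
    Values are illustrative, not empirical.
    """
    history = []
    for _ in range(sessions):
        if motivation <= 0.1:
            # Too unmotivated to act at all; the loop has broken down.
            history.append(round(motivation, 2))
            continue
        # Performing the action costs some motivation...
        motivation = max(0.0, motivation - 0.3)
        # ...but seeing positive results replenishes it.
        if visible_results:
            motivation = min(1.0, motivation + 0.4)
        history.append(round(motivation, 2))
    return history

# With visible results, motivation is sustained; without them, it drains away.
print(simulate(5, visible_results=True))
print(simulate(5, visible_results=False))
```

The point of the sketch is simply that without the third step – visible results – the loop never replenishes itself, which matches the guitar example above.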

To engineer motivation, crowdsourcing projects need to consider their target audience, as this will determine the specific motivations to participate. With these principles in mind, it’s easy to compare existing projects and get a fairly accurate idea of how successful they have been.

Example 1: Wikipedia

  • Summary: The free online encyclopedia that allows anyone to create and edit articles started in 2001, and now has 5.5 million English articles and over 300,000 active users, numbers that are still growing2.
  • Success: High. The numbers, and the ubiquity of the phrase “to wiki” something, speak to the immense success of the project, which far exceeded anyone’s expectations at the time.
  • Target audience: Subject matter experts. By this I don’t mean only professionals in academic or technical fields – anyone can be a subject matter expert in something, even if it’s Lady Gaga or the Game of Thrones television series. There’s certainly a Wikipedia page for that.
  • Motivation: If you’re an expert at something, a big motivation is getting to show off your knowledge and improve or correct existing pages where you feel you know better, or can add more accurate information.
  • Action: It’s really simple to create and edit Wikipedia pages, and it doesn’t take long.
  • Visible Results: Page edits are usually visible immediately, and the sense of accomplishment from having written something published live to the world is what motivates users to make more contributions to the encyclopedia.

Example 2: Zooniverse

  • Summary: This online portal to a variety of citizen science projects started in 2009 and has produced several publications, with help from users who analyse data for scientific research, from identifying animal species to cataloguing the surface of Mars3.
  • Success: Medium. While a study found that the website provided projects with $1.5 million worth of work across 7 projects in their first 180 days, a small share of the participants (10%) supplies most of the work (79%), and retention of users in individual projects is not very high1.
  • Target audience: Volunteers and science enthusiasts.
  • Motivation: Volunteers want to make the world a better place, and science enthusiasts want to make meaningful contributions to research.
  • Action: In most cases, participants answer quiz-like questions to help parse or analyse data. The problem here is that the action does not always line up with the motivation: answering quizzes can feel disconnected from the sense that you are making a difference.
  • Visible Results: Zooniverse falls short here, as there are no good visible indications of progress or results. There’s a progress bar showing the progress of the entire project, but not much in the way of showing the impact of an individual’s contributions. This is understandable as scientific research generally takes a long time, and progress may not be apparent straight away. It’s much more rewarding for a volunteer to go help at a soup kitchen, or for a science enthusiast to sort fossils at a museum, where the results of their hard work are immediately present and all the more real, motivating them to participate again.

Example 3: Google Image Labeler

  • Summary: In 2006, Google released a game that pitted two users against each other to help generate labels for images in its extensive database. Players had to provide labels for an image, while avoiding a select group of banned words, and scored points when their answers matched4.
  • Success: Low. The project was discontinued in 2011 and redesigned, without the game or multiplayer experience, for release in 2016. Players of the initial game often reported disconnects, trolling, and attempts to game the system rather than provide helpful information to Google.
  • Target audience: The Internet at large, and gamers to some extent. A problem with this project was that there was no distinct target audience – it was open to everyone, which invited typical Internet trolling behaviour.
  • Motivation: Playing a fun game to score high, or gaming the system by providing intentionally false information – an instance of a harmful motivation, and one that hurt this project.
  • Action: Players played the game.
  • Visible Results: After each round, players saw their scores and a review of the images, getting immediate feedback on how well they did. This was both good and bad: in some cases, players got to see the deliberately incorrect phrases, like “carcinoma” and “googley”, that their partner was using to game the system, and then used those words to further mess with the game. Players also found out that both players might not have been shown the same image (whether due to a bug or for testing), or that the list of banned words was different for each player, leading to mistrust in the game itself – which is pretty terrible when it comes to wanting to play again.

I’m most interested in how we can make games a viable option for crowdsourcing. From these three examples, think of combining the premise of Zooniverse with the game aspect of Google Image Labeler, while aiming for the success of Wikipedia. It’s beginning to sound complex, and it is, though game developers have been trying to develop games using the principles of science for years, mostly for educational purposes.

This is not the approach that works for crowdsourcing, because gamers are selfish. No gamer wants to play a game that feels like work, the way Zooniverse has designed its projects, with lots of reading or studying to understand what to do in quiz-like activities (granted, these are not actually games, and are aimed at volunteers rather than gamers). It would do game designers well to stop thinking “how can I make my science project into a game?” and to go the other way around, considering motivation first, and ask, “how can I make gamers do my science project?”

The question is not how we can leverage science to make games, but how we can leverage gamers to help make discoveries in science.

Along came FoldIt

Picture again the initial scenario: You’ve just come home from a long day at work, with the promise of a free evening before you. It’s been a stressful day, and you want to relax. Do you a) fire up your favourite video game or b) analyse protein structures to determine accurate folding models for scientific research? FoldIt challenged the game-science boundary by allowing players to do both, while cleverly focusing on the game aspect of the project.

Released in 2008, FoldIt is a puzzle game in which players solve protein folding puzzles and compete for high scores. The project led to the solution of a 15-year-old AIDS research problem, when players successfully modelled an enzyme, the Mason-Pfizer monkey virus (M-PMV) retroviral protease, that had previously stumped scientists5. The key difference between this project and Zooniverse is that it targets gamers, and, with that in mind, uses gamers’ motivations to inform its design.

Target audience: Gamers.

Motivation: Gamers are motivated by:

1. Fun – At the core of any game experience, a player wants to have fun. When someone’s primary goal is to help others or to learn skills, they choose other activities that are better suited to those goals than games. FoldIt makes protein folding fun by introducing game elements, including a tutorial and a toolkit for solving the puzzles.

Clear directions and helpful hints help players progress through each level.

FoldIt doesn’t shy away from the science behind proteins, which kept me interested.

When you start playing, you get a tutorial on how to use the tools in the tool bar, like in other video games.

2. Competition – Players want to gain mastery and see how they rank against other players. An important part of the game is the online leaderboards, highlighting top players and their scores for different puzzles. This provides motivation to try harder at a puzzle that many people have solved with high scores, or try to do better than your last attempt.

The point and levelling system is easily found on puzzle menus.

Progression through various levels is also graphically represented.

FoldIt’s leaderboard is prominently featured on its homepage.

3. Rewards – As I mentioned, gamers are selfish, and they want to know what’s in it for them. This does not necessarily mean monetary rewards; games have found ways to provide virtual rewards that motivate players to keep playing. In FoldIt’s case, the game includes lots of feedback: tangible scores in a progress bar that go up and down with each move, and acknowledgements such as the win animation and text feedback that reinforce when the player is doing well or making bad moves. The rewards for scoring are tied back into achievements, rankings, and progression to more difficult puzzles.

Action: Players play the game.

Visible Results: FoldIt runs into the same problem that Zooniverse does, because scientific progress takes a long time. It circumvents this by cleverly providing indications of progress without them necessarily being scientifically relevant. For example, you get congratulated for solving a puzzle simply because you solved a puzzle, not because you made any significant contribution to science. The puzzle may have been solved many times before, but the feedback telling each individual player that they were smart enough to pass a level is invaluable. The best part about FoldIt is how it provides gamers the results they want in a language they are familiar with – high scores and climbing the leaderboard – results that are not necessarily relevant to making the next great scientific discovery. This shows a good understanding of players’ motivations, and design choices that reflect players’ goals rather than the scientists’ goals, which helps drive motivation and continued play.

While FoldIt is fun, it’s also in a largely untapped space, though there are exciting new initiatives slowly appearing. A recently announced partnership between EVE Online and CoRoT (Convection Rotation and Planetary Transits) satellite scientists sounds promising. Their goal is to place a mini-game within EVE Online, in which players can help classify real scientific data during loading time as they jump between space stations6. Tying this in more strongly with the EVE Online in-game currency or rewards would help engage and motivate players. It will be interesting to observe this program as it develops, as partnerships between scientific ventures and existing successful games may be the most successful way to harness the power of gamers to help parse large amounts of scientific data.

Nevertheless, these lessons can be applied to all sorts of game design, or to that buzzword, “gamification”. If we consider motivation first, and start the design there, we will be far more effective at getting gamers to do anything for a cause other than their own entertainment. That is, we need to ask ourselves not “how can we make these scientific/historical/arbitrary concepts into a game?”, but “why would anyone want to play this game in the first place?”

References Cited

  1. Henry Sauermann and Chiara Franzoni. “Crowd science user contribution patterns and their implications.” PNAS 112.3 (2015): 679-684. Web. 26 April 2017.
  2. “Wikipedia.” Wikipedia. Wikipedia, 20 April 2017. Web. 26 April 2017.
  3. “About Us.” Zooniverse. Zooniverse. Web. 26 April 2017.
  4. “Google Image Labeler.” Wikipedia. Wikipedia, 13 March 2017. Web. 26 April 2017.
  5. Dean Praetorius. “Gamers Decode AIDS Protein That Stumped Researchers For 15 Years In Just 3 Weeks.” Healthy Living. The Huffington Post, 19 November 2011. Web. 26 April 2017.
  6. Elizabeth Howell. “Planet-Hunting Scientists Turn to Online Gamers For Help.” Air & Space Magazine. Smithsonian, 17 April 2017. Web. 26 April 2017.

Additional References

  1. Laura Bailey. “Video gamers outdo scientists in contest to discover protein’s shape.” 19 September 2016. Web. 26 April 2017.

One thought on “crowdsourcing science by tricking gamers into doing work”

  1. Great analysis of feedback loops and their constituent parts! Understanding how these loops grip us is the first step to being able to utilize them in our designs. My favorite insight here is that the target audience determines what motivation we as designers should use to pull them into our feedback loops. I think too often designers, me among them, assume the motivations of our audience to deleterious effect.

    Much can be said about ways to learn your audience’s motivation, probably more than should be written here. But I think it comes down to some mix of empathy and empiricism.

    The only thing I might add is a discussion of intrinsic vs extrinsic motivation for “gamified” tasks. One example that comes to mind is those CAPTCHAs from Google that ask you to select all the street signs or cars in a picture. In that case, while not very game-like, the user’s motivation – filling out their form – is extrinsic.

    A nice read overall.

