Neat Science Thursday – The Efficacy of Smartphone-Based Citizen Science Training

Citizen science has contributed greatly to ecological studies and nature surveys for over a century, but it is just beginning to make a mark in biomedical research. Part of the problem may be the difficulty of enabling citizen scientists to participate in complex tasks. Foldit did an excellent job of harnessing the problem-solving skills of the gaming community by turning protein folding problems into a game. Eyewire has enabled citizen scientists to help map neurons by creating a challenging and interesting virtual coloring book. To succeed at tasks like these, training is critically important, so important that a recent PLOS One paper comparing three different citizen science training methods deserves more attention.

Although this study, too, focused on ecological work, the authors compared three different modes of training:

  • In-person training: participants receive in-person training along with app-based videos and app-based text/images.
  • Video training: participants receive no in-person training, but get app-based video and app-based text/image training.
  • Text/image-only training: participants receive only app-based text/image training (no video or in-person training).

Each training mode started with an equal number of participants; however, removing low-submission participants and accounting for drop-out left the groups unequal at the data-analysis stage. In all, 56 participants made it into the final analysis: 14 (in-person training), 17 (video training), and 25 (text/image training).

Participants were trained on the identification of specific invasive plant species in Maine and were asked to submit their pictures and locations of the invasive plants in question using the Outsmart app.

Table 1. Percent correctly identified for each of the five species investigated. doi:10.1371/journal.pone.0111433.t001

After analyzing the results, the authors found that participants did an excellent job of identifying the invasive species in the 'easy' category. The biggest difference appeared in the identification of invasive species in the 'difficult' category. The authors expected participants from the in-person training group to outperform the others, but found that participants in the video training group did just as well. This has important implications for citizen science training: geographic limitations restrict how much in-person training can be delivered, while video training can be streamed anywhere and may help participants perform just as well.

It's unclear whether participant drop-off introduced sampling issues into the final results; that is, whether less skilled users were the ones dropping out, effectively inflating the percent-correct rates, since more users dropped out of the in-person and video training groups than the text/image training group. It's also unclear whether the training method itself affected drop-out rates, but it is interesting that text/image training had the lowest drop-out rate.
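To see why differential drop-out matters, here is a toy illustration of attrition bias. All of the numbers below are hypothetical and are not from the paper; the point is only that a group's percent-correct can rise when its weakest members leave, even though no individual improved.

```python
# Toy illustration of attrition bias: if less skilled participants
# drop out before the final analysis, the surviving group's mean
# percent-correct rises without anyone actually getting better.
# All numbers below are hypothetical.

def percent_correct(scores):
    """Mean percent-correct across a group of participants."""
    return sum(scores) / len(scores)

# Hypothetical per-participant accuracy (%) before any drop-out.
group = [95, 90, 85, 80, 60, 55, 50]

before = percent_correct(group)

# Suppose the three least skilled participants drop out
# before the final analysis.
survivors = [s for s in group if s >= 80]
after = percent_correct(survivors)

print(f"before drop-out: {before:.1f}%")   # 73.6%
print(f"after drop-out:  {after:.1f}%")    # 87.5%
```

If drop-out like this were unevenly distributed across the training groups, the group-level percent-correct comparison would be skewed even if every participant's actual skill were unchanged.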

Also important to note: regardless of the training medium, the citizen scientists did pretty well overall, which leads to two take-home messages:

  1. Citizen scientists can and do contribute high-quality data (not a new finding, but certainly worth repeating).
  2. This research group did a pretty awesome job with their training to begin with, and the different media through which they offered it were a bit like icing on the cake.

See the original paper here; it's open access and quite an enjoyable read.
For more citizen science games, visit this post.