The role of randomness in crowdsourcing


I’ve just read a very interesting paper about crowdsourcing, authored by four researchers from Vienna (Austria). It’s called “Does god play dice?” Randomness vs. deterministic explanations of idea originality in crowdsourcing (PDF), will be presented in June at the 35th DRUID Celebration Conference in Barcelona (Spain), and argues that the originality of ideas in crowdsourcing contests is largely random (not determined by skills, expertise, creativity or motivation of participants, or other deterministic factors). To come up with these results, they ran an experimental app contest sponsored by Apple and Orange and simulated thousands of tournaments from its submissions.


Which factors are responsible for the success of crowdsourcing tournaments? The crowdsourcing literature has investigated this question thoroughly in recent years and has come up with a variety of explanations: incentives (monetary or not), interaction between participants, the task expected from the crowd, participants’ expertise, situational factors… Research has produced lots of different results to explain the success of online contests, and most of it posits that these factors determine the outcome of crowdsourcing initiatives. But what if this deterministic explanation were not true?

We suggest that this lack of a consensus may be due to the inherent limitations of the deterministic perspective (Franke et al., 2013)

The authors of this paper wanted to find out what role randomness might play in getting original ideas through crowdsourcing. To find out, they organized an experimental idea competition regarding the mobile communication of the future, sponsored by Apple (which has never crowdsourced in the real world) and the network provider Orange. A total net sample of 1,089 participants provided app ideas and responded to the researchers’ survey about key variables: expertise, experience, education, creativity, motivation, etc. After that, they let the crowd assess the originality of each of these ideas.

The researchers chose the development of ideas for smartphone apps as the object of their study

To test the role of randomness, the researchers simulated different crowdsourcing tournaments (36,400 settings in total, varying the level of interaction with others, the presence or absence of incentives, broad or narrow task framing, and the number of participants), and for each simulated contest they randomly drew participants from the overall sample. Since they already had each participant’s idea and its crowd-assessed originality, they could measure how each simulated contest would have performed (the mean originality of its top ten ideas) had it actually been organized in that specific way.
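
To make that resampling logic concrete, here is a minimal Python sketch under loudly illustrative assumptions: the originality scores are generated randomly rather than taken from the crowd’s actual ratings, and the `simulate_tournament` helper and the pool size of 100 are my own inventions, not the authors’ code.

```python
import random
import statistics

# Illustrative stand-in for the 1,089 crowd-assessed originality scores;
# in the actual study these came from the crowd's ratings, not a generator.
idea_originality = [random.gauss(5, 1.5) for _ in range(1089)]

def simulate_tournament(pool, n_participants, top_k=10):
    """Randomly draw participants from the pool and score the tournament
    by the mean originality of its top-k ideas."""
    drawn = random.sample(pool, n_participants)
    return statistics.mean(sorted(drawn, reverse=True)[:top_k])

# Repeating the draw for a single tournament size shows how much the outcome
# varies purely through who happens to participate.
outcomes = [simulate_tournament(idea_originality, 100) for _ in range(1000)]
print(f"top-10 originality: mean {statistics.mean(outcomes):.2f}, "
      f"stdev {statistics.stdev(outcomes):.2f}")
```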

Our finding is crystal clear: randomness rules in our crowdsourcing tournament (Franke et al., 2013)

“The total model shows that although we include 22 independent variables and thus basically all causal factors discussed in the literature, 93.6% of the variance of the dependent variable is left unexplained,” they explain, leading to the conclusion that “randomness indeed plays a major role in determining the originality of an idea submitted.” So how do you make your crowdsourcing initiative a success if the design parameters don’t impact the outcome?
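
To see what “variance left unexplained” means here, a quick sketch of the underlying arithmetic may help: fit an ordinary least squares model with 22 predictors and compute R². The simulated data and effect sizes below are purely illustrative assumptions, not the paper’s dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1089, 22                        # sample size and predictor count, as in the paper

# Illustrative data: 22 weak deterministic predictors plus a large noise term,
# mimicking a situation where the model explains little of the outcome.
X = rng.normal(size=(n, p))
beta = rng.normal(scale=0.1, size=p)   # small true effects (assumption)
y = X @ beta + rng.normal(scale=1.0, size=n)

# Ordinary least squares with an intercept, then R^2 = 1 - SS_res / SS_tot.
X1 = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
residuals = y - X1 @ coef
r2 = 1 - residuals.var() / y.var()
print(f"R^2 = {r2:.3f} -> {1 - r2:.1%} of the variance left unexplained")
```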

The obvious conclusion for managers […] is that they are well advised to recruit as many participants as possible (Franke et al., 2013)

If we take for granted that randomness explains more than 90% of a contest’s outcome, increasing the number of participants indeed seems to be the only lever left. Overall, this finding is provocative and significant, because it adopts a radically different approach to understanding – or not understanding – crowdsourcing initiatives. It tells us that we can run the best regression analyses, create the most complicated models and test a variety of settings… we will never be able to significantly influence the outcome of crowdsourcing.
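
Why recruiting more participants helps even when originality is essentially random can be seen with a short order-statistics sketch. The normal distribution and its parameters below are illustrative assumptions, not the authors’ data; the point is simply that the expected quality of the ten best ideas rises with the size of the pool.

```python
import random
import statistics

def expected_top10(pool_size, reps=2000):
    """Average, over many random pools, of the mean originality of the ten
    best ideas, assuming each idea's originality is an independent draw."""
    scores = []
    for _ in range(reps):
        ideas = [random.gauss(5, 1.5) for _ in range(pool_size)]
        scores.append(statistics.mean(sorted(ideas, reverse=True)[:10]))
    return statistics.mean(scores)

for size in (50, 200, 1000):
    print(f"{size:>5} participants -> expected top-10 mean {expected_top10(size):.2f}")
```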

But the study also has limitations. Four of them are presented in the paper’s conclusion, to which I would add that the task they employed did not require specific technical knowledge; participants were just asked to submit a text description of their app idea. Some idea contests ask for more elaborate submissions, including visualization or even prototyping of the concept. It is likely that skills, expertise or experience play a much bigger role in such a setting than when people are merely asked for short descriptions (which everyone can write). I’m not even talking about other crowdsourcing tasks like the design of packaging, the elaboration of promotional concepts or the creation of a video advertisement.

The sheer difference in effect size gives us confidence that our main conclusion, namely that randomness plays an important role in explaining the success of crowdsourcing tournaments, holds (Franke et al., 2013)

But whatever the limitations, the authors show that we cannot control everything, and this is particularly true in crowdsourcing. It is always highly uncertain what the crowd will come up with, and this paper is (to my knowledge) the first to prove that empirically. I’d be curious to see what comments they will get at the conference in Barcelona… and what reviewers will say when they submit it to a journal. It’s a great read for everyone interested in the field!

Reference:

Franke, N., Lettl, C., Roiser, S., & Tuertscher, P. (2013). “Does god play dice?” Randomness vs. deterministic explanations of idea originality in crowdsourcing. 35th DRUID Celebration Conference 2013 (pp. 1–39). Barcelona, Spain.

One Comment

  1. I agree with you, Yannig, the result is surprising: what matters in crowdsourcing is the size of the crowd participating, independently of all the factors related to the context.
    I think one variable is missing from the study, such as the ergonomics of the website.
    Finally, I wonder how you experiment with 36,400 settings?
    I personally ran an experiment to assess creativity with many different latent variables, and the AVE is about 5%, which is consistent with the Franke study!! Maybe randomness is the explanation.

    Hope to see you in Paris in July

    Stéphane
