Curating a Crowd that Evaluates Ideas to Solve Grand Challenges: The Role of Specialization

Title data

Gimpel, Henner ; Laubacher, Robert J. ; Schäfer, Ricarda ; Schoch, Manfred:
Curating a Crowd that Evaluates Ideas to Solve Grand Challenges: The Role of Specialization.
In: Proceedings of the 6th International Conference on Computational Social Science (IC2S2). Boston, USA, 2020

Abstract

Complex problems such as climate change, social inequality, or the fight against pandemics pose severe challenges to societies across the globe. To facilitate collective action, stakeholder involvement, and successful innovation in tackling these challenges, effective and innovative solution ideas must be generated, which often requires specialized knowledge in the respective domain. Digital innovation contests have emerged as a promising tool for such idea generation, as they offer a platform for sharing expertise and developing ideas. Distinguishing good ideas from bad ones in innovation contests, however, becomes increasingly difficult as the number of submitted ideas rises. Expert juries are traditionally responsible for idea evaluation in such contests, as expert evaluation has been found to be the best-performing approach available. However, juries are often subject to heavy time constraints and incur high costs for contest operators, making the evaluation process a substantial bottleneck. To assess whether expert juries could be supported or replaced by crowdsourced idea evaluations in the domain of complex problems, we draw on data from four different innovation contests on climate change. We compare expert jury evaluations with crowdsourced idea evaluations and assess performance differences. Results indicate that contest specialization – the degree to which a contest’s topic belongs to one particular, knowledge-intensive domain rather than a broad field of interest – is a key inhibitor of crowd evaluation performance. We find that relative ranking evaluations, rather than absolute rating scales, are a promising approach to improving crowd performance. Our study suggests that even in the domain of complex problems, crowdsourced idea evaluations are a suitable method for pre-selecting ideas in innovation contests.

Further data

Item Type: Article in a book
Refereed: Yes
Keywords: Innovation contests; idea evaluation; crowdsourcing; expert judgement; climate change
Institutions of the University: Faculties > Faculty of Law, Business and Economics > Department of Business Administration
Research Institutions
Research Institutions > Affiliated Institutes
Research Institutions > Affiliated Institutes > Fraunhofer Project Group Business and Information Systems Engineering
Research Institutions > Affiliated Institutes > FIM Research Center Finance & Information Management
Faculties
Faculties > Faculty of Law, Business and Economics
Result of work at the UBT: Yes
DDC Subjects: 000 Computer science, information, general works > 004 Computer science
300 Social sciences > 330 Economics
Date Deposited: 16 Dec 2020 06:45
Last Modified: 09 Dec 2022 10:34
URI: https://eref.uni-bayreuth.de/id/eprint/61148