Curating a Crowd that Evaluates Ideas to Solve Grand Challenges : The Role of Specialization

Title details

Gimpel, Henner ; Laubacher, Robert J. ; Schäfer, Ricarda ; Schoch, Manfred:
Curating a Crowd that Evaluates Ideas to Solve Grand Challenges : The Role of Specialization.
In: Proceedings of the 6th International Conference on Computational Social Science (IC2S2). - Boston, USA, 2020

Abstract

Complex problems such as climate change, social inequality, or the fight against pandemics pose severe challenges to societies across the globe. In order to facilitate collective action, stakeholder involvement, and successful innovation for tackling these challenges, the generation of effective and innovative solution ideas is necessary. This often requires specialized knowledge in the respective domain. Digital innovation contests have emerged as a promising tool for such idea generation, as they offer a platform for sharing expertise and developing ideas. Distinguishing good from bad ideas in innovation contests, however, is increasingly becoming a problem as the number of submitted ideas rises. Expert juries are traditionally responsible for idea evaluation in such contests, as expert evaluation has been found to be the best-performing approach available. However, juries are often subject to heavy time constraints and incur high costs for contest operators, making the evaluation process a substantial bottleneck. In order to assess whether expert juries could be supported or replaced by crowdsourced idea evaluations in the domain of complex problems, we draw on data from four different innovation contests on climate change. We compare expert jury evaluations with crowdsourced idea evaluations and assess performance differences. Results indicate that contest specialization – the degree to which a contest’s topic belongs to one particular, knowledge-intensive domain rather than a broad field of interest – is a key inhibitor of crowd evaluation performance. We find that relative ranking evaluations, rather than absolute rating scales, are a promising approach to improve crowd performance. Our study suggests that even in the domain of complex problems, crowdsourced idea evaluations are a suitable method for pre-selecting ideas in innovation contests.
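
An illustrative sketch (not from the paper): one simple way to operationalize the crowd-versus-expert comparison described in the abstract is a rank correlation between the two evaluations. The Python snippet below uses Spearman correlation and invented scores for five hypothetical ideas; the metric choice, the data, and all names are assumptions for illustration, not the authors' actual method or data.

# Hedged illustration: agreement between crowd and expert idea evaluations.
# All numbers are invented; Spearman correlation is an assumed metric.
from scipy.stats import spearmanr

expert_scores  = [4.5, 3.0, 4.0, 2.0, 3.5]   # expert jury scores (higher = better)
crowd_ratings  = [4.0, 4.5, 3.5, 2.5, 3.0]   # crowd, absolute rating scale
crowd_rankings = [1, 3, 2, 5, 4]             # crowd, relative ranking (1 = best)

rho_rating, _  = spearmanr(expert_scores, crowd_ratings)
# Invert ranks so that larger values mean better ideas, matching the scores.
rho_ranking, _ = spearmanr(expert_scores, [-r for r in crowd_rankings])

print(f"crowd ratings  vs. experts: rho = {rho_rating:.2f}")
print(f"crowd rankings vs. experts: rho = {rho_ranking:.2f}")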

Further information

Publication type: Article in a book
Refereed contribution: Yes
Keywords: Innovation contests; idea evaluation; crowdsourcing; expert judgement; climate change
Institutions of the University: Faculties > Faculty of Law, Business & Economics > Business Administration Group
Research Institutions
Research Institutions > Institutes affiliated with the University
Research Institutions > Institutes affiliated with the University > Projektgruppe Wirtschaftsinformatik der Fraunhofer FIT
Research Institutions > Institutes affiliated with the University > FIM Kernkompetenzzentrum Finanz- & Informationsmanagement
Faculties
Faculties > Faculty of Law, Business & Economics
Title produced at UBT: Yes
DDC subject areas: 000 Computer science, information & general works > 004 Data processing, computer science
300 Social sciences > 330 Economics
Date deposited: 16 Dec 2020 06:45
Last modified: 09 Dec 2022 10:34
URI: https://eref.uni-bayreuth.de/id/eprint/61148