Answer validation for generic crowdsourcing tasks with minimal efforts

Authors
  • Hung, Nguyen Quoc Viet
  • Thang, Duong Chi
  • Tam, Nguyen Thanh
  • Weidlich, Matthias
  • Aberer, Karl
  • Yin, Hongzhi
  • Zhou, Xiaofang
Type
Published Article
Journal
The VLDB Journal
Publisher
Springer Berlin Heidelberg
Publication Date
Oct 13, 2017
Volume
26
Issue
6
Pages
855–880
Identifiers
DOI: 10.1007/s00778-017-0484-3
Source
Springer Nature

Abstract

Crowdsourcing has been established as an essential means to scale human computation in diverse Web applications, ranging from data integration to information retrieval. Yet, crowd workers have wide-ranging levels of expertise: large worker populations are heterogeneous and comprise a significant number of faulty workers. As a consequence, quality assurance for crowd answers is commonly seen as the Achilles' heel of crowdsourcing. Although various techniques for quality control have been proposed in recent years, a post-processing phase in which crowd answers are validated is still required. Such validation, however, is typically conducted by experts, whose availability is limited and whose work incurs comparatively high costs. This work aims at guiding an expert in the validation of crowd answers. We present a probabilistic model that helps to identify the most beneficial validation questions in terms of both improvement in result correctness and detection of faulty workers. By seeking expert feedback on the most problematic cases, we are able to obtain a set of high-quality answers, even if the expert does not validate the complete answer set. Our approach is applicable to a broad range of crowdsourcing tasks, including classification and counting. Our comprehensive evaluation using both real-world and synthetic datasets demonstrates that our techniques save up to 60% of expert effort compared to baseline methods when striving for perfect result correctness. In absolute terms, for most cases, we achieve close to perfect correctness after expert input has been sought for only 15% of the crowdsourcing tasks.
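
To make the general idea concrete, the following is a minimal illustrative sketch, not the paper's probabilistic model: it ranks crowd answers for expert validation by how uncertain their aggregated votes are, so that the most disputed ("most problematic") cases are checked first. The majority-vote aggregation, the entropy-based ranking, and all names below are assumptions introduced for illustration only.

# Illustrative sketch only: uncertainty-based selection of which crowd
# answers an expert should validate first. This is NOT the model proposed
# in the paper; it merely mimics the idea of ranking validation questions
# by their expected benefit for result correctness.

from collections import Counter
from math import log2

def answer_posterior(votes):
    """Turn the raw worker votes on one task into a naive label distribution."""
    counts = Counter(votes)
    total = sum(counts.values())
    return {label: c / total for label, c in counts.items()}

def entropy(dist):
    """Shannon entropy of a label distribution (higher = more uncertain)."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def rank_for_validation(task_votes):
    """Order tasks so the most uncertain (most disputed) come first."""
    scored = [(entropy(answer_posterior(v)), task) for task, v in task_votes.items()]
    return [task for _, task in sorted(scored, reverse=True)]

# Example: three classification tasks, each answered by five workers.
votes = {
    "t1": ["cat", "cat", "cat", "cat", "dog"],   # near-consensus
    "t2": ["cat", "dog", "dog", "cat", "bird"],  # highly disputed
    "t3": ["dog", "dog", "cat", "dog", "dog"],   # mild disagreement
}
print(rank_for_validation(votes))  # ['t2', 't3', 't1']

Under this sketch, the expert would be asked about "t2" first; the paper's contribution lies in replacing such a naive heuristic with a probabilistic model that also accounts for detecting faulty workers.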
