
Learning from Multiple Annotators: When Data is Hard and Annotators are Unreliable

Authors: C. Wolley and M. Quafafou (LSIS, Aix-Marseille Univ., Marseille, France)

Crowdsourcing services have become popular, making it easy and fast to have datasets labeled by multiple annotators for supervised learning tasks. Unfortunately, in this context, annotators are not reliable, as they may have different levels of experience or knowledge. Furthermore, the data to be labeled may also vary in difficulty. How do we deal with data that is hard to label and with unreliable annotators? In this paper, we present a probabilistic model for learning from multiple naive annotators, allowing for the fact that annotators may decline to label an instance when they are unsure. Both the errors and the ignorance of annotators are integrated separately into the proposed Bayesian model. Experiments on several datasets show that our method achieves superior performance compared to other efficient learning algorithms.
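To make the idea concrete, here is a minimal, hypothetical sketch (not the authors' actual model) of how labels from several annotators of differing reliability can be combined in a Bayesian way while treating an explicit "decline" response as uninformative. The function name, the per-annotator `reliability` parameter, and the use of `None` for an abstention are assumptions for illustration only.

```python
import numpy as np

# Hypothetical illustration: Bayesian update over a binary true label y in {0, 1}
# given labels from independent annotators. Each annotator j is assumed to report
# the true label with probability reliability[j]; a None response means the
# annotator declined (was unsure) and contributes no evidence either way.

def posterior_positive(labels, reliability, prior=0.5):
    """Posterior P(y=1 | labels) under independent annotators.

    labels      : list of 0, 1, or None (None = annotator declined)
    reliability : list of P(annotator reports the true label)
    prior       : prior P(y=1)
    """
    log_odds = np.log(prior) - np.log(1.0 - prior)
    for label, p in zip(labels, reliability):
        if label is None:          # abstention: leaves the posterior unchanged
            continue
        if label == 1:             # evidence in favor of y=1
            log_odds += np.log(p) - np.log(1.0 - p)
        else:                      # evidence in favor of y=0
            log_odds += np.log(1.0 - p) - np.log(p)
    return 1.0 / (1.0 + np.exp(-log_odds))

# Example: three annotators; the least reliable one declines to label.
print(posterior_positive([1, 0, None], reliability=[0.9, 0.7, 0.55]))
```

In this toy version, reliability is fixed and known; the paper's contribution lies in modeling annotator errors and ignorance as separate latent quantities learned jointly with the classifier.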

Published in:

2012 IEEE 12th International Conference on Data Mining Workshops (ICDMW)

Date of Conference:

10 Dec. 2012