
Cheap, Fast, and Good Enough for the Non-biomedical Domain but is It Usable for Clinical Natural Language Processing? Evaluating Crowdsourcing for Clinical Trial Announcement Named Entity Annotations

7 Author(s)
Haijun Zhai; Div. of Biomed. Inf., Cincinnati Children's Hosp. Med. Center, Cincinnati, OH, USA; Lingren, T.; Deleger, L.; Qi Li; et al.

Building upon previous work in general crowdsourcing research, this study investigates the usability of crowdsourcing in the clinical NLP domain for annotating medical named entities and entity linkages in a clinical trial announcement (CTA) corpus. The results indicate that crowdsourcing is a feasible, inexpensive, fast, and practical approach for annotating clinical text (without PHI) at large scale for medical named entities. The crowdsourcing program code was released publicly.
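The abstract does not describe how the crowd annotations were aggregated; a minimal illustrative sketch of one common aggregation approach for crowdsourced entity labels (simple majority voting over worker judgments for a given text span, not necessarily the method used in the paper) might look like:

```python
from collections import Counter


def majority_vote(annotations):
    """Aggregate crowd labels for one entity span by majority vote.

    annotations: list of labels assigned by different workers to the
    same text span. Returns the winning label and the agreement ratio
    (fraction of workers who chose the winning label).
    """
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(annotations)


# Hypothetical example: five workers label the span "ibuprofen"
# in a clinical trial announcement sentence.
workers = ["Medication", "Medication", "Medication", "Condition", "Medication"]
label, agreement = majority_vote(workers)
print(label, agreement)  # the agreement ratio can gate low-consensus spans
```

Spans falling below an agreement threshold could then be routed to additional workers or expert adjudication, a standard quality-control pattern in crowdsourced annotation.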

Published in:

2012 IEEE Second International Conference on Healthcare Informatics, Imaging and Systems Biology (HISB)

Date of Conference:

27-28 Sept. 2012