Abstract:
Handwriting is still the preferred writing mode in the classroom, but it makes introducing intelligent tutoring systems much harder. A major building block within these systems is the scoring algorithm that assesses whether the content of a free-text student answer is correct. We collect a dataset of handwritten student answers, which we make publicly available to foster future research in the field. We use a state-of-the-art system for handwriting recognition containing a line and word segmentation module and a neural recognition model. We find that scoring performance on handwritten answers is significantly worse than on typed answers due to recognition errors. A simple postprocessing step based on a unigram language model is already sufficient to achieve performance very close to that on typed input.
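The postprocessing step mentioned in the abstract amounts to replacing out-of-vocabulary recognizer output with a nearby in-vocabulary word that is frequent under a unigram language model. The following is a minimal sketch of that idea, not the authors' implementation; the vocabulary, unigram counts, and candidate-generation routine are illustrative assumptions.

```python
# Sketch: unigram-LM postprocessing of handwriting-recognition output.
# Vocabulary and counts below are hypothetical, not from the paper's data.
from collections import Counter


def edits1(word):
    """All strings within edit distance 1 of `word`."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    return set(deletes + replaces + inserts + transposes)


def correct(word, unigram_counts):
    """Keep in-vocabulary words; otherwise pick the most frequent neighbour."""
    if word in unigram_counts:
        return word
    candidates = [w for w in edits1(word) if w in unigram_counts]
    return max(candidates, key=unigram_counts.get) if candidates else word


# Hypothetical unigram counts, e.g. estimated from typed reference answers.
unigrams = Counter({"photosynthesis": 12, "energy": 30, "light": 25})
print(correct("photsynthesis", unigrams))  # -> "photosynthesis"
```

Applied word by word to the recognizer output before scoring, this kind of correction recovers many recognition errors at negligible cost.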
Date of Conference: 08-10 September 2020
Date Added to IEEE Xplore: 25 November 2020