
Leave-One-Out-Training and Leave-One-Out-Testing Hidden Markov Models for a Handwritten Numeral Recognizer: The Implications of a Single Classifier and Multiple Classifications

4 Author(s)

Hidden Markov models (HMMs) have been shown to be useful in handwritten pattern recognition. However, owing to their fundamental structure, they have little resistance to unexpected noise among observation sequences. In other words, unexpected noise in a sequence might "break" the normal transmission of states for this sequence, making it unrecognizable to trained models. To resolve this problem, we propose a leave-one-out-training strategy, which will make the models more robust. We also propose a leave-one-out-testing method, which will compensate for some of the negative effects of this noise. The latter is actually an example of a system with a single classifier and multiple classifications. Compared with the 98.00 percent accuracy of the benchmark HMMs, the new system achieves a 98.88 percent accuracy rate on handwritten digits.
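The abstract's leave-one-out-testing idea (one classifier, multiple classifications) can be sketched roughly as follows: score the full observation sequence and every variant with a single observation removed, then combine the resulting classifications by majority vote, so that a single noisy observation only corrupts the variants that retain it. This is a minimal illustration, not the authors' implementation; `toy_score` is a hypothetical stand-in for a per-class HMM log-likelihood.

```python
from collections import Counter

def classify(score, classes, seq):
    """Pick the class whose model scores the sequence highest."""
    return max(classes, key=lambda c: score(c, seq))

def loo_test(score, classes, seq):
    """Leave-one-out-testing sketch: classify the full sequence and
    every variant with one observation deleted, then majority-vote."""
    votes = [classify(score, classes, seq)]
    for i in range(len(seq)):
        votes.append(classify(score, classes, seq[:i] + seq[i + 1:]))
    return Counter(votes).most_common(1)[0][0]

# Hypothetical scorer standing in for an HMM log-likelihood:
# class "0" prefers even observations, class "1" prefers odd ones.
def toy_score(cls, seq):
    want = 0 if cls == "0" else 1
    return sum(1 for x in seq if x % 2 == want)

print(loo_test(toy_score, ["0", "1"], [2, 4, 6, 3, 8]))  # prints "0"
```

In a real system each `score(c, seq)` call would be a forward-algorithm evaluation under the HMM trained for class `c`; the voting step is what makes the single classifier yield multiple classifications.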

Published in:

IEEE Transactions on Pattern Analysis and Machine Intelligence  (Volume:31 ,  Issue: 12 )