Speech recognition using autocorrelation analysis

Author: Purton, R.; British Telecommunications Research Ltd., Berks., England

Abstract:

Experiments are described in which word recognition is based on digital autocorrelation analysis followed by computer pattern matching. Incoming speech is split into two frequency bands, and the signals in each band are quantized into two amplitude levels. The two signals are fed to separate autocorrelators, consisting of binary shift registers, digital multipliers, and RC integrators. The low- and high-frequency correlators have, respectively, 10 and 8 outputs, which are coded into a 36-bit character, sampled 40 times per second, and fed to a digital computer for recognition. In the computer, master patterns, in the form of a 36 × 30 matrix, are generated for each word of the vocabulary from a number of known utterances of the word. Unknown utterances are then compared with each master pattern in turn, and the best match is determined by a simple scoring technique; if desired, master patterns can be "updated" when correct recognition occurs. Master patterns can be formed from either one or several speakers; when formed from a single speaker, and with a vocabulary of 10 words, subsequent utterances by the same speaker are recognized with an average accuracy of 90 percent.
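
The recognition stage described above can be pictured with a short sketch. The Python/NumPy code below is a minimal, hypothetical reconstruction, not the paper's implementation: it assumes each utterance arrives as a binary array of 36-bit characters (one row per 1/40 s sample), time-normalises it onto the 36 × 30 master-pattern grid, averages known utterances into per-cell bit frequencies, and scores an unknown utterance by agreement with each master pattern. The function names, the nearest-frame time normalisation, the agreement-based scoring rule, and the blending weight in the "updating" step are all illustrative assumptions; the abstract does not specify how the paper codes, aligns, or scores the patterns.

```python
import numpy as np

FRAMES = 30     # time-normalised frames per utterance (the "30" of the 36 x 30 master pattern)
FEATURES = 36   # bits per sampled character (coded correlator outputs)

def time_normalise(utterance: np.ndarray) -> np.ndarray:
    """Map a (T, 36) binary feature sequence onto exactly 30 frames by
    nearest-frame selection, so every utterance fits the master-pattern grid.
    (Illustrative assumption; the paper's alignment method is not given.)"""
    t = utterance.shape[0]
    idx = np.round(np.linspace(0, t - 1, FRAMES)).astype(int)
    return utterance[idx]

def build_master_pattern(known_utterances: list[np.ndarray]) -> np.ndarray:
    """Average several known utterances of one word into a (30, 36) matrix of
    per-cell bit frequencies, used here as a stand-in for a master pattern."""
    stack = np.stack([time_normalise(u) for u in known_utterances])
    return stack.mean(axis=0)

def score(master: np.ndarray, utterance: np.ndarray) -> float:
    """Simple agreement score: reward cells where the utterance's bit matches
    the master pattern's bit frequency (higher means a better match)."""
    u = time_normalise(utterance).astype(float)
    return float(np.sum(u * master + (1.0 - u) * (1.0 - master)))

def recognise(masters: dict[str, np.ndarray], utterance: np.ndarray) -> str:
    """Compare the unknown utterance with each word's master pattern in turn
    and return the best-scoring word."""
    return max(masters, key=lambda word: score(masters[word], utterance))

def update_master(master: np.ndarray, utterance: np.ndarray,
                  weight: float = 0.1) -> np.ndarray:
    """Optional 'updating' step: blend a correctly recognised utterance into
    the existing master pattern (the blending weight is an assumption)."""
    return (1.0 - weight) * master + weight * time_normalise(utterance).astype(float)
```

In use, one would call build_master_pattern once per vocabulary word from its known utterances, then recognise for each unknown utterance, optionally calling update_master after a confirmed correct recognition, mirroring the "updating" option mentioned in the abstract.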

Published in:

IEEE Transactions on Audio and Electroacoustics (Volume: 16, Issue: 2)