Speech perception and cochlear signal processing [Life Sciences]

2 Author(s)
Allen, J.B. (Electrical Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA); Feipeng Li

Speech sounds are encoded by time-varying spectral patterns called acoustic cues. The processing and detection of these acoustic cues lead to events, defined as the psychological correlates of the acoustic cues. Because of the similarity among the acoustic cues, speech sounds form natural confusion groups. When the defining feature of a sound within a group is masked by noise, one event can turn into another. A systematic psychoacoustic "3-D method" has been developed to explore the perceptual cues of stop consonants in naturally produced speech sounds. For each sound, the 3-D method measures the contribution of each subcomponent by time-truncating, high-pass/low-pass filtering, and masking with noise. The AI-gram, a visualization tool that simulates auditory peripheral processing, is used to predict the audible components of the speech sound. The results show that plosive consonants are defined by short-duration bursts, characterized by their center frequency and by the delay to the onset of voicing. Fricatives are characterized by the duration and bandwidth of a noise-like feature. Pilot studies of hearing-impaired (HI) speech perception indicate that cochlear dead regions have a considerable impact on consonant identification. An HI listener may have trouble understanding speech simply because he or she cannot hear certain sounds: the events are missing, due either to the hearing loss or to masking introduced by the noise.
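As a rough illustration of the three signal manipulations behind the 3-D method (time truncation, high-pass/low-pass filtering, and masking with noise at a chosen SNR), the sketch below shows one possible implementation in Python. It is not the authors' code; the function names, cutoff frequency, truncation window, and SNR value are illustrative assumptions only.

```python
# Hypothetical sketch of the three 3-D-method manipulations: truncation,
# band-limiting, and noise masking. Parameters are illustrative, not from the paper.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def truncate(x, fs, t_start, t_end):
    """Keep only the segment between t_start and t_end (seconds)."""
    return x[int(t_start * fs):int(t_end * fs)]

def bandlimit(x, fs, cutoff_hz, kind="low", order=6):
    """Low-pass or high-pass filter to probe which frequency bands carry the cue."""
    sos = butter(order, cutoff_hz, btype=kind, fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def add_noise(x, snr_db, rng=None):
    """Add white noise at a target SNR (in dB) to mask weak acoustic cues."""
    rng = rng or np.random.default_rng(0)
    p_sig = np.mean(x ** 2)
    p_noise = p_sig / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(p_noise), size=x.shape)

# Example: isolate a 50-ms portion of a token, keep only content below 2 kHz,
# then mask it at 0 dB SNR.
fs = 16000
x = np.random.randn(fs)            # stand-in for a recorded consonant-vowel token
seg = truncate(x, fs, 0.10, 0.15)
seg = bandlimit(seg, fs, 2000, kind="low")
seg = add_noise(seg, snr_db=0)
```

In the actual experiments, each manipulated token would be played to listeners and the resulting consonant confusions tallied to locate the cue in time and frequency; that perceptual step is not shown here.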

Published in:

IEEE Signal Processing Magazine (Volume 26, Issue 4)