
Modularity and scaling in large phonemic neural networks


Abstract:

The authors train several small time-delay neural networks aimed at all phonemic subcategories (nasals, fricatives, etc.) and report excellent fine phonemic discrimination performance for all cases. Exploiting the hidden structure of these small phonemic subcategory networks, they propose several techniques that make it possible to grow larger nets in an incremental and modular fashion without loss in recognition performance and without the need for excessive training time or additional data. The techniques include class discriminatory learning, connectionist glue, selective/partial learning, and all-net fine tuning. A set of experiments shows that stop consonant networks (BDGPTK) constructed from subcomponent BDG- and PTK-nets achieved up to 98.6% correct recognition, compared to 98.3% and 98.7% correct for the BDG- and PTK-nets. Similarly, an incrementally trained network aimed at all consonants achieved recognition scores of about 96% correct. These results are comparable to the performance of the subcomponent networks and significantly better than that of several alternative speech recognition methods.
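The sketch below illustrates the kind of modular composition the abstract describes: two pretrained time-delay sub-networks joined through extra "glue" hidden units, trained selectively at first and then fine-tuned as a whole. It is a minimal interpretation, not the authors' code; all class names, layer sizes, class counts, and the PyTorch framework are assumptions made for illustration.

```python
# Illustrative sketch (not the authors' implementation) of combining two
# pretrained phonemic sub-networks with "connectionist glue" units, followed
# by selective/partial learning and all-net fine tuning. Dimensions and
# hyperparameters are placeholders, not values from the paper.
import torch
import torch.nn as nn

class SubNetTDNN(nn.Module):
    """Small time-delay network for one phonemic subcategory (e.g. BDG or PTK)."""
    def __init__(self, n_features=16, n_hidden=8, n_classes=3):
        super().__init__()
        # Conv1d over the time axis plays the role of the time-delay connections.
        self.hidden1 = nn.Sequential(nn.Conv1d(n_features, n_hidden, kernel_size=3), nn.Sigmoid())
        self.hidden2 = nn.Sequential(nn.Conv1d(n_hidden, n_classes, kernel_size=5), nn.Sigmoid())

    def forward(self, x):                  # x: (batch, n_features, time)
        h = self.hidden2(self.hidden1(x))  # (batch, n_classes, time')
        return h.mean(dim=2)               # integrate evidence over time

class CombinedNet(nn.Module):
    """Two subcategory nets joined through additional free 'glue' hidden units."""
    def __init__(self, bdg: SubNetTDNN, ptk: SubNetTDNN, n_features=16, n_glue=4, n_classes=6):
        super().__init__()
        self.bdg, self.ptk = bdg, ptk
        # Glue units learn the cross-subcategory distinctions the subnets never saw.
        self.glue = nn.Sequential(nn.Conv1d(n_features, n_glue, kernel_size=3), nn.Sigmoid())
        self.out = nn.Linear(3 + 3 + n_glue, n_classes)

    def forward(self, x):
        b = self.bdg(x)
        p = self.ptk(x)
        g = self.glue(x).mean(dim=2)
        return self.out(torch.cat([b, p, g], dim=1))

# Selective/partial learning: freeze the pretrained subnets so only the glue
# units and the combined output layer adapt to the merged task.
bdg, ptk = SubNetTDNN(), SubNetTDNN()
net = CombinedNet(bdg, ptk)
for p in list(bdg.parameters()) + list(ptk.parameters()):
    p.requires_grad = False

# All-net fine tuning: afterwards, unfreeze every parameter and train briefly
# with a small learning rate so the whole network settles jointly.
for p in net.parameters():
    p.requires_grad = True
optimizer = torch.optim.SGD(net.parameters(), lr=1e-3)
```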
Page(s): 1888 - 1898
Date of Publication: 06 August 2002
Print ISSN: 0096-3518

