Applying neural network developments to sign language translation

2 Author(s)

Abstract:

Neural networks are used to extract relevant features of sign language from video images of a person communicating in American Sign Language or Signed English. The key features are hand motion, hand location with respect to the body, and handshape. A modular design is underway to apply various techniques, including neural networks, in the development of a translation system that will facilitate communication between deaf and hearing people. Signal processing techniques developed for defense-related programs have been adapted and applied to this project. Algorithm development and transition using neural network architectures have been encouraging. The results of the feasibility study for this project are described.
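
To make the modular, feature-based approach described in the abstract concrete, the sketch below shows one possible arrangement: separate extractors for hand motion, hand location relative to the body, and handshape, whose outputs feed a small feedforward neural network that scores candidate signs. This is a hypothetical illustration under stated assumptions, not the authors' implementation; every function name, feature dimension, and the toy input data are assumptions added here for clarity.

# Hypothetical modular pipeline: three feature extractors feeding a small
# neural-network classifier over candidate signs. Placeholder logic only.
import numpy as np

rng = np.random.default_rng(0)

def extract_hand_motion(frames: np.ndarray) -> np.ndarray:
    # Summarize hand motion as the mean frame-to-frame displacement (placeholder).
    return np.diff(frames, axis=0).mean(axis=0)

def extract_hand_location(frames: np.ndarray) -> np.ndarray:
    # Hand location relative to the body, averaged over the clip (placeholder).
    return frames.mean(axis=0)

def extract_handshape(frames: np.ndarray) -> np.ndarray:
    # Crude handshape descriptor: per-dimension spread of the trajectory (placeholder).
    return frames.std(axis=0)

class SignClassifier:
    # One-hidden-layer network mapping the concatenated features to sign scores.
    def __init__(self, n_features: int, n_hidden: int, n_signs: int):
        self.W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_signs))

    def forward(self, x: np.ndarray) -> np.ndarray:
        h = np.tanh(x @ self.W1)           # hidden layer
        logits = h @ self.W2
        e = np.exp(logits - logits.max())  # softmax over candidate signs
        return e / e.sum()

# Fake clip: 30 frames of 2-D hand coordinates, a stand-in for tracked video.
frames = rng.normal(size=(30, 2))
features = np.concatenate([
    extract_hand_motion(frames),
    extract_hand_location(frames),
    extract_handshape(frames),
])

model = SignClassifier(n_features=features.size, n_hidden=8, n_signs=5)
print("sign probabilities:", model.forward(features))

The modularity is the point of the sketch: each extractor can be replaced independently (for example, by a trained network) without changing the classifier's interface, which matches the staged, modular design strategy the abstract describes.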

Published in:

Neural Networks for Signal Processing III: Proceedings of the 1993 IEEE-SP Workshop

Date of Conference:

6-9 Sep 1993