Language without Barrier: A Machine Learning Based American Sign Language Analyzer
Course Instructor
Pramod Gupta
Abstract
This project focuses on developing a real-time system that translates American Sign Language (ASL) into written English. The core of the framework uses Temporal Convolutional Networks (TCNs) to recognize and classify motion-based ASL gestures, that is, signs involving hand movement over time rather than static handshapes. By accurately interpreting these gestures from video input, the system helps improve communication between the deaf and hard-of-hearing community and the hearing world. This work aims to enhance accessibility and support more inclusive digital interaction.
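To make the TCN component concrete, the following is a minimal sketch of a temporal convolutional gesture classifier in PyTorch. It is an illustration only, not the project's actual model: the per-frame hand-keypoint input (21 landmarks with x/y coordinates), the 20-class gesture vocabulary, and the layer widths, kernel size, and dilation schedule are all assumed here.

```python
# Minimal TCN-style gesture classifier sketch (assumptions: keypoint features
# as input, 20 gesture classes, illustrative layer sizes).
import torch
import torch.nn as nn


class TemporalBlock(nn.Module):
    """One residual block of dilated 1D convolutions over the time axis."""

    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        padding = (kernel_size - 1) * dilation  # extra context added by padding
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size,
                               padding=padding, dilation=dilation)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size,
                               padding=padding, dilation=dilation)
        self.downsample = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.relu = nn.ReLU()
        self.chomp = padding  # trim the padded "future" steps to keep the convolution causal

    def forward(self, x):
        out = self.relu(self.conv1(x)[..., :-self.chomp])
        out = self.relu(self.conv2(out)[..., :-self.chomp])
        return self.relu(out + self.downsample(x))  # residual connection


class TCNClassifier(nn.Module):
    """Stack of temporal blocks, global average pooling over time, linear head."""

    def __init__(self, num_features=42, num_classes=20,
                 channels=(64, 64, 128), kernel_size=3):
        super().__init__()
        layers, in_ch = [], num_features
        for i, out_ch in enumerate(channels):
            # dilation doubles per block, so the receptive field grows with depth
            layers.append(TemporalBlock(in_ch, out_ch, kernel_size, dilation=2 ** i))
            in_ch = out_ch
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Linear(in_ch, num_classes)

    def forward(self, x):
        # x: (batch, time, features); Conv1d expects (batch, features, time)
        feats = self.tcn(x.transpose(1, 2))
        return self.head(feats.mean(dim=-1))  # pool over time, then classify


if __name__ == "__main__":
    # e.g. a 60-frame clip of 21 hand landmarks with (x, y) coordinates = 42 features
    clip = torch.randn(1, 60, 42)
    logits = TCNClassifier()(clip)
    print(logits.shape)  # torch.Size([1, 20])
```

In such a pipeline, the per-frame features would typically come from a hand-tracking front end applied to the video stream; the dilated convolutions let the receptive field grow exponentially with depth, so longer gestures can be covered without recurrent layers.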