Sign Language Recognition and Video Generation Using Deep Learning
DOI: https://doi.org/10.61779/jasetm.v1i2.4

Keywords: Gesture recognition, Deep Learning, Sign language, Video generation

Abstract
The proposed system aims to help hearing people understand the communication of speech-impaired individuals by recognizing hand gestures and generating animated sign gestures. The system focuses on recognizing different hand gestures and converting them into information that hearing people can understand. YOLOv8, a state-of-the-art object detection model, is employed to detect and classify sign language gestures. Sign language video generation can also serve as a guide for anyone learning sign language by providing expressive sign language videos, using avatars that translate user input into sign language animations. The CWASA package and SiGML files are used for this process. The project contributes to the advancement of assistive technologies for the hearing-impaired community, offering innovative solutions for sign language recognition and video generation.
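To illustrate the video-generation pathway described above: a SiGML file encodes each sign as a sequence of HamNoSys symbols, which the CWASA avatar player renders as animation. The fragment below is a minimal hypothetical sketch, not an example from the paper; the gloss name and the particular HamNoSys elements are illustrative assumptions only.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sigml>
  <!-- One sign entry; the gloss attribute is a human-readable label (illustrative) -->
  <hns_sign gloss="hello">
    <hamnosys_nonmanual/>
    <hamnosys_manual>
      <!-- Illustrative handshape, finger orientation, palm orientation,
           location, and movement symbols -->
      <hamflathand/>
      <hamextfingeru/>
      <hampalml/>
      <hamshouldertop/>
      <hammover/>
    </hamnosys_manual>
  </hns_sign>
</sigml>
```

A file of this shape can be passed to the CWASA player embedded in a web page, which animates the avatar sign by sign, turning translated user input into a sign language video.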
License
Copyright (c) 2023 Meera Treesa Mathews, Joyal Raphel, Joseph Shaju C, Steve Soney Varghese, Paul J Puthusserry
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their websites) after acceptance, with an acknowledgement of its initial publication in this journal, as this can lead to productive exchanges as well as earlier and greater citation of the published work.