HearAI - Where AI Supports Inclusion of Deaf and Hearing-Impaired Individuals


Abstract

Many people use sign language every day. Some of them understand the spoken and/or written form of their native language, while others do not. This creates a significant communication barrier for Deaf people, because qualified interpreters are scarce and relatively expensive, making access to them difficult. There is a clear need to automate sign language translation. Deep learning-based methods are a promising approach to this task, but they require large amounts of adequately labeled training data to perform well. Moreover, different nations use different sign languages, and there is no universal one. Sign languages are natural human languages with their own grammatical rules and lexicons. An efficient system capable of translating spoken languages into sign languages and vice versa would therefore significantly improve two-way communication. In the HearAI non-profit project, we addressed this problem and investigated several multilingual open sign language corpora labeled by linguists in the language-agnostic Hamburg Notation System (HamNoSys). First, we simplified the difficult-to-understand structure of HamNoSys, without significant loss of gloss meaning, by introducing numerical multilabels. Second, we used estimated pose landmarks and image-level features of selected video keyframes to recognize isolated glosses. This let us separately analyze the dominant hand's location, position, and shape, as well as overall movement symmetry, allowing us to explore in depth the usefulness of HamNoSys for gloss recognition. We believe this is an important step toward making the world a more inclusive place.
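As a rough illustration of the numerical multilabel idea, a single gloss annotation can be decomposed into one integer label per HamNoSys aspect. The label vocabularies below are hypothetical placeholders, not the project's actual HamNoSys symbol inventory:

```python
# Sketch: decompose a HamNoSys-style gloss annotation into numerical
# multilabels. The per-aspect vocabularies here are illustrative
# placeholders, not the actual HamNoSys alphabet used in HearAI.

HANDSHAPES = ["flat", "fist", "index", "spread"]   # hypothetical
LOCATIONS = ["head", "chest", "neutral_space"]     # hypothetical
MOVEMENTS = ["straight", "circular", "none"]       # hypothetical

def encode_annotation(handshape, location, movement, symmetric):
    """Map one annotation to a tuple of integer labels,
    one per aspect, plus a binary symmetry flag."""
    return (
        HANDSHAPES.index(handshape),
        LOCATIONS.index(location),
        MOVEMENTS.index(movement),
        int(symmetric),
    )

labels = encode_annotation("flat", "chest", "circular", True)
print(labels)  # (0, 1, 1, 1)
```

Each element of the tuple can then serve as the target of a separate classification head, which is one way to train on the hand's location, shape, and movement symmetry independently.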

Date
Jul 23, 2022 2:00 PM — 5:30 PM
Location
Messe Wien
Messepl. 1, Vienna, 1021
