Recognition of Sign Language using Neural Network

Authors

  • Reena Sharma
  • Sonam Gour

Keywords

Convolutional neural network (CNN), Dataset, Deep learning, Sensor, Sign language

Abstract

Given the large number of individuals with hearing impairments, it is crucial to develop effective localized technology for recognizing sign language. Deaf and hard-of-hearing individuals use sign language to communicate within their communities and with others. Computerized sign language interpretation involves learning and recognizing sign gestures and subsequently converting them into text and speech. Sign gestures fall into two categories: static and dynamic. Ongoing efforts aim to develop sign language recognition systems that would facilitate communication between sign language users and those who do not sign. A current research focus is to improve sign recognition, especially in scenarios with limited computational resources. Although sign language recognition has been studied for a long time, a comprehensive solution remains distant. Most solutions developed to address this challenge have relied on either vision-based systems that use cameras exclusively or contact-based systems such as sensor gloves. The former option is cost-effective, and its appeal has grown significantly with the emergence of deep learning techniques. This article introduces a prototype for a first-person sign language translation system based on convolutional neural networks with dual cameras. The article centers on three main elements, including the dataset used to train the deep learning models and the evaluation of the models' performance.
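To make the CNN-based recognition pipeline described above concrete, the following is a minimal sketch, not the authors' model: a single convolution layer with ReLU activation, global average pooling, and a softmax readout over a handful of hypothetical sign classes. The frame size, kernel count, and class count are illustrative assumptions, and the parameters are random rather than trained.

```python
# Minimal sketch of a CNN-style gesture classifier (illustrative only;
# all shapes and parameters are assumptions, not taken from the article).
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernels):
    """Valid 2D cross-correlation of a (H, W) image with (K, kh, kw) kernels."""
    K, kh, kw = kernels.shape
    H, W = image.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kernels[k])
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(image, kernels, weights):
    feat = np.maximum(conv2d(image, kernels), 0.0)  # ReLU feature maps
    pooled = feat.mean(axis=(1, 2))                 # global average pooling
    return softmax(weights @ pooled)                # class probabilities

# Toy 16x16 "gesture frame" and random parameters for 5 hypothetical signs.
frame = rng.standard_normal((16, 16))
kernels = rng.standard_normal((4, 3, 3))   # 4 learnable 3x3 filters
weights = rng.standard_normal((5, 4))      # readout: 4 features -> 5 classes

probs = classify(frame, kernels, weights)
print(probs.shape, float(probs.sum()))
```

In a trained system the filters and readout weights would be fitted on a labeled gesture dataset, and a dynamic-gesture variant would additionally model the temporal dimension (e.g. by stacking frames).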

Published

2023-09-14

Issue

Section

Articles