Publication:
Sign Language to Speech Translator Using Deep Learning

Date
2020-02
Authors
Amirul Hakim Bin Azmi
Abstract
People who are categorised as Deaf-Mute are considered to have a disability. Their methods of communication take many forms, including lip-reading, vocalization, and sign language. However, a comprehension barrier exists in communication between them and hearing people. This thesis documents the development of a deep neural network system that interprets electromyography (EMG) signals from the forearm of a Deaf-Mute individual and converts the captured data into a digital signal. The Myo Armband captures the signals, and TensorFlow is used to train and validate the model. The integration of the hardware and software improves the efficiency of communication for the Deaf-Mute community.
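As a rough illustration of the pipeline the abstract describes, the sketch below shows a minimal TensorFlow classifier over windowed EMG data. It is an assumption-laden sketch, not the thesis's actual architecture: the 8-channel input matches the Myo Armband's EMG sensors, but the window length, number of sign classes, and layer choices are hypothetical placeholders.

```python
# Hypothetical sketch of an EMG-to-sign classifier (not the thesis's exact model).
# Assumptions: 8 EMG channels (Myo Armband), 50-sample windows, 10 sign classes.
import numpy as np
import tensorflow as tf

NUM_CHANNELS = 8   # Myo Armband provides 8 EMG channels
WINDOW = 50        # assumed samples per classification window
NUM_CLASSES = 10   # assumed number of sign classes

def build_model() -> tf.keras.Model:
    """Build a small 1-D CNN that maps an EMG window to sign-class probabilities."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, NUM_CHANNELS)),
        tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
# A batch of 4 random EMG windows stands in for real Myo Armband captures.
x = np.random.randn(4, WINDOW, NUM_CHANNELS).astype("float32")
probs = model.predict(x, verbose=0)  # shape (4, NUM_CLASSES), rows sum to 1
```

In a full system, the predicted class would then be mapped to a word and passed to a text-to-speech engine to produce the spoken output.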
Description
FYP 2 SEM 2 2019/2020
Keywords
Machine Learning, Deep Learning