Publication:
Sign Language to Speech Translator Using Deep Learning

dc.contributor.author: Amirul Hakim Bin Azmi
dc.date.accessioned: 2023-05-03T17:11:57Z
dc.date.available: 2023-05-03T17:11:57Z
dc.date.issued: 2020-02
dc.description: FYP 2 SEM 2 2019/2020
dc.description.abstract: People who are categorised as Deaf-Mute are considered to have a disability. They communicate in a variety of ways, including lip-reading, vocalisations, and sign language. However, a comprehension barrier exists in communication between them and hearing people. This thesis documents the development of a Deep Neural Network system that interprets electromyography (EMG) signals from the forearm of a Deaf-Mute individual and converts the captured data into a digital signal. The Myo Armband is used to capture the signals, and TensorFlow is used to train and validate the model. The integration of this hardware and software improves the efficiency of communication for the Deaf-Mute community.
dc.identifier.uri: https://irepository.uniten.edu.my/handle/123456789/21539
dc.language.iso: en
dc.subject: Machine Learning
dc.subject: Deep Learning
dc.title: Sign Language to Speech Translator Using Deep Learning
dspace.entity.type: Publication
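
The record itself contains no code, but as a rough illustration of the pipeline the abstract describes, the sketch below builds a small TensorFlow classifier over windows of 8-channel Myo Armband EMG data (the armband exposes 8 EMG channels). The window length, gesture vocabulary size, layer sizes, and placeholder training data are all illustrative assumptions; the thesis's actual architecture is not given in this record.

# Hypothetical sketch: a small dense network mapping windows of
# 8-channel Myo Armband EMG samples to sign-gesture classes.
# WINDOW, NUM_SIGNS, and the layer sizes are assumptions, not the
# architecture from the thesis.
import numpy as np
import tensorflow as tf

WINDOW = 50      # assumed number of EMG samples per classification window
CHANNELS = 8     # the Myo Armband provides 8 EMG channels
NUM_SIGNS = 10   # assumed size of the gesture vocabulary

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder arrays standing in for recorded EMG windows and labels;
# in the described system these would come from the Myo Armband stream.
x_train = np.random.rand(256, WINDOW, CHANNELS).astype("float32")
y_train = np.random.randint(0, NUM_SIGNS, size=256)
model.fit(x_train, y_train, epochs=5, validation_split=0.2)

In a deployment matching the abstract, the predicted class index would then be mapped to a word or phrase and passed to a text-to-speech stage to produce the spoken output.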