Publication: Sign Language to Speech Translator Using Deep Learning
dc.contributor.author | Amirul Hakim Bin Azmi | en_US |
dc.date.accessioned | 2023-05-03T17:11:57Z | |
dc.date.available | 2023-05-03T17:11:57Z | |
dc.date.issued | 2020-02 | |
dc.description | FYP 2 SEM 2 2019/2020 | en_US |
dc.description.abstract | People who are categorised as Deaf-Mute are considered to have a disability. Their methods of communication vary widely, including lip-reading, vocalisations and sign language. However, a comprehension barrier exists in communication between them and hearing people. This thesis documents the development of a Deep Neural Network system that interprets electromyography (EMG) signals from the forearm of a Deaf-Mute individual and converts the captured data into a digital signal. This is achieved using the Myo Armband to capture the signals, and TensorFlow to train and validate the data. The integration of the hardware and software improves the efficiency of communication for the Deaf-Mute community. | en_US |
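The abstract describes a pipeline of Myo Armband EMG capture feeding a TensorFlow-trained classifier. The thesis's actual architecture is not given in this record, so the following is only a minimal sketch of that kind of setup: a small dense network classifying fixed-length windows of 8-channel EMG data into sign classes. The window size, layer widths, class count, and random placeholder data are all illustrative assumptions, not the author's design.

    # Minimal sketch (assumptions noted above), not the thesis's actual model:
    # classify fixed-length windows of Myo Armband EMG into sign classes.
    import numpy as np
    import tensorflow as tf

    NUM_CHANNELS = 8    # the Myo Armband exposes 8 EMG electrodes
    WINDOW_SIZE = 50    # samples per window (illustrative)
    NUM_CLASSES = 10    # number of sign gestures (illustrative)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(WINDOW_SIZE, NUM_CHANNELS)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Placeholder arrays standing in for recorded EMG windows and labels.
    x_train = np.random.randn(256, WINDOW_SIZE, NUM_CHANNELS).astype("float32")
    y_train = np.random.randint(0, NUM_CLASSES, size=256)

    # Train with a held-out validation split, mirroring the abstract's
    # "train and validate the data" step in TensorFlow.
    model.fit(x_train, y_train, epochs=5, validation_split=0.2, verbose=0)

    # Predict the sign class for one new window; a text-to-speech stage
    # would then voice the predicted label.
    predicted_class = int(np.argmax(model.predict(x_train[:1], verbose=0)))

In practice the EMG windows would come from the Myo Armband's streaming API rather than random arrays, and the predicted label would drive a speech-synthesis step to complete the sign-to-speech translation.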
dc.identifier.uri | https://irepository.uniten.edu.my/handle/123456789/21539 | |
dc.language.iso | en | en_US |
dc.subject | Machine Learning | en_US |
dc.subject | Deep Learning | en_US |
dc.title | Sign Language to Speech Translator Using Deep Learning | en_US |
dspace.entity.type | Publication |