dc.description.abstract |
Sign language is a means of communication among hearing-impaired people and between hearing-impaired and hearing people. It is a nonverbal language that expresses letters, words, and phrases through hand and facial gestures. However, a communication gap exists between hearing-impaired and hearing people, because most hearing people in Ethiopia do not know or understand sign language. This study therefore aims to design a model that converts Ethiopian Amharic phrase-level signs to text, addressing a difficulty at the Amharic phrase level that previous works did not solve, and thereby narrowing the communication gap between hearing-impaired and hearing people. We used deep learning techniques to solve the problem and achieve our objectives. A hybrid network (CNN with LSTM) was applied in our study, since a hybrid network improves model performance: a convolutional neural network (CNN) was used for feature extraction, and a long short-term memory (LSTM) network for classification, as LSTMs can classify sequences of information and remember them over long periods. Of the total dataset of 10,500 samples, 80% (8,400) were used for training, 10% (1,050) for validation, and 10% (1,050) for testing the model. The testing accuracy of the model is 96%. We evaluated our model using accuracy, precision, recall, F1-score, and a confusion matrix. |
en_US |
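The 80/10/10 split and the per-class evaluation metrics mentioned in the abstract can be sketched in plain Python. This is an illustrative sketch, not the authors' code: the function names (`split_counts`, `one_vs_rest_metrics`) and the toy labels are assumptions for demonstration; only the split ratios, the dataset size of 10,500, and the metric definitions come from the abstract.

```python
def split_counts(total, train=0.8, val=0.1):
    # 80/10/10 split used in the study: 10500 -> 8400 / 1050 / 1050.
    n_train = int(total * train)
    n_val = int(total * val)
    n_test = total - n_train - n_val  # remainder goes to the test set
    return n_train, n_val, n_test

def one_vs_rest_metrics(y_true, y_pred, positive):
    # Precision, recall, and F1 for a single class, treating all other
    # classes as negatives (one-vs-rest), as in a per-class report.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

print(split_counts(10500))  # -> (8400, 1050, 1050)

# Toy example (hypothetical labels, not from the study's data):
y_true = ["hello", "hello", "thanks", "thanks"]
y_pred = ["hello", "thanks", "thanks", "thanks"]
print(one_vs_rest_metrics(y_true, y_pred, "hello"))
```

In practice a framework routine (e.g. a classification report) would compute these over all phrase classes; the sketch only makes the definitions behind the reported 96% accuracy and the precision/recall/F1 figures concrete.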