Ethiopian Sign Language Recognition Based on Hand Gesture and Facial Expression Using Convolutional Neural Network


dc.contributor.author Walelign, Andargie
dc.date.accessioned 2021-10-13T06:47:01Z
dc.date.available 2021-10-13T06:47:01Z
dc.date.issued 2020-08
dc.identifier.uri http://ir.bdu.edu.et/handle/123456789/12725
dc.description.abstract Sign language is the natural language used by hearing-impaired people to communicate with each other. Most hearing people do not understand sign language, which restricts the deaf community to communicating only among themselves and creates a communication barrier between the deaf community and hearing people. Various studies have been carried out to automate sign language recognition, but research done on one sign language does not apply to another. Most research on Ethiopian sign language has concentrated only on translating Amharic text into the corresponding sign gestures and on fingerspelling recognition, and almost all recognition systems consider only manual signs such as hand gestures or finger shapes, not non-manual signals such as facial expression. This research work deals with recognition of Ethiopian sign language based on hand gesture and facial expression using a convolutional neural network; it translates the gesture of a sign word into its corresponding text. The proposed system has five main components: preprocessing, hand and face segmentation, feature extraction, classification, and classifier combination. Preprocessing involves frame extraction, size adjustment to a standard size, and noise removal using a median filter. Hand and face segmentation are performed using YCbCr skin detection and MTCNN, respectively. In feature extraction, a Gabor filter is applied to the segmented face image to extract texture features. Feature learning and classification are carried out using a convolutional neural network, with data augmentation and dropout applied to overcome overfitting. In the classifier combination phase, the two modalities, manual (hand gesture) and non-manual (facial expression), are combined to improve recognition performance. (Illustrative code sketches of these steps follow the record below.) The dataset used to train the model was prepared by the authors with deaf teachers and students of Yeakatit 23 primary school; the model is trained on a total of 5,210 images. The contributions of this thesis are a CNN model that uses two modalities, hand gesture and facial expression, for Ethiopian sign language, and a segmentation algorithm for segmenting the region-of-interest (ROI) part. The proposed system is implemented in Python. The model achieved an average recognition accuracy of 97.3% using the two modalities (i.e., hand gesture and facial expression), improving on the performance of state-of-the-art CNN models. en_US
dc.language.iso en_US en_US
dc.subject INFORMATION TECHNOLOGY en_US
dc.title Ethiopian Sign Language Recognition Based on Hand Gesture and Facial Expression Using Convolutional Neural Network en_US
dc.type Thesis en_US
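
The abstract names a concrete image-processing pipeline: median-filter denoising, YCbCr skin detection for the hands, MTCNN for the face, and Gabor texture features on the segmented face. The sketch below illustrates those steps in Python with OpenCV; every parameter in it (the 128x128 standard size, the chrominance thresholds, the Gabor-bank settings) is an assumption chosen for illustration, not a value taken from the thesis.

    # Sketch of preprocessing, YCbCr skin segmentation, and Gabor features.
    # All numeric parameters here are illustrative assumptions.
    import cv2
    import numpy as np

    def preprocess(frame_bgr, size=(128, 128)):
        """Resize a frame to a standard size and denoise with a median filter."""
        frame = cv2.resize(frame_bgr, size)
        return cv2.medianBlur(frame, 5)  # kernel size 5 is an assumed choice

    def segment_skin_ycbcr(frame_bgr):
        """Keep skin-coloured pixels (hand candidates) via YCbCr thresholds.
        The chrominance bounds are common literature values, not the thesis's."""
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
        lower = np.array([0, 133, 77], dtype=np.uint8)
        upper = np.array([255, 173, 127], dtype=np.uint8)
        mask = cv2.inRange(ycrcb, lower, upper)
        return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)

    def gabor_features(face_gray):
        """Stack responses of a small Gabor filter bank (four orientations,
        an assumed bank size) applied to the segmented face image."""
        responses = []
        for theta in np.arange(0, np.pi, np.pi / 4):
            kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5,
                                        0, cv2.CV_32F)
            responses.append(cv2.filter2D(face_gray, cv2.CV_32F, kernel))
        return np.stack(responses, axis=-1)

For the classifier combination phase, one plausible reading is a two-branch CNN whose manual (hand) and non-manual (face) features are fused before the softmax layer. The Keras sketch below shows that late-fusion idea; the layer sizes, dropout rate, input shapes, and the placeholder num_classes are all assumptions, since the architecture is not spelled out in the abstract.

    # Two-branch CNN with feature-level fusion of the two modalities.
    # Architecture details are assumed; only the two-modality fusion idea
    # and the use of dropout come from the abstract.
    from tensorflow.keras import layers, models

    num_classes = 10  # placeholder; the sign-word vocabulary size is not stated

    def branch(input_shape, name):
        """A small convolutional feature extractor for one modality."""
        inp = layers.Input(shape=input_shape, name=name)
        x = layers.Conv2D(32, 3, activation="relu")(inp)
        x = layers.MaxPooling2D()(x)
        x = layers.Conv2D(64, 3, activation="relu")(x)
        x = layers.MaxPooling2D()(x)
        return inp, layers.Flatten()(x)

    hand_in, hand_feat = branch((128, 128, 3), "hand")        # segmented hand image
    face_in, face_feat = branch((128, 128, 4), "face_gabor")  # 4 Gabor channels

    merged = layers.concatenate([hand_feat, face_feat])
    merged = layers.Dense(128, activation="relu")(merged)
    merged = layers.Dropout(0.5)(merged)  # dropout against overfitting
    out = layers.Dense(num_classes, activation="softmax")(merged)

    model = models.Model([hand_in, face_in], out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

In training, data augmentation of the input images would be applied on top of this, as the abstract mentions, to further reduce overfitting.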

