BIUSTRE

A multi sensor approach to Botswana sign language dataset with view of addressing occlusion

dc.contributor.supervisor Hlomani, Hlomani
dc.contributor.author Madise, Kabelo Germanus
dc.date.accessioned 2021-03-12T11:57:21Z
dc.date.available 2021-03-12T11:57:21Z
dc.date.issued 2020-10-26
dc.identifier.citation Madise, K. G. (2020) A multi sensor approach to Botswana sign language dataset with view of addressing occlusion, Masters Theses, Botswana International University of Science and Technology: Palapye en_US
dc.identifier.uri http://repository.biust.ac.bw/handle/123456789/276
dc.description.abstract Automatic Sign Language Recognition (ASLR) converts hand gestures to spoken language, thereby enabling communication between those able to hear and those unable to hear. There is abundant research on ASLR for British Sign Language and American Sign Language. However, Botswana Sign Language has received less attention, at least in terms of the computational representation needed for automatic recognition, which can be attributed to the lack of a Botswana Sign Language dataset. Work done on other languages is not always directly applicable to Botswana Sign Language because sign languages differ significantly from country to country. A dataset plays a pivotal role in the sign language recognition pipeline. However, one of the major challenges researchers encounter is accurately extracting the hands and fingers of a signer when they are not in the camera's field of view (occlusion). Researchers have argued that using multiple sensors addresses occlusion better than using a single sensor. This study proposes an approach to developing a Botswana Sign Language dataset based on tracking data from the Microsoft Kinect sensor and the Leap Motion controller. The feature sets from both devices are combined in order to improve recognition performance, especially under occlusion. Recognition is performed by Support Vector Machines (SVM) and K-Nearest Neighbors (KNN). The resulting dataset consists of five thousand four hundred and thirty-three (5433) Botswana Sign Language gestures comprising five (5) different sign words. The experimental results show that recognition performance improves compared to using a single device to capture sign gestures. Overall recognition accuracies of 99.90% and 99.40% were recorded using SVM and KNN respectively. en_US
dc.description.sponsorship Botswana International University of Science and Technology (BIUST) en_US
dc.language.iso en en_US
dc.publisher Botswana International University of Science and Technology (BIUST) en_US
dc.subject Automatic Sign Language Recognition (ASLR) en_US
dc.subject Microsoft’s Kinect sensor en_US
dc.subject Leap motion controller en_US
dc.subject Support Vector Machines (SVM) en_US
dc.subject K Nearest Neighbor (KNN) en_US
dc.subject Botswana sign language dataset en_US
dc.title A multi sensor approach to Botswana sign language dataset with view of addressing occlusion en_US
dc.description.level msc en_US
dc.description.accessibility unrestricted en_US
dc.description.department cis en_US
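
The sensor-fusion approach described in the abstract (combining Kinect and Leap Motion feature sets, then classifying with SVM and KNN) can be sketched as below. This is a minimal illustration, not the thesis's actual implementation: the feature dimensions, class shifts, and synthetic data are assumptions standing in for real skeletal and hand-tracking features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_classes = 500, 5  # 5 sign words, as in the dataset

# Hypothetical per-gesture feature vectors: Kinect skeletal-joint
# coordinates (60 values assumed) and Leap Motion hand/finger
# positions (40 values assumed). Synthetic data for illustration.
kinect = rng.normal(size=(n_samples, 60))
leap = rng.normal(size=(n_samples, 40))
labels = rng.integers(0, n_classes, size=n_samples)

# Shift each class's mean so the classifiers have a signal to learn.
kinect += labels[:, None]
leap += labels[:, None]

# Feature-level fusion: concatenate the two sensors' feature vectors,
# so each sensor can compensate when the other is occluded.
fused = np.hstack([kinect, leap])

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("SVM accuracy:", svm.score(X_te, y_te))
print("KNN accuracy:", knn.score(X_te, y_te))
```

Concatenation is the simplest fusion strategy; the same two classifiers could equally be trained on each sensor alone to reproduce the single-device comparison the abstract reports.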


This item appears in the following Collection(s)

  • Faculty of Sciences
    This collection is made up of electronic theses and dissertations produced by postgraduate students from the Faculty of Sciences
