Recognition of gestures in Arabic sign language using neuro-fuzzy systems
Artificial Intelligence
Sign language plays a crucial role in communication when spoken language cannot be used; deaf people rely on it as their primary means of communication, with hand gestures representing the alphabets of sign languages. For proper communication between hearing and deaf people, a translator is therefore of great value. In this paper, a fully automated translator of the gestures representing the alphabets of the Arabic Sign Language (ASL) was developed. A set of 30 ANFIS (adaptive neuro-fuzzy inference system) networks was designed and trained to recognize the ASL gestures. The developed system is vision-based and does not rely on gloves or visual markings: it operates directly on images of bare hands, allowing the user to interact with the system in a natural way. A twin approach based on boundary and region properties is used to extract a feature set that represents each gesture. The extracted features are invariant to translation, scaling, and rotation, which makes the system more flexible. The subtractive clustering algorithm and the least-squares estimator are used to identify the fuzzy inference system, and training is performed with the hybrid learning algorithm. Experiments revealed that our system recognizes the 30 Arabic manual alphabets with a recognition rate of 100% when approximately 19 rules are used per ANFIS model, and 97.5% when approximately 10 rules are used.
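The abstract does not spell out the paper's exact boundary and region features, only that they are translation-, scale-, and rotation-invariant. As an illustration of how such invariance is typically achieved for region properties, the sketch below computes the first two Hu moment invariants of a 2-D point set; this is a standard choice, not necessarily the authors' feature set.

```python
def central_moment(points, p, q):
    # Central moments are taken about the centroid, which makes them
    # invariant to translation of the region.
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return sum((x - cx) ** p * (y - cy) ** q for x, y in points)


def hu_invariants(points):
    # Normalizing by mu00 ** (1 + (p+q)/2) adds scale invariance for
    # binary regions (where mu00 is the area); Hu's combinations of the
    # normalized moments then add rotation invariance.
    mu00 = central_moment(points, 0, 0)

    def eta(p, q):
        return central_moment(points, p, q) / mu00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

For a hand silhouette, `points` would be the pixel coordinates of the segmented region; translating or rotating the hand leaves `phi1` and `phi2` unchanged, so one classifier can serve all hand positions.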
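The abstract states that subtractive clustering identifies the fuzzy inference system, i.e. it determines how many rules each ANFIS model gets and where their premise membership functions are centred. A minimal sketch of Chiu-style subtractive clustering follows; the radius and acceptance-threshold parameters (`ra`, `rb`, `eps`) are assumptions, as the paper's settings are not given in the abstract.

```python
import math


def subtractive_clustering(data, ra=0.5, rb=0.75, eps=0.15):
    # Each data point gets a "potential" measuring how densely other
    # points surround it. The highest-potential point becomes a cluster
    # centre (one fuzzy rule), potentials near it are suppressed, and
    # the process repeats until the best remaining potential drops
    # below eps times the first peak.
    alpha = 4.0 / ra ** 2   # ra: neighbourhood radius for the potential
    beta = 4.0 / rb ** 2    # rb: suppression radius (usually > ra)

    def sqdist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    pot = [sum(math.exp(-alpha * sqdist(xi, xj)) for xj in data)
           for xi in data]
    first_peak = max(pot)
    centres = []
    while True:
        k = max(range(len(data)), key=lambda i: pot[i])
        if pot[k] < eps * first_peak:
            break
        centre, peak = data[k], pot[k]
        centres.append(centre)
        # Suppress potential around the new centre so the next centre
        # is found elsewhere.
        pot = [p - peak * math.exp(-beta * sqdist(xi, centre))
               for p, xi in zip(pot, data)]
    return centres
```

Each returned centre would seed one fuzzy rule; the consequent parameters are then fit by least squares and refined by hybrid learning, as the abstract describes.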