Results of experiments on lip gesture recognition with an artificial neural network (ANN) are discussed. The neural network module forms the core of a multimodal human-computer interface called LipMouse, which allows a user to operate a computer with lip movements and gestures. The user's face is detected in a video stream from a standard web camera by a cascade of boosted classifiers working with Haar-like features. The lip region is then extracted using a lip shape approximation obtained by segmenting the lip image with fuzzy clustering. The ANN is fed a feature vector describing the appearance of the lip region; the descriptors include a luminance histogram, statistical moments, and statistical parameters of co-occurrence matrices. The ANN recognizes three lip gestures with good accuracy: opening the mouth, sticking out the tongue, and forming puckered lips.
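To make the feature-vector stage concrete, the following is a minimal NumPy sketch of the kind of descriptors the abstract lists: a luminance histogram, low-order statistical moments, and two statistics (contrast and homogeneity) derived from a gray-level co-occurrence matrix. The bin counts, quantization levels, pixel offset, and the choice of co-occurrence statistics are illustrative assumptions, not the exact parameters used in LipMouse.

```python
import numpy as np

def cooccurrence_features(gray, levels=8):
    """Contrast and homogeneity from a horizontal co-occurrence matrix
    (an illustrative subset of Haralick-style statistics; offset (1, 0) assumed)."""
    q = np.clip(gray // (256 // levels), 0, levels - 1)       # quantize gray levels
    pairs = np.stack([q[:, :-1].ravel(), q[:, 1:].ravel()], axis=1)
    m = np.zeros((levels, levels))
    for i, j in pairs:                                        # count pixel pairs
        m[i, j] += 1
    m /= m.sum()                                              # normalize to probabilities
    i, j = np.indices(m.shape)
    contrast = ((i - j) ** 2 * m).sum()
    homogeneity = (m / (1.0 + (i - j) ** 2)).sum()
    return contrast, homogeneity

def lip_feature_vector(gray, bins=16):
    """Hypothetical ANN input: luminance histogram + moments + co-occurrence stats."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256), density=True)
    mu, sigma = gray.mean(), gray.std()
    skew = ((gray - mu) ** 3).mean() / (sigma ** 3 + 1e-9)    # third standardized moment
    contrast, homog = cooccurrence_features(gray)
    return np.concatenate([hist, [mu, sigma, skew, contrast, homog]])

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(32, 48), dtype=np.uint8)   # stand-in lip-region crop
vec = lip_feature_vector(patch)
print(vec.shape)  # → (21,)
```

In the described system this vector would be computed on the lip region cropped after face detection and fuzzy-clustering segmentation, then passed to the ANN for classification into one of the three gestures.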