This paper proposes a multiple facial feature interface that allows users with various disabilities to perform different mouse operations. Using a regular PC camera, the proposed system detects the user's eye and mouth movements and interprets the communication intent to control the computer: mouse movements are driven by the user's eye movements, while clicking events are driven by the user's mouth shapes, such as opening and closing. The proposed system is composed of three modules: a facial feature detector, a facial feature tracker, and a mouse controller. The facial region is initially identified using a skin-color model and connected-component (CC) analysis. Thereafter, the eye regions are localized using a neural network (NN)-based texture classifier that discriminates regions within the face into eye and non-eye classes, and the mouth region is localized using an edge detector. Once the eye and mouth regions are localized, they are continuously and accurately tracked using a mean-shift algorithm and template matching, respectively. Based on the tracking results, the mouse movements and clicks are then implemented. To assess the validity of the proposed method, it was applied to three applications: a web browser, a 'spelling board', and the game 'catching-a-bird'. Two test groups totaling 34 users evaluated the system, and the results showed that the proposed system can be applied efficiently and effectively as a user-friendly and convenient communication device.
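The first stage — a skin-color model followed by connected-component analysis — can be sketched in plain Python. The RGB thresholds below are a common rule-of-thumb skin test, not the paper's actual color model, and `largest_skin_component` simply returns the biggest 4-connected skin blob as a stand-in for the face region:

```python
from collections import deque

def is_skin(r, g, b):
    # Illustrative rule-of-thumb RGB skin test (assumed thresholds,
    # not the paper's trained skin-color model).
    return (r > 95 and g > 40 and b > 20 and
            r > g and r > b and (r - min(g, b)) > 15)

def largest_skin_component(image):
    """Return the pixel set of the largest 4-connected skin-colored
    component of `image` (a 2D list of (r, g, b) tuples) — a simple
    stand-in for the face-region detection step."""
    h, w = len(image), len(image[0])
    mask = [[is_skin(*image[y][x]) for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    best = set()
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one connected component with BFS.
                comp, q = set(), deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.add((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    return best
```

In practice the component would also be filtered by size and aspect ratio before being accepted as a face candidate.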
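The mean-shift tracking used for the eye regions reduces to a simple fixed-point iteration: repeatedly move a search window to the weighted centroid of a likelihood map until it stops moving. The sketch below is a minimal, assumed version operating on a precomputed 2D weight map (e.g., an eye-likelihood back-projection), not the paper's full tracker:

```python
def mean_shift(weights, window, iters=20):
    """Shift an (x, y, w, h) window toward the weighted centroid of
    `weights` (a 2D likelihood map, list of lists of floats) until it
    converges — the core update of mean-shift tracking."""
    x, y, w, h = window
    H, W = len(weights), len(weights[0])
    for _ in range(iters):
        total = sx = sy = 0.0
        # Zeroth and first moments of the weights inside the window.
        for yy in range(max(0, y), min(H, y + h)):
            for xx in range(max(0, x), min(W, x + w)):
                wt = weights[yy][xx]
                total += wt
                sx += wt * xx
                sy += wt * yy
        if total == 0:
            break  # no support under the window; give up
        # Re-center the window on the weighted centroid.
        nx = int(round(sx / total - w / 2))
        ny = int(round(sy / total - h / 2))
        if (nx, ny) == (x, y):
            break  # converged
        x, y = nx, ny
    return (x, y, w, h)
```

At each frame the converged window from the previous frame seeds the next search, which is what makes the per-frame cost low enough for real-time use.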
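The mouth region is tracked by template matching, which amounts to sliding a reference patch over the frame and keeping the position with the best match score. A minimal sketch, assuming 2D grayscale lists and a sum-of-squared-differences score (the paper does not specify its similarity measure):

```python
def match_template(frame, template):
    """Find the top-left position of `template` in `frame` (both 2D
    lists of grayscale values) by exhaustively minimizing the sum of
    squared differences (SSD)."""
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    best_pos, best_ssd = (0, 0), float('inf')
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            # SSD between the template and the frame patch at (x, y).
            ssd = sum((frame[y + dy][x + dx] - template[dy][dx]) ** 2
                      for dy in range(th) for dx in range(tw))
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (x, y)
    return best_pos
```

A change in the matched patch's appearance between frames (e.g., the mouth opening) can then be thresholded to trigger a click event.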