Real-time surveillance systems that incorporate lipreading can benefit from a reduction in the volume of visual data to be processed, which lowers processing time and improves overall system efficiency. Such systems recognize speech from features extracted from the mouth region. In this paper, the lip periphery is represented by a set of boundary descriptors, and three feature selection techniques are applied to reduce the feature set: Minimum Redundancy Maximum Relevance (mRMR), the Chi-square statistic, and Correlation-based Feature Selection (CFS). The resulting feature subsets are used for speech classification, and an optimal feature vector is determined on the basis of recognition performance and feature vector length. The optimal feature vector improves recognition performance while achieving a 94.17% reduction in feature size. Most of the prominent boundary descriptors are found to lie on the upper lip, and lip width emerges as an important contributor to visual speech.
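As a rough illustration of one of the three selection techniques named above, the sketch below scores discretized features against class labels with the Chi-square statistic and ranks them. The data, feature names, and two-feature setup are entirely hypothetical toy values, not the descriptors or dataset used in the paper; mRMR and CFS would rank features by different criteria (relevance minus redundancy, and merit of whole subsets, respectively).

```python
from collections import Counter

def chi_square_score(values, labels):
    """Chi-square statistic between one discretized feature and class labels.

    Builds the observed contingency table and sums (O - E)^2 / E over cells,
    where E is the expected count under independence of feature and class.
    """
    n = len(values)
    observed = Counter(zip(values, labels))
    value_totals = Counter(values)
    label_totals = Counter(labels)
    score = 0.0
    for v in value_totals:
        for c in label_totals:
            expected = value_totals[v] * label_totals[c] / n
            score += (observed.get((v, c), 0) - expected) ** 2 / expected
    return score

# Hypothetical example: two discretized lip-boundary features, binary class.
labels = [0, 0, 0, 1, 1, 1]
features = {
    "upper_lip_curve": [0, 0, 0, 1, 1, 1],  # perfectly class-aligned
    "noise_feature":   [0, 1, 0, 1, 0, 1],  # independent of the class
}

# Rank features by descending Chi-square score and keep the best one.
ranked = sorted(features, key=lambda f: chi_square_score(features[f], labels),
                reverse=True)
```

A class-aligned feature yields a high score while an independent one scores near zero, so truncating the ranked list after the top-k features gives the kind of reduced feature vector evaluated in the paper.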