This paper describes both face detection using the eigenface space and face recognition using neural networks, performed in two steps. In the first step, a normalized skin-color map based on a Gaussian function was applied to extract a face candidate region, and facial feature information within that region was used to detect the face region. Face detection was carried out sequentially with three methods: DFFS, a combination of DFFS and DIFS, and template matching. Facial features were then extracted from the detected face region according to its Euclidean distance to the predefined eigenfaces. In the second step, neural network models were trained on 120 images for face recognition. In the experiments, three neural network models were constructed, with input variables consisting of features from the face space, geometrical facial features, or both. Images of each person were captured under various directions, poses, and facial expressions, and the number of hidden layers was varied from 1 to 3 across several tests of the models. The goal of this study was to reduce lighting effects in order to achieve high-performance face recognition, since recognition accuracy degrades under changes in illumination.
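Two components of the pipeline above can be sketched concretely: the Gaussian skin-color likelihood used to find candidate regions, and the DFFS (distance-from-face-space) residual used to verify a face. The following is a minimal NumPy sketch under stated assumptions, not the authors' implementation; the function names and the skin-model parameters (a 2-D Gaussian over normalized r-g chromaticity fitted to labeled skin pixels) are illustrative assumptions.

```python
import numpy as np

def skin_likelihood(rgb, mean, cov):
    """Per-pixel skin likelihood from a 2-D Gaussian over normalized
    (r, g) chromaticity. `mean` (2,) and `cov` (2, 2) are assumed to
    be estimated from labeled skin samples."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=-1, keepdims=True) + 1e-8   # avoid divide-by-zero
    rg = (rgb / s)[..., :2]                       # normalized r, g
    d = rg - mean
    inv = np.linalg.inv(cov)
    m2 = np.einsum('...i,ij,...j->...', d, inv, d)  # squared Mahalanobis distance
    return np.exp(-0.5 * m2)                      # in (0, 1]; threshold for a mask

def dffs(x, mean_face, eigenfaces):
    """Distance-from-face-space: norm of the component of (x - mean_face)
    orthogonal to the span of the eigenfaces. `eigenfaces` holds
    orthonormal rows; the projection coefficients are what DIFS
    (distance-in-face-space) would measure."""
    d = x - mean_face
    coeffs = eigenfaces @ d          # project onto the face space
    recon = eigenfaces.T @ coeffs    # reconstruction inside the face space
    return np.linalg.norm(d - recon)
```

A window whose DFFS is small lies close to the face space and is kept as a face candidate; the combined DFFS+DIFS stage and template matching would then refine it.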