The amount of information contained in a piece of data can be measured by the effect the data has on its observer. Fundamentally, this effect is to transform the observer's prior beliefs into posterior beliefs, according to Bayes' theorem. The amount of information can therefore be measured in a natural way by the distance (relative entropy) between the observer's prior and posterior distributions over the available space of hypotheses. This facet of information, termed "surprise", is important in dynamic situations where beliefs change, in particular during learning and adaptation. Surprise can often be computed analytically, for instance for distributions in the exponential family, or it can be approximated numerically. During sequential Bayesian learning, surprise decreases as the inverse of the number of training examples. Theoretical properties of surprise are discussed, in particular how it differs from and complements Shannon's definition of information. A computer-vision neural network architecture capable of computing surprise over image and video stimuli is then presented. Hypothesizing that surprising data ought to attract natural or artificial attention systems, the output of this architecture is used in a psychophysical experiment to analyze human eye movements in the presence of natural video stimuli. Surprise yields robust performance at predicting human gaze (ROC-like ordinal dominance score of ~0.7, compared to ~0.8 for human inter-observer repeatability, ~0.6 for a simpler intensity-contrast-based predictor, and 0.5 for chance).The resulting theory of surprise is applicable across different spatio-temporal scales, modalities, and levels of abstraction.
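As a minimal illustration of the idea (not the paper's implementation), the following Python sketch measures surprise for a Beta-Bernoulli model, an exponential-family case chosen here for concreteness: a Beta prior over a coin's bias is updated by conjugate Bayes after each flip, and surprise is taken as the relative entropy KL(prior || posterior), approximated by a Riemann sum. Sequentially, the surprise elicited by each new observation shrinks roughly as the inverse of the number of examples seen, matching the scaling claimed in the abstract. The model, data, and function names are all hypothetical.

```python
import math

def beta_logpdf(theta, a, b):
    # Log density of Beta(a, b) at theta in (0, 1).
    return ((a - 1) * math.log(theta) + (b - 1) * math.log(1 - theta)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def surprise(a_prior, b_prior, a_post, b_post, n=10000):
    # KL(prior || posterior) in nats, via a midpoint Riemann sum over (0, 1).
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) / n
        lp = beta_logpdf(theta, a_prior, b_prior)
        lq = beta_logpdf(theta, a_post, b_post)
        total += math.exp(lp) * (lp - lq)
    return total / n

if __name__ == "__main__":
    # Sequential Bayesian learning: start from a uniform prior over the
    # coin's bias, update after each flip, and record the surprise each
    # observation causes.
    a, b = 1.0, 1.0
    for n, flip in enumerate([1, 1, 0, 1, 1, 0, 1, 1], start=1):
        a_new, b_new = a + flip, b + (1 - flip)  # conjugate Bayes update
        print(f"observation {n}: surprise = {surprise(a, b, a_new, b_new):.4f}")
        a, b = a_new, b_new
```

Under this convention, an outcome that matches an already-confident posterior moves the distribution very little and so carries little surprise, regardless of how improbable the raw data would look under Shannon's measure.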