Statistical machine learning algorithms deal with the problem of selecting an appropriate statistical model from a model space Θ based on a training set {x_i}_{i=1}^N ⊂ X or {(x_i, y_i)}_{i=1}^N ⊂ X × Y. In doing so, they make assumptions, either implicitly or explicitly, on the geometries of the model space Θ and the data space X. Such assumptions are crucial to the success of the algorithms, as different geometries suit different models and data spaces. By studying these assumptions we are able to develop new theoretical results that enhance our understanding of several popular learning algorithms. Furthermore, using geometric reasoning we adapt existing algorithms, such as radial basis kernels and linear margin classifiers, to non-Euclidean geometries. Such adaptation proves useful when the data space does not exhibit Euclidean geometry. In particular, our experiments focus on the space of text documents, which is naturally associated with the Fisher information metric on the corresponding multinomial models.
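As a concrete illustration of the ideas above, the geodesic distance on the multinomial simplex under the Fisher information metric has the known closed form d(p, q) = 2 arccos(Σ_i √(p_i q_i)), and one simple way to adapt a radial basis kernel to this non-Euclidean geometry is to substitute this geodesic distance for the Euclidean one. The sketch below makes that substitution; it is an illustrative assumption, not the paper's exact construction, and the function names and the example term-frequency vectors are hypothetical. (Note that naively plugging a geodesic distance into an RBF form does not in general guarantee a positive-definite kernel.)

```python
import numpy as np

def fisher_geodesic_distance(p, q):
    # Geodesic distance between two points p, q on the multinomial simplex
    # under the Fisher information metric: d(p, q) = 2 * arccos(sum_i sqrt(p_i q_i)).
    bc = np.sum(np.sqrt(p * q))            # Bhattacharyya coefficient in [0, 1]
    return 2.0 * np.arccos(np.clip(bc, -1.0, 1.0))  # clip guards float round-off

def geodesic_rbf_kernel(p, q, sigma=1.0):
    # Hypothetical RBF-style kernel on the simplex: the Euclidean distance of the
    # standard radial basis kernel is replaced by the Fisher geodesic distance.
    d = fisher_geodesic_distance(p, q)
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

# Hypothetical example: term-frequency vectors of two documents,
# normalized so that each lies on the multinomial simplex.
doc1 = np.array([3.0, 1.0, 0.0, 2.0]); doc1 /= doc1.sum()
doc2 = np.array([1.0, 1.0, 4.0, 0.0]); doc2 /= doc2.sum()
similarity = geodesic_rbf_kernel(doc1, doc2)
```

The geodesic distance between identical distributions is 0, and between distributions with disjoint support it attains its maximum of π, so the kernel value lies in (0, 1] and decreases as documents diverge.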