Assume that we are trying to build a visual recognizer for a particular class of objects (chairs, for example) using existing induction methods, with the assistance of a human teacher who can label an image of an object as a positive or a negative example. As positive examples, we can obviously use images of real chairs. It is not clear, however, what kinds of objects we should use as negative examples. This illustrates a common problem: the concept we are trying to learn covers only a small fraction of a large universe of instances. In this work we suggest learning with the help of near misses, negative examples that differ from the learned concept in only a small number of significant respects, and we propose a framework for automatic generation of such examples. We show that generating near misses in the feature space is problematic in some domains, and propose a methodology for generating examples directly in the instance space using modification operators: functions over the instance space that produce new instances by slightly modifying existing ones. The generated instances are evaluated by mapping them into the feature space and measuring their utility with known active learning techniques. We apply the proposed framework to the task of learning visual concepts from range images.
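The generation loop described above can be sketched as follows. This is a minimal toy illustration, not the paper's actual system: the instances, the two modification operators, the identity feature mapping, and the hand-set linear classifier are all hypothetical, and candidate utility is scored with simple uncertainty sampling (closeness to the current decision boundary), one of the standard active learning criteria the abstract alludes to.

```python
import numpy as np

def extract_features(instance):
    """Map an instance into feature space (identity in this toy example;
    in the real task this would compute features from a range image)."""
    return np.asarray(instance, dtype=float)

# Modification operators: functions over the instance space that produce
# new instances by slightly modifying an existing one.
def shift(instance, delta=(0.5, 0.0)):
    return np.asarray(instance, dtype=float) + np.asarray(delta)

def scale(instance, factor=1.2):
    return np.asarray(instance, dtype=float) * factor

OPERATORS = [shift, scale]

def uncertainty(weights, bias, features):
    """Utility of a candidate: the closer its feature vector lies to the
    current linear decision boundary, the higher the score."""
    return -abs(weights @ features + bias)

def generate_near_misses(instances, weights, bias, k=3):
    """Apply every operator to every known instance, score the resulting
    candidates in feature space, and return the k most uncertain ones
    to present to the teacher for labeling."""
    candidates = [op(x) for x in instances for op in OPERATORS]
    scored = sorted(
        candidates,
        key=lambda c: uncertainty(weights, bias, extract_features(c)),
        reverse=True,
    )
    return scored[:k]

# Usage: two positive instances and a hand-set boundary x + y = 3.
positives = [np.array([1.0, 1.0]), np.array([2.0, 0.5])]
w, b = np.array([1.0, 1.0]), -3.0
queries = generate_near_misses(positives, w, b)
for q in queries:
    print(q)
```

Candidates generated from instances near the boundary score highest, so the examples sent to the teacher are exactly the borderline cases, i.e., the near misses.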