In learning from demonstration, the demonstrator often provides corrective examples to fix errant behavior by the agent or robot. We present a set of algorithms that use this corrective data to identify and remove the noisy training examples that caused the errant classifications and, ultimately, the errant behavior. The objective is to modify the source dataset itself rather than rely solely on the noise insensitivity of the classification algorithm, which is particularly useful for the sparse datasets typical of learning-from-demonstration experiments. Our approach attempts to distinguish noisy misclassification from mere undersampling of the learning space; when errors result from noise rather than undersampling, we remove the responsible points and update the classifier. We demonstrate the method on UCI Machine Learning Repository datasets at varying levels of sparsity and noise, using decision trees, k-nearest neighbors, and support vector machines.
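The general idea can be illustrated with a minimal sketch. This is not the paper's exact algorithm, only an assumed implementation of the corrective-removal step using a k-nearest-neighbors classifier: for each corrective example the current classifier gets wrong, flag the nearby training points whose labels contradict the correction, drop them, and retrain. The function name, the choice of k-NN, and the "flag disagreeing neighbors" heuristic are all illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def remove_suspect_points(X_train, y_train, X_corr, y_corr, k=3):
    """Illustrative sketch of corrective-data noise removal (assumed,
    not the paper's exact procedure): for each corrective example the
    classifier misclassifies, flag its k nearest training neighbors
    whose labels disagree with the correction, then remove them."""
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    wrong = clf.predict(X_corr) != y_corr
    if not wrong.any():
        # No errant classifications on the corrective data: nothing to remove.
        return X_train, y_train
    suspect = set()
    # Neighbors of each misclassified corrective example.
    neighbors = clf.kneighbors(X_corr[wrong], return_distance=False)
    for row, true_label in zip(neighbors, y_corr[wrong]):
        for idx in row:
            if y_train[idx] != true_label:  # neighbor contradicts the correction
                suspect.add(idx)
    keep = np.setdiff1d(np.arange(len(y_train)), list(suspect))
    return X_train[keep], y_train[keep]
```

After removal the classifier is simply refit on the pruned dataset; a fuller treatment would also check whether the region is merely undersampled (e.g., too few neighbors overall) before deleting anything.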