One of the long-term research goals in machine learning is to build never-ending learners. The state of the practice in the field is still dominated by the one-time learner paradigm: a learning algorithm is applied to a data set to produce a model or target function, after which the learner is put away and the model or function is put to work. Such a learn-once-apply-next (LOAN) approach may be inadequate for many real-world problems and stands in sharp contrast with humans' lifelong learning process. Learning, on the other hand, is often brought about by overcoming inconsistent circumstances. This paper proposes a framework for perpetual learning agents that continuously refine or augment their knowledge by overcoming inconsistencies encountered during their problem-solving episodes. The never-ending nature of a perpetual learning agent is embodied in the framework as the agent's continuous inconsistency-induced belief revision process. The framework hinges on the agent recognizing inconsistency in data, information, knowledge, or meta-knowledge; identifying the cause of the inconsistency; and revising or augmenting beliefs to explain, resolve, or accommodate it. The authors believe that inconsistency can serve as an important learning stimulus for building perpetual learning agents that incrementally improve their performance over time.
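The recognize-identify-revise cycle described above can be sketched as a minimal agent loop. This is an illustrative toy, not the authors' formal framework: the class and method names (`PerpetualAgent`, `observe`, `revise`) and the newest-wins revision policy are assumptions made here for clarity, where beliefs are reduced to propositional literals with truth values.

```python
# Minimal sketch of an inconsistency-induced belief revision loop.
# All names and the revision policy are illustrative assumptions,
# not the framework defined in the paper.

class PerpetualAgent:
    """An agent that refines its beliefs whenever input contradicts them."""

    def __init__(self):
        # Beliefs as a mapping literal -> truth value; holding both a
        # literal and its negation would constitute an inconsistency.
        self.beliefs = {}

    def is_inconsistent(self, literal, value):
        # An incoming observation is inconsistent if it contradicts
        # a currently held belief (the "recognize" step).
        return literal in self.beliefs and self.beliefs[literal] != value

    def revise(self, literal, value):
        # Simplest possible revision policy: prefer the newer observation.
        # (A real agent would identify the cause of the conflict and decide
        # which belief to retract, weaken, or augment.)
        self.beliefs[literal] = value

    def observe(self, literal, value):
        if self.is_inconsistent(literal, value):
            self.revise(literal, value)    # learning triggered by conflict
        else:
            self.beliefs[literal] = value  # consistent input: just augment


agent = PerpetualAgent()
agent.observe("penguin_flies", True)    # initial (mistaken) belief
agent.observe("penguin_flies", False)   # contradiction triggers revision
print(agent.beliefs["penguin_flies"])   # -> False
```

The point of the sketch is only the control flow: the agent never stops learning because every detected inconsistency is itself the trigger for another round of belief revision.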