Knowledge-based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes, as expressed, for example, in first-order predicate logic, it is far from obvious what neural-symbolic systems should look like if they are to be truly connectionist, able to learn, and at the same time allow for a declarative reading and logical reasoning. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with a feed-forward core. In this paper we show how the core method can be used to learn first-order logic programs in a connectionist fashion, such that the trained network is able to reason over the acquired knowledge. We also report on experimental evaluations demonstrating the feasibility of our approach.
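To make the "recurrent network with a feed-forward core" idea concrete, the following is a minimal sketch of the core method for a *propositional* logic program (the first-order case treated in the paper approximates this setup in the limit). The example program, the atom names, and the particular threshold encoding are illustrative assumptions, not the paper's exact construction: the feed-forward core computes one application of the immediate-consequence operator T_P, and the recurrent connection iterates it to a fixpoint.

```python
import numpy as np

# Illustrative program (not from the paper):  a <- b, c.   b <- .   c <- b.
atoms = ["a", "b", "c"]
clauses = [("a", ["b", "c"]), ("b", []), ("c", ["b"])]

n = len(atoms)
idx = {x: i for i, x in enumerate(atoms)}

# Hidden layer: one threshold unit per clause, firing iff all body atoms hold.
W_in = np.zeros((len(clauses), n))
theta_h = np.zeros(len(clauses))
for j, (_, body) in enumerate(clauses):
    for b in body:
        W_in[j, idx[b]] = 1.0
    theta_h[j] = len(body) - 0.5  # exceeded exactly when every body atom is true

# Output layer: an atom unit fires if some clause with that head fired.
W_out = np.zeros((n, len(clauses)))
for j, (head, _) in enumerate(clauses):
    W_out[idx[head], j] = 1.0

def tp(state):
    """One pass through the feed-forward core: the operator T_P."""
    hidden = (W_in @ state > theta_h).astype(float)
    return (W_out @ hidden > 0.5).astype(float)

# Recurrent use: feed the output back as input until a fixpoint is reached,
# i.e. the least model of the program.
state = np.zeros(n)
while True:
    nxt = tp(state)
    if np.array_equal(nxt, state):
        break
    state = nxt

print({atom: bool(state[i]) for atom, i in idx.items()})
# → {'a': True, 'b': True, 'c': True}
```

Here the fixpoint assigns true to all three atoms: b holds unconditionally, c follows from b, and a follows from b and c. Replacing the threshold units with sigmoidal ones makes the core trainable by backpropagation, which is what allows the logical encoding and learning to coexist.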