We consider an integrated subsymbolic-symbolic procedure for extracting symbolically explained classification rules from data. A multilayer perceptron maps features into propositional variables, and a set of subsequent layers operated by a PAC-like algorithm learns Boolean expressions over these variables. The distinctive features of the procedure are: (i) we do not know a priori the class of formulas to which these expressions belong; rather, we obtain information about the class over time and progressively reduce the uncertainty about the current hypothesis; (ii) the mapping from features to variables also varies over time, so as to improve the suitability of the desired classification rules; and (iii) the final shape of the learnt expressions is determined by the learner, who can express preferences both through an error function backpropagated along all layers of the proposed architecture and through the choice of a set of free parameters. We review the basis of the first point and then analyze the others in depth. The theoretical tools supporting the analysis are: (1) a new statistical framework that we call algorithmic inference; (2) a special functionality of the sampled points with respect to the formulas, which we denote sentineling; and (3) entropy measures and fuzzy set methods governing the whole learning process. Preliminary numerical results highlight the value of the procedure.
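As a minimal sketch of the two-stage pipeline described above (and not the authors' actual architecture), one can stand in a fixed thresholding map for the subsymbolic layer that turns features into propositional variables, and the textbook PAC algorithm for monotone conjunctions for the symbolic layer that learns a Boolean expression over them. All names and thresholds here are illustrative assumptions.

```python
# Illustrative sketch only: a threshold map plays the role of the subsymbolic
# stage (features -> propositional variables), and the standard PAC learner
# for monotone conjunctions plays the role of the symbolic stage.

def booleanize(features, thresholds):
    """Subsymbolic stage: map real-valued features to Boolean variables."""
    return tuple(f >= t for f, t in zip(features, thresholds))

def learn_conjunction(samples):
    """Symbolic stage: keep only the variables that are true in every
    positive example (classic PAC algorithm for monotone conjunctions)."""
    n = len(samples[0][0])
    kept = set(range(n))              # start from the conjunction of all variables
    for x, label in samples:
        if label:                     # each positive example prunes literals it falsifies
            kept &= {i for i in range(n) if x[i]}
    return kept

# Hypothetical toy data: three feature vectors with class labels.
thresholds = [0.5, 0.5, 0.5]
raw = [([0.9, 0.8, 0.1], True),
       ([0.7, 0.9, 0.9], True),
       ([0.2, 0.6, 0.4], False)]
samples = [(booleanize(f, thresholds), y) for f, y in raw]
print(sorted(learn_conjunction(samples)))   # -> [0, 1]: rule is x0 AND x1
```

In the full procedure the threshold map would itself be trained (the error function is backpropagated through the perceptron layers), whereas here it is frozen for clarity.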