The article introduces a reinforcement-learning-based method for extracting if-then-else rules from a labeled data set. Each rule consists of the input dimensions of the data set, each accompanied by an appropriate interval of activation, together with a label denoting class membership. Initially, the input space is partitioned into tiles, and the algorithm composes the largest possible orthogonal intervals out of these tiles. Once intervals have been created for each dimension, the rule receives credit according to its classification accuracy; this credit, through reinforcement, is then used to refine the rule's constituent parts. The effectiveness of the proposed method has been tested on five classification problems: the Iris data set, the Concentric data, the 4 Gaussians, the Pima Indians data set, and the Image Segmentation data set.
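The extraction loop described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' exact algorithm: every function name is hypothetical, and the credit-driven update is simplified to a hill climb that grows or shrinks a rule's per-dimension intervals one tile at a time, keeping a change only when the classification credit improves.

```python
# Hypothetical sketch of if-then-else rule extraction via reinforcement.
# The tile partition, interval representation, and greedy credit-based
# update are illustrative assumptions, not the paper's exact procedure.
import random

def make_tiles(low, high, n_tiles):
    """Partition one input dimension into n_tiles equal tiles (boundaries)."""
    step = (high - low) / n_tiles
    return [low + i * step for i in range(n_tiles + 1)]

def rule_covers(rule, x):
    """A rule is a list of (lo, hi) intervals, one per input dimension."""
    return all(lo <= xi <= hi for (lo, hi), xi in zip(rule, x))

def credit(rule, label, data):
    """Reinforcement signal: correctly covered points minus wrongly covered."""
    score = 0
    for x, y in data:
        if rule_covers(rule, x):
            score += 1 if y == label else -1
    return score

def extract_rule(data, label, n_tiles=8, episodes=200, seed=0):
    """Adjust per-dimension interval boundaries tile by tile, keeping a
    move only when the rule's classification credit improves."""
    rng = random.Random(seed)
    dims = len(data[0][0])
    bounds = [(min(x[d] for x, _ in data), max(x[d] for x, _ in data))
              for d in range(dims)]
    tiles = [make_tiles(lo, hi, n_tiles) for lo, hi in bounds]
    # Start from the full input range in every dimension.
    idx = [[0, n_tiles] for _ in range(dims)]
    rule = [(tiles[d][0], tiles[d][n_tiles]) for d in range(dims)]
    best = credit(rule, label, data)
    for _ in range(episodes):
        d = rng.randrange(dims)
        side = rng.randrange(2)       # 0: move lower bound, 1: upper bound
        delta = rng.choice([-1, 1])   # shrink or expand by one tile
        new_idx = [list(pair) for pair in idx]
        new_idx[d][side] += delta
        lo_i, hi_i = new_idx[d]
        if not (0 <= lo_i < hi_i <= n_tiles):
            continue                  # move leaves the tiled range; skip it
        cand = list(rule)
        cand[d] = (tiles[d][lo_i], tiles[d][hi_i])
        score = credit(cand, label, data)
        if score > best:              # keep only credit-improving moves
            best, idx, rule = score, new_idx, cand
    return rule, best
```

On a one-dimensional toy set with class "A" clustered at low values and class "B" at high values, the upper interval boundary shrinks tile by tile until the rule covers only the "A" cluster, at which point no further move raises the credit.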