Communications of the ACM
Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM (JACM).
What size net gives valid generalization? Neural Computation.
Learning internal representations by error propagation. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1.
C4.5: Programs for Machine Learning.
Knowledge-based artificial neural networks. Artificial Intelligence.
Neural Networks in Computer Intelligence.
Learning in certainty-factor-based multilayer neural networks for classification. IEEE Transactions on Neural Networks.
Hybrid Computational Intelligence Schemes in Complex Domains: An Extended Review. SETN '02: Proceedings of the Second Hellenic Conference on AI, Methods and Applications of Artificial Intelligence.
Extracting linguistic quantitative rules from supervised neural networks. International Journal of Knowledge-based and Intelligent Engineering Systems.
A new neural network model for inducing symbolic knowledge from empirical data is presented. The model exploits the fact that a certainty-factor-based activation function can improve generalization performance when training data are limited. The formal properties of the procedure for extracting symbolic knowledge from such a trained network are investigated. In a case study in molecular genetics, the learning system effectively rediscovered, and partially refined, the prior domain knowledge. In cross-validation experiments, the system also outperformed C4.5, a widely used rule-learning system.
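The abstract does not spell out the activation function, but certainty factors are conventionally combined with the MYCIN-style rule, and a neuron can aggregate its weighted inputs with that rule instead of a plain weighted sum. The sketch below illustrates this idea under that assumption; the function names `cf_combine` and `cf_activation` are illustrative, not taken from the paper.

```python
from functools import reduce

def cf_combine(x, y):
    """MYCIN-style certainty-factor combination of two values in [-1, 1].

    Same-sign evidence reinforces; mixed-sign evidence partially cancels.
    (Undefined for the conflicting extremes x = 1, y = -1.)
    """
    if x >= 0 and y >= 0:
        return x + y - x * y
    if x <= 0 and y <= 0:
        return x + y + x * y
    return (x + y) / (1 - min(abs(x), abs(y)))

def cf_activation(inputs, weights):
    """Illustrative CF-based neuron: combine weighted inputs pairwise.

    Each weighted input is clipped to [-1, 1] so cf_combine stays
    well-defined; 0.0 is the identity element of the combination.
    """
    clipped = [max(-1.0, min(1.0, w * a)) for w, a in zip(weights, inputs)]
    return reduce(cf_combine, clipped, 0.0)
```

Note that, unlike a sigmoid over a weighted sum, this aggregation is bounded by construction and saturates gracefully as evidence accumulates, which is consistent with the claimed robustness to limited training data.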