Neural networks are often selected as the tool for solving regression problems because of their capability to approximate any continuous function with arbitrary accuracy. A major drawback of neural networks is their complex mapping, which is not easily understood by a user. This paper describes a method that generates decision rules from trained neural networks for regression problems. The networks have a single layer of hidden units with the hyperbolic tangent activation function and a single output unit with a linear activation function. The crucial step in this method is the approximation of the hidden unit activation function by a 3-piece linear function. This linear function is obtained by minimizing the sum of squared deviations between the hidden unit activation values of the data samples and their linearized approximations. Once the activation functions of the hidden units have been linearized, the rules are generated. The conditions of the rules divide the input space of the data into subspaces, while the consequent of each rule is a linear regression function. Our experimental results indicate that the method generates more accurate rules than those from similar methods.
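The crucial linearization step can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes a symmetric 3-piece form (a linear segment through the origin between two cutoffs, saturated at tanh of the cutoff outside them) and selects the cutoff by minimizing the sum of squared deviations over the hidden unit's activation values on the data samples, as the abstract describes. The function names and the grid search over candidate cutoffs are illustrative choices.

```python
import numpy as np

def three_piece_linear(x, t):
    """Symmetric 3-piece linear surrogate for tanh with cutoff t > 0:
    linear through the origin with slope tanh(t)/t on [-t, t],
    constant at -tanh(t) below -t and at tanh(t) above t.
    (The symmetric form is an assumption for illustration.)"""
    slope = np.tanh(t) / t
    return np.clip(slope * x, -np.tanh(t), np.tanh(t))

def fit_cutoff(net_inputs, candidates):
    """Pick the cutoff that minimizes the sum of squared deviations
    between tanh and its 3-piece approximation, evaluated at the
    net inputs a hidden unit receives from the data samples."""
    target = np.tanh(net_inputs)
    errors = [np.sum((three_piece_linear(net_inputs, t) - target) ** 2)
              for t in candidates]
    return candidates[int(np.argmin(errors))]

# Hypothetical net inputs of one hidden unit over the training samples.
rng = np.random.default_rng(0)
net_inputs = rng.normal(0.0, 2.0, size=500)

t_best = fit_cutoff(net_inputs, np.linspace(0.5, 3.0, 26))
```

After linearization, each hidden unit contributes one of three linear pieces per sample, so the input space splits into subspaces on which the network's output reduces to a linear function of the inputs, which becomes the consequent of a rule.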