Neural networks have been widely used as a tool for regression. They are capable of approximating any continuous function and do not require any assumption about the distribution of the data. The most commonly used architectures for regression are feedforward neural networks with one or more hidden layers. In this paper, we present a network pruning algorithm that determines the number of units in the input and hidden layers of the networks. We compare the performance of the pruned networks to four regression methods, namely linear regression (LR), naive Bayes (NB), k-nearest-neighbor (kNN), and a decision tree predictor, M5'. On the 32 publicly available data sets tested, the neural network method outperforms NB and kNN when the prediction errors are measured by the root mean squared error (RMSE). Under this metric, it also performs as well as LR and M5'. On the other hand, using the mean absolute error (MAE) as the measurement metric, the neural network method outperforms all four other regression methods.
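To make the evaluation concrete, the sketch below shows a one-hidden-layer regression network, a simple magnitude-based removal of hidden units, and the two error metrics (RMSE and MAE) used in the comparison. This is only an illustrative assumption, not the paper's pruning algorithm: the tanh activation, the weight names, and the pruning threshold are all hypothetical choices.

```python
# Minimal sketch (assumed, not the paper's algorithm): a one-hidden-layer
# regression network, magnitude-based pruning of hidden units, and the two
# error metrics used in the comparison.
import numpy as np

def predict(X, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer network with tanh hidden units."""
    return np.tanh(X @ W1 + b1) @ W2 + b2

def prune_units(W1, b1, W2, threshold=1e-2):
    """Drop hidden units whose outgoing weight magnitude is below the
    threshold; an analogous test on the columns of W1 would drop input units."""
    keep = np.abs(W2) >= threshold
    return W1[:, keep], b1[keep], W2[keep]

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

# Toy usage with random weights (in practice the weights come from training).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X[:, 0] - 2.0 * X[:, 1]
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=8), 0.0
W1, b1, W2 = prune_units(W1, b1, W2)
y_hat = predict(X, W1, b1, W2, b2)
print(rmse(y, y_hat), mae(y, y_hat))
```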