Evaluation of neural network variable influence measures for process control
Engineering Applications of Artificial Intelligence
Neural networks (NNs) are 'black box' models and are therefore difficult to interpret. This paper compares four recent methods for inferring variable influence in NNs. The methods assist interpretation at different phases of the modeling procedure and are based, respectively, on information theory (ITSS), the Bayesian framework (ARD), analysis of the network's weights (GIM), and sequential omission of the variables (SZW). The comparison uses artificial and real data sets of differing size, complexity and noise level; the influence of the network's size is also considered. The results provide useful information about the agreement between the methods under different conditions. In general, SZW and GIM disagree with ARD about variable influence, even when applied to NNs of similar modeling accuracy and when larger data sets are used. ITSS produces results similar to SZW and GIM, although it suffers more from the 'curse of dimensionality'.
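The abstract only names the four measures. As a rough illustration of two of the underlying ideas, the Python/NumPy sketch below scores input influence (a) by re-evaluating a trained model with each variable suppressed, in the spirit of sequential omission (SZW), and (b) by a Garson-style apportionment of connection weights, in the spirit of a weight-based measure such as GIM. The function names, the mean-substitution choice, and the single-hidden-layer weight shapes are assumptions for illustration, not the paper's exact procedures.

# Illustrative sketch only (assumed names and simplifications, not the
# paper's exact ITSS/ARD/SZW/GIM algorithms).
import numpy as np

def omission_influence(predict, X, y):
    # SZW-like idea: suppress each input in turn (here by replacing it with
    # its mean) and record the resulting increase in mean squared error.
    base_err = np.mean((predict(X) - y) ** 2)
    increases = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xz = X.copy()
        Xz[:, j] = X[:, j].mean()
        increases[j] = np.mean((predict(Xz) - y) ** 2) - base_err
    return increases

def garson_importance(W_in, W_out):
    # GIM-like idea: apportion the magnitude of the hidden-to-output weights
    # back to the inputs through the absolute input-to-hidden weights.
    # W_in has shape (n_inputs, n_hidden), W_out has shape (n_hidden,).
    contrib = np.abs(W_in) * np.abs(W_out).reshape(1, -1)
    contrib /= np.abs(W_in).sum(axis=0, keepdims=True)
    scores = contrib.sum(axis=1)
    return scores / scores.sum()

# Toy usage with a stand-in predictor (a trained NN would be used instead):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(scale=0.05, size=200)
predict = lambda Z: 2.0 * Z[:, 0] + 0.1 * Z[:, 2]
print(omission_influence(predict, X, y))  # variable 0 should dominate

Rankings produced by such scores could then be compared across methods, for example with a rank correlation, which is the kind of agreement between measures that the paper examines.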