Neural networks are still frustrating tools in the data mining arsenal: they exhibit excellent modelling performance, but give no insight into the structure of the models they build. We propose a methodology to explain the classifications produced by a multilayer perceptron. We introduce the concept of 'causal importance' and define a saliency measure that allows the selection of relevant variables. Once the model has been trained on the relevant variables only, we define a clustering of the data built from the hidden-layer representation. Combining saliency and causal importance cluster by cluster yields an interpretation of the neural network classifier. We illustrate the performance of this methodology on three benchmark datasets.
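The pipeline sketched in the abstract (measure per-variable saliency, keep the relevant variables, then cluster the hidden-layer representation) can be illustrated in a few lines of NumPy. This is a hedged sketch, not the paper's exact definitions: the saliency here is a simple perturbation-based proxy, the MLP weights are random rather than trained, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy MLP: 4 inputs -> 3 hidden units -> 1 output (untrained, for illustration).
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 1))

def forward(X):
    H = sigmoid(X @ W1)   # hidden-layer representation
    y = sigmoid(H @ W2)   # class probability
    return H, y

X = rng.normal(size=(200, 4))
H, y = forward(X)

# Saliency proxy: mean absolute change in the output when each input
# variable is perturbed (a stand-in for the paper's saliency measure).
eps = 1e-3
saliency = np.empty(4)
for j in range(4):
    Xp = X.copy()
    Xp[:, j] += eps
    _, yp = forward(Xp)
    saliency[j] = np.mean(np.abs(yp - y)) / eps

# Keep the most salient variables (threshold chosen arbitrarily here).
relevant = np.where(saliency > saliency.mean())[0]

# Cluster the hidden-layer representation with plain k-means.
def kmeans(Z, k=2, iters=20):
    centers = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((Z[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = Z[labels == c].mean(axis=0)
    return labels

labels = kmeans(H, k=2)
print("salient variables:", relevant, "cluster sizes:", np.bincount(labels))
```

In the paper's methodology, the interpretation step would then combine saliency with causal importance separately within each of these clusters; the sketch stops at producing the clusters themselves.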