Existing research on cost-sensitive neural networks (CNN) considers only a single cost in decision making, which may not be feasible for real cost-sensitive decisions that involve multiple costs. We propose to modify the existing model, the traditional back-propagation neural network (TNN), by extending the back-propagation error equation to multiple-cost decisions. In this multiple-cost extension, all costs are normalized to the same interval (i.e., between 0 and 1) as the error estimate generated by the TNN. A comparative analysis of accuracy under constant costs was performed for three pairings: (1) TNN vs. CNN with one constant cost (CNN-1C), (2) TNN vs. CNN with two constant costs (CNN-2C), and (3) CNN-1C vs. CNN-2C. A similar accuracy analysis was made for non-constant costs: (1) TNN vs. CNN with one non-constant cost (CNN-1NC), (2) TNN vs. CNN with two non-constant costs (CNN-2NC), and (3) CNN-1NC vs. CNN-2NC. Furthermore, we compared the misclassification costs of the CNNs under both constant and non-constant costs (CNN-1C vs. CNN-2C and CNN-1NC vs. CNN-2NC). Our findings demonstrate a trade-off between accuracy and misclassification cost in the proposed CNN model. To obtain higher accuracy and lower misclassification cost, our results suggest merging all constant cost matrices into a single constant cost matrix for decision making. For multiple non-constant cost matrices, our results suggest maintaining separate matrices to enhance accuracy and reduce misclassification cost.
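The abstract's exact error equation is not reproduced here, so the following is only a minimal sketch of the general idea it describes: normalizing each cost matrix to [0, 1] so it lives on the same scale as the network's error signal, and scaling the output-layer back-propagation error by the (combined) normalized costs. The function names, the averaging of multiple cost matrices, and the `1 + weight` scaling are all illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def normalize_cost(cost_matrix):
    """Scale a cost matrix into [0, 1] so it matches the range of the
    network's error signal (an assumption; the paper normalizes costs
    to the same interval as the TNN error estimate)."""
    c = np.asarray(cost_matrix, dtype=float)
    return c / c.max() if c.max() > 0 else c

def cost_weighted_error(output, target_idx, cost_matrices):
    """Output-layer error for one sample: the usual (target - output)
    term, scaled up according to the normalized cost of predicting each
    class when the true class is `target_idx`.

    Multiple cost matrices are combined by averaging their normalized
    forms -- one plausible way to fold several costs into one signal.
    """
    target = np.zeros_like(output, dtype=float)
    target[target_idx] = 1.0
    # Row `target_idx` of each normalized matrix: cost of each possible
    # prediction given the true class.
    weights = np.mean(
        [normalize_cost(c)[target_idx] for c in cost_matrices], axis=0
    )
    # Base weight of 1 keeps the standard back-propagation error when
    # all costs are zero; higher costs amplify the error term.
    return (target - output) * (1.0 + weights)
```

For example, with cost matrix `[[0, 1], [2, 0]]` (normalized to `[[0, 0.5], [1, 0]]`), a sample of true class 0 has its error on the costly class-1 output scaled by 1.5, pushing the weight updates harder away from the expensive misclassification.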