Nowadays, thanks to the rapid advance of information technology, companies are accumulating explosively large amounts of customer data with very high-dimensional features. These companies, in turn, are making every effort to develop more efficient churn prediction models for effective customer relationship management. This paper proposes a novel method for constructing better churn prediction models from high-dimensional, large-scale data. The proposed method first partitions the data set into small-sized subsets and then applies sequential manifold learning, which reduces the high-dimensional features while producing consistent results when the subsets are recombined. The performance of a churn prediction model built with the proposed method is evaluated on an e-commerce data set and compared with existing methods. The proposed method performs better and runs much faster on high-dimensional large data sets, and it can reduce the dimensions of new test samples without retraining on the original data set.
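The partition-then-reduce idea described above can be illustrated with a minimal sketch. This is not the paper's algorithm: scikit-learn's Isomap is used here only as a stand-in for the sequential manifold learning step, the alignment that makes the subset embeddings mutually consistent is omitted, and the data set and parameter values are illustrative. The sketch does show the out-of-sample property the abstract emphasizes: new test samples are embedded via a fitted model's `transform()` without retraining on the original data.

```python
import numpy as np
from sklearn.manifold import Isomap

# Synthetic stand-in for a high-dimensional customer data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 50))

# Step 1: partition the data set into small-sized subsets.
subsets = np.array_split(X, 3)

# Step 2: reduce each subset's high-dimensional features.
# (Isomap is only a placeholder for the paper's sequential
# manifold learning; the cross-subset alignment step is omitted.)
models, embeddings = [], []
for subset in subsets:
    model = Isomap(n_neighbors=10, n_components=2)
    embeddings.append(model.fit_transform(subset))
    models.append(model)

# Step 3: embed new test samples without retraining on the
# original data, using an already-fitted model.
x_new = rng.normal(size=(5, 50))
z_new = models[0].transform(x_new)
print(z_new.shape)  # (5, 2)
```

Each subset model stays cheap to fit because it only sees a small slice of the data, which is what makes the approach attractive for large-scale sets.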