Manufacturing forecast problems have been widely discussed in recent years, since more accurate predictions can reduce overall manufacturing costs. This study considers the case of ensuring the heights of thin-film-transistor liquid-crystal display (TFT-LCD) photo-spacers. It is a small-sample-size prediction problem, because the data available for analysis on the manufacturing lines are limited. A new three-step approach is developed to deal with this problem. The first step uses K-means clustering to separate the data into K clusters; the second step computes, through a fuzzy membership function, each sample's degree of membership in every cluster, and these values are used for attribute extension. The last step feeds the data, with the newly generated attributes, into a backpropagation neural network (BPNN). Two performance evaluation methods, cross-validation and data specification testing, are used to compare the proposed method with three popular prediction models: linear regression, support vector regression (SVR), and BPNN. The results show that the proposed method outperforms the others with regard to total error, mean square error, and standard deviation.
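The first two steps of the pipeline can be sketched in code. The following is a minimal numpy-only illustration, not the authors' implementation: it assumes a plain K-means routine and fuzzy-c-means-style memberships (with a hypothetical fuzzifier `m=2.0`) as the attribute-extension mechanism; the final BPNN regression step is omitted and could be supplied by any neural-network library.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Step 1: partition the data into k clusters (basic Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # distance of every sample to every center, shape (n, k)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def fuzzy_memberships(X, centers, m=2.0):
    """Step 2: degree of membership of each sample in each cluster
    (fuzzy-c-means-style membership function; m is an assumed fuzzifier)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)  # rows sum to 1

# Attribute extension: append the membership values as new columns,
# then hand X_ext to a BPNN regressor (step 3, not shown here).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
               rng.normal(3.0, 0.1, (10, 2))])  # toy two-cluster data
centers, labels = kmeans(X, k=2)
U = fuzzy_memberships(X, centers)
X_ext = np.hstack([X, U])   # original attributes + K new membership attributes
```

With `k=2` clusters on two-dimensional data, each sample gains two extra attributes, so `X_ext` has four columns; the intent is that these membership values encode local-neighborhood information that helps a small-sample learner generalize.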