Quantifying appearance retention in carpets using geometrical local binary patterns
ACIVS'11 Proceedings of the 13th international conference on Advanced concepts for intelligent vision systems
In industry, carpet quality is still determined through visual assessment by human experts, a procedure that suffers from a number of drawbacks. Existing computer models for the automatic assessment of carpet wear are not yet capable of matching human expertise. We therefore present a completely new approach to this problem. A three-dimensional laser scanner is used to obtain a digital copy of the carpet. Owing to the specific characteristics of the laser scanner data, new algorithms are developed to extract relevant information from the raw data. These features serve as input to a classifier system that defines a partial ranking over the objects; to this end, ordinal regression and multi-class classification models are applied. Experiments demonstrate that our approach yields promising results, with correlations of up to 0.77 between the extracted features and the quality labels. The performance obtained with nested cross-validation (including a C-index above 0.95, an accuracy of 76%, and only 3% serious errors of a full point) constitutes a substantial improvement over other approaches reported in the literature.
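The C-index reported in the abstract measures pairwise concordance between a model's predicted scores and the ordinal quality labels: over all pairs of carpets with different labels, the fraction for which the higher-labelled carpet also receives the higher score (score ties count as half-concordant). A minimal sketch of this metric follows; the `c_index` helper and the example labels and scores are illustrative, not taken from the paper:

```python
from itertools import combinations

def c_index(labels, scores):
    """Concordance index over all comparable pairs (i.e. pairs with
    different labels). A pair is concordant when the item with the
    higher label also receives the higher score; tied scores count 0.5."""
    concordant = 0.0
    comparable = 0
    for (y_i, s_i), (y_j, s_j) in combinations(zip(labels, scores), 2):
        if y_i == y_j:
            continue  # pairs with equal labels carry no ranking information
        comparable += 1
        if (s_i - s_j) * (y_i - y_j) > 0:
            concordant += 1.0   # correctly ordered pair
        elif s_i == s_j:
            concordant += 0.5   # tied scores: half credit
    return concordant / comparable

# Hypothetical wear labels (0 = new .. 4 = heavily worn) and model scores.
labels = [0, 1, 2, 3, 4]
scores = [0.1, 0.4, 0.3, 0.8, 0.9]
print(c_index(labels, scores))  # 9 of 10 comparable pairs concordant -> 0.9
```

A C-index of 0.5 corresponds to random scoring and 1.0 to a perfect ranking, so the value above 0.95 reported here indicates that the learned scores order the carpets almost exactly as the expert labels do.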