The classical problem of constructing a multidimensional pattern classifier in the Bayesian framework is considered. Preprocessing of the learning sequence by a quasi-inverse of a space-filling curve is proposed, and the properties of space-filling curves that are necessary to obtain Bayes risk consistency are indicated. The learning sequence, transformed into the unit interval, is used to estimate the coefficients in an orthogonal expansion of the Bayes decision rule. Transforming a new observation into the unit interval requires O(d) elementary operations, where d is the dimension of the observation space. Strong Bayes risk consistency of the classifiers is proved when the distribution of the random pair formed by the observation vector and its class is absolutely continuous with respect to the Lebesgue measure. The attainable convergence rate of the Bayes risk is discussed. Details of the classification algorithm based on the Haar series and its properties are presented, and distribution-free consistency of this classifier is established. The performance of the classifier is tested on both simulated data and standard benchmarks, yielding misclassification errors comparable to, or even better than, those of the k-nearest neighbors (k-NN) method.
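As a concrete illustration of the pipeline the abstract describes, the sketch below maps each d-dimensional observation to the unit interval and classifies by the sign of a truncated Haar expansion. It is a minimal sketch under stated assumptions, not the paper's algorithm: bit interleaving (the Morton / Z-order map) stands in for the quasi-inverse of a space-filling curve, whereas the paper's consistency theory requires genuine space-filling curves such as Hilbert's or Peano's; the names quasi_inverse and HaarClassifier and the resolution parameters BITS and m are illustrative choices. The sketch relies on one identity: a Haar partial sum up to resolution m is constant on dyadic intervals of length 2^(-m), so the plug-in decision rule reduces to a majority vote within the dyadic cell containing the transformed observation.

```python
"""Sketch of the abstract's pipeline: d-dimensional points -> [0, 1] via a
curve quasi-inverse stand-in, then a Haar-series plug-in classifier.

Assumptions (not from the paper): Morton bit interleaving replaces the true
space-filling-curve quasi-inverse; BITS and m are illustrative parameters."""
import numpy as np

BITS = 16  # bits kept per coordinate in the interleaving


def quasi_inverse(x, bits=BITS):
    """Map x in [0,1]^d to t in [0,1] by interleaving the binary expansions
    of its coordinates -- O(d) elementary operations per output bit."""
    d = len(x)
    # integer grid coordinates at the chosen resolution
    ints = [min(int(xi * (1 << bits)), (1 << bits) - 1) for xi in x]
    t = 0
    for b in range(bits - 1, -1, -1):      # most significant bit first
        for xi in ints:                    # one bit from each coordinate
            t = (t << 1) | ((xi >> b) & 1)
    return t / float(1 << (bits * d))      # rescale to [0, 1)


class HaarClassifier:
    """Two-class plug-in rule from a Haar series truncated at resolution m.

    Because the truncated Haar expansion is constant on the dyadic cells
    [k 2^-m, (k+1) 2^-m), the estimated rule is a per-cell majority vote."""

    def __init__(self, m=8):
        self.m = m

    def _cells(self, X):
        t = np.array([quasi_inverse(x) for x in X])
        return np.minimum((t * 2**self.m).astype(int), 2**self.m - 1)

    def fit(self, X, y):                   # labels y in {-1, +1}
        y = np.asarray(y)
        self.votes_ = np.zeros(2**self.m)
        np.add.at(self.votes_, self._cells(X), y)  # label sum per dyadic cell
        return self

    def predict(self, X):
        return np.where(self.votes_[self._cells(X)] >= 0, 1, -1)


# Toy usage on synthetic data: uniform points in [0,1]^3 with a linear boundary.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 3))
y = np.where(X.sum(axis=1) > 1.5, 1, -1)
clf = HaarClassifier(m=6).fit(X[:1500], y[:1500])
print("test accuracy:", (clf.predict(X[1500:]) == y[1500:]).mean())
```

Note that the choice of m plays the role of the truncation point of the orthogonal expansion: larger m gives finer dyadic cells and lower bias but fewer training points per cell, mirroring the bias-variance trade-off behind the convergence-rate discussion in the abstract.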