Measures of uncertainty in expert systems
Artificial Intelligence
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond
Regularization and statistical learning theory for data analysis
Computational Statistics & Data Analysis - Nonlinear methods and data mining
A Comparison of Several Approaches to Missing Attribute Values in Data Mining
RSCTC '00 Revised Papers from the Second International Conference on Rough Sets and Current Trends in Computing
Mining with rarity: a unifying framework
ACM SIGKDD Explorations Newsletter - Special issue on learning from imbalanced datasets
On Classification with Incomplete Data
IEEE Transactions on Pattern Analysis and Machine Intelligence
Handling Missing Values when Applying Classification Models
The Journal of Machine Learning Research
Impact of imputation of missing values on classification error for discrete data
Pattern Recognition
Unifying practical uncertainty representations. II: Clouds
International Journal of Approximate Reasoning
Improving Classification under Changes in Class and Within-Class Distributions
IWANN '09 Proceedings of the 10th International Work-Conference on Artificial Neural Networks: Part I: Bio-Inspired Systems: Computational and Ambient Intelligence
Monte Carlo theory as an explanation of bagging and boosting
IJCAI'03 Proceedings of the 18th international joint conference on Artificial intelligence
Regression analysis using the imprecise Bayesian normal model
International Journal of Data Analysis Techniques and Strategies
A comparison study of nonparametric imputation methods
Statistics and Computing
A machine learning algorithm for classification under extremely scarce information
International Journal of Data Analysis Techniques and Strategies
Knowledge and Information Systems
A method is proposed for solving a classification problem when only partial information about some features is available. This partial information comprises the mean value of each feature for every class and the bounds of the features. To exploit the available information as fully as possible, a set of probability distributions consistent with these constraints is constructed, and two distributions defining the minimax and minimin strategies are selected from it. Random feature values are then generated from the selected distributions using the Monte Carlo technique. As a result, the classification problem is reduced to a standard one, which is solved by means of the support vector machine. Numerical examples illustrate the proposed method.
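The pipeline described in the abstract can be sketched as follows. This is a simplified illustration, not the paper's method: the per-class means, the bounds, and the helper `sample_feasible` are hypothetical, and where the paper selects the minimax/minimin distributions from the feasible set, this sketch simply draws from one convenient member of that set (a rescaled Beta distribution matching the given mean and bounds) before training a standard SVM on the Monte Carlo samples.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def sample_feasible(mean, low, high, n, k=4.0):
    # One member of the set of distributions on [low, high] with the
    # given mean: a Beta(a, b) rescaled to [low, high], whose mean
    # a / (a + b) equals (mean - low) / (high - low). The paper would
    # instead pick the minimax/minimin distributions from this set.
    m = (mean - low) / (high - low)
    return low + (high - low) * rng.beta(m * k, (1.0 - m) * k, n)

# Hypothetical partial information: per-class feature means and
# shared feature bounds for a two-class, two-feature problem.
means = {0: [0.3, 0.4], 1: [0.7, 0.6]}
bounds = [(0.0, 1.0), (0.0, 1.0)]

n = 200  # Monte Carlo samples generated per class
X_parts, y_parts = [], []
for label, mu in means.items():
    cols = [sample_feasible(m, lo, hi, n) for m, (lo, hi) in zip(mu, bounds)]
    X_parts.append(np.column_stack(cols))
    y_parts.append(np.full(n, label))
X, y = np.vstack(X_parts), np.concatenate(y_parts)

# The generated samples reduce the problem to a standard one,
# solved here with a support vector machine.
clf = SVC(kernel="rbf").fit(X, y)
```

With the problem reduced to labeled samples, any standard classifier could stand in for the SVM; the Monte Carlo step is what converts the moment-and-bound information into trainable data.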