Sample selection aims to choose a number of representative samples from a large database so that a learning algorithm achieves reduced computational cost and improved learning accuracy. This paper presents a new sample selection mechanism: maximum ambiguity-based sample selection in fuzzy decision tree induction. In contrast to existing sample selection methods, this mechanism selects samples according to the principle of maximal classification ambiguity. Its major advantage is that the adjustment required to the fuzzy decision tree is minimized when the selected samples are added to the training set; this advantage is confirmed by a theoretical analysis of the leaf-node frequencies in the decision trees. The decision tree generated from the selected samples usually performs better than the one built from the original database. Furthermore, experimental results show that the generalization ability of the tree obtained with our selection mechanism is far superior to that of a tree obtained with random selection.
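To make the selection principle concrete, the following is a minimal sketch, not the authors' implementation: it assumes each candidate sample can be assigned a class-membership vector by the current fuzzy decision tree, uses the possibilistic ambiguity measure commonly found in the fuzzy decision tree literature (which may differ in detail from the measure used in the paper), and the function names (`ambiguity`, `select_most_ambiguous`, `classify`) are illustrative only.

```python
import math

def ambiguity(memberships):
    """Ambiguity of one sample's class-membership vector.

    The vector is normalized so its largest degree is 1, sorted in
    descending order, and scored with g(pi) = sum_i (pi_i - pi_{i+1}) * ln(i).
    A crisp assignment (one class dominates) gives 0; a uniform vector
    gives the maximum value ln(number of classes).
    """
    mx = max(memberships, default=0.0)
    if mx == 0.0:
        return 0.0
    pi = sorted((m / mx for m in memberships), reverse=True)
    pi.append(0.0)  # sentinel so the last difference is defined
    return sum((pi[i] - pi[i + 1]) * math.log(i + 1) for i in range(len(pi) - 1))

def select_most_ambiguous(candidates, classify, k):
    """Pick the k candidate samples whose current classification by the
    fuzzy decision tree is most ambiguous (hypothetical helper: classify(sample)
    is assumed to return the tree's class-membership vector for that sample)."""
    ranked = sorted(candidates, key=lambda s: ambiguity(classify(s)), reverse=True)
    return ranked[:k]
```

Under this sketch, the selected samples are exactly those on which the current tree is least decisive, which is the intuition behind why adding them forces only small adjustments elsewhere in the tree.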