Concept learning depends on data character. To discover how, some researchers have used theoretical analysis to relate the behavior of idealized learning algorithms to classes of concepts. Others have developed pragmatic measures that relate the behavior of empirical systems such as ID3 and PLS1 to the kinds of concepts encountered in practice. But before learning behavior can be predicted, concepts and data must be characterized. Data characteristics include the number of instances, the amount of error (noise), concept “size,” and so forth. Although potential characteristics are numerous, they are constrained by the way one views concepts. Viewing concepts as functions over instance space leads to geometric characteristics such as concept size (the proportion of positive instances) and concentration (the absence of too many “peaks”). Experiments show that some of these characteristics drastically affect the accuracy of concept learning. Data characteristics can also interact in non-intuitive ways; for example, noisy data may degrade accuracy differently depending on the size of the concept. Compared with the effects of some data characteristics, the choice of learning algorithm appears less important: predictive accuracy degrades only slightly when the splitting criterion is replaced with random attribute selection. Analyzing such observations suggests directions for concept learning research.
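The closing observation — that replacing an informed splitting criterion with random selection costs only a little accuracy — can be probed with a small experiment. The sketch below is illustrative, not a reproduction of the original study: it uses a synthetic dataset and scikit-learn's `DecisionTreeClassifier`, whose `splitter` parameter switches between the best (information-based) split and a random one. It also computes concept "size" as the proportion of positive instances, one of the geometric data characteristics discussed above.

```python
# Hedged sketch: compare an information-based splitting criterion with
# random attribute selection, echoing the claim that accuracy degrades
# only slightly. The dataset and model are assumptions for illustration,
# not the systems (ID3, PLS1) or data used in the original experiments.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic two-class learning task with some informative attributes.
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=5, random_state=0)

# "best" chooses the split maximizing the impurity criterion;
# "random" picks a random split point for a randomly chosen feature.
for splitter in ("best", "random"):
    tree = DecisionTreeClassifier(splitter=splitter, random_state=0)
    acc = cross_val_score(tree, X, y, cv=5).mean()
    print(f"splitter={splitter}: mean CV accuracy {acc:.3f}")

# Concept size: the proportion of positive instances in the data.
print(f"concept size: {y.mean():.3f}")
```

Running variations of this (different noise levels via `flip_y`, different class balances via `weights`) is one way to explore the interactions between data characteristics that the abstract describes.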