In existing artificial classification systems, the problem domain is created and controlled by humans: humans set up and tune the problem, e.g. by determining its complexity. If humans set up the problem appropriately, machines can extract beneficial knowledge to solve the classification task. This paper introduces an autonomous approach to classification problem generation, in which the problem's difficulty is adapted based on the classification agent's performance within the defined attributes. An automated problem generator evolves simulated datasets while the classification agent, in this case a learning classifier system, attempts to learn the evolving problem. The aim is to tune the datasets autonomously so that the problem characteristics can be determined efficiently, empirically testing the learning bounds of the classification agent with minimal human involvement. In this way, the effect of the problem characteristics that alter the classification agent's performance becomes human readable. Tabu Search is applied in the problem generator to discover the best combination of domain features with which to adjust the problem's complexity. Experiments confirm that the problem generator is able to tune the problem's complexity to make the problem 'harder' or 'easier', thereby decreasing or increasing the classification agent's performance.
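A minimal sketch of the Tabu Search loop the abstract describes, under stated assumptions: the dataset is summarised by two hypothetical attributes (class overlap and label noise), and `agent_accuracy` is a stand-in for measuring the learning classifier system on the evolved dataset; all names here are illustrative, not the paper's actual implementation.

```python
def agent_accuracy(params):
    # Stand-in for the classification agent's measured accuracy:
    # assume accuracy falls as class overlap and label noise rise.
    overlap, noise = params
    return max(0.0, 1.0 - 0.6 * overlap - 0.8 * noise)

def neighbours(params, step=0.05):
    # Perturb one dataset attribute at a time, clamped to [0, 1].
    out = []
    for i in range(len(params)):
        for delta in (-step, step):
            cand = list(params)
            cand[i] = min(1.0, max(0.0, cand[i] + delta))
            out.append(tuple(cand))
    return out

def tabu_search(start, target_acc, iters=200, tabu_len=10):
    """Tune dataset attributes so the agent's accuracy approaches target_acc.

    Lowering target_acc makes the generator evolve a 'harder' problem;
    raising it evolves an 'easier' one.
    """
    cost = lambda p: abs(agent_accuracy(p) - target_acc)
    best = current = start
    tabu = [start]                      # recently visited states are forbidden
    for _ in range(iters):
        cands = [p for p in neighbours(current) if p not in tabu]
        if not cands:
            break
        current = min(cands, key=cost)  # best non-tabu move, even if worse
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if cost(current) < cost(best):
            best = current
    return best

# Make the problem 'harder': drive the agent's accuracy down towards 0.6.
hard = tabu_search((0.0, 0.0), target_acc=0.6)
print(hard, agent_accuracy(hard))
```

The tabu list is what distinguishes this from plain hill climbing: by forbidding recently visited attribute settings, the generator can accept temporarily worse moves and escape local optima in the space of domain-feature combinations.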