XCSF approximates function surfaces by evolving a suitable clustering of the input space, such that a simple -- typically linear -- predictor yields sufficient accuracy within each cluster. As the number of distinct output dimensions increases, however, the accuracy of the local predictions typically decreases. We analyze the performance of a single XCSF instance and compare it to that of a multiple-instance XCSF, in which each instance predicts one dimension of the output. We show that, depending on the problem at hand, the multiple-instance XCSF approach is highly advantageous. In particular, the more the local linearity structures of the output dimensions differ, the more a modularized approximation by multiple XCSF instances pays off. In fact, if modularization is not applied, the problem complexity may increase exponentially in the number of approximately orthogonally structured output dimensions. To relate these results to current XCSF application options, we show that the multiple-instance XCSF approach can also learn a compact model of the Jacobian of the forward kinematics of a seven-degree-of-freedom anthropomorphic robot arm, enabling inverse arm control in simulation.
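The core idea -- a clustering of the input space in which each cluster carries its own linear predictor, and a separate instance per output dimension -- can be illustrated with a toy sketch. This is not XCSF itself: fixed, hand-chosen intervals stand in for the evolved classifier conditions, and the predictors are fitted by batch least squares rather than online recursive updates. All function and variable names below are illustrative assumptions.

```python
import numpy as np

def fit_piecewise_linear(x, y, edges):
    """Fit one least-squares linear predictor per input interval.
    The fixed intervals [edges[i], edges[i+1]) are a stand-in for
    XCSF's evolved classifier conditions."""
    models = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (x >= lo) & (x < hi)
        A = np.column_stack([x[m], np.ones(m.sum())])
        w, *_ = np.linalg.lstsq(A, y[m], rcond=None)
        models.append((lo, hi, w))
    return models

def predict(models, x):
    """Evaluate the matching local linear predictor at each input."""
    out = np.empty_like(x)
    for lo, hi, w in models:
        m = (x >= lo) & (x < hi)
        out[m] = w[0] * x[m] + w[1]
    return out

# Two output dimensions with very different local linearity structure.
x = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
y_sin = np.sin(x)   # curved: needs a fine input-space clustering
y_lin = 2 * x + 1   # globally linear: a single cluster suffices

# Modular ("multiple-instance") approach: each output dimension gets
# its own partition, matched to its own local structure.
models_sin = fit_piecewise_linear(x, y_sin, np.linspace(0, 2 * np.pi, 17))
models_lin = fit_piecewise_linear(x, y_lin, np.array([0.0, 2 * np.pi]))
```

A single joint instance would have to impose the fine 16-interval partition on both outputs, even though the linear dimension needs only one cluster; with many output dimensions whose nonlinearities lie along roughly orthogonal input directions, a shared partition must refine along every such direction at once, which is the source of the exponential growth in problem complexity noted above.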