Classifier systems and genetic algorithms
Machine learning: paradigms and methods
Technical Note: Q-Learning
Machine Learning
Multi-level direction of autonomous creatures for real-time virtual environments
SIGGRAPH '95 Proceedings of the 22nd annual conference on Computer graphics and interactive techniques
A versatile navigation interface for virtual humans in collaborative virtual environments
VRST '97 Proceedings of the ACM symposium on Virtual reality software and technology
Multi-user interactions in the context of concurrent virtual world modelling
Proceedings of the Eurographics workshop on Virtual environments and scientific visualization '96
Learning Team Strategies: Soccer Case Studies
Machine Learning
Genetic Algorithms in Search, Optimization and Machine Learning
Level of Autonomy for Virtual Human Agents
ECAL '99 Proceedings of the 5th European Conference on Advances in Artificial Life
Modularity in Evolved Artificial Neural Networks
ECAL '99 Proceedings of the 5th European Conference on Advances in Artificial Life
Using communication to reduce locality in multi-robot learning
AAAI'97/IAAI'97 Proceedings of the fourteenth national conference on artificial intelligence and ninth conference on Innovative applications of artificial intelligence
Behavior analysis and training - a methodology for behavior engineering
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Learning cooperation from classifier systems
CIS'05 Proceedings of the 2005 international conference on Computational Intelligence and Security - Volume Part I
This paper presents a learning system based on Artificial Life for animating virtual entities. The model uses an extension of a classifier system to build the behavior of agents dynamically, by emergence. A behavior is selected from a set of binary rules that evolves continuously to maximize predefined goals. Reinforcement rewards a rule and thereby evaluates its efficiency in a given context. We investigate the interaction between virtual agents and a human-controlled clone immersed in a virtual soccer game. In the simulation, each entity evolves in real time by cooperating and communicating with its teammates. We evaluate the benefits of communication within a team and show how rule sharing and human intervention can improve the learning of the group.
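The mechanism the abstract describes (binary condition/action rules whose strengths are adjusted by reinforcement, with rules shared between teammates) can be sketched in a few lines. This is a minimal illustrative model, not the paper's actual system: all class names, the wildcard encoding, and the learning-rate update are assumptions.

```python
class Rule:
    """A binary classifier rule: condition over {'0','1','#'} ('#' = wildcard)."""

    def __init__(self, condition, action, strength=10.0):
        self.condition = condition
        self.action = action
        self.strength = strength  # running estimate of the rule's efficiency

    def matches(self, state):
        # A rule matches when every non-wildcard bit equals the state bit.
        return all(c == '#' or c == s for c, s in zip(self.condition, state))


class ClassifierSystem:
    def __init__(self, rules, learning_rate=0.2):
        self.rules = rules
        self.learning_rate = learning_rate

    def select(self, state):
        # Pick the strongest rule matching the current context.
        matching = [r for r in self.rules if r.matches(state)]
        return max(matching, key=lambda r: r.strength) if matching else None

    def reinforce(self, rule, reward):
        # Move the rule's strength toward the received reward, so rules
        # that are efficient in a given context accumulate credit.
        rule.strength += self.learning_rate * (reward - rule.strength)

    def share_rules(self, teammate, top_k=1):
        # Crude analogue of the rule sharing mentioned in the abstract:
        # copy the strongest rules into a teammate's rule base.
        best = sorted(self.rules, key=lambda r: r.strength, reverse=True)[:top_k]
        teammate.rules.extend(Rule(r.condition, r.action, r.strength) for r in best)


# Toy usage: state bit 0 = "ball near", action 1 = "kick" (hypothetical encoding).
cs = ClassifierSystem([Rule('1#', 1), Rule('0#', 0)])
chosen = cs.select('10')
cs.reinforce(chosen, reward=100.0)
```

After the update, `chosen.strength` moves from 10.0 toward the reward (here to 28.0 with a 0.2 learning rate), which is the simplest form of the credit assignment the abstract refers to.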