Problems in which some elementary entities interact with each other are common in computational intelligence. This scenario, typical of coevolving artificial-life agents, learning strategies for games, and machine learning from examples, can be formalized as a test-based problem and conveniently embedded in the common conceptual framework of coevolution. In test-based problems, candidate solutions are evaluated on a number of test cases (agents, opponents, examples). It has recently been shown that every test of such a problem can be regarded as a separate objective, and the whole problem as multi-objective optimization. Research on reducing the number of such objectives while preserving the relations between candidate solutions and tests led to the notions of underlying objectives and internal problem structure, which can be formalized as a coordinate system that spatially arranges candidate solutions and tests. The coordinate system that spans the minimal number of axes determines the so-called dimension of a problem and, being an inherent property of every problem, is of particular interest. In this study, we investigate in depth the formalism of a coordinate system and its properties, relate them to properties of partially ordered sets, and design an exact algorithm for finding a minimal coordinate system. We also prove that this problem is NP-hard and propose a heuristic that is superior to the best algorithm proposed so far. Finally, we apply the algorithms to three abstract problems and demonstrate that the dimension of the problem is typically much lower than the number of tests, and that for some problems it converges to an intrinsic parameter of the problem: its a priori dimension.
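The multi-objective view described above can be sketched in a few lines of Python: each test is treated as one objective, and candidate solutions are compared by Pareto dominance over their test outcomes. The outcome matrix and candidate names below are hypothetical illustrative data, not taken from the paper.

```python
# Minimal sketch of a test-based problem viewed as multi-objective
# optimization: each test is a separate objective, and candidates are
# compared by Pareto dominance over their per-test outcomes.

def dominates(a, b):
    """Candidate outcome vector a dominates b if a does at least as well
    on every test and strictly better on at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

# Rows: candidate solutions; columns: outcomes against tests (1 = pass).
# Purely illustrative data.
outcomes = {
    "c1": [1, 1, 0],
    "c2": [1, 0, 0],
    "c3": [0, 1, 1],
}

# c1 dominates c2 (no worse on any test, strictly better on test 2);
# c1 and c3 are mutually non-dominated, so both lie on the Pareto front.
print(dominates(outcomes["c1"], outcomes["c2"]))  # True
print(dominates(outcomes["c1"], outcomes["c3"]))  # False
print(dominates(outcomes["c3"], outcomes["c1"]))  # False
```

A minimal coordinate system, in the sense of the abstract, would then compress these per-test objectives into as few axes as possible while preserving exactly this dominance relation between candidates.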