Approximating the Least Hypervolume Contributor: NP-Hard in General, But Fast in Practice
EMO '09 Proceedings of the 5th International Conference on Evolutionary Multi-Criterion Optimization
The hypervolume indicator is an increasingly popular set measure for comparing the quality of two Pareto sets. The basic ingredient of most hypervolume-indicator-based optimization algorithms is the calculation of the hypervolume contribution of single solutions with respect to a Pareto set. We show that exact calculation of the hypervolume contribution is #P-hard, while its approximation is NP-hard. The same holds for the calculation of the minimal contribution. We also prove that it is NP-hard to decide whether a solution has the least hypervolume contribution. Even deciding whether the contribution of a solution is at most (1 + ε) times the minimal contribution is NP-hard. This implies that it is neither possible to efficiently find the least contributing solution (unless P = NP) nor to approximate it (unless NP = BPP). Nevertheless, in the second part of the paper we present a very fast approximation algorithm for this problem. We prove that for arbitrary ε, δ > 0 it calculates a solution with contribution at most (1 + ε) times the minimal contribution with probability at least (1 − δ). Though it cannot run in polynomial time for all instances, it performs extremely fast on various benchmark datasets. The algorithm solves very large problem instances that are intractable for exact algorithms (e.g., 10000 solutions in 100 dimensions) within a few seconds.
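As a rough illustration of the Monte Carlo idea behind such approximation schemes (this is a minimal sketch, not the paper's actual algorithm, whose sampling is adaptive; all function names here are illustrative), the hypervolume contribution of a single point can be estimated by sampling uniformly in the box spanned by that point and the reference point, and counting the fraction of samples not dominated by any other point:

```python
import random

def dominates(a, x):
    """True if point a weakly dominates sample x (minimization)."""
    return all(ai <= xi for ai, xi in zip(a, x))

def mc_contribution(points, i, ref, n_samples=100_000, rng=random):
    """Monte Carlo estimate of the hypervolume contribution of points[i]
    with respect to reference point ref (minimization; every point is
    assumed to dominate ref componentwise)."""
    p = points[i]
    others = [q for j, q in enumerate(points) if j != i]

    # Volume of the axis-aligned box between p and the reference point.
    box_vol = 1.0
    for pk, rk in zip(p, ref):
        box_vol *= (rk - pk)

    hits = 0
    for _ in range(n_samples):
        # Uniform sample inside box(p, ref); p dominates it by construction.
        x = [pk + rng.random() * (rk - pk) for pk, rk in zip(p, ref)]
        # The sample counts iff no other point also dominates it,
        # i.e. it lies in the region dominated exclusively by p.
        if not any(dominates(q, x) for q in others):
            hits += 1

    return box_vol * hits / n_samples
```

The relative error of such an estimator shrinks like the inverse square root of the sample count, which is why the number of samples needed to separate the least contributor from the rest (and hence the running time) can blow up on adversarial instances, consistent with the hardness results above.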