Optimization by mean field annealing. Advances in neural information processing systems 1.
Introduction to the theory of neural computation.
Kolmogorov complexity and its applications. Handbook of theoretical computer science (vol. A).
Approximating clique is almost NP-complete (preliminary version). SFCS '91: Proceedings of the 32nd annual symposium on Foundations of Computer Science.
Average-case complexity under the universal distribution equals worst-case complexity. Information Processing Letters.
Proceedings of the Symposium on Logical Foundations of Computer Science: Logic at Botik '89.
Approximating maximum clique with a Hopfield network. IEEE Transactions on Neural Networks.
Adaptive, Restart, Randomized Greedy Heuristics for Maximum Clique. Journal of Heuristics.
Payoff-Monotonic Game Dynamics and the Maximum Clique Problem. Neural Computation.
The problem of finding the size of the largest clique in an undirected graph is NP-hard, even to approximate well, in the worst case. Simple algorithms, including some we study here, nevertheless work quite well on graphs sampled from u(n), the uniform distribution on n-vertex graphs. Many feel, however, that u(n) does not accurately reflect the nature of instances that come up in practice. It has been argued that when the actual distribution of instances is unknown, it is more appropriate to suppose that instances come from the Solomonoff–Levin, or universal, distribution m(x) instead, which assigns higher weight to instances with shorter descriptions (i.e., to those that are structured or compressible). We extend a theorem of Li and Vitányi to show that the average-case performance ratio of any approximation algorithm on random instances drawn from m(x) has the same asymptotic order as its worst-case performance ratio. Because m(x) is neither computable nor samplable, we employ a realistic analogue q(x) which lends itself to efficient empirical testing. We experimentally evaluate how well certain neural network algorithms for Maximum Clique perform on graphs drawn from q(x), as compared to those drawn from u(n). The experimental results are as follows: all nine algorithms we evaluated performed roughly equally well on u(n), whereas three of them, the simplest ones, performed markedly worse than the other six on q(x). Our results suggest that q(x), while postulated as a more realistic distribution than u(n) for testing the performance of algorithms, also discriminates their performance better. Our q(x) sampler can be used to generate compressible instances of any discrete problem.
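The idea behind a samplable analogue of m(x) can be illustrated with a minimal sketch, not the paper's actual sampler: draw a short random "description" (a seed of few bits, with shorter seeds more likely, mimicking the 2^(-|p|) weighting of the universal distribution) and expand it deterministically into an n-vertex graph. The function name and the geometric length rule below are illustrative assumptions.

```python
import random

def sample_compressible_graph(n, max_seed_bits=16):
    """Hypothetical q(x)-style sampler: a short random seed is expanded
    deterministically into an n-vertex graph.  The whole adjacency matrix
    is described by the few seed bits, so the instance is compressible."""
    # Geometric choice of description length: each extra bit is half as
    # likely, loosely mirroring the universal distribution's 2^(-|p|) weight.
    bits = 1
    while bits < max_seed_bits and random.random() < 0.5:
        bits += 1
    seed = random.getrandbits(bits)
    rng = random.Random(seed)   # deterministic expansion of the short seed
    edge_p = rng.random()       # the seed also fixes the edge density
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < edge_p:
                adj[i][j] = adj[j][i] = True
    return seed, bits, adj
```

By contrast, a u(n) sample would flip an independent fair coin for each of the n(n-1)/2 edges, so almost all of its instances are incompressible.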