In the Max FS problem, given an infeasible linear system Ax ≥ b, one wishes to find a feasible subsystem containing a maximum number of inequalities. This NP-hard problem has interesting applications in a variety of fields. In some challenging applications in telecommunications and computational biology one faces very large Max FS instances with up to millions of inequalities in thousands of variables. We propose to tackle large-scale instances of Max FS using randomized and thermal variants of the classical relaxation method for solving systems of linear inequalities. We present a theoretical analysis of one particular version of such a method in which we derive a lower bound on the probability that it identifies an optimal solution within a given number of iterations. This bound, which is expressed as a function of a condition number of the input data, implies that with probability 1 the randomized method identifies an optimal solution after finitely many iterations. We also present computational results obtained for medium- to large-scale instances arising in the planning of digital video broadcasts and in the modelling of the energy functions driving protein folding. Our experiments indicate that these methods perform very well in practice.
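To make the core primitive concrete: the classical relaxation method repeatedly picks a violated inequality of Ax ≥ b and projects the current point toward its halfspace. The sketch below is a minimal randomized variant in Python/NumPy for illustration only; it is not the authors' exact algorithm (in particular, the thermal step-size schedule and the Max FS bookkeeping are omitted), and the function name, step length `lam`, and tolerance are our own choices.

```python
import numpy as np

def randomized_relaxation(A, b, max_iters=10_000, lam=1.0, seed=0):
    """Randomized relaxation (Agmon-Motzkin-Schoenberg style) for Ax >= b.

    Each iteration picks a violated inequality uniformly at random and
    moves the iterate toward its halfspace; lam = 1.0 projects exactly
    onto the bounding hyperplane. Returns the final iterate and a boolean
    mask of the inequalities it satisfies (a feasible subsystem).
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for _ in range(max_iters):
        residuals = A @ x - b              # residual >= 0 means satisfied
        violated = np.flatnonzero(residuals < 0)
        if violated.size == 0:
            break                          # found a point feasible for all rows
        i = rng.choice(violated)
        a_i = A[i]
        # step toward the halfspace {y : a_i . y >= b_i}
        x = x + lam * (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x, (A @ x - b >= -1e-9)

# Toy feasible system: x1 >= 1, x2 >= 1, x1 + x2 >= 2
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 2.0])
x, satisfied = randomized_relaxation(A, b)
```

On an infeasible instance this loop never terminates early; the randomized and thermal variants studied in the paper address exactly that case, where the returned mask identifies a (hopefully large) feasible subsystem rather than a solution of the whole system.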