We consider the problem of approximating a signal P with another signal F consisting of few piecewise constant segments. This problem arises naturally in applications including databases (e.g., histogram construction), speech recognition, computational biology (e.g., denoising aCGH data) and many more. Specifically, let P = (P_1, P_2, ..., P_n), P_i ∈ R for all i, be a signal and let C be a constant. Our goal is to find a function F : [n] → R which minimizes the following objective function:

  min_F  Σ_{i=1}^{n} (P_i − F(i))² + C · (number of constant segments of F).

The above optimization problem reduces to solving the following recurrence, which can be done using dynamic programming in O(n²) time:

  OPT(i) = min_{1≤j≤i} { OPT(j−1) + C + Σ_{k=j}^{i} (P_k − μ_{j,i})² },  OPT(0) = 0,

where μ_{j,i} = (Σ_{k=j}^{i} P_k)/(i − j + 1) is the mean of P_j, ..., P_i. This recurrence arises wherever one wants to approximate a given signal P by a signal F that ideally consists of few piecewise constant segments; such applications include histogram construction in databases, determining DNA copy numbers in cancer cells from microarray data, speech recognition, data mining and many others.

In this work we present two new techniques for optimizing dynamic programs whose cost functions are not handled by other standard methods. The basis of our first algorithm is the definition of a constant-shifted variant of the objective function that can be efficiently approximated using state-of-the-art methods for range searching. Our technique approximates the optimal value of the objective function within additive error ε and runs in Õ(n^{4/3+δ} log(U/ε)) time, where δ is an arbitrarily small positive constant and U = max{√C, max_{1≤i≤n} |P_i|}. The second algorithm we provide solves a similar recurrence within a multiplicative factor of (1 + ε) and runs in O((n log n)/ε) time. The new technique it introduces is the decomposition of the initial problem into a small (logarithmic) number of Monge optimization subproblems, each of which can be sped up using existing techniques.
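To make the quadratic baseline concrete, the recurrence above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes the per-segment fit cost is the sum of squared deviations from the segment mean, evaluated in O(1) per query via prefix sums, giving O(n²) total time.

```python
def segment_signal(P, C):
    """Optimal value of: sum of squared errors + C per constant segment.

    Sketch of the O(n^2) dynamic program; P is a list of reals, C >= 0.
    """
    n = len(P)
    # Prefix sums of P and P^2 so each segment cost is an O(1) query.
    s = [0.0] * (n + 1)
    s2 = [0.0] * (n + 1)
    for i, p in enumerate(P, 1):
        s[i] = s[i - 1] + p
        s2[i] = s2[i - 1] + p * p

    def seg_cost(j, i):
        # Cost of fitting P_j..P_i (1-indexed, inclusive) by its mean:
        # sum P_k^2 - (sum P_k)^2 / length.
        length = i - j + 1
        tot = s[i] - s[j - 1]
        return (s2[i] - s2[j - 1]) - tot * tot / length

    # OPT[i] = min over j of OPT[j-1] + C + seg_cost(j, i), OPT[0] = 0.
    OPT = [0.0] * (n + 1)
    for i in range(1, n + 1):
        OPT[i] = min(OPT[j - 1] + C + seg_cost(j, i) for j in range(1, i + 1))
    return OPT[n]
```

For example, `segment_signal([1, 1, 1, 5, 5, 5], 0.5)` finds two perfectly fitting segments and returns 1.0 (two segment penalties, zero fit error). The paper's contribution is precisely to avoid this quadratic loop, via range searching in one algorithm and a logarithmic number of Monge subproblems in the other.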