Speeding Up the Search for Optimal Partitions
PKDD '99: Proceedings of the Third European Conference on Principles of Data Mining and Knowledge Discovery
Dynamic programming has been studied extensively, e.g., in computational geometry and string matching. It has recently found a new application in the optimal multisplitting of numerical attribute value domains. We relate earlier results to this problem and study whether they shed new light on the inherent complexity of this time-critical subtask of machine learning and data mining programs. The concept of monotonicity, which has come up in earlier research, helps to explain the different asymptotic time requirements of optimal multisplitting under different attribute evaluation functions. As case studies we examine the Training Set Error and Average Class Entropy functions. The former has a linear-time optimization algorithm, while the latter, like most well-known attribute evaluation functions, takes quadratic time to optimize. We show that neither of them fulfills the strict monotonicity condition, but that computing optimal Training Set Error values can be decomposed into monotone subproblems.