Efficient algorithms for inverting evolution. Journal of the ACM (JACM).
Absolute convergence: true trees from short sequences. SODA '01: Proceedings of the Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms.
Linear Concepts and Hidden Variables. Machine Learning.
Combining polynomial running time and fast convergence for the disk-covering method. Journal of Computer and System Sciences (Computational Biology, 2002).
When Can Two Unsupervised Learners Achieve PAC Separation? COLT '01/EuroCOLT '01: Proceedings of the 14th Annual Conference on Computational Learning Theory and 5th European Conference on Computational Learning Theory.
The j-State General Markov Model of evolution (due to Steel) is a stochastic model concerned with the evolution of strings over an alphabet of size j. In particular, the Two-State General Markov Model of evolution generalises the well-known Cavender-Farris-Neyman model of evolution by removing the symmetry restriction (which requires that the probability that a '0' turns into a '1' along an edge is the same as the probability that a '1' turns into a '0' along the edge). Farach and Kannan showed how to PAC-learn Markov Evolutionary Trees in the Cavender-Farris-Neyman model provided that the target tree satisfies the additional restriction that all pairs of leaves have a sufficiently high probability of being the same. We show how to remove both restrictions and thereby obtain the first polynomial-time PAC-learning algorithm (in the sense of Kearns et al.) for the general class of Two-State Markov Evolutionary Trees.
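To make the model concrete, here is a minimal sketch of sampling from a Two-State Markov Evolutionary Tree: each edge carries its own pair of substitution probabilities p01 = Pr['0' becomes '1'] and p10 = Pr['1' becomes '0'], with no symmetry requirement between them (the defining relaxation over Cavender-Farris-Neyman). The tree encoding, function names, and the example tree below are illustrative assumptions, not part of the paper.

```python
import random

def evolve_site(state, p01, p10, rng):
    """Evolve one binary character along a single edge.

    The edge flips 0 -> 1 with probability p01 and 1 -> 0 with
    probability p10; these need not be equal (asymmetric model).
    """
    flip = p01 if state == 0 else p10
    return 1 - state if rng.random() < flip else state

def sample_leaves(tree, root_p1, k, rng=None):
    """Sample length-k binary strings observed at the leaves.

    tree: dict mapping a node name to a list of (child, p01, p10)
          edges; nodes with no entry are leaves. (Hypothetical
          encoding chosen for this sketch.)
    root_p1: probability that each root character is 1.
    """
    rng = rng or random.Random()
    leaves = {}

    def walk(node, states):
        children = tree.get(node, [])
        if not children:
            leaves[node] = states
            return
        for child, p01, p10 in children:
            # Each site evolves independently along the edge.
            walk(child, [evolve_site(s, p01, p10, rng) for s in states])

    root_states = [1 if rng.random() < root_p1 else 0 for _ in range(k)]
    walk("root", root_states)
    return leaves

# Example: a four-leaf tree with asymmetric edge probabilities.
tree = {
    "root": [("u", 0.1, 0.3), ("v", 0.05, 0.2)],
    "u": [("a", 0.02, 0.1), ("b", 0.15, 0.05)],
    "v": [("c", 0.1, 0.1), ("d", 0.3, 0.02)],
}
leaves = sample_leaves(tree, root_p1=0.5, k=20, rng=random.Random(0))
```

A learning algorithm in this setting sees only the leaf strings (here, the dict `leaves`) and must recover the tree topology and edge probabilities; the PAC guarantee discussed in the abstract bounds the sample length k needed for this.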