It is well known that prior knowledge or bias can speed up learning, at least in theory. It has proved difficult, however, to make constructive use of prior knowledge so that approximately correct hypotheses can be learned efficiently. In this paper, we consider a particular form of bias which consists of a set of "determinations." A set of attributes is said to determine a given attribute if the latter is purely a function of the former. The bias is tree-structured if there is a tree of attributes such that the attribute at any node is determined by its children, where the leaves correspond to input attributes and the root corresponds to the target attribute for the learning problem. The set of allowed functions at each node is called the basis. The tree-structured bias restricts the target functions to those representable by a read-once formula (a Boolean formula in which each variable occurs at most once) of a given structure over the basis functions.

We show that efficient learning is possible using a given tree-structured bias from random examples and membership queries, provided that the basis class itself is learnable and obeys some mild closure conditions. The algorithm uses a form of controlled experimentation in order to learn each part of the overall function, fixing the inputs to the other parts of the function at appropriate values. We present empirical results showing that when a tree-structured bias is available, our method significantly improves upon knowledge-free induction. We also show that there are hard cryptographic limitations to generalizing these positive results to structured determinations in the form of a directed acyclic graph.
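The controlled-experimentation idea can be illustrated with a minimal sketch on a two-level tree-structured bias: the target is root(left(x0, x1), right(x2, x3)), with each gate drawn from the basis {AND, OR}. Each lower gate is learned by pinning the other subtree's inputs so the varied subtree's value becomes visible at the root. The hidden formula, the gate names, and the `membership_query` helper are assumptions made for this sketch, not details from the paper.

```python
from itertools import product

# Hidden read-once formula: target = root(left(x0, x1), right(x2, x3)),
# with each gate drawn from the basis {AND, OR}. (Illustrative assumption.)
HIDDEN = {"left": "AND", "right": "OR", "root": "OR"}
GATES = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b}

def membership_query(x):
    """Oracle: evaluate the hidden read-once formula on input x."""
    left = GATES[HIDDEN["left"]](x[0], x[1])
    right = GATES[HIDDEN["right"]](x[2], x[3])
    return GATES[HIDDEN["root"]](left, right)

def learn_leaf_gate(vary, fix):
    """Learn one lower gate by controlled experimentation: pin the other
    subtree's inputs so the varied gate's value is visible at the root."""
    for pin in product([0, 1], repeat=2):
        x = [0, 0, 0, 0]
        x[fix[0]], x[fix[1]] = pin
        x[vary[0]] = x[vary[1]] = 0       # forces gate value 0 for AND and OR
        lo = membership_query(x)
        x[vary[0]] = x[vary[1]] = 1       # forces gate value 1 for AND and OR
        hi = membership_query(x)
        if lo != hi:                      # this pinning makes the gate visible
            x[vary[0]], x[vary[1]] = 0, 1  # AND and OR differ only on (0, 1)
            return "OR" if membership_query(x) == hi else "AND"
    return None

def learn():
    """Learn each part of the tree in turn, lower gates first."""
    left = learn_leaf_gate(vary=(0, 1), fix=(2, 3))
    right = learn_leaf_gate(vary=(2, 3), fix=(0, 1))
    # With the lower gates identified, choose inputs giving left = 1 and
    # right = 0: (1, 1) forces either basis gate to 1, (0, 0) forces it to 0.
    out = membership_query([1, 1, 0, 0])  # equals root(1, 0)
    root = "OR" if out == 1 else "AND"
    return {"left": left, "right": right, "root": root}
```

Here `learn()` recovers the hidden gate assignment exactly; the same pin-and-vary strategy is what lets the paper's algorithm learn each node of a tree-structured determination independently, given a learnable basis class.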