Quantifying inductive bias: AI learning algorithms and Valiant's learning framework. Artificial Intelligence.
CLASSIC: a structural data model for objects. SIGMOD '89: Proceedings of the 1989 ACM SIGMOD International Conference on Management of Data.
Incremental version-space merging. Proceedings of the Seventh International Conference on Machine Learning (1990).
Version spaces: an approach to concept learning.
Theoretical underpinnings of version spaces. IJCAI '91: Proceedings of the 12th International Joint Conference on Artificial Intelligence, Volume 2.
A SAT-based version space algorithm for acquiring constraint satisfaction problems. ECML '05: Proceedings of the 16th European Conference on Machine Learning.
Although version spaces provide a useful conceptual tool for inductive concept learning, they often face severe computational difficulties when implemented. For example, in traditional boundary-set implementations of version spaces, the G set can grow exponentially in the amount of data even for the simplest conjunctive description languages [Haussler, 1988]. This paper presents a new representation for version spaces that is more general than the traditional boundary-set representation, yet has worst-case time complexity polynomial in the amount of data when used for learning from attribute-value data with tree-structured feature hierarchies (a class that includes languages like Haussler's). The central idea underlying the new representation is to maintain the traditional S boundary set as usual, but to keep a list N of the negative data in place of the customary G set.
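The idea in the abstract can be sketched in code. The following is a minimal illustration, not the paper's implementation: all names (`SNVersionSpace`, `covers`, `lgg`) are hypothetical, and it uses a flat conjunctive attribute-value language with wildcards rather than the paper's tree-structured feature hierarchies. The point it shows is that each negative example costs O(1) to record, while membership and consistency queries are answered from S and N directly, with no explicit G set.

```python
# Hypothetical sketch of the [S, N] representation described above,
# restricted to flat conjunctive hypotheses over attribute tuples.
WILDCARD = "?"

def covers(h, x):
    # A conjunctive pattern covers a tuple when every position is a
    # wildcard or matches exactly.
    return all(hv == WILDCARD or hv == xv for hv, xv in zip(h, x))

def lgg(h, x):
    # Least general generalization: widen every mismatching attribute.
    return tuple(hv if hv == xv else WILDCARD for hv, xv in zip(h, x))

class SNVersionSpace:
    """Maintain the S boundary as usual, but store negatives verbatim
    in a list N instead of maintaining an explicit G boundary set."""

    def __init__(self):
        self.S = None   # most specific hypothesis covering all positives
        self.N = []     # every negative example, kept as-is

    def add_positive(self, x):
        self.S = tuple(x) if self.S is None else lgg(self.S, x)

    def add_negative(self, x):
        self.N.append(tuple(x))  # constant work; no boundary-set update

    def consistent(self):
        # The version space is non-empty iff S itself excludes
        # every stored negative.
        return self.S is not None and not any(covers(self.S, n) for n in self.N)

    def contains(self, h):
        # h is in the version space iff h is at least as general as S
        # and h excludes every stored negative.
        return (self.S is not None
                and covers(h, self.S)
                and not any(covers(h, n) for n in self.N))
```

For example, after the positives ("red", "round", "small") and ("red", "square", "small") and the negative ("blue", "round", "small"), S becomes ("red", "?", "small"); the hypothesis ("red", "?", "?") is in the version space, while the maximally general ("?", "?", "?") is not, because it covers the negative.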