Within learning theory, teaching has been studied in various ways. In a common variant, the teacher must teach all learners that are restricted to outputting only consistent hypotheses. The complexity of teaching is then measured by the maximum number of mistakes a consistent learner can make before learning succeeds; this quantity equals the so-called teaching dimension. However, many interesting concept classes have an exponential teaching dimension, and the notion is meaningful only for finite concept classes. A refined approach to teaching is proposed by introducing a neighborhood relation over all possible hypotheses: the learners are then restricted to choosing a new hypothesis from the neighborhood of their current one. Teachers are required to teach either finitely or in the limit. Moreover, a variant in which the teacher receives the learner's current hypothesis as feedback is considered. The new models are compared to existing ones and to one another, depending on the given neighborhood relations. In particular, it is shown that feedback can be very helpful. Furthermore, within the new model one can also study the teachability of infinite concept classes with potentially infinite concepts, such as languages. Finally, it is shown that in this model teachability and learnability can differ considerably.
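To make the classical notion concrete, the following is a minimal sketch (not from the paper; all names are illustrative) of computing the teaching dimension of a finite concept class by brute force. Concepts are represented as bit tuples over a fixed instance space; a teaching set for a target concept is a smallest set of labeled examples consistent with the target and with no other concept in the class, and the teaching dimension is the worst case over all concepts.

```python
from itertools import combinations

def teaching_set_size(target, concepts, n_instances):
    """Size of a smallest sample that singles out `target` within `concepts`.

    Tries example sets in order of increasing size; an example set is a set
    of instance indices, each labeled according to `target`.
    """
    for k in range(n_instances + 1):
        for idxs in combinations(range(n_instances), k):
            # Concepts still consistent with the labeled examples.
            survivors = [c for c in concepts
                         if all(c[i] == target[i] for i in idxs)]
            if survivors == [target]:
                return k
    return None  # target is not uniquely identifiable in the class

def teaching_dimension(concepts, n_instances):
    """Worst-case teaching set size over all concepts in the class."""
    return max(teaching_set_size(c, concepts, n_instances)
               for c in concepts)

# Classic example: the class of singletons plus the empty concept over
# three instances. Any singleton is taught by one positive example, but
# teaching the empty concept requires all three negative examples, so
# the teaching dimension grows linearly with the instance space.
concepts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(teaching_set_size((1, 0, 0), concepts, 3))  # 1
print(teaching_dimension(concepts, 3))            # 3
```

The exponential blow-up mentioned in the abstract arises for richer classes (e.g., over all Boolean functions on n variables), where worst-case teaching sets can require exponentially many examples; the brute-force search above is of course only feasible for tiny classes.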