An introduction to computational learning theory
The nature of statistical learning theory
Learning in Neural Networks: Theoretical Foundations
Towards a theory of incentives in machine learning
ACM SIGecom Exchanges
Automated design of scoring rules by learning from examples
Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 2
The learnability of voting rules
Artificial Intelligence
AAAI'07 Proceedings of the 22nd national conference on Artificial intelligence - Volume 1
Kernel Methods for Revealed Preference Analysis
Proceedings of the 2010 conference on ECAI 2010: 19th European Conference on Artificial Intelligence
A revealed preference approach to computational complexity in economics
Proceedings of the 12th ACM conference on Electronic commerce
Efficiently learning from revealed preference
WINE'12 Proceedings of the 8th international conference on Internet and Network Economics
A sequence of prices and demands is rationalizable if there exists a concave, continuous, and monotone utility function such that each demand maximizes that utility over the budget set corresponding to its price. Afriat [1] presented necessary and sufficient conditions for a finite sequence to be rationalizable. Varian [20], and later Blundell et al. [3, 4], continued this line of work, studying nonparametric methods to forecast demand. Their results essentially characterize the learnability of degenerate classes of demand functions and therefore fall short of giving a general degree of confidence in the forecast. The present paper complements this line of research by introducing a statistical model and a measure of complexity through which we are able to study the learnability of classes of demand functions and derive a degree of confidence in the forecasts. Our results show that the class of all demand functions has unbounded complexity and is therefore not learnable, but that there exist interesting and potentially useful classes that are learnable from finite samples. We also present a learning algorithm that is an adaptation of a new proof of Afriat's theorem due to Teo and Vohra [17].
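Afriat's conditions can be verified directly on a finite data set: a sequence of price/demand observations is rationalizable by a concave, continuous, monotone utility function exactly when it satisfies cyclical consistency, equivalently the Generalized Axiom of Revealed Preference (GARP). The following sketch illustrates such a check; the function name and array layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def satisfies_garp(prices, demands):
    """Check GARP (Afriat's cyclical consistency) for observations (p_i, x_i).

    prices, demands: (T, n) arrays; row i is one observation.
    Bundle x_i is directly revealed preferred to x_j when x_j was
    affordable at observation i, i.e. p_i . x_i >= p_i . x_j.
    """
    p = np.asarray(prices, dtype=float)
    x = np.asarray(demands, dtype=float)
    T = len(p)
    # cost[i, j] = expenditure at prices p_i needed to buy bundle x_j
    cost = p @ x.T
    spent = cost.diagonal()[:, None]          # p_i . x_i, as a column
    R = spent >= cost                          # weak direct revealed preference
    P = spent > cost                           # strict direct revealed preference
    # Transitive closure of R (Warshall's algorithm on the boolean matrix)
    for k in range(T):
        R = R | (R[:, [k]] & R[[k], :])
    # GARP: x_i revealed preferred to x_j must never coexist with
    # x_j strictly directly revealed preferred to x_i.
    return not np.any(R & P.T)
```

For example, two observations that merely swap relative prices and demands pass the check, while a pair in which each bundle is strictly cheaper than the chosen one at the other's prices produces a revealed-preference cycle and fails it.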