This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis dimension and of estimates of that dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification and in real-valued prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.
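To make concrete the kind of result the book establishes, a classical uniform-convergence bound (a sketch; exact constants and logarithmic factors vary by formulation) states that for a function class H of Vapnik-Chervonenkis dimension d and an i.i.d. sample S of size m, with probability at least 1 - \delta every classifier h in H satisfies

\[
\mathrm{er}_P(h) \;\le\; \widehat{\mathrm{er}}_S(h) \;+\; c\,\sqrt{\frac{d\,\ln(2m/d) + \ln(4/\delta)}{m}},
\]

where \mathrm{er}_P(h) is the true error under the data distribution P, \widehat{\mathrm{er}}_S(h) is the error on the sample, and c is a universal constant. Bounds of this form explain why the VC dimension, and its scale-sensitive analogues in the large margin setting, governs how much training data a network needs to generalize.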