We investigate the problem of learning concepts by presenting labeled, randomly chosen training examples to single neurons. It is well known that linear halfspaces are learnable by the method of linear programming. The corresponding (McCulloch-Pitts) neurons are therefore efficiently trainable to learn an unknown halfspace from examples. We analyze how fast the learning performance degrades when the representational power of the neuron is overstrained, i.e., when concepts more complex than halfspaces are allowed. We show that a neuron cannot efficiently find its probably almost optimal adjustment (unless RP = NP). If the weights and the threshold of the neuron have a fixed constant bound on their coding length, the situation is even worse: in general there is no polynomial-time training method that bounds the resulting prediction error of the neuron by k · opt for a fixed constant k (unless RP = NP). Other variants of learning concepts more complex than halfspaces with single neurons are also investigated. We show that neither heuristic learning nor learning by sigmoidal neurons with a constant reject rate is efficiently possible (unless RP = NP).
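To make the positive result concrete, the following is a minimal sketch, assuming Python with NumPy and SciPy, of the well-known fact the abstract invokes: the weights and threshold of a McCulloch-Pitts neuron consistent with a linearly separable sample can be found by solving a linear program. The function name fit_halfspace and the unit margin in the constraints are illustrative choices, not part of the source.

```python
# Learning a halfspace from labeled examples via linear programming:
# find (w, b) with y_i * (w . x_i - b) >= 1 for every example, if one exists.
import numpy as np
from scipy.optimize import linprog

def fit_halfspace(X, y):
    """X: (n, d) array of examples; y: (n,) labels in {-1, +1}.

    Returns (w, b) consistent with the sample, or None when the
    sample is not linearly separable.
    """
    n, d = X.shape
    # Decision variables z = (w_1, ..., w_d, b); this is a pure
    # feasibility problem, so the objective is the zero vector.
    c = np.zeros(d + 1)
    # y_i * (w . x_i - b) >= 1   <=>   -y_i * (x_i . w) + y_i * b <= -1
    A_ub = np.hstack([-y[:, None] * X, y[:, None]])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    if not res.success:
        return None  # no consistent halfspace: the neuron is overstrained
    return res.x[:d], res.x[d]

# Usage: a linearly separable (AND-like) sample is learned in polynomial time.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., -1., -1., 1.])
w, b = fit_halfspace(X, y)
print(np.sign(X @ w - b))  # reproduces the labels
```

The hardness results in the abstract say precisely that no analogously efficient procedure exists once the target concepts are more complex than halfspaces, so the feasibility check above returning None is, in that regime, the best one can detect in polynomial time (unless RP = NP).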