We survey some relationships between computational complexity and neural network theory; only networks of binary threshold neurons are considered.

We begin by presenting some contributions of neural networks to structural complexity theory. In parallel complexity, we consider the classes TC⁰ₖ of problems solvable by feed-forward networks with k levels and a polynomial number of neurons. Separation results are recalled, and the relation between TC⁰ = ⋃ₖ TC⁰ₖ and NC¹ is analyzed. In particular, under the conjecture TC⁰ ≠ NC¹, we characterize the class of regular languages accepted by feed-forward networks with a constant number of levels and a polynomial number of neurons.

We also discuss the use of complexity theory to study computational aspects of learning and combinatorial optimization in the context of neural networks. We consider the PAC model of learning, emphasizing some negative results based on complexity-theoretic assumptions. Finally, we discuss some results in the realm of neural networks related to a probabilistic characterization of NP.
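To make the model concrete, below is a minimal Python sketch of a binary threshold neuron and a two-level feed-forward network; the function names and the XOR example are illustrative assumptions, not constructions from the survey. XOR is the standard example of a function that no single threshold gate can compute (it is not linearly separable) but that a depth-2 threshold network can.

def threshold_gate(weights, threshold, inputs):
    """Binary threshold neuron: outputs 1 iff the weighted sum
    of the 0/1 inputs reaches the threshold (illustrative sketch)."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

def xor_two_level(x1, x2):
    """Depth-2 threshold network computing XOR of two bits."""
    # Level 1: OR and AND of the inputs, each as a threshold gate.
    g_or = threshold_gate([1, 1], 1, [x1, x2])
    g_and = threshold_gate([1, 1], 2, [x1, x2])
    # Level 2: XOR = OR and not AND, via weights (1, -1) and threshold 1.
    return threshold_gate([1, -1], 1, [g_or, g_and])

if __name__ == "__main__":
    for x1 in (0, 1):
        for x2 in (0, 1):
            print(f"XOR({x1}, {x2}) = {xor_two_level(x1, x2)}")

Here the number of gate levels plays the role of the depth parameter k in the classes TC⁰ₖ above: the XOR network uses k = 2 levels and a constant number of neurons.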