Improving the performance guarantee for approximate graph coloring
Journal of the ACM (JACM)
On the learnability of Boolean formulae
STOC '87 Proceedings of the nineteenth annual ACM symposium on Theory of computing
Scaling relationships in back-propagation learning
Complex Systems
Cryptographic limitations on learning Boolean formulae and finite automata
STOC '89 Proceedings of the twenty-first annual ACM symposium on Theory of computing
An O(n^0.4)-approximation algorithm for 3-coloring
STOC '89 Proceedings of the twenty-first annual ACM symposium on Theory of computing
Neural network design and the complexity of learning
What size net gives valid generalization?
Advances in neural information processing systems 1
Training a 3-node neural network is NP-complete
Advances in neural information processing systems 1
Learning in threshold networks
COLT '88 Proceedings of the first annual workshop on Computational learning theory
Learning internal representations by error propagation
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1
Sigmoids distinguish more efficiently than heavisides
Neural Computation
Constructive higher-order network that is polynomial time
Neural Networks
Geometrical concept learning and convex polytopes
COLT '94 Proceedings of the seventh annual conference on Computational learning theory
Noise-tolerant distribution-free learning of general geometric concepts
STOC '96 Proceedings of the twenty-eighth annual ACM symposium on Theory of computing
On the complexity of learning for a spiking neuron (extended abstract)
COLT '97 Proceedings of the tenth annual conference on Computational learning theory
Combining the Perceptron Algorithm with Logarithmic Simulated Annealing
Neural Processing Letters
Incremental Learning with Respect to New Incoming Input Attributes
Neural Processing Letters
Hardness results for neural network approximation problems
Theoretical Computer Science
Training a single sigmoidal neuron is hard
Neural Computation
Computational complexity of neural networks: a survey
Nordic Journal of Computing
Structural Complexity and Neural Networks
WIRN VIETRI 2002 Proceedings of the 13th Italian Workshop on Neural Nets - Revised Papers
Hardness Results for Neural Network Approximation Problems
EuroCOLT '99 Proceedings of the 4th European Conference on Computational Learning Theory
On Approximate Learning by Multi-layered Feedforward Circuits
ALT '00 Proceedings of the 11th International Conference on Algorithmic Learning Theory
Minimizing the Quadratic Training Error of a Sigmoid Neuron Is Hard
ALT '01 Proceedings of the 12th International Conference on Algorithmic Learning Theory
On the difficulty of approximately maximizing agreements
Journal of Computer and System Sciences
Subspace clustering for high dimensional data: a review
ACM SIGKDD Explorations Newsletter - Special issue on learning from imbalanced datasets
Some Dichotomy Theorems for Neural Learning Problems
The Journal of Machine Learning Research
On data classification by iterative linear partitioning
Discrete Applied Mathematics - Discrete mathematics & data mining (DM & DM)
Efficient Feature Selection via Analysis of Relevance and Redundancy
The Journal of Machine Learning Research
Toward Integrating Feature Selection Algorithms for Classification and Clustering
IEEE Transactions on Knowledge and Data Engineering
Loading Deep Networks Is Hard: The Pyramidal Case
Neural Computation
The loading problem for recursive neural networks
Neural Networks - Special issue on neural networks and kernel methods for structured domains
Hardness of approximate two-level logic minimization and PAC learning with membership queries
Proceedings of the thirty-eighth annual ACM symposium on Theory of computing
On approximate learning by multi-layered feedforward circuits
Theoretical Computer Science - Algorithmic learning theory (ALT 2000)
On the complexity of deriving position specific score matrices from positive and negative sequences
Discrete Applied Mathematics
A Stochastic Algorithm for Feature Selection in Pattern Recognition
The Journal of Machine Learning Research
Parallel computations and committee constructions
Automation and Remote Control
Feature selection methods for text classification
Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining
The complexity of properly learning simple concept classes
Journal of Computer and System Sciences
Hierarchical fuzzy filter method for unsupervised feature selection
Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology
Developing a feature weight self-adjustment mechanism for a K-means clustering algorithm
Computational Statistics & Data Analysis
Entropy-based associative classification algorithm for mining manufacturing data
International Journal of Computer Integrated Manufacturing
Hardness of approximate two-level logic minimization and PAC learning with membership queries
Journal of Computer and System Sciences
Cryptographic hardness for learning intersections of halfspaces
Journal of Computer and System Sciences
Similarity-Based Feature Selection for Learning from Examples with Continuous Values
PAKDD '09 Proceedings of the 13th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining
Designing neural networks for tackling hard classification problems
WSEAS TRANSACTIONS on SYSTEMS
An efficient approach for building customer profiles from business data
Expert Systems with Applications: An International Journal
On data classification by iterative linear partitioning
Discrete Applied Mathematics
On detecting nonlinear patterns in discriminant problems
Information Sciences: an International Journal
Output partitioning of neural networks
Neurocomputing
Breast-Cancer identification using HMM-fuzzy approach
Computers in Biology and Medicine
Playing monotone games to understand learning behaviors
Theoretical Computer Science
Estimating the size of neural networks from the number of available training data
ICANN'07 Proceedings of the 17th international conference on Artificial neural networks
Optimal bounds for sign-representing the intersection of two halfspaces by polynomials
Proceedings of the forty-second ACM symposium on Theory of computing
A random-sampling-based algorithm for learning intersections of halfspaces
Journal of the ACM (JACM)
Presenting and analyzing the results of AI experiments: data averaging and data snooping
AAAI'97/IAAI'97 Proceedings of the fourteenth national conference on artificial intelligence and ninth conference on Innovative applications of artificial intelligence
Scaling up feature selection by means of democratization
IEA/AIE'10 Proceedings of the 23rd international conference on Industrial engineering and other applications of applied intelligent systems - Volume Part II
A hypergraph-based approach to feature selection
CAIP'11 Proceedings of the 14th international conference on Computer analysis of images and patterns - Volume Part I
A new clustering algorithm with the convergence proof
KES'11 Proceedings of the 15th international conference on Knowledge-based and intelligent information and engineering systems - Volume Part I
Approximation algorithms for minimizing empirical error by axis-parallel hyperplanes
ECML'05 Proceedings of the 16th European conference on Machine Learning
The application of adaptive partitioned random search in feature selection problem
ADMA'05 Proceedings of the First international conference on Advanced Data Mining and Applications
Automated user modeling for personalized digital libraries
International Journal of Information Management: The Journal for Information Professionals
A clustering ensemble framework based on elite selection of weighted clusters
Advances in Data Analysis and Classification
We consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold functions of their inputs. We show that it is NP-complete to decide whether there exist weights and thresholds for this network so that it produces output consistent with a given set of training examples. We extend the result to other simple networks. We also present a network for which training is hard but where switching to a more powerful representation makes training easier. These results suggest that those looking for perfect training algorithms cannot escape inherent computational difficulties just by considering only simple or very regular networks. They also suggest the importance, given a training problem, of finding an appropriate network and input encoding for that problem. It is left as an open problem to extend our result to nodes with nonlinear functions such as sigmoids.
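The network family in the abstract can be sketched concretely: two hidden linear-threshold (Heaviside) units feed a single output threshold unit, and the training question asks whether any choice of weights and thresholds makes the net agree with every training example. The following pure-Python sketch is illustrative only; the function and parameter names are not from the paper, and the hardness result concerns the search over weights, which this code does not attempt.

```python
def threshold(x):
    # Linear threshold (Heaviside) activation: fire iff the weighted
    # sum meets or exceeds zero (the bias absorbs the threshold).
    return 1 if x >= 0 else 0

def three_node_net(x, hidden, output):
    """Forward pass of the 2-layer, 3-node network described above.
    `hidden` is a list of two (weights, bias) pairs for the hidden
    threshold units; `output` is one (weights, bias) pair."""
    h = [threshold(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in hidden]
    w_out, b_out = output
    return threshold(sum(w * hi for w, hi in zip(w_out, h)) + b_out)

def consistent(examples, hidden, output):
    """The loading problem asks whether ANY (hidden, output) weights
    make this return True; deciding that is what is NP-complete."""
    return all(three_node_net(x, hidden, output) == y
               for x, y in examples)
```

For instance, with hidden units computing OR and NAND and an output unit computing AND, this 3-node net realizes XOR, so `consistent` holds for the XOR training set even though no single threshold unit can.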