In a great variety of neuron models, neural inputs are combined using the summing operation. We introduce the concept of multiplicative neural networks, which contain units that multiply their inputs instead of summing them and thus allow inputs to interact nonlinearly. The class of multiplicative neural networks comprises such widely known and well-studied network types as higher-order networks and product unit networks.

We investigate the complexity of computing and learning for multiplicative neural networks. In particular, we derive upper and lower bounds on the Vapnik-Chervonenkis (VC) dimension and the pseudo-dimension for various types of networks with multiplicative units. As the most general case, we consider feedforward networks consisting of product and sigmoidal units, showing that their pseudo-dimension is bounded from above by a polynomial of the same order of magnitude as the currently best-known bound for purely sigmoidal networks. Moreover, we show that this bound holds even when the unit type, product or sigmoidal, may be learned. Crucial to these results are calculations of solution set component bounds for new network classes. Regarding lower bounds, we construct product unit networks of fixed depth with superlinear VC dimension.

For sigmoidal networks of higher order, we establish polynomial bounds that, in contrast to previous results, do not involve any restriction of the network order. We further consider various classes of higher-order units, also known as sigma-pi units, that are characterized by connectivity constraints. In terms of these, we derive some asymptotically tight bounds.

Multiplication plays an important role both in neural modeling of biological behavior and in computing and learning with artificial neural networks. We briefly survey research in biology and in applications where multiplication is considered an essential computational element.
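The distinction between summing units and the multiplicative unit types named above can be sketched as follows. This is a minimal illustration under our own conventions (function names, the positivity assumption on product-unit inputs, and the chosen monomials are ours), not the specific networks analyzed here:

```python
import math

def summing_unit(x, w, bias=0.0):
    """Standard additive unit: a weighted sum of the inputs."""
    return bias + sum(wi * xi for wi, xi in zip(w, x))

def product_unit(x, w):
    """Product unit: y = prod_i x_i ** w_i, so inputs interact
    multiplicatively and weights act as exponents.
    (Assumes positive inputs so real-valued exponents are defined.)"""
    return math.prod(xi ** wi for wi, xi in zip(w, x))

def sigma_pi_unit(x, monomials, weights, bias=0.0):
    """Higher-order (sigma-pi) unit: a sigmoid applied to a weighted
    sum of input monomials, each monomial being a product of the
    inputs indexed by one subset in `monomials`."""
    s = bias + sum(w * math.prod(x[i] for i in idx)
                   for w, idx in zip(weights, monomials))
    return 1.0 / (1.0 + math.exp(-s))

x = [2.0, 3.0]
print(summing_unit(x, [1.0, 1.0]))              # 5.0
print(product_unit(x, [1.0, 2.0]))              # 2 * 3**2 = 18.0
print(sigma_pi_unit(x, [(0,), (0, 1)], [0.5, -0.2]))
```

Connectivity constraints on sigma-pi units, as studied below, correspond to restricting which index subsets may appear in `monomials`.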
The results we present here provide new tools for assessing the impact of multiplication on the computational power and the learning capabilities of neural networks.