Multilayer feedforward networks are universal approximators. Neural Networks.
Approximation capabilities of multilayer feedforward networks. Neural Networks.
An Efficient Hardware Implementation of Feed-Forward Neural Networks. Proceedings of the 14th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems: Engineering of Intelligent Systems.
MAMECTIS'10: Proceedings of the 12th WSEAS International Conference on Mathematical Methods, Computational Techniques and Intelligent Systems.
Marginalized neural network mixtures for large-scale regression. IEEE Transactions on Neural Networks.
Self-adaptive artificial neural network in numerical models calibration. ICANN'10: Proceedings of the 20th International Conference on Artificial Neural Networks, Part I.
Comparisons of single- and multiple-hidden-layer neural networks. ISNN'11: Proceedings of the 8th International Conference on Advances in Neural Networks, Part I.
Financial ratings with scarce information: A neural network approach. Expert Systems with Applications: An International Journal.
ICNC'06: Proceedings of the Second International Conference on Advances in Natural Computation, Part I.
Effects of using different neural network structures and cost functions in locomotion control. ICNC'06: Proceedings of the Second International Conference on Advances in Natural Computation, Part I.
Gaussian process occupancy maps. International Journal of Robotics Research.
Mathematical and Computer Modelling: An International Journal.
Approximation properties of local bases assembled from neural network transfer functions. Mathematical and Computer Modelling: An International Journal.
Data fusion with Gaussian processes. Robotics and Autonomous Systems.
Neural network modeling of vector multivariable functions in ill-posed approximation problems. Journal of Computer and Systems Sciences International.
We show that standard feedforward networks with as few as a single hidden layer can uniformly approximate continuous functions on compacta, provided that the activation function ψ is locally Riemann integrable and nonpolynomial, and have universal L^p(μ) approximation capabilities for finite and compactly supported input environment measures μ, provided that ψ is locally bounded and nonpolynomial. In both cases, the input-to-hidden weights and hidden-layer biases can be constrained to arbitrarily small sets; if, in addition, ψ is locally analytic, a single universal bias suffices.
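The abstract's setting can be illustrated concretely: a single-hidden-layer network with a nonpolynomial activation can drive its error to a continuous target on a compact interval toward zero. The sketch below is an illustration only, not the paper's construction; the tanh activation, sin target, hyperparameters, and hand-coded gradient descent are all assumptions chosen for the demo.

```python
# Hedged illustration (not the paper's proof technique): fit a
# single-hidden-layer network with a nonpolynomial activation (tanh)
# to a continuous target on the compact set [-1, 1], using full-batch
# gradient descent on mean squared error. All names and hyperparameters
# here are illustrative choices, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    return np.sin(3.0 * x)  # a continuous function on [-1, 1]

X = np.linspace(-1.0, 1.0, 200)
Y = target(X)

H = 20                        # number of hidden units
W = rng.normal(size=H)        # input-to-hidden weights
b = rng.normal(size=H)        # hidden-layer biases
V = 0.1 * rng.normal(size=H)  # hidden-to-output weights

def mse(W, b, V):
    pred = np.tanh(np.outer(X, W) + b) @ V
    return float(np.mean((pred - Y) ** 2))

mse_init = mse(W, b, V)
lr = 0.05
for _ in range(5000):
    A = np.tanh(np.outer(X, W) + b)       # (200, H) hidden activations
    err = A @ V - Y                       # residuals on the sample grid
    gV = A.T @ err / len(X)               # gradient wrt V (factor 2 absorbed in lr)
    gA = np.outer(err, V) * (1.0 - A**2)  # backprop through tanh
    gW = X @ gA / len(X)
    gb = gA.mean(axis=0)
    V -= lr * gV
    W -= lr * gW
    b -= lr * gb

mse_final = mse(W, b, V)
```

After training, `mse_final` should be far below `mse_init`, consistent with the density result: with enough hidden units, a nonpolynomial activation suffices to approximate the target arbitrarily well on the compact set.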