Simultaneous Lp-approximation order for neural networks. Neural Networks.
Brief paper: An adaptive optimization scheme with satisfactory transient performance. Automatica (Journal of IFAC).
The approximation operators with sigmoidal functions. Computers & Mathematics with Applications.
Large scale nonlinear control system fine-tuning through learning. IEEE Transactions on Neural Networks.
Neural network control of unknown nonlinear systems with efficient transient performance. ICANN '09 Proceedings of the 19th International Conference on Artificial Neural Networks: Part I.
Multivariate sigmoidal neural network approximation. Neural Networks.
The errors of simultaneous approximation of multivariate functions by neural networks. Computers & Mathematics with Applications.
Essential rate for approximation by spherical neural networks. Neural Networks.
Pointwise approximation for neural networks. ISNN'05 Proceedings of the Second International Conference on Advances in Neural Networks - Volume Part I.
Approximation bound of mixture networks in L_w^p spaces. ISNN'06 Proceedings of the Third International Conference on Advances in Neural Networks - Volume Part I.
The essential approximation order for neural networks with trigonometric hidden layer units. ISNN'06 Proceedings of the Third International Conference on Advances in Neural Networks - Volume Part I.
Approximation bound for fuzzy-neural networks with bell membership function. FSKD'05 Proceedings of the Second International Conference on Fuzzy Systems and Knowledge Discovery - Volume Part I.
The errors of approximation for feedforward neural networks in the Lp metric. Mathematical and Computer Modelling: An International Journal.
We consider the approximation of smooth multivariate functions in C(R^d) by feedforward neural networks with a single hidden layer of nonlinear ridge functions. Under certain assumptions on the smoothness of the functions being approximated and on the activation functions of the network, we present upper bounds on the degree of approximation achieved over the domain R^d, thereby generalizing results previously available only for compact domains. We extend these approximation results to the so-called mixture-of-experts architecture, which has received considerable attention in recent years, showing that the same type of approximation bound can be achieved.
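The architecture the abstract describes can be sketched numerically: a single hidden layer of ridge functions sigma(a·x + b) combined linearly to approximate a smooth multivariate target. The code below is an illustrative sketch only, not the paper's construction or its error bound; the random-inner-weights-plus-least-squares fitting scheme, the target function, and all names and parameters are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    # Sigmoidal activation applied to each ridge argument a_i . x + b_i.
    return 1.0 / (1.0 + np.exp(-t))

def ridge_network_fit(X, y, n_hidden=80):
    """Fix random inner weights (ridge directions a_i and shifts b_i) and
    solve for the outer coefficients c_i by linear least squares.
    Hypothetical fitting scheme for illustration only."""
    d = X.shape[1]
    A = rng.normal(size=(n_hidden, d))   # ridge directions a_i
    b = rng.normal(size=n_hidden)        # shifts b_i
    H = sigmoid(X @ A.T + b)             # hidden-layer activations, shape (n, n_hidden)
    c, *_ = np.linalg.lstsq(H, y, rcond=None)
    return A, b, c

def ridge_network_eval(X, A, b, c):
    # Network output: sum_i c_i * sigma(a_i . x + b_i).
    return sigmoid(X @ A.T + b) @ c

# Smooth bivariate target f(x) = sin(x1) * exp(-x2^2) on [-2, 2]^2.
X = rng.uniform(-2, 2, size=(400, 2))
y = np.sin(X[:, 0]) * np.exp(-X[:, 1] ** 2)

A, b, c = ridge_network_fit(X, y)
err = np.max(np.abs(ridge_network_eval(X, A, b, c) - y))
print(err)
```

With enough hidden units the fitted network tracks the smooth target closely on the sampled points, which is the qualitative behavior the degree-of-approximation bounds quantify.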