Robust adaptive control
Weighted Means in Stochastic Approximation of Minima
SIAM Journal on Control and Optimization
Nonlinear and Adaptive Control Design
Gradient Convergence in Gradient Methods with Errors
SIAM Journal on Optimization
Convex incremental extreme learning machine
Neurocomputing
Approximation bounds for smooth functions in C(R^d) by neural and mixture networks
IEEE Transactions on Neural Networks
Universal approximation using incremental constructive feedforward networks with random hidden nodes
IEEE Transactions on Neural Networks
Neural Network Control of Unknown Nonlinear Systems with Efficient Transient Performance
ICANN '09 Proceedings of the 19th International Conference on Artificial Neural Networks: Part I
IEEE Transactions on Neural Networks
Multi-robot three-dimensional coverage of unknown areas
International Journal of Robotics Research
Despite continuous advances in the fields of intelligent control and computing, the design and deployment of efficient large-scale nonlinear control systems (LNCSs) requires tedious fine-tuning of the LNCS parameters before and during actual system operation. In the majority of LNCSs, this fine-tuning is performed by experienced personnel based on field observations, via experimentation with different combinations of controller parameters and without a systematic approach. Existing adaptive/neural/fuzzy control methodologies cannot be used to develop a systematic, automated fine-tuning procedure for general LNCSs because of the strict assumptions they impose on the controlled system dynamics; adaptive optimization methodologies, on the other hand, fail to guarantee efficient and safe performance during the fine-tuning process, mainly because they rely on random perturbations. In this paper, we introduce and analyze, both by means of mathematical arguments and simulation experiments, a new learning/adaptive algorithm that provides convergent, efficient, and safe fine-tuning of general LNCSs. The proposed algorithm combines two algorithms previously proposed by Kosmatopoulos et al. (2007, 2008) with incremental extreme learning machine neural networks (I-ELM-NNs). Among the attractive properties of the proposed algorithm is that it significantly outperforms the algorithms of Kosmatopoulos et al. as well as other existing adaptive optimization algorithms. Moreover, unlike the algorithms of Kosmatopoulos et al., the proposed algorithm operates efficiently even when the exogenous system inputs (e.g., disturbances, commands, demand) are unbounded signals.
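For readers unfamiliar with the I-ELM-NN component mentioned above, the following is a minimal illustrative sketch of the standard incremental extreme learning machine (per Huang et al., cited in the reference list): hidden nodes with random input weights are added one at a time, and each node's output weight is fit analytically to the current residual error. This is a generic I-ELM sketch, not the authors' fine-tuning algorithm; the function names (`ielm_fit`, `ielm_predict`) are hypothetical.

```python
import numpy as np

def ielm_fit(X, y, n_nodes=50, seed=None):
    """Generic I-ELM sketch: grow a single-hidden-layer network by
    adding random sigmoid nodes one at a time, choosing each output
    weight to minimize the remaining residual error."""
    rng = np.random.default_rng(seed)
    e = np.asarray(y, dtype=float).copy()     # current residual error
    nodes = []                                # list of (w, b, beta)
    for _ in range(n_nodes):
        w = rng.uniform(-1.0, 1.0, X.shape[1])      # random input weights
        b = rng.uniform(-1.0, 1.0)                  # random bias
        h = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # node output on data
        beta = (e @ h) / (h @ h)                    # least-squares output weight
        e = e - beta * h                            # residual shrinks (or stays)
        nodes.append((w, b, beta))
    return nodes

def ielm_predict(nodes, X):
    """Sum the contributions of all hidden nodes."""
    out = np.zeros(X.shape[0])
    for w, b, beta in nodes:
        out += beta / (1.0 + np.exp(-(X @ w + b)))
    return out
```

Because each output weight is the exact least-squares fit of the new node to the residual, the training error is non-increasing as nodes are added, which is the property the incremental-constructive analysis of I-ELM relies on.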