Feedforward neural networks such as multilayer perceptrons (MLPs) and recurrent neural networks are widely used for pattern classification, nonlinear function approximation, density estimation, and time-series prediction. Performing these tasks accurately usually requires a large number of neurons, which makes MLPs less attractive for implementation on resource-constrained hardware platforms. This paper highlights the benefits of feedforward and recurrent forms of a compact neural architecture called the generalized neuron (GN), and demonstrates that the GN and the recurrent GN (RGN) perform well on classification, nonlinear function approximation, density estimation, and chaotic time-series prediction. Because it combines two aggregation functions with two activation functions, the GN copes well with the nonlinearities of complex problems. Particle swarm optimization (PSO) is proposed as the training algorithm for the GN and RGN. With only a small number of trainable parameters, the GN and RGN require little memory and computation, which makes them attractive choices for fast implementation on resource-constrained hardware platforms.
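To make the idea concrete, below is a minimal Python/NumPy sketch, not the authors' code. It assumes one common GN formulation from the literature: a summation (sigma) aggregation passed through a sigmoid and a product (pi) aggregation passed through a Gaussian, blended by a weight w, giving only 2n + 3 trainable parameters for n inputs. The exact aggregation, activation, and blending equations in the paper may differ, and the names (gn_forward, pso_train) and the XOR fitting example are purely illustrative.

    import numpy as np

    def gn_forward(params, x):
        """One assumed GN variant: sigma part -> sigmoid, pi part -> Gaussian,
        outputs blended by a weight w. Total parameters: 2n + 3."""
        n = x.shape[0]
        ws, bs = params[:n], params[n]                 # sigma-part weights, bias
        wp, bp = params[n + 1:2 * n + 1], params[2 * n + 1]  # pi-part weights, bias
        w = params[2 * n + 2]                          # blending weight
        s = ws @ x + bs
        o_sigma = 1.0 / (1.0 + np.exp(-s))             # sigmoidal activation
        p = np.prod(wp * x) * bp
        o_pi = np.exp(-p * p)                          # Gaussian activation
        return w * o_sigma + (1.0 - w) * o_pi

    def pso_train(loss, dim, n_particles=30, iters=200, seed=0):
        """Minimal global-best PSO with standard constriction-style constants."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # particle positions
        v = np.zeros((n_particles, dim))                # particle velocities
        pbest = x.copy()
        pbest_f = np.array([loss(p) for p in x])
        g = pbest[np.argmin(pbest_f)].copy()            # global best position
        w_in, c1, c2 = 0.729, 1.494, 1.494              # widely used PSO constants
        for _ in range(iters):
            r1 = rng.random((n_particles, dim))
            r2 = rng.random((n_particles, dim))
            v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            f = np.array([loss(p) for p in x])
            better = f < pbest_f                        # update personal bests
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[np.argmin(pbest_f)].copy()        # update global best
        return g, pbest_f.min()

    # Illustrative use: fit the GN to XOR (n = 2 inputs -> 2n + 3 = 7 parameters)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)
    mse = lambda p: np.mean([(gn_forward(p, xi) - yi) ** 2 for xi, yi in zip(X, y)])
    best_params, best_err = pso_train(mse, dim=2 * X.shape[1] + 3)
    print("trained MSE:", best_err)

The sketch illustrates the two points the abstract makes: the parameter count grows linearly in the number of inputs (here seven parameters for two inputs, versus the many weights of a comparable MLP), and PSO needs only loss evaluations, no gradients, so the same training loop serves the feedforward GN and, with a state feedback added to the forward pass, an RGN.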