This paper studies sparse algorithms for training Random Weight Networks (RWNs) and their applications. The proposed algorithms consist of three principal steps: initialization of the network structure, simplification of the RWN structure based on sparse coding, and a relearning process with the retained nodes. A key feature of the algorithms is the sparse coding of hidden-layer neurons, which adds an initialization process that simplifies the network structure. In particular, the new algorithms can, to some extent, avoid over-fitting. As applications, the algorithms are used to diagnose faults in a switched reluctance motor (SRM) and to recognize human faces. Compared with the traditional back-propagation (BP) and RWN algorithms, the experimental results show that the proposed algorithms perform effectively in terms of accuracy and training time. These methods can also serve as support tools for practical SRM fault diagnosis and human face recognition.
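The three-step procedure described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the hidden weights, ISTA-based sparse coding of the output weights, the pruning threshold, and all function and parameter names are assumptions made for the sake of the example.

```python
import numpy as np

def train_sparse_rwn(X, y, n_hidden=100, l1_penalty=0.01, seed=0):
    """Hedged sketch of a sparse Random Weight Network (RWN).

    Steps mirroring the outline in the abstract:
      1. initialization: random input-to-hidden weights, fixed thereafter
      2. simplification: an L1-regularized (sparse-coding) fit of the
         output weights drives many hidden nodes' weights to zero
      3. relearning: ordinary least squares on the retained nodes
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random hidden weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden-layer outputs

    # Step 2: ISTA (iterative shrinkage-thresholding) for the L1 problem
    #   min_beta 0.5 * ||H beta - y||^2 + l1_penalty * ||beta||_1
    beta = np.zeros(n_hidden)
    step = 1.0 / np.linalg.norm(H, 2) ** 2            # 1 / Lipschitz constant
    for _ in range(500):
        grad = H.T @ (H @ beta - y)
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * l1_penalty, 0.0)

    keep = np.abs(beta) > 1e-6                        # surviving hidden nodes

    # Step 3: relearn output weights on the simplified structure
    beta_relearn, *_ = np.linalg.lstsq(H[:, keep], y, rcond=None)

    def predict(X_new):
        H_new = np.tanh(X_new @ W[:, keep] + b[keep])
        return H_new @ beta_relearn

    return predict, keep
```

Because the input-to-hidden weights stay random and fixed, only the (sparse) output layer is learned, which is what keeps training fast relative to BP while the pruning step curbs over-fitting.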