The process neural network (PrNN) is an ANN model suited to learning problems whose inputs are time-varying signals; its elementary unit is the process neuron (PN), an emerging neuron model. Although the process neuron differs essentially from traditional neurons, the two are related: the former can be approximated by the latter to any desired precision. First, the PN model and some PrNNs are briefly introduced. Then, two PN approximation theorems are stated and proved in detail; each yields an approximating model for the PN, namely the time-domain feature expansion model and the orthogonal decomposition feature expansion model. Corollaries for PrNNs based on these two theorems are also given. Thereafter, simulation studies are performed on several simulated signal sets and one real dataset; the results show that the PrNN effectively suppresses noise in the input signals and generalizes well. Finally, some open problems concerning PrNNs are discussed and directions for further research are suggested.
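The orthogonal decomposition feature expansion can be sketched as follows. This is a hypothetical illustration, not the paper's code: it assumes the common PN formulation y = f(∫₀ᵀ w(t)·x(t) dt − θ), with an arbitrary cosine basis, input signal, weight function, and threshold chosen only for demonstration. Projecting both x(t) and w(t) onto a truncated orthonormal basis turns the integral into a dot product of coefficient vectors, so the PN reduces to an ordinary neuron acting on a finite feature vector.

```python
import numpy as np

# Hypothetical sketch of a single process neuron (PN), assuming the
# formulation  y = f( integral_0^T w(t) x(t) dt - theta ),
# where x(t) is the input signal and w(t) the weight function.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

T = 1.0
t = np.linspace(0.0, T, 2000)
dt = t[1] - t[0]

def basis(k):
    """Orthonormal cosine basis on [0, T] (illustrative choice)."""
    if k == 0:
        return np.ones_like(t) / np.sqrt(T)
    return np.sqrt(2.0 / T) * np.cos(k * np.pi * t / T)

K = 12                                        # truncation order
B = np.vstack([basis(k) for k in range(K)])   # shape (K, len(t))

# Example signal input and weight function (arbitrary, for illustration).
x = np.sin(2.0 * np.pi * t) + 0.1
w = np.exp(-t)
theta = 0.05

# "Exact" PN output via a Riemann sum over the fine time grid.
y_exact = sigmoid(np.sum(w * x) * dt - theta)

# Approximate PN output: project both functions onto the truncated basis;
# by Parseval's relation, sum_k <w,b_k><x,b_k> approximates the integral,
# so the PN becomes an ordinary neuron on the K coefficients of x(t).
cw = B @ w * dt                               # coefficients <w, b_k>
cx = B @ x * dt                               # coefficients <x, b_k>
y_approx = sigmoid(cw @ cx - theta)

print(y_exact, y_approx)
```

With smooth signals, even a small truncation order K makes the two outputs nearly identical, which is the practical content of the approximation theorems: a PrNN layer over signals can be emulated by a conventional layer over expansion coefficients.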