IScIDE'11: Proceedings of the Second Sino-Foreign-Interchange Conference on Intelligent Science and Intelligent Data Engineering
The multiscale properties of the receptive fields in the human visual cortex have motivated research on wavelet neural networks (WNNs). Findings in neurophysiology indicate that the human visual cortex contains specialized areas that respond to particular orientations; in other words, the receptive field of the visual cortex exhibits multiresolution properties in direction, as well as in localization and scale. Motivated by these facts, we present a three-layer feed-forward neural network (FNN) that employs the ridgelet as the activation function in the hidden layer. To achieve rapid learning on high-dimensional samples, we propose an efficient linear learning algorithm inspired by the classical kernel smoothing method, whose computational complexity is linear in both the number and the dimension of the samples. At the cost of a slight loss in accuracy, the network can thus be trained rapidly. Simulation experiments on function approximation are conducted, and several commonly used regression methods are evaluated under the same conditions for comparison. The results show that the proposed linear ridgelet network can overcome the curse of dimensionality in the training of FNNs and outperforms its counterparts in high dimensions, especially when the target function contains spatial inhomogeneities.
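To make the architecture concrete, the following is a minimal sketch of a three-layer network with ridgelet hidden units psi((u·x − b)/a), where u is a ridge direction, b a translation, and a a scale. The Mexican-hat mother wavelet, the random placement of the ridge parameters, and the least-squares fit of the output weights are all illustrative assumptions: they give one common way to obtain a training step that is linear in the hidden weights, but they are not the paper's specific kernel-smoothing-based learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def mexican_hat(t):
    # Mexican-hat mother wavelet (second derivative of a Gaussian);
    # one common choice of ridgelet profile (assumption, not from the paper).
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

class LinearRidgeletNet:
    """Three-layer FNN with ridgelet hidden units psi((u.x - b) / a).

    Ridge directions u, translations b, and scales a are fixed at random
    (a simplifying assumption); only the linear output weights are fit,
    so training reduces to a single least-squares solve.
    """
    def __init__(self, n_hidden, dim):
        self.U = rng.standard_normal((n_hidden, dim))
        self.U /= np.linalg.norm(self.U, axis=1, keepdims=True)  # unit ridge directions
        self.b = rng.uniform(-2.0, 2.0, n_hidden)                # translations
        self.a = rng.uniform(0.5, 2.0, n_hidden)                 # scales
        self.w = None                                            # output weights

    def _hidden(self, X):
        # Each hidden unit sees the 1-D projection u.x, then applies the wavelet.
        return mexican_hat((X @ self.U.T - self.b) / self.a)

    def fit(self, X, y):
        H = self._hidden(X)
        self.w, *_ = np.linalg.lstsq(H, y, rcond=None)  # linear learning step
        return self

    def predict(self, X):
        return self._hidden(X) @ self.w

# Usage: approximate f(x) = sin(3*x0) + x1 on [0, 1]^2.
X = rng.uniform(0.0, 1.0, (500, 2))
y = np.sin(3.0 * X[:, 0]) + X[:, 1]
net = LinearRidgeletNet(n_hidden=50, dim=2).fit(X, y)
```

Because the hidden-layer parameters are fixed before the solve, the fit touches each of the N samples once to build the N-by-H feature matrix, which is what keeps the cost proportional to the number and dimension of the samples.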