The reconstruction of missing or unreliable parts of an image is one of the basic problems in image processing; for example, a number of methods exist for generating texture from a small sample. This paper presents a method that 'bottlenecks' an image-processing feedforward neural network so that only some basic traits of the image are preserved. These basic traits are then used to generalize the image, filtering out any unusual parts. The ability of neural networks and several other learning machines to generalize rests on the premise that the generalizing function is smooth. Consequently, to detect advanced patterns that exhibit complex traits such as repetitiveness, these machines are sometimes trained not on raw data but on transforms of the patterns, such as the Fast Fourier Transform. This paper shows that a simple feedforward neural network with the described 'bottleneck' architecture, and without any pre-processing of the training data, can correctly predict a stochastically repetitive pattern in a raster image.
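The bottleneck idea can be illustrated as an autoencoder-style network: a hidden layer far narrower than the input forces the network to keep only the basic traits of the data, so reconstructing a patch through the bottleneck smooths out unusual parts. The sketch below is a minimal illustration, not the paper's exact setup: the layer sizes, the 1-D noisy periodic signal standing in for rows of a raster image, and plain batch gradient descent are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Training data: 1-D patches of a noisy periodic signal, standing in for
# rows of a stochastically repetitive raster image (an assumption for
# illustration; the paper works with 2-D images).
n_in, n_hidden = 16, 2          # bottleneck: 16 inputs -> 2 hidden units
t = np.arange(n_in)
X = np.stack([np.sin(2 * np.pi * (t + rng.uniform(0, 8)) / 8)
              + 0.05 * rng.standard_normal(n_in)
              for _ in range(200)])

# Encoder and decoder weights of the bottleneck network.
W1 = 0.5 * rng.standard_normal((n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.5 * rng.standard_normal((n_hidden, n_in))
b2 = np.zeros(n_in)

def reconstruct(P):
    """Push a patch through the bottleneck and back out."""
    return sigmoid(P @ W1 + b1) @ W2 + b2

mse_before = np.mean((reconstruct(X) - X) ** 2)

lr = 0.1
for _ in range(3000):           # plain batch gradient descent on MSE
    H = sigmoid(X @ W1 + b1)    # bottleneck activations
    err = (H @ W2 + b2) - X     # reconstruction error
    dH = (err @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ err) / len(X)
    b2 -= lr * err.mean(axis=0)
    W1 -= lr * (X.T @ dH) / len(X)
    b1 -= lr * dH.mean(axis=0)

mse_after = np.mean((reconstruct(X) - X) ** 2)

# A patch with an "unusual part" (a spike) still maps through the
# narrow code, so its reconstruction follows the learned pattern.
corrupted = np.sin(2 * np.pi * t / 8)
corrupted[5:8] = 1.5
recon = reconstruct(corrupted)
```

Because every patch must pass through the two-unit code, the network can only represent the regularities shared across the training set, which is what lets it filter out anomalies rather than memorize them.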