For a particular type of elementary function, stochastic discrimination is shown to have an analytic limit function. Classifications can be performed directly by this limit function instead of by a sampling procedure. The limit function has an interpretation in terms of fields that originate from the training examples of a classification problem. Fields depend on the global configuration of the training points. The classification of a point in input space is obtained by summing the contributions of all fields. Two modifications of the limit function are proposed. First, for nonlinear problems such as high-dimensional parity problems, fields can be quantized; this yields classification functions with perfect generalization on high-dimensional parity problems. Second, fields can be given adaptable amplitudes. The classification corresponding to a limit function serves as an initialization; the amplitudes are then adapted until an error function for the test set reaches a minimum. It is illustrated that this improves the performance of stochastic discrimination. Owing to the nature of the fields, generalization improves even when the amplitude of every training example is adaptable.
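The core idea of summing field contributions from all training examples can be sketched as follows. This is a minimal illustration, not the paper's actual limit function: the abstract does not specify the field shape, so a Gaussian profile and per-example amplitudes are assumed here purely for illustration.

```python
import numpy as np

def field(dist, width=1.0):
    # Hypothetical field profile (Gaussian decay with distance);
    # the true limit-function fields are derived analytically in the paper.
    return np.exp(-(dist / width) ** 2)

def classify(q, X, y, amplitudes=None, width=1.0):
    """Classify query point q by summing signed field contributions.

    X: (n, d) training points; y: labels in {-1, +1};
    amplitudes: optional per-example weights (the paper's second
    modification makes these adaptable; here they default to 1).
    """
    if amplitudes is None:
        amplitudes = np.ones(len(X))
    dists = np.linalg.norm(X - q, axis=1)
    total = np.sum(amplitudes * y * field(dists, width))
    return 1 if total >= 0 else -1

# Tiny 2-D example: two clusters of training points.
X = np.array([[0.0, 0.0], [0.2, 0.1], [2.0, 2.0], [2.1, 1.9]])
y = np.array([1, 1, -1, -1])
print(classify(np.array([0.1, 0.0]), X, y))  # near the +1 cluster -> 1
print(classify(np.array([2.0, 2.0]), X, y))  # near the -1 cluster -> -1
```

With adaptable amplitudes, one would initialize `amplitudes` to ones (recovering the plain limit-function classification) and then adjust them to minimize an error function, as described above.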