Learning environmental biases is rational behavior: by exploiting prior odds, Bayesian networks rapidly became a benchmark in machine learning, and a growing body of evidence suggests that humans also use base-rate information. Unsupervised connectionist networks are used in computer science for machine learning and in psychology to model human cognition, but it is unclear whether they are sensitive to prior odds. In this paper, we show that hard competitive learners are unable to use environmental biases, whereas recurrent associative memories exploit the frequencies of exemplars and of categories independently. We therefore conclude that recurrent associative memories are more useful than hard competitive networks for modeling human cognition and have greater potential for machine learning.
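To make the contrast concrete, below is a minimal sketch (an assumed setup, not the paper's actual simulations): a hard winner-take-all competitive learner and a Hebbian recurrent autoassociative memory are trained on the same two-category environment with a 9:1 base rate. The prototypes, base rates, learning rate, and noise level are illustrative choices, not values taken from the paper.

```python
# Minimal sketch (assumed setup, not the paper's simulations): contrast a hard
# winner-take-all competitive learner with a Hebbian recurrent autoassociator
# in a two-category environment where category A is nine times more frequent.
import numpy as np

rng = np.random.default_rng(0)

proto = {"A": np.array([1.0, 1.0, -1.0, -1.0]),   # orthogonal category prototypes
         "B": np.array([1.0, -1.0, 1.0, -1.0])}
base_rates = [0.9, 0.1]                            # illustrative 9:1 environmental bias

def sample():
    cat = rng.choice(["A", "B"], p=base_rates)
    return proto[cat] + 0.2 * rng.standard_normal(4)

# Hard competitive learning: only the closest unit moves toward each input.
# (Units start near the prototypes to sidestep the dead-unit problem.)
W = np.stack([proto["A"], proto["B"]]) + 0.1 * rng.standard_normal((2, 4))
for _ in range(5000):
    x = sample()
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    W[winner] += 0.05 * (x - W[winner])            # each unit converges to its cluster
                                                   # mean; the 9:1 base rate leaves no trace

# Recurrent associative memory: Hebbian correlations accumulate over
# presentations, so the frequent category dominates the weight matrix.
M = np.zeros((4, 4))
for _ in range(5000):
    x = sample()
    M += np.outer(x, x) / 5000

probe = 0.5 * (proto["A"] + proto["B"])            # ambiguous: equally similar to A and B
y = probe.copy()
for _ in range(20):                                # iterate the recurrent dynamics
    y = np.tanh(M @ y)

print("competitive prototypes:\n", W.round(2))     # ~cluster means only
print("recall of ambiguous probe:", y.round(2))    # settles on the frequent pattern A
```

Under these assumptions the competitive units end up at the two cluster centroids regardless of how often each category was presented, while the autoassociator's recall of an ambiguous probe settles on the more frequent category, which is the qualitative contrast described above.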