A brain needs to detect environmental changes and to quickly learn the internal representations required in a new environment. This paper presents a theoretical model of cortical representation learning that can adapt to dynamic environments, incorporating results from previous studies on the functional role of acetylcholine (ACh). We adopt probabilistic principal component analysis (PPCA) as a functional model of cortical representation learning and present an online learning method for PPCA based on Bayesian inference, including a heuristic criterion for model selection. Our approach is examined in two simulations, with synthesized and realistic data sets, in which the model re-learns new representation bases after the environment changes. Our model suggests that higher-level recognition may regulate cortical ACh release at lower levels, and that the ACh level alters the learning dynamics of a local circuit so that appropriate representations are continuously acquired in a dynamic environment.
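As background, the PPCA model underlying the abstract can be sketched with the classical batch EM updates of Tipping and Bishop. The following is an illustrative stand-in, not the paper's online Bayesian algorithm or its ACh-modulated learning rule; the function name and all parameters are placeholders:

```python
import numpy as np

def ppca_em(X, q, n_iter=100, seed=0):
    """Fit probabilistic PCA, x = W z + mu + noise, by batch EM.

    X : (N, d) data matrix; q : latent dimensionality.
    Returns the loading matrix W (d, q), mean mu (d,), and
    isotropic noise variance sigma^2.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu
    W = rng.standard_normal((d, q))
    sigma2 = 1.0
    for _ in range(n_iter):
        # E-step: posterior moments of the latent variables z.
        M = W.T @ W + sigma2 * np.eye(q)
        Minv = np.linalg.inv(M)
        Ez = Xc @ W @ Minv                    # E[z_n], shape (N, q)
        Ezz = N * sigma2 * Minv + Ez.T @ Ez   # sum_n E[z_n z_n^T]
        # M-step: update loadings and noise variance.
        W_new = Xc.T @ Ez @ np.linalg.inv(Ezz)
        sigma2 = (np.sum(Xc**2)
                  - 2.0 * np.sum(Ez * (Xc @ W_new))
                  + np.trace(Ezz @ W_new.T @ W_new)) / (N * d)
        W = W_new
    return W, mu, sigma2
```

An online, Bayesian variant of these updates (with a criterion for choosing q) is what the paper develops; the batch version above only shows the generative model being fit.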