From synapse to psychology: emergence of a language, speech, and vision engine from bottom-up brain modeling

  • Authors:
  • Richard H. Granger; Andrew C. Felch

  • Affiliations:
  • University of California, Irvine; University of California, Irvine

  • Venue:
  • From synapse to psychology: emergence of a language, speech, and vision engine from bottom-up brain modeling
  • Year:
  • 2006

Abstract

After 50 years and billions of dollars of research, computers still cannot perform even trivial human tasks. Artificial intelligence has focused on producing highly accurate intelligence in formalized worlds, whereas children learn language from the sounds of speech and slowly gain expressive power over years of unconscious observation. The neocortex, which comprises 70% of the human brain, is responsible for most auditory, visual, and language processing, yet its architecture remains consistent across brain regions and even across mammalian species. The implication is that a particular set of algorithms is useful for processing in vastly different domains, and deriving these algorithms would provide great insight into the architecture computers need in order to approach real intelligence. To this end, the circuitry and architecture of the brain were studied and abstractions of them were modeled. The brain operates in the seemingly prohibitive computational environment of very sparse connectivity, where the probability of a connection between even nearby neurons is very low. To decide how these sparse connections should be assigned, several random connectivity patterns were studied, and a single pattern emerged as optimal across a range of tasks. Next, an analysis of similarity metrics for sparse binary neural network firing patterns was conducted, and a novel similarity metric was found to yield nearly perfect performance on a task where most other metrics behave no better than random guessing. To derive a learning rule, long-term potentiation (LTP) was augmented with LTP-Reversal, and comparisons with Hebbian plasticity were made on tasks designed to induce catastrophic forgetting. Characteristics of sleep also emerged: network performance degrades until LTP-Reversal is performed. These findings were used to derive a sequence processor with striking human psychological properties, capable of iterative and hierarchical clustering. The network was then approximated to make a one-million-neuron simulation computable, yielding a state-of-the-art unsupervised and reinforcement-based grammar learner. Finally, the grammar learner was abstracted further to operate on visual and auditory input, and examples in these domains are shown. Hardware issues are also considered for future acceleration.
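
The abstract names, but does not define, the novel similarity metric for sparse binary firing patterns. As a purely illustrative sketch of why the choice of metric matters in this regime (this is not the paper's metric), the snippet below compares a position-agreement (Hamming-style) similarity with an overlap-normalized (Jaccard) similarity on hypothetical sparse binary patterns. With only about 2% of units active, agreement counts are dominated by shared zeros, so related and unrelated patterns look nearly identical under Hamming similarity, while Jaccard separates them clearly. The pattern size and activity level are assumptions chosen for the example.

```python
import numpy as np

def random_sparse_pattern(n_units: int, n_active: int, rng: np.random.Generator) -> np.ndarray:
    """Binary firing pattern with a small, fixed number of active units."""
    pattern = np.zeros(n_units, dtype=np.uint8)
    pattern[rng.choice(n_units, size=n_active, replace=False)] = 1
    return pattern

def hamming_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of positions that agree (counts shared zeros)."""
    return float(np.mean(a == b))

def jaccard_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Overlap of active units divided by their union (ignores shared zeros)."""
    intersection = np.sum(a & b)
    union = np.sum(a | b)
    return float(intersection / union) if union else 1.0

rng = np.random.default_rng(0)
n_units, n_active = 1000, 20  # hypothetical: ~2% activity, illustrative only

base = random_sparse_pattern(n_units, n_active, rng)

# A "related" pattern sharing half of the base pattern's active units,
# and an unrelated pattern drawn independently.
related = base.copy()
dropped = rng.choice(np.flatnonzero(base), size=n_active // 2, replace=False)
related[dropped] = 0
related[rng.choice(np.flatnonzero(base == 0), size=n_active // 2, replace=False)] = 1
unrelated = random_sparse_pattern(n_units, n_active, rng)

for name, other in [("related", related), ("unrelated", unrelated)]:
    print(f"{name:9s}  Hamming={hamming_similarity(base, other):.3f}  "
          f"Jaccard={jaccard_similarity(base, other):.3f}")
```

On this example the Hamming scores for the related and unrelated patterns differ only in the second decimal place, whereas the Jaccard scores differ by roughly a factor of ten, which is the kind of gap that motivates studying similarity metrics tailored to sparse binary codes.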