Unsupervised multimodal processing

  • Authors:
  • Abel Nyamapfene; Khurshid Ahmad

  • Affiliations:
  • School of Engineering, Computer Science and Mathematics, University of Exeter, Exeter, United Kingdom; Department of Computer Science, O'Reilly Institute, Trinity College, Dublin, Ireland

  • Venue:
  • AIAP'07: Proceedings of the 25th IASTED International Multi-Conference: Artificial Intelligence and Applications
  • Year:
  • 2007

Abstract

We present two algorithms for unsupervised multimodal processing. Our first proposal, the single-pass Hebbian-linked self-organising map network, significantly reduces the training time of Hebbian-linked self-organising maps by computing the weights of the links associating the separate modal maps in a single epoch. Our second proposal, based on the counterpropagation network algorithm, implements multimodal processing on a single self-organising map, thereby eliminating the network complexity associated with Hebbian-linked self-organising maps. When assessed on two bimodal datasets, an audio-acoustic speech utterance dataset and a phonological-semantics child utterance dataset, both approaches achieve shorter computation times and lower crossmodal mean squared errors than traditional Hebbian-linked self-organising maps. In addition, the modified counterpropagation network achieves higher crossmodal classification rates than either Hebbian-linked self-organising map approach.
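
To make the single-pass idea concrete, the sketch below shows one plausible reading of it: two self-organising maps are trained independently, one per modality, and the Hebbian link weights between them are then obtained in a single epoch by counting best-matching-unit co-activations over the paired training data. All function names, map sizes, and learning parameters here are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of a Hebbian-linked SOM pair with a single-pass (one-epoch)
    # computation of the crossmodal link weights. Illustrative only: map sizes,
    # learning schedule, and the co-activation counting rule are assumptions.
    import numpy as np

    def train_som(data, rows=6, cols=6, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        """Train a rectangular SOM; return a codebook of shape (rows*cols, dim)."""
        rng = np.random.default_rng(seed)
        codebook = rng.normal(size=(rows * cols, data.shape[1]))
        grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
        n_steps = epochs * len(data)
        step = 0
        for _ in range(epochs):
            for x in rng.permutation(data):
                lr = lr0 * (1 - step / n_steps)          # decaying learning rate
                sigma = sigma0 * (1 - step / n_steps) + 1e-3  # shrinking neighbourhood
                bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))
                h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
                codebook += lr * h[:, None] * (x - codebook)
                step += 1
        return codebook

    def single_pass_hebbian_links(map_a, map_b, data_a, data_b):
        """Compute crossmodal link weights in a single epoch by counting
        best-matching-unit co-activations over the paired data, then
        row-normalising the counts."""
        links = np.zeros((len(map_a), len(map_b)))
        for xa, xb in zip(data_a, data_b):
            ia = np.argmin(((map_a - xa) ** 2).sum(axis=1))
            ib = np.argmin(((map_b - xb) ** 2).sum(axis=1))
            links[ia, ib] += 1.0          # Hebbian co-activation count
        links /= links.sum(axis=1, keepdims=True) + 1e-12
        return links

    def crossmodal_recall(x_a, map_a, map_b, links):
        """Return the modality-B prototype reached through the strongest
        Hebbian link from the modality-A best-matching unit."""
        ia = np.argmin(((map_a - x_a) ** 2).sum(axis=1))
        return map_b[np.argmax(links[ia])]

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # Toy paired bimodal data: modality B is a noisy linear image of modality A.
        A = rng.normal(size=(300, 4))
        B = A @ rng.normal(size=(4, 3)) + 0.1 * rng.normal(size=(300, 3))
        som_a, som_b = train_som(A), train_som(B, seed=1)
        links = single_pass_hebbian_links(som_a, som_b, A, B)
        recon = np.array([crossmodal_recall(x, som_a, som_b, links) for x in A])
        print("crossmodal MSE:", float(((recon - B) ** 2).mean()))

Row-normalising the co-activation counts turns each row into an estimate of how strongly a unit in one modal map predicts units in the other, which is one common way such links are used for crossmodal recall and for the kind of crossmodal mean squared error reported in the abstract.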