Autoencoding ground motion data for visualisation

  • Authors:
  • Nikolaos Gianniotis, Carsten Riggelsen, Nicolas Kühn, Frank Scherbaum

  • Affiliations:
  • Institute of Earth and Environmental Science, University of Potsdam, Potsdam-Golm, Germany (all authors)

  • Venue:
  • ICANN'12: Proceedings of the 22nd International Conference on Artificial Neural Networks and Machine Learning, Part II
  • Year:
  • 2012

  • Keywords:
  • Visualization

Abstract

We present a new visualisation method for physical data, based on the autoencoder, that allows a transparent interpretation of the induced visualisation. The autoencoder is a neural network that compresses high-dimensional data into low-dimensional representations. It has a fan-in/fan-out architecture whose middle layer, referred to as the 'bottleneck', consists of a small number of neurons. As data propagate through the network, the bottleneck forces the autoencoder to reduce their dimensionality. Physical data are manifestations of physical models that express domain knowledge. Such knowledge should be reflected in the visualisation in order to help the analyst understand why the data are projected to their particular locations. In this work we endow the standard autoencoder with this capability by extending it with extra layers. We apply our approach to a dataset of ground motions and discuss how the visualisation reflects physical aspects.
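
The abstract gives no implementation details, but the baseline it builds on is the standard fan-in/fan-out autoencoder with a low-dimensional bottleneck. The sketch below is a minimal illustration of that baseline, not the authors' extended architecture: the layer sizes, the tanh activations, the JAX framework, and the synthetic stand-in for the ground-motion features are all assumptions made for the example.

```python
# Minimal sketch (not the paper's implementation) of a fan-in/fan-out
# autoencoder with a 2-neuron bottleneck used for visualisation.
import jax
import jax.numpy as jnp

def init_params(key, sizes):
    """Initialise weights/biases for consecutive layer sizes, e.g. [D, H, 2, H, D]."""
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = 0.1 * jax.random.normal(sub, (d_in, d_out))
        params.append((w, jnp.zeros(d_out)))
    return params

def forward(params, x):
    """Propagate data through the network: tanh on hidden layers, linear output."""
    h = x
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return h @ w + b

def encode(params, x, bottleneck_layer=2):
    """Activations of the 2-D bottleneck, i.e. the visualisation coordinates."""
    h = x
    for w, b in params[:bottleneck_layer]:
        h = jnp.tanh(h @ w + b)
    return h

def loss(params, x):
    """Reconstruction error: the autoencoder tries to reproduce its input."""
    return jnp.mean((forward(params, x) - x) ** 2)

@jax.jit
def update(params, x, lr=1e-2):
    """One gradient-descent step on the reconstruction loss."""
    grads = jax.grad(loss)(params, x)
    return [(w - lr * gw, b - lr * gb) for (w, b), (gw, gb) in zip(params, grads)]

if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    # Hypothetical stand-in for the ground-motion records: 500 samples, 20 features.
    data = jax.random.normal(key, (500, 20))
    params = init_params(key, sizes=[20, 10, 2, 10, 20])  # 2-neuron bottleneck
    for step in range(2000):
        params = update(params, data)
    coords = encode(params, data)  # 2-D projection used for visualisation
    print(coords.shape)            # (500, 2)
```

Training minimises the reconstruction error, and the two bottleneck activations serve as the 2-D coordinates of each sample in the visualisation; the paper's contribution, reflecting physical domain knowledge through additional layers, would be built on top of this skeleton.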