Avoiding the dangers of averaging across subjects when using multidimensional scaling

  • Authors:
  • Michael D. Lee; Kenneth J. Pope

  • Affiliations:
  • Department of Psychology, University of Adelaide, SA 5005, Australia; School of Informatics and Engineering, Flinders University of South Australia, Australia

  • Venue:
  • Journal of Mathematical Psychology
  • Year:
  • 2003


Abstract

Ashby, Maddox, and Lee (Psychological Science, 5(3), 144) argue that it can be inappropriate to fit multidimensional scaling (MDS) models to similarity or dissimilarity data that have been averaged across subjects. They demonstrate that the averaging process tends to make dissimilarity data more amenable to metric representations, and they report a simulation study showing that noisy data generated under one distance metric may, once averaged, be better fit using a different distance metric. This paper argues that a Bayesian measure of MDS models has the potential to address these difficulties, because it takes into account data fit, the number of dimensions used by an MDS representation, and the precision of the data. A method of analysis based on the Bayesian measure is demonstrated through two simulation studies with accompanying theoretical analysis. In the first study, the Bayesian analysis is shown to reject those MDS models that fit averaged data better under the incorrect distance metric, while accepting those that use the correct metric. In the second study, different groups of simulated 'subjects' are assumed to use different underlying configurations. Here the Bayesian analysis rejects MDS representations when a significant proportion of subjects use different configurations, or when their dissimilarity judgments contain significant amounts of noise. It is concluded that the Bayesian analysis provides a simple and principled means of systematically accepting or rejecting MDS models derived from averaged data.
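The averaging effect described above can be illustrated with a small simulation. The sketch below is a hypothetical construction (the point configuration, noise level, and subject count are illustrative choices, not the authors' actual study design): it generates noisy city-block dissimilarities for simulated subjects, then counts triangle-inequality violations in individual versus averaged data, showing how averaging makes dissimilarity data look more metric.

```python
import numpy as np

def dissimilarities(points, r):
    """Pairwise Minkowski-r distances between the rows of `points`."""
    diffs = np.abs(points[:, None, :] - points[None, :, :])
    return (diffs ** r).sum(axis=-1) ** (1.0 / r)

def triangle_violations(d):
    """Count ordered triples (i, j, k) with d[i,k] > d[i,j] + d[j,k]."""
    n = len(d)
    return sum(
        1
        for i in range(n) for j in range(n) for k in range(n)
        if len({i, j, k}) == 3 and d[i, k] > d[i, j] + d[j, k] + 1e-9
    )

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(8, 2))   # hypothetical 2-D configuration
true_d = dissimilarities(points, r=1)          # city-block (r = 1) metric

# Each simulated subject reports the true dissimilarities plus symmetric noise.
subjects = []
for _ in range(30):
    noise = rng.normal(0.0, 0.4, true_d.shape)
    s = np.abs(true_d + (noise + noise.T) / 2.0)
    np.fill_diagonal(s, 0.0)
    subjects.append(s)

averaged = np.mean(subjects, axis=0)

# Averaging shrinks the noise, so the averaged matrix typically violates
# the triangle inequality far less often than any single subject's data.
indiv_violations = np.mean([triangle_violations(s) for s in subjects])
avg_violations = triangle_violations(averaged)
```

Comparing `indiv_violations` with `avg_violations` makes the point concrete: the averaged matrix is closer to satisfying the metric axioms, which is exactly why a metric MDS model can appear to fit averaged data well even when it is inappropriate for the individual subjects.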