Learning correlations using the mixture-of-subsets model

  • Authors:
  • Manas Somaiya;Christopher Jermaine;Sanjay Ranka

  • Affiliations:
  • University of Florida, Gainesville, FL (all authors)

  • Venue:
  • ACM Transactions on Knowledge Discovery from Data (TKDD)
  • Year:
  • 2008

Abstract

Using a mixture of random variables to model data is a tried-and-tested method common in data mining, machine learning, and statistics. Mixture modeling often makes it possible to accurately model even complex, multimodal data with very simple components. However, the classical mixture model assumes that each data point is generated by a single component of the model. Many datasets can be modeled more faithfully if this restriction is dropped. We propose a probabilistic framework, the mixture-of-subsets (MOS) model, that makes two fundamental changes to the classical mixture model. First, we allow a data point to be generated by a set of components rather than a single component. Second, we limit the number of data attributes that each component can influence. We also propose an EM framework for learning the MOS model from a dataset, and evaluate it experimentally on real, high-dimensional datasets. Our results show that the MOS model learned from the data accurately represents the underlying nature of the data.
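
The two changes described above can be illustrated with a toy generative sketch. The code below is an assumption-laden illustration, not the authors' actual formulation: the component structure, activation probabilities, and per-attribute Bernoulli parameters are all hypothetical, chosen only to show how a data point can be generated by a *set* of components, each of which influences only a few attributes.

```python
import random

random.seed(0)

NUM_ATTRS = 6

# Hypothetical components (illustrative only): each one influences just a
# small subset of attribute indices, mapping each index to the probability
# of setting that attribute, and has its own activation probability.
components = [
    {"attrs": {0: 0.9, 1: 0.8}, "activation": 0.7},  # influences attrs 0, 1
    {"attrs": {2: 0.9, 3: 0.9}, "activation": 0.5},  # influences attrs 2, 3
    {"attrs": {4: 0.8, 5: 0.7}, "activation": 0.4},  # influences attrs 4, 5
]

def generate_point():
    """Generate one binary data point from a set of active components."""
    # Unlike a classical mixture, where exactly one component produces the
    # point, any subset of components may be active simultaneously.
    point = [0] * NUM_ATTRS
    for comp in components:
        if random.random() < comp["activation"]:      # component joins the set
            for attr, prob in comp["attrs"].items():  # limited attribute scope
                if random.random() < prob:
                    point[attr] = 1
    return point

data = [generate_point() for _ in range(5)]
```

Each generated point is a length-6 binary vector whose correlations arise from which components happened to be active together; an EM-style learner, as proposed in the article, would attempt to recover the component subsets and their parameters from such data.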