Time Series Learning With Probabilistic Network Composites

  • Authors:
  • William H Hsu

  • Affiliations:
  • -

  • Venue:
  • -
  • Year:
  • 1998

Abstract

The purpose of this research is to extend the theory of uncertain reasoning over time through integrated, multi-strategy learning. Its focus is on decomposable concept learning problems for the classification of spatiotemporal sequences. Systematic methods of task decomposition using attribute-driven methods, especially attribute partitioning, are investigated. This leads to a novel and important type of unsupervised learning in which the feature construction (or extraction) step is modified to account for multiple sources of data and to search systematically for embedded temporal patterns. This modified technique is combined with traditional cluster definition methods to provide an effective mechanism for decomposing time series learning problems. The decomposition process interacts with model selection from a collection of probabilistic models, such as temporal artificial neural networks and temporal Bayesian networks. Models are chosen using a new quantitative (metric-based) approach that estimates the expected performance of a learning architecture, algorithm, and mixture model on a newly defined subproblem. By mapping subproblems to customized configurations of probabilistic networks for time series learning, a hierarchical, supervised learning system with enhanced generalization quality can be built automatically. The system can improve data fusion capability (overall localization accuracy and precision), classification accuracy, and network complexity on a variety of decomposable time series learning problems. Experimental evaluation indicates potential advances in large-scale, applied time series analysis, especially prediction and monitoring of complex processes. The research reported in this dissertation contributes to the theoretical understanding of so-called wrapper systems for high-level parameter adjustment in inductive learning.
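The decompose-then-combine idea in the abstract can be illustrated with a minimal, hypothetical sketch: attribute partitioning splits the feature set into subsets, a simple specialist learner is trained on each projected subproblem, and a composite combines the specialists' votes. The nearest-centroid learner below is only an illustrative stand-in for the temporal neural and Bayesian network models the dissertation actually uses; all function names and the toy data are assumptions, not the author's method.

```python
# Hypothetical sketch of attribute-partitioning-based decomposition:
# one specialist per attribute subset, combined by majority vote.
from collections import Counter

def project(x, attrs):
    """Restrict a feature vector to one attribute subset."""
    return tuple(x[i] for i in attrs)

def train_centroid(X, y, attrs):
    """Per-class mean of the projected attributes (a toy stand-in
    for the probabilistic network trained on each subproblem)."""
    sums, counts = {}, {}
    for x, label in zip(X, y):
        s = sums.setdefault(label, [0.0] * len(attrs))
        for i, v in enumerate(project(x, attrs)):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def predict_centroid(model, x, attrs):
    """Classify by the nearest class centroid in the projected space."""
    p = project(x, attrs)
    return min(model, key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(p, model[c])))

def composite_predict(models, partition, x):
    """Combine the subproblem specialists by majority vote."""
    votes = [predict_centroid(m, x, attrs)
             for m, attrs in zip(models, partition)]
    return Counter(votes).most_common(1)[0][0]

# Toy data: class 0 is low on attributes 0-1, class 1 on attributes 2-3.
X = [(0.1, 0.2, 0.9, 0.8), (0.2, 0.1, 0.8, 0.9),
     (0.9, 0.8, 0.1, 0.2), (0.8, 0.9, 0.2, 0.1)]
y = [0, 0, 1, 1]
partition = [(0, 1), (2, 3)]   # one attribute subset per subproblem
models = [train_centroid(X, y, attrs) for attrs in partition]
print(composite_predict(models, partition, (0.15, 0.15, 0.85, 0.85)))  # → 0
```

In the dissertation's terms, the partition would be found by the attribute-driven decomposition step and each specialist would be a probabilistic network selected by the metric-based model-selection criterion; this sketch only fixes both by hand to show how the composite is assembled.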