Introduction to the theory of neural computation
Elements of information theory
Detection of abrupt changes: theory and application
Hierarchical mixtures of experts and the EM algorithm. Neural Computation
Backpropagation: the basic theory. In: Backpropagation
Modeling temporal structure of time series with hidden Markov experts. Information Sciences: An International Journal
Design of a two-stage fuzzy classification model. Expert Systems with Applications: An International Journal
Design of adaptive fuzzy model for classification problem. Engineering Applications of Artificial Intelligence
A review on time series data mining. Engineering Applications of Artificial Intelligence
Application of mixture of experts to construct real estate appraisal models. In: HAIS'10 Proceedings of the 5th International Conference on Hybrid Artificial Intelligence Systems, Part I
Investigation of mixture of experts applied to residential premises valuation. In: ACIIDS'13 Proceedings of the 5th Asian Conference on Intelligent Information and Database Systems, Part II
This article introduces a new tool for exploratory data analysis and data mining called Scale-Sensitive Gated Experts (SSGE), which can partition a complex nonlinear regression surface into a set of simpler surfaces (which we call features). The set of simpler surfaces has the property that each of its elements can be efficiently modeled by a single feedforward neural network. The degree to which the regression surface is partitioned is controlled by an external scale parameter. The SSGE consists of a nonlinear gating network and several competing nonlinear experts. Although the SSGE is similar to the mixture of experts model of Jacobs et al. [10], the mixture of experts model yields only one partitioning of the input-output space, and thus a single set of features, whereas the SSGE lets the user discover families of features: each setting of the scale parameter yields a new member of the family. In this paper, we derive the Scale-Sensitive Gated Experts model and demonstrate its performance on a time series segmentation problem. The main results are: 1) the scale parameter controls the granularity of the features of the regression surface; 2) similar features are modeled by the same expert, and different kinds of features are modeled by different experts; and 3) for the time series problem, the SSGE finds different regimes of behavior, each with a specific and interesting interpretation.
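The architecture described above (a gating network over several competing experts, with an external scale parameter controlling how finely the input space is partitioned) can be sketched as a minimal mixture-of-experts forward pass. This is an illustrative sketch only: the class name, the use of linear experts in place of the paper's nonlinear ones, and the interpretation of the scale parameter as a softmax sharpness are assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_softmax(logits, scale):
    # The scale parameter sharpens (large scale) or smooths (small scale)
    # the gate's soft partition of the input space.
    z = logits * scale
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class ScaleGatedExperts:
    """Sketch: linear experts blended by a scale-sensitive softmax gate."""

    def __init__(self, n_inputs, n_experts, scale=1.0):
        self.scale = scale
        self.gate_w = rng.normal(0.0, 1.0, size=(n_inputs, n_experts))
        self.expert_w = rng.normal(0.0, 1.0, size=(n_experts, n_inputs))
        self.expert_b = np.zeros(n_experts)

    def forward(self, x):
        # Gate: soft responsibility of each expert for each input row.
        gates = scaled_softmax(x @ self.gate_w, self.scale)
        # Experts: each proposes a scalar output for every input row.
        preds = x @ self.expert_w.T + self.expert_b
        # Output: gate-weighted blend of the expert predictions.
        return (gates * preds).sum(axis=1), gates
```

Increasing the scale makes the gate outputs closer to a hard assignment of inputs to experts (a coarser-to-finer family of partitions), while a small scale blends all experts almost uniformly.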