Model-based learning for predicting continuous values involves building an explicit generalization of the training data. Simple linear regression and piecewise linear regression techniques are well suited for this task because, unlike neural networks, they yield an interpretable model. The hinging hyperplane approach is a nonlinear learning technique that computes a continuous model consisting of linear submodels over individual partitions of the regressor space. However, it is designed for a single predicted variable: with r predicted variables the number of partitions grows quickly with r, and the result is no longer compact or interpretable. We propose a generalization of the hinging hyperplane approach to several predicted variables. The algorithm considers all predicted variables simultaneously, enforcing common hinges while restoring the continuity of the resulting functions. The model complexity no longer depends on the number of predicted variables, so the model remains compact and interpretable.
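For orientation, the single-output building block the abstract refers to can be sketched briefly. The code below is a minimal, assumption-laden illustration of the classic hinge idea (two affine submodels joined continuously by a pointwise maximum, refined by alternating least squares), not the multi-output algorithm proposed here; the names fit_hinge and predict_hinge, the median-based initial split, and the iteration cap are illustrative choices only.

```python
import numpy as np

def fit_hinge(X, y, n_iter=20):
    """Sketch of a single-output hinge fit: alternate between fitting two
    affine submodels and reassigning each point to the submodel that is
    larger there. Assumes the initial split leaves enough points per side."""
    n, d = X.shape
    Xa = np.hstack([X, np.ones((n, 1))])          # affine regressors [x, 1]
    side = X[:, 0] > np.median(X[:, 0])           # arbitrary initial split
    beta_plus = beta_minus = None
    for _ in range(n_iter):
        # least-squares fit of one linear submodel per side of the hinge
        beta_plus, *_ = np.linalg.lstsq(Xa[side], y[side], rcond=None)
        beta_minus, *_ = np.linalg.lstsq(Xa[~side], y[~side], rcond=None)
        # reassign each point to the hyperplane that dominates at that point
        new_side = Xa @ beta_plus > Xa @ beta_minus
        # stop when the assignment stabilizes or a side would become degenerate
        if np.array_equal(new_side, side) or new_side.sum() <= d or (~new_side).sum() <= d:
            break
        side = new_side
    return beta_plus, beta_minus

def predict_hinge(X, beta_plus, beta_minus):
    """Continuous hinge model: pointwise max of the two fitted hyperplanes."""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    return np.maximum(Xa @ beta_plus, Xa @ beta_minus)

# Toy usage (illustrative only): a V-shaped target is an exact hinge.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.abs(X[:, 0]) + 0.05 * rng.normal(size=200)
bp, bm = fit_hinge(X, y)
y_hat = predict_hinge(X, bp, bm)
```

The generalization proposed in this work differs in that all r predicted variables share common hinge locations while each remains continuous; the single-output sketch above does not attempt that coupling.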