A Neural Network for PCA and Beyond
Neural Processing Letters
EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation
International Journal of Computer Vision
Recurrent sampling models for the Helmholtz machine
Neural Computation
Feature extraction through LOCOCODE
Neural Computation
Unsupervised Extraction of Structural Information from High-Dimensional Visual Data
Applied Intelligence
Deterministic Generative Models for Fast Feature Discovery
Data Mining and Knowledge Discovery
Preintegration lateral inhibition enhances unsupervised learning
Neural Computation
Optimal Extraction of Hidden Causes
ICANN '02 Proceedings of the International Conference on Artificial Neural Networks
Topic Extraction from Text Documents Using Multiple-Cause Networks
PRICAI '02 Proceedings of the 7th Pacific Rim International Conference on Artificial Intelligence: Trends in Artificial Intelligence
On the use of linear programming for unsupervised text classification
Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining
Learning Image Components for Object Recognition
The Journal of Machine Learning Research
Competition and multiple cause models
Neural Computation
Maximal Causes for Non-linear Component Extraction
The Journal of Machine Learning Research
Generalized softmax networks for non-linear component extraction
ICANN'07 Proceedings of the 17th international conference on Artificial neural networks
A nonnegative blind source separation model for binary test data
IEEE Transactions on Circuits and Systems Part I: Regular Papers
Expectation Truncation and the Benefits of Preselection in Training Generative Models
The Journal of Machine Learning Research
Weighted topological clustering for categorical data
ICONIP'11 Proceedings of the 18th international conference on Neural Information Processing - Volume Part I
The dual-sparse topic model: mining focused topics and focused terms in short text
Proceedings of the 23rd international conference on World wide web
This paper presents a formulation for unsupervised learning of clusters reflecting multiple causal structure in binary data. Unlike the "hard" k-means clustering algorithm and the "soft" mixture model, each of which assumes that a single hidden event generates each data point, a multiple cause model accounts for observed data by combining assertions from many hidden causes, each of which can pertain to varying degree to any subset of the observable dimensions. We employ an objective function and iterative gradient descent learning algorithm resembling the conventional mixture model. A crucial issue is the mixing function for combining beliefs from different cluster centers in order to generate data predictions whose errors are minimized both during recognition and learning. The mixing function constitutes a prior assumption about underlying structural regularities of the data domain; we demonstrate a weakness inherent to the popular weighted sum followed by sigmoid squashing, and offer alternative forms of the nonlinearity for two types of data domain. Results are presented demonstrating the algorithm's ability successfully to discover coherent multiple causal representations in several experimental data sets.
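The abstract's central idea, a mixing function that combines votes from several hidden causes into one binary-data prediction, can be sketched in code. The paper's exact formulation is not reproduced here; as an illustrative alternative to the weighted-sum-plus-sigmoid mixing it criticizes, this minimal sketch uses a noisy-OR style mixing function on toy data, and for simplicity assumes the hidden cause activities are known, fitting only the cause-to-pixel weights by gradient descent on squared reconstruction error. The toy data, variable names, and learning-rate settings are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary data on a 4-pixel "image": two ground-truth causes,
# each turning on a disjoint pair of pixels (hypothetical example).
causes_true = np.array([[1., 1., 0., 0.],
                        [0., 0., 1., 1.]])
Z = rng.integers(0, 2, size=(500, 2)).astype(float)  # hidden cause activities (assumed known)
X = 1.0 - np.prod(1.0 - Z[:, :, None] * causes_true[None], axis=1)  # noisy-OR generation

K, D = causes_true.shape
W = rng.uniform(0.1, 0.3, size=(K, D))  # learned per-cause vote strengths in [0, 1]

def mix(z, w):
    """Noisy-OR mixing: a pixel is predicted 'on' if any active cause votes for it."""
    return 1.0 - np.prod(1.0 - z[:, :, None] * w[None], axis=1)

lr = 1.0
for _ in range(500):
    q = 1.0 - Z[:, :, None] * W[None]     # per-cause complement factors, shape (N, K, D)
    p = 1.0 - np.prod(q, axis=1)          # predictions, shape (N, D)
    # d p[n,d] / d W[k,d] = z[n,k] * prod_{j != k} q[n,j,d]
    dp_dw = Z[:, :, None] * (1.0 - p)[:, None, :] / np.clip(q, 1e-3, None)
    grad = np.einsum('nd,nkd->kd', p - X, dp_dw) / len(X)  # squared-error gradient
    W = np.clip(W - lr * grad, 0.0, 0.999)
```

Because each pixel's prediction saturates once any single active cause accounts for it, the noisy-OR form lets overlapping causes share responsibility without the cancellation problems of a weighted sum; after training, thresholding `W` at 0.5 recovers the two toy causes.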