Visualization of large-scale data inherently requires dimensionality reduction to 1D, 2D, or 3D space. Autoassociative neural networks with a bottleneck layer are commonly used as a nonlinear dimensionality reduction technique. However, many real-world data sets are incomplete, i.e., some values are missing. Common ways of dealing with missing data include deleting all cases with missing values from the data set, or replacing them with the mean or a “normal” value of the corresponding variable. Such methods are appropriate when only a few values are missing, but when a substantial portion of the data is missing they may significantly bias the results of modeling. To overcome this difficulty, we propose a modified learning procedure for the autoassociative neural network that directly takes missing values into account. The outputs of the trained network may then be used to substitute the missing values in the original data set.
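The core idea can be sketched with a toy example. The paper does not specify its training procedure in this abstract, so the following is only a minimal illustration of the general principle: a linear autoassociative network (encoder and decoder weights `We`, `Wd` around a bottleneck) is trained by gradient descent on a *masked* reconstruction error, so that missing entries contribute nothing to the gradient; the trained network's outputs then fill in the missing values. The zero placeholder for missing inputs, the network sizes, and the learning rate are all assumptions of this sketch, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 5-D lying near a 2-D subspace (hypothetical).
Z = rng.normal(size=(200, 2))
X = Z @ rng.normal(size=(2, 5)) + 0.01 * rng.normal(size=(200, 5))

# Knock out roughly 30% of entries to simulate missing data.
mask = rng.random(X.shape) > 0.3           # True = observed
X_obs = np.where(mask, X, 0.0)             # zero placeholder for missing inputs (an assumption)

# Linear autoassociative network with a 2-unit bottleneck.
d, k = 5, 2
We = rng.normal(scale=0.1, size=(d, k))    # encoder weights
Wd = rng.normal(scale=0.1, size=(k, d))    # decoder weights

def masked_mse(We, Wd):
    """Reconstruction error measured on observed entries only."""
    X_hat = X_obs @ We @ Wd
    return float(np.mean((X_hat - X)[mask] ** 2))

mse_before = masked_mse(We, Wd)

lr = 0.02
for _ in range(3000):
    H = X_obs @ We                         # bottleneck codes
    # Masked error: missing entries give zero error and zero gradient.
    E = np.where(mask, H @ Wd - X, 0.0)
    Wd -= lr * (H.T @ E) / len(X)
    We -= lr * (X_obs.T @ (E @ Wd.T)) / len(X)

mse_after = masked_mse(We, Wd)

# Substitute the missing entries with the trained network's outputs.
X_filled = np.where(mask, X, X_obs @ We @ Wd)
```

Masking the error rather than deleting incomplete cases lets every observed value contribute to learning, which is the advantage the abstract claims over deletion or mean substitution.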