Unsupervised Layer-Wise Model Selection in Deep Neural Networks

  • Authors:
  • Ludovic Arnold; Hélène Paugam-Moisy; Michèle Sebag

  • Affiliations:
  • Université Paris Sud 11 – CNRS, LIMSI, Ludovic.Arnold@lri.fr; Université de Lyon, TAO – INRIA Saclay, Helene.Paugam-Moisy@univ-lyon2.fr; TAO – INRIA Saclay, CNRS, LRI, Michele.Sebag@lri.fr

  • Venue:
  • Proceedings of ECAI 2010: 19th European Conference on Artificial Intelligence
  • Year:
  • 2010

Abstract

Deep Neural Networks (DNNs) offer a new and efficient machine-learning architecture based on the layer-wise construction of several representation layers. A critical issue for DNNs remains model selection, e.g. choosing the number of neurons in each layer. The hyper-parameter search space grows exponentially with the number of layers, making the popular grid-search approach to finding good hyper-parameter values intractable. The question investigated in this paper is whether the unsupervised, layer-wise methodology used to train a DNN can be extended to model selection as well. The proposed approach, based on an unsupervised criterion, empirically examines whether model selection is a modular optimization problem that can be tackled in a layer-wise manner. Preliminary results on the MNIST data set suggest the answer is positive. Furthermore, some unexpected results, regarding how the optimal layer size depends on the training process, are reported and discussed.
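
To make the idea concrete, the sketch below shows one way such greedy, layer-wise model selection could look in practice. It is not the paper's exact protocol: the candidate layer widths, the use of scikit-learn's BernoulliRBM as the layer trainer, and pseudo-likelihood as the unsupervised criterion are all illustrative assumptions.

```python
# A minimal sketch of layer-wise model selection under an unsupervised
# criterion. All specifics (candidate sizes, RBM trainer, pseudo-likelihood
# score) are assumptions for illustration, not the authors' exact method.
import numpy as np
from sklearn.neural_network import BernoulliRBM

def select_layer_sizes(X, candidate_sizes, n_layers, rng=0):
    """Greedily pick each layer's width by an unsupervised score,
    freezing lower layers before searching the next one."""
    sizes, layer_input = [], X
    for layer in range(n_layers):
        best_score, best_rbm, best_size = -np.inf, None, None
        for n in candidate_sizes:
            rbm = BernoulliRBM(n_components=n, learning_rate=0.05,
                               n_iter=10, random_state=rng)
            rbm.fit(layer_input)
            # Unsupervised criterion: mean pseudo-likelihood of the data.
            score = rbm.score_samples(layer_input).mean()
            if score > best_score:
                best_score, best_rbm, best_size = score, rbm, n
        sizes.append(best_size)
        # Freeze the chosen layer; its codes become the next layer's input.
        layer_input = best_rbm.transform(layer_input)
    return sizes

# Usage: X is e.g. binarized MNIST, shape (n_samples, 784), values in [0, 1].
# sizes = select_layer_sizes(X, candidate_sizes=[100, 200, 400], n_layers=3)
```

The key point of the modular formulation is the cost: with k candidate widths and L layers, the greedy search trains on the order of k·L candidate layers, whereas a full grid search over the joint space would require k^L configurations.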