Recognizing ingredients at cutting process by integrating multimodal features

  • Authors:
  • Atsushi Hashimoto;Jin Inoue;Kazuaki Nakamura;Takuya Funatomi;Mayumi Ueda;Yoko Yamakata;Michihiko Minoh

  • Affiliations:
  • Kyoto University, Kyoto, Japan;Kyoto University, Kyoto, Japan;Osaka University, Osaka, Japan;Kyoto University, Kyoto, Japan;University of Marketing and Distribution Sciences, Kobe, Japan;Kyoto University, Kyoto, Japan;Kyoto University, Kyoto, Japan

  • Venue:
  • Proceedings of the ACM Multimedia 2012 Workshop on Multimedia for Cooking and Eating Activities
  • Year:
  • 2012


Abstract

We propose a method for recognizing ingredients during food preparation. Research on object recognition has focused mainly on visual information; however, ingredients are difficult to recognize from visual information alone because of their limited color variation and because their within-class shape differences are larger than their inter-class differences. In this paper, we propose a method that incorporates physical signals obtained during the cutting process by attaching load and sound sensors to the chopping board. The load may depend on an ingredient's hardness, and the sound produced as a knife passes through an ingredient reflects the ingredient's internal structure. Hence, these signals are expected to enable more precise recognition. We confirmed the effectiveness of integrating the three modalities (visual, auditory, and load) through experiments in which the proposed method was applied to 23 classes of ingredients.
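One common way to integrate multiple modalities, as the abstract describes, is early fusion: concatenating the per-modality feature vectors before classification. The sketch below illustrates this idea with synthetic data and a simple nearest-centroid classifier; the paper does not specify its actual features, fusion scheme, or classifier, so all names and parameters here are illustrative assumptions.

```python
import numpy as np


def fuse(visual, audio, load):
    """Early fusion by concatenation of per-modality feature vectors.

    Hypothetical scheme for illustration; the paper's actual
    integration method may differ.
    """
    return np.concatenate([visual, audio, load])


class NearestCentroidClassifier:
    """Minimal classifier: assign each sample to the nearest class mean."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_]
        )
        return self

    def predict(self, X):
        # Euclidean distance from each sample to each class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]


# Synthetic per-modality features for two ingredient classes
# (stand-ins for visual, sound, and load descriptors).
rng = np.random.default_rng(0)
vis = rng.normal(0.0, 0.1, (20, 4)); vis[10:] += 1.0
aud = rng.normal(0.0, 0.1, (20, 3)); aud[10:] += 1.0
lod = rng.normal(0.0, 0.1, (20, 2)); lod[10:] += 1.0

X = np.stack([fuse(v, a, l) for v, a, l in zip(vis, aud, lod)])
y = np.array([0] * 10 + [1] * 10)

clf = NearestCentroidClassifier().fit(X, y)
pred = clf.predict(X)
```

With well-separated synthetic clusters, the fused 9-dimensional vectors are classified correctly; in practice, the benefit of fusion shows up when no single modality separates the classes on its own.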