Segmentation of textures defined on flat vs. layered surfaces using neural networks: Comparison of 2D vs. 3D representations

  • Authors:
  • Sejong Oh, Yoonsuck Choe

  • Affiliations:
  • Department of Computer Science, Texas A&M University, 3112 TAMU, College Station, TX 77843-3112, USA, and Republic of Korea Air Force, Korea; Department of Computer Science, Texas A&M University, 3112 TAMU, College Station, TX 77843-3112, USA

  • Venue:
  • Neurocomputing
  • Year:
  • 2007

Abstract

Texture boundary detection (or segmentation) is an important capability in human vision. Texture segmentation is usually viewed as a 2D problem, as the definition of the problem itself assumes a 2D substrate. However, an interesting hypothesis emerges when we ask a question regarding the nature of textures: what are textures, and why did the ability to discriminate texture evolve or develop? A possible answer is that textures naturally define physically distinct (i.e., occluded) surfaces. Hence, we can hypothesize that 2D texture segmentation may be an outgrowth of the ability to discriminate surfaces in 3D. In this paper, we conducted computational experiments with artificial neural networks to investigate the relative difficulty of learning to segment textures defined on flat 2D surfaces vs. those in 3D configurations, where the boundaries are defined by occluding surfaces and their change over time due to the observer's motion. It turns out that learning is faster and more accurate in 3D, very much in line with our expectation. Furthermore, our results showed that the neural network's learned ability to segment texture in 3D transfers well to 2D texture segmentation, bolstering our initial hypothesis and providing insight into the possible developmental origin of the 2D texture segmentation function in human vision.
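
To make the experimental contrast concrete, the following is a minimal Python/numpy sketch of the kind of comparison the abstract describes. It is not the paper's actual stimuli, network, or training procedure: the 1D stripe textures, the one-hidden-layer network, the motion-parallax stand-in for the 3D occlusion condition, and all parameters are illustrative assumptions.

    # Minimal sketch (assumed setup, not the authors'): learn to report
    # whether a 1D texture strip contains a boundary, either from a single
    # static frame ("2D") or from a frame pair in which only the near
    # surface shifts at an occlusion edge ("3D" motion-parallax cue).
    import numpy as np

    rng = np.random.default_rng(0)
    N = 24          # pixels per strip (illustrative)
    HALF = N // 2

    def make_texture(kind, n):
        # Two toy "textures": square-wave stripes of different spatial
        # frequency, corrupted by pixel noise.
        x = np.arange(n) + 0.5
        period = 4 if kind == 0 else 8
        return np.sign(np.sin(2 * np.pi * x / period)) + 0.3 * rng.standard_normal(n)

    def sample(motion_cue):
        # Boundary trials abut two textures at mid-strip; no-boundary
        # trials use a single texture throughout.
        boundary = rng.random() < 0.5
        if boundary:
            frame = np.concatenate([make_texture(0, HALF), make_texture(1, HALF)])
        else:
            frame = make_texture(int(rng.integers(2)), N)
        if not motion_cue:
            return frame, float(boundary)       # "2D": one static frame
        # "3D": second frame. At an occlusion edge only the near (right)
        # surface shifts under observer motion; otherwise the whole
        # surface shifts rigidly.
        frame2 = np.roll(frame, 1)
        if boundary:
            frame2[:HALF] = frame[:HALF]
        return np.concatenate([frame, frame2]), float(boundary)

    def train(motion_cue, epochs=300, lr=0.05, hidden=16, batch=64):
        # One-hidden-layer network, logistic output, plain gradient descent.
        d = N * (2 if motion_cue else 1)
        W1 = rng.standard_normal((d, hidden)) * 0.1
        W2 = rng.standard_normal(hidden) * 0.1
        for _ in range(epochs):
            pairs = [sample(motion_cue) for _ in range(batch)]
            X = np.stack([s for s, _ in pairs])
            y = np.array([t for _, t in pairs])
            h = np.tanh(X @ W1)
            p = 1.0 / (1.0 + np.exp(-(h @ W2)))   # P(boundary)
            g = (p - y) / batch                   # cross-entropy gradient at the logit
            W2 -= lr * (h.T @ g)
            W1 -= lr * (X.T @ (np.outer(g, W2) * (1.0 - h ** 2)))
        return W1, W2

    def accuracy(W1, W2, sampler, trials=1000):
        pairs = [sampler() for _ in range(trials)]
        X = np.stack([s for s, _ in pairs])
        y = np.array([t for _, t in pairs])
        p = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1) @ W2)))
        return float(((p > 0.5) == (y > 0.5)).mean())

    W1, W2 = train(motion_cue=False)
    print("2D condition:", accuracy(W1, W2, lambda: sample(False)))
    W1, W2 = train(motion_cue=True)
    print("3D condition:", accuracy(W1, W2, lambda: sample(True)))

    # Crude analogue of the 3D-to-2D transfer test: show the motion-trained
    # network a frozen pair (second frame identical to the first).
    def frozen():
        frame, label = sample(False)
        return np.concatenate([frame, frame]), label
    print("3D-trained net on static input:", accuracy(W1, W2, frozen))

The numbers this toy produces should not be read as reproducing the paper's findings; the sketch only illustrates the protocol, i.e., training matched networks under a 2D condition and a motion-augmented 3D condition and then probing how the 3D-trained network handles static input.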