Visual Learning with Navigation as an Example

  • Authors:
  • Juyang Weng; Shaoyun Chen

  • Venue:
  • IEEE Intelligent Systems
  • Year:
  • 2000

Abstract

This article describes Shoslif, an appearance-based approach to vision-based control. The state-based learning method presented here is applicable to virtually any vision-based control problem; the authors use navigation as an example. They have applied the approach to Shoslif-N, a prototype navigation system for environments that cannot be fully predicted at system-design time. Such an application requires that the system be trainable at the application site for various driving conditions, so it is impractical for human designers to predefine the visual features the navigation system will use. Instead, Shoslif-N uses a learning-based method to automatically derive, during training, the visual features best suited to the navigation task, and it uses these features to organize and store the information learned through its navigation experience. The authors also explain why a system state is important for navigation and how to incorporate state into the appearance-based framework. System states let the system use both local and global views for navigation, so it can disregard unrelated scene parts according to the visual context and achieve better generalization. The state-based navigation system is trained interactively and incrementally, online and in real time. A general-purpose workstation performs the real-time computation without any special-purpose image-processing hardware. The authors have successfully tested the system in a relatively extensive indoor environment, along an extended navigation course and in the presence of passers-by.
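The abstract does not give implementation details, but the core idea of an appearance-based approach of this kind can be sketched as deriving a low-dimensional feature basis from raw training images (here via PCA) and recalling the stored control action of the nearest training sample in that feature space. All function names and parameters below are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def derive_features(images, k=2):
    """Derive a k-dimensional appearance basis from flattened
    training images (shape: n_samples x n_pixels) via PCA/SVD."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Rows of Vt are the principal directions of the training set.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:k]

def project(frame, mean, basis):
    """Project one flattened frame onto the learned basis."""
    return basis @ (frame - mean)

def recall_action(frame, mean, basis, train_feats, train_actions):
    """Return the action of the nearest stored training sample
    in the learned feature space (nearest-neighbor recall)."""
    f = project(frame, mean, basis)
    dists = np.linalg.norm(train_feats - f, axis=1)
    return train_actions[int(np.argmin(dists))]

# Toy usage with synthetic "scenes": two visually distinct
# clusters, each associated with a steering action (0 or 1).
rng = np.random.default_rng(0)
left_view = np.r_[np.ones(8), np.zeros(8)]
right_view = np.r_[np.zeros(8), np.ones(8)]
images = np.vstack([left_view + 0.01 * rng.normal(size=(5, 16)),
                    right_view + 0.01 * rng.normal(size=(5, 16))])
actions = np.array([0] * 5 + [1] * 5)

mean, basis = derive_features(images, k=2)
train_feats = np.array([project(f, mean, basis) for f in images])
```

The paper's system additionally incorporates a system state (e.g., the visual context) so that unrelated scene parts can be disregarded; in a sketch like this, that could amount to appending a state code to the feature vector before the nearest-neighbor lookup.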