Novel Applications of VR: Efficient mobile AR technology using scalable recognition and tracking based on server-client model

  • Authors:
Jinki Jung, Jaewon Ha, Sang-Wook Lee, Francisco A. Rojas, Hyun S. Yang

  • Affiliations:
Department of Computer Science, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon 305-701, Republic of Korea (Jinki Jung, Jaewon Ha, Francisco A. Rojas, Hyun S. Yang); Robotics Program, KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701, Republic of Korea (Sang-Wook Lee)

  • Venue:
Computers &amp; Graphics
  • Year:
  • 2012

Abstract

Advancements in mobile devices and vision technology have enabled mobile Augmented Reality (AR) to be delivered in real time using natural features. However, when viewing AR while moving around in the real world, users often encounter new and diverse target objects, so whether an AR system scales with the number of target objects is a crucial issue for real-world mobile AR services. This scalability has been severely limited by the small internal storage capacity and memory of mobile devices. In this paper, we propose a new framework that achieves scalability for mobile AR. Scalability is provided by a bag-of-visual-words recognition module on the server side, which is connected to the clients (mobile devices) through a conventional Wi-Fi network. On the client side, a coarse-to-fine tracking module delivers robust, real-time tracking with natural features. We optimized the on-device modules to expedite pose tracking while simultaneously enabling real-time 3D rendering and animation, and we also propose an efficient recognition method that exploits metadata provided by the sensors of the mobile device. In our experiments, the cold start of an AR service on a 10K-object database takes approximately 0.2 s with a recognition accuracy of 99.87%, which should be acceptable for a variety of real-world mobile AR applications.
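
The abstract describes a server-client split: heavyweight bag-of-visual-words recognition over a large object database runs on the server, while the handset performs a single cold-start query and then tracks the pose locally with a coarse-to-fine scheme. The sketch below illustrates that division of labor in Python. It is a minimal, hypothetical illustration: the class names, message fields, and the toy word-quantization and voting steps are assumptions for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch of the server-client split described in the abstract.
# All names and message fields here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Query:
    descriptors: List[List[float]]      # local feature descriptors from one camera frame
    sensor_metadata: Dict[str, float]   # e.g., GPS/compass hints used to prune the search


@dataclass
class Match:
    object_id: int                      # identifier of the matched target in the server DB
    pose: List[float] = field(default_factory=lambda: [0.0] * 12)  # 3x4 pose, row-major


class RecognitionServer:
    """Server side: bag-of-visual-words lookup over a large (e.g., 10K) object database."""

    def __init__(self, inverted_index: Dict[int, List[int]]):
        self.inverted_index = inverted_index  # visual word -> candidate object IDs

    def recognize(self, query: Query) -> Optional[Match]:
        # 1) Quantize descriptors into visual words (stubbed here as a toy rounding step).
        words = [int(sum(d)) for d in query.descriptors]
        # 2) Vote for candidate objects through the inverted index.
        votes: Dict[int, int] = {}
        for w in words:
            for obj in self.inverted_index.get(w, []):
                votes[obj] = votes.get(obj, 0) + 1
        if not votes:
            return None
        # 3) A real system would geometrically verify the top-voted candidate and
        #    estimate an initial pose before replying to the client.
        best = max(votes, key=votes.get)
        return Match(object_id=best)


class MobileClient:
    """Client side: one recognition round trip, then local coarse-to-fine tracking."""

    def __init__(self, server: RecognitionServer):
        self.server = server
        self.current: Optional[Match] = None

    def on_frame(self, query: Query) -> Optional[Match]:
        if self.current is None:
            # Cold start: send features plus sensor metadata over Wi-Fi to the server.
            self.current = self.server.recognize(query)
        else:
            # Steady state: refine the pose locally (coarse-to-fine tracking), so no
            # further network round trips are needed per frame.
            pass
        return self.current


if __name__ == "__main__":
    server = RecognitionServer({3: [42], 7: [42, 99]})
    client = MobileClient(server)
    frame = Query(descriptors=[[1.0, 2.0], [3.5, 3.5]], sensor_metadata={"lat": 36.37})
    print(client.on_frame(frame))  # -> Match(object_id=42, ...)
```

The point of the split is that only the cold start touches the network; once the server returns a match and an initial pose, every subsequent frame is handled on the device, which is what keeps tracking real-time despite the large database.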