View management for virtual and augmented reality. Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology.
IEEE Computer Graphics and Applications.
Evaluating Label Placement for Augmented Reality View Management. ISMAR '03: Proceedings of the 2nd IEEE/ACM International Symposium on Mixed and Augmented Reality.
Dynamic Labeling Management in Virtual and Augmented Environments. CAD-CG '05: Proceedings of the Ninth International Conference on Computer Aided Design and Computer Graphics.
Semi-automatic Annotations in Unknown Environments. ISMAR '07: Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality.
Personal information annotation on wearable computer users with hybrid peer-to-peer communication. ICAT '06: Proceedings of the 16th International Conference on Advances in Artificial Reality and Tele-Existence.
Dynamic text management for see-through wearable and heads-up display systems. Proceedings of the 2013 International Conference on Intelligent User Interfaces.
In annotation overlay applications that use augmented reality (AR), view management is widely used to improve the readability and intelligibility of annotations. Conventional view management methods require the positions, orientations, and shapes of objects to be known in advance in order to determine which portions of those objects are visible in the user's view. A wearable AR system, however, has difficulty obtaining this information because its target objects are usually moving or non-rigid. In this paper, we propose a view management method that overlays annotations on moving or non-rigid objects for networked wearable AR. The proposed method obtains the positions and shapes of target objects via a network and uses them to estimate the visible portions of the targets in the user's view. Annotations are placed by minimizing penalties related to the overlap of an annotation with other annotations, occlusion of target objects, the length of the leader line between an annotation and its target, and the displacement of the annotation between successive frames. Experiments show that the prototype system correctly provides each user with annotations on multiple users of wearable AR systems.
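The penalty-minimization placement described in the abstract can be illustrated with a small sketch. The following Python code is not the paper's implementation; the rectangle representation, the four penalty terms, their weights, and all function names are illustrative assumptions chosen to mirror the four penalties the abstract lists (overlap, occlusion, leader-line length, and frame-to-frame displacement).

```python
# Illustrative sketch of penalty-based annotation placement (not the
# authors' code): each candidate label position is scored by a weighted
# sum of the four penalties named in the abstract, and the candidate
# with the lowest total penalty wins.
import math
from dataclasses import dataclass


@dataclass
class Rect:
    """Axis-aligned screen rectangle (x, y = top-left corner)."""
    x: float
    y: float
    w: float
    h: float

    def overlap_area(self, other: "Rect") -> float:
        dx = min(self.x + self.w, other.x + other.w) - max(self.x, other.x)
        dy = min(self.y + self.h, other.y + other.h) - max(self.y, other.y)
        return max(dx, 0.0) * max(dy, 0.0)

    def center(self) -> tuple:
        return (self.x + self.w / 2, self.y + self.h / 2)


def placement_penalty(candidate, target, others, prev_pos,
                      weights=(1.0, 1.0, 0.01, 0.05)):
    """Weighted sum of the four penalty terms (weights are assumed values)."""
    w_occ, w_ovl, w_line, w_temp = weights
    # 1. Occlusion: the annotation covering the target object itself.
    occlusion = candidate.overlap_area(target)
    # 2. Overlap: the annotation covering other annotations/objects.
    overlap = sum(candidate.overlap_area(o) for o in others)
    # 3. Leader-line length between annotation and target centers.
    (cx, cy), (tx, ty) = candidate.center(), target.center()
    line_len = math.hypot(cx - tx, cy - ty)
    # 4. Temporal coherence: displacement from the previous frame.
    temporal = math.hypot(candidate.x - prev_pos[0], candidate.y - prev_pos[1])
    return w_occ * occlusion + w_ovl * overlap + w_line * line_len + w_temp * temporal


def best_placement(candidates, target, others, prev_pos):
    """Pick the candidate position with the lowest total penalty."""
    return min(candidates,
               key=lambda c: placement_penalty(c, target, others, prev_pos))
```

In practice the candidate set would be generated around the visible portion of the target estimated from the positions and shapes received over the network; the temporal term keeps labels from jumping between frames even when a lower-penalty position appears momentarily.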