Human-Assisted Virtual Environment Modeling for Robots
Autonomous Robots
This paper describes a system that semi-automatically builds a virtual world for remote operations by constructing 3-D models of a robot's work environment. With a minimum of human interaction, planar and quadric surface representations of objects typically found in man-made facilities are generated from laser rangefinder data. These surface representations are used to recognize complex models of objects in the scene. The object models are incorporated into a larger world model that can be viewed and analyzed by the operator, accessed by motion-planning and robot-safeguarding algorithms, and ultimately used by the operator to command the robot through graphical programming and other high-level constructs. Limited operator interaction, combined with assumptions about the robot's task environment, makes the problem of modeling and recognizing objects tractable and yields a solution that can be readily incorporated into many telerobotic control schemes.