This paper describes a method for systematically generating visual sensing strategies from knowledge of the assembly task to be performed. Because visual sensing is usually performed with limited resources, sensing strategies should be planned so that only the necessary information is obtained, and obtained efficiently. Generating an appropriate visual sensing strategy entails knowing what information to extract, where to get it, and how to get it. This is facilitated by knowledge of the task, which describes which objects are involved in the operation and how they are assembled. In the proposed method, the information necessary for the current operation is first extracted through a task analysis based on face-contact relations between objects. Then the visual features to be observed are determined using knowledge of the sensor, which describes the relationship between a visual feature and the information it can provide. Finally, the feasible visual sensing strategies are evaluated by their predicted success probability, and the best strategy is selected. The method has been implemented using a laser range finder as the sensor. Experimental results demonstrate the feasibility of the method and underline the importance of task-oriented evaluation of visual sensing strategies.
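The pipeline the abstract outlines (extract needed information from face-contact relations, map it to observable visual features, then pick the candidate strategy with the highest predicted success probability) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the data structures, the contact-relation placeholder, and the example probabilities are all assumptions introduced for illustration.

```python
from dataclasses import dataclass


@dataclass
class SensingStrategy:
    # Hypothetical fields standing in for the abstract's "what, where, how":
    feature: str          # visual feature to observe
    viewpoint: str        # where the sensor looks from
    success_prob: float   # predicted probability of obtaining the information


def required_information(contact_relations):
    """Illustrative stand-in for the task analysis: each face-contact
    relation between two objects yields a relative pose to verify."""
    return [f"pose of {a} relative to {b}" for a, b in contact_relations]


def select_strategy(candidates):
    """Task-oriented evaluation: among feasible strategies, select the
    one with the highest predicted success probability."""
    feasible = [c for c in candidates if c.success_prob > 0.0]
    if not feasible:
        raise ValueError("no feasible sensing strategy")
    return max(feasible, key=lambda c: c.success_prob)


# Usage: two hypothetical candidate strategies for one contact relation.
info = required_information([("peg", "hole")])
best = select_strategy([
    SensingStrategy("top edge of hole", "overhead view", 0.7),
    SensingStrategy("peg silhouette", "side view", 0.9),
])
print(info[0])       # pose of peg relative to hole
print(best.feature)  # peg silhouette
```

The point of the sketch is only the final step the abstract emphasizes: candidate strategies are compared by a task-dependent predicted success probability rather than by a sensor-centric criterion.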