This paper describes an approach for generating spatial motion constraints for a human-machine collaborative surgical-assistant system from registered computed tomography models. We extend a constrained-optimization formulation that incorporates task goals, anatomy-based constraints, "no fly zones," and similar requirements. We use a fast potential-collision-constraint detection method based on a 3-D surface model and a covariance-tree data structure. These boundary constraints, together with task behaviors and joint limits, serve as constraint conditions for constrained robot control. With this approach we are able to follow a complex path inside a human skull phantom, represented by a surface model composed of 99,000 vertices and 182,000 triangles, in real time. Our approach enables real-time, task-based control of a surgical robot in precise, interactive minimally invasive surgery tasks. We illustrate the approach with two example tasks analogous to procedures in endoscopic sinus surgery, and we analyze user performance under both teleoperation and cooperative control for one of the tasks. The experimental results show that a robotic assistant employing our spatial motion constraints can assist the user in skilled manipulation tasks while maintaining the desired properties. The approach is equally applicable to teleoperated and cooperatively controlled robots.
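The control scheme described above can be illustrated with a minimal sketch of one control cycle: solve a small constrained-optimization problem for an incremental joint motion that tracks the task goal while respecting linearized boundary ("no fly zone") half-space constraints and joint limits. This is not the authors' actual formulation; the function names, the half-space boundary model, and the use of a generic SLSQP solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize


def constrained_motion_step(q, jacobian, fk, x_desired,
                            halfspaces, q_min, q_max, step=0.05):
    """One illustrative control cycle (not the paper's exact formulation).

    Finds a joint increment dq minimizing the task-space tracking error,
    subject to:
      - boundary constraints n . (x + J dq) - d >= 0, keeping the tool tip
        on the safe side of each locally detected boundary plane,
      - joint limits q_min <= q + dq <= q_max,
      - a step-size bound that keeps the linearization x_new ~ x + J dq valid.
    """
    J = jacobian(q)          # task-space Jacobian at the current configuration
    x = fk(q)                # current tool-tip position (forward kinematics)

    def objective(dq):
        # Squared task-space tracking error after the incremental motion.
        e = (x + J @ dq) - x_desired
        return float(e @ e)

    cons = []
    # Boundary ("no fly zone") constraints as signed-distance inequalities.
    for n, d in halfspaces:
        cons.append({"type": "ineq",
                     "fun": lambda dq, n=n, d=d: n @ (x + J @ dq) - d})
    # Joint limits on the resulting configuration.
    cons.append({"type": "ineq", "fun": lambda dq: (q + dq) - q_min})
    cons.append({"type": "ineq", "fun": lambda dq: q_max - (q + dq)})
    # Step-size bound on the incremental motion.
    cons.append({"type": "ineq",
                 "fun": lambda dq: step - np.linalg.norm(dq)})

    res = minimize(objective, np.zeros_like(q),
                   constraints=cons, method="SLSQP")
    return q + res.x
```

In a full system the half-space list would be refreshed every cycle from the collision-constraint detection step (e.g. boundary planes found near the tool tip in the surface model), so the robot tracks the commanded motion as closely as the anatomy-based constraints allow.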