Natural grasp interaction plays an important role in enhancing users' sense of immersion in virtual environments. However, grasp interaction is often accompanied by visually distracting artifacts, such as interpenetration between the hand and the grasped object, caused by simplified whole-hand collision models, the discrete control data used for collision detection, and device noise. In addition, the complicated distribution of forces across multi-finger contacts makes natural grasping and manipulation of a virtual object difficult. To solve these problems, this paper presents a novel approach to grasp interaction in virtual environments. Drawing on research in neurophysiology, we first construct the fingers' grasp trajectories and detect collisions between the objects and these trajectories rather than against a whole-hand collision model; we then deduce the grasp configuration from the collision detection results, and finally compute feedback forces according to grasp identification conditions. Our approach has been verified in a CAVE-based virtual environment.
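The trajectory-based collision test described above can be illustrated with a minimal sketch. This is not the paper's implementation: the planar flexion arc, the fixed sampling density, and the sphere-shaped object are all simplifying assumptions introduced here, chosen only to show the idea of testing an object against a fingertip's precomputed grasp trajectory instead of a full hand collision model.

```python
import math

def grasp_trajectory(base, radius, steps=32):
    """Sample fingertip positions along a planar flexion arc.

    Assumption for illustration: the fingertip sweeps a quarter-circle of
    the given radius around the finger base during flexion.
    """
    pts = []
    for i in range(steps):
        theta = (math.pi / 2) * i / (steps - 1)  # 0..90 degrees of flexion
        x = base[0] + radius * math.cos(theta)
        y = base[1] - radius * math.sin(theta)
        pts.append((x, y, base[2]))
    return pts

def first_contact(trajectory, center, obj_radius):
    """Return the index of the first trajectory sample inside a
    sphere-shaped object, or None if the trajectory never touches it."""
    for i, p in enumerate(trajectory):
        d2 = sum((a - b) ** 2 for a, b in zip(p, center))
        if d2 <= obj_radius ** 2:
            return i
    return None

# A sphere lying on the flexion arc is detected; a distant one is not.
traj = grasp_trajectory(base=(0.0, 0.0, 0.0), radius=5.0)
hit = first_contact(traj, center=(4.0, -2.0, 0.0), obj_radius=1.0)
miss = first_contact(traj, center=(100.0, 0.0, 0.0), obj_radius=1.0)
```

In a full system, the per-finger contact indices found this way would feed the subsequent steps: deducing the grasp configuration from which trajectories are blocked, and computing feedback forces once the grasp identification conditions are met.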