We describe a methodology for virtual reality designers to capture and resynthesize the variations in the sounds objects make when we interact with them through contact, such as touching. The timbre of contact sounds can vary greatly, depending both on the listener's location relative to the object and on the interaction point on the object itself. We believe that accurately rendering this variation greatly enhances the feeling of immersion in a simulation. To do this, we model the variation with an efficient algorithm based on modal synthesis. The model contains a vector field defined on the product space of contact locations and listening positions around the object. The modal data are sampled on this high-dimensional space using an automated measuring platform. A parameter-fitting algorithm is presented that recovers the modal parameters from a large set of sound recordings taken around each object and creates a continuous timbre field by interpolation. The model is then rendered in a real-time simulation with integrated haptic, graphic, and audio display. We describe our experience with an implementation of this system and an informal evaluation of the results.
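The core pipeline the abstract describes — modal synthesis driven by parameters interpolated over sampled positions — can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the 2-D grid standing in for the full product space of contact and listening positions, and the bilinear interpolation scheme are all simplifying assumptions for illustration.

```python
import numpy as np

def modal_synthesis(freqs, dampings, gains, duration=1.0, sr=44100):
    """Render an impulse response as a sum of exponentially decaying
    sinusoids -- the standard form of modal sound synthesis.
    Each mode is a triple (frequency in Hz, damping rate, gain)."""
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs, dampings, gains):
        out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out

def interpolate_gains(gain_field, u, v):
    """Bilinearly interpolate per-mode gains over a 2-D grid of
    measured positions (a toy stand-in for the continuous timbre
    field over contact and listening positions).
    u, v are normalized coordinates in [0, 1]."""
    grid = np.asarray(gain_field)            # shape: (nu, nv, n_modes)
    nu, nv = grid.shape[0], grid.shape[1]
    x, y = u * (nu - 1), v * (nv - 1)
    i0, j0 = int(np.floor(x)), int(np.floor(y))
    i1, j1 = min(i0 + 1, nu - 1), min(j0 + 1, nv - 1)
    fx, fy = x - i0, y - j0
    return ((1 - fx) * (1 - fy) * grid[i0, j0]
            + fx * (1 - fy) * grid[i1, j0]
            + (1 - fx) * fy * grid[i0, j1]
            + fx * fy * grid[i1, j1])
```

At run time, a simulation would look up the interpolated gains for the current contact point and listener position and feed them, together with the fitted frequencies and dampings, to the synthesizer whenever a contact event occurs.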