Motion-driven concatenative synthesis of cloth sounds
ACM Transactions on Graphics (TOG) - SIGGRAPH 2012 Conference Proceedings
We present the first algorithm for automatically generating a soundtrack for an input animation from the soundtracks of other animations. By re-targeting existing soundtracks, this technique can greatly simplify soundtrack production for computer animation and video. A segment of source audio is used to train a statistical model, which then generates variants of the original audio that fit particular constraints. These constraints can be specified explicitly by the user as large-scale properties of the sound texture, or determined automatically or semi-automatically by matching motion events in a source animation to similar events in the target animation.
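The motion-matching step above can be illustrated with a minimal sketch. The idea: tag each source audio grain with the motion feature vector of the animation frame it accompanies, then, for each target motion frame, select the grain whose source feature is nearest and concatenate the selections. This is a deliberately simplified, hypothetical illustration (nearest-neighbor matching on raw feature vectors, no statistical model or grain blending); all function names and the feature representation are assumptions, not the paper's actual method.

```python
import math

def nearest_grain(target_feat, source_feats):
    """Index of the source motion feature vector closest to target_feat
    (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(source_feats)),
               key=lambda i: dist(source_feats[i], target_feat))

def synthesize(target_feats, source_feats, source_grains):
    """Concatenative synthesis sketch: for each target motion frame,
    append the audio grain whose source motion feature matches best."""
    out = []
    for feat in target_feats:
        out.extend(source_grains[nearest_grain(feat, source_feats)])
    return out
```

In practice a system like the one described would also enforce smoothness across grain boundaries and the user's large-scale texture constraints, rather than matching each frame independently.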