When talking about spatial domains, humans frequently accompany their explanations with iconic gestures to depict what they are referring to. For example, when giving directions, it is common to see people make gestures that indicate the shape of buildings or trace the route the listener should take, and these gestures are essential to understanding the directions. Based on results from an ongoing study on language and gesture in direction-giving, we propose a framework that analyzes such gestural images into semantic units (image description features) and links these units to morphological features (hand shape, trajectory, etc.). This feature-based framework allows us to generate novel iconic gestures for embodied conversational agents without drawing on a lexicon of canned gestures. We present an integrated microplanner that derives the form of both coordinated natural language and iconic gesture directly from given communicative goals; its output serves as input to the speech and gesture realization engine of our NUMACK project.
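As a rough illustration only, the Python sketch below shows one way a feature-based mapping of this kind could be structured: image description features attached to a communicative goal are composed into a gesture's morphological features instead of looking up a canned gesture. It is not the NUMACK implementation; every name in it (the feature labels, the morphology values, the rule table, plan_gesture) is a hypothetical example chosen for the sketch.

from dataclasses import dataclass, field


@dataclass
class GestureSpec:
    """Morphological features of one iconic gesture (hypothetical feature set)."""
    hand_shape: str | None = None      # e.g. "flat-open", "index-extended"
    trajectory: str | None = None      # e.g. "straight", "arc-left", "rectangle-outline"
    orientation: str | None = None     # palm / finger orientation
    notes: list[str] = field(default_factory=list)


# Hypothetical rules linking image description features (semantic units of the
# gestural image) to assignments of morphological features.
IDF_RULES = {
    ("shape", "rectangular"):          {"hand_shape": "flat-open", "trajectory": "rectangle-outline"},
    ("path", "straight"):              {"hand_shape": "index-extended", "trajectory": "straight"},
    ("path", "left-turn"):             {"hand_shape": "index-extended", "trajectory": "arc-left"},
    ("relative-position", "adjacent"): {"orientation": "palms-facing"},
}


def plan_gesture(image_description: list[tuple[str, str]]) -> GestureSpec:
    """Compose a gesture spec from the image description features of one
    communicative goal (e.g. describing a landmark or a route segment)."""
    spec = GestureSpec()
    for feature in image_description:
        for attr, value in IDF_RULES.get(feature, {}).items():
            if getattr(spec, attr) is None:
                setattr(spec, attr, value)          # first rule to fill a slot wins
            elif getattr(spec, attr) != value:
                spec.notes.append(f"conflict on {attr}: kept {getattr(spec, attr)}")
    return spec


if __name__ == "__main__":
    # Example goal: describe a rectangular building standing next to the route.
    landmark = [("shape", "rectangular"), ("relative-position", "adjacent")]
    print(plan_gesture(landmark))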