Plans and situated actions: the problem of human-machine communication
The use of eye movements in human-computer interaction techniques: what you look at is what you get
ACM Transactions on Information Systems (TOIS) - Special issue on computer-human interaction
Automatic discovery of salient segments in imperfect speech transcripts
Proceedings of the tenth international conference on Information and knowledge management
Virtual environments for social skills training: the importance of scaffolding in practice
Proceedings of the fifth international ACM conference on Assistive technologies
Face recognition: A literature survey
ACM Computing Surveys (CSUR)
Independent motion detection directly from compressed surveillance video
IWVS '03 First ACM SIGMM international workshop on Video surveillance
Is that a smile?: gaze dependent facial expressions
Proceedings of the 3rd international symposium on Non-photorealistic animation and rendering
Semantic video adaptation based on automatic annotation of sport videos
Proceedings of the 6th ACM SIGMM international workshop on Multimedia information retrieval
Real-Time Inference of Complex Mental States from Facial Expressions and Head Gestures
CVPRW '04 Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'04) - Volume 10
Proceedings of the 2005 conference on Interaction design and children
VACA: a tool for qualitative video analysis
CHI '06 Extended Abstracts on Human Factors in Computing Systems
SIDES: a cooperative tabletop computer game for social skills development
CSCW '06 Proceedings of the 2006 20th anniversary conference on Computer supported cooperative work
Spontaneous vs. posed facial behavior: automatic analysis of brow actions
Proceedings of the 8th international conference on Multimodal interfaces
Practical Statistics for Medical Research
Let's get emotional: emotion research in human computer interaction
CHI '07 Extended Abstracts on Human Factors in Computing Systems
Interactive technologies for autism
CHI '07 Extended Abstracts on Human Factors in Computing Systems
Learning Spectral Clustering, With Application To Speech Separation
The Journal of Machine Learning Research
Linear State-Space Models for Blind Source Separation
The Journal of Machine Learning Research
Synergistic Face Detection and Pose Estimation with Energy-Based Models
The Journal of Machine Learning Research
Implicit speech recognition: making speech a first class object on computers
HLT '02 Proceedings of the second international conference on Human Language Technology Research
Affective multimodal mirror: sensing and eliciting laughter
Proceedings of the international workshop on Human-centered multimedia
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)
How to distinguish posed from spontaneous smiles using geometric features
Proceedings of the 9th international conference on Multimodal interfaces
Eliciting, capturing and tagging spontaneous facial affect in autism spectrum disorder
Proceedings of the 9th international conference on Multimodal interfaces
A survey of affect recognition methods: audio, visual and spontaneous expressions
Proceedings of the 9th international conference on Multimodal interfaces
Biometric valence and arousal recognition
OZCHI '07 Proceedings of the 19th Australasian conference on Computer-Human Interaction: Entertaining User Interfaces
Video motion detection beyond reasonable doubt
Proceedings of the 1st international conference on Forensic applications and techniques in telecommunications, information, and multimedia and workshop
VCode and VData: illustrating a new framework for supporting the video annotation workflow
AVI '08 Proceedings of the working conference on Advanced visual interfaces
Fusion of audio and visual cues for laughter detection
CIVR '08 Proceedings of the 2008 international conference on Content-based image and video retrieval
A3: a coding guideline for HCI+autism research using video annotation
Proceedings of the 10th international ACM SIGACCESS conference on Computers and accessibility
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Playing with virtual peers: bootstrapping contingent discourse in children with autism
ICLS'08 Proceedings of the 8th international conference on International conference for the learning sciences - Volume 2
Context information exchange and sharing in a peer-to-peer community: a video annotation scenario
Proceedings of the 27th ACM international conference on Design of communication
A social approach to authoring media annotations
Proceedings of the 10th ACM symposium on Document engineering
Didactic software for autistic children
ADNTIIC'10 Proceedings of the First international conference on Advances in new technologies, interactive interfaces, and communicability
Accessible education for autistic children: ABA-based didactic software
UAHCI'11 Proceedings of the 6th international conference on Universal access in human-computer interaction: applications and services - Volume Part IV
Proceedings of the Designing Interactive Systems Conference
Designing ABA-Based software for low-functioning autistic children
ADNTIIC'11 Proceedings of the Second international conference on Advances in New Technologies, Interactive Interfaces and Communicability
HCI studies assessing nonverbal individuals (especially those who do not communicate through traditional linguistic means: spoken, written, or sign language) are a daunting undertaking. Without directed tasks, interviews, questionnaires, or question-and-answer sessions, researchers must rely entirely on observation of behavior and on the categorization and quantification of the participant's actions. The problem is compounded by the lack of metrics for quantifying the behavior of nonverbal subjects in computer-based intervention contexts. We present a set of dependent variables called A3 (pronounced A-Cubed), or Annotation for ASD Analysis, to assess the behavior of this demographic of users, focusing specifically on engagement and vocalization. This paper demonstrates how theory from multiple disciplines can be brought together to create a set of dependent variables, and demonstrates these variables in an experimental context. Through an examination of the existing literature and a detailed analysis of the current state of computer vision and speech detection, we show how computer automation may be integrated with the A3 guidelines to reduce coding time and potentially increase accuracy. We conclude by presenting how and where these variables can be used in multiple research areas and with varied target populations.
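Annotation schemes such as A3 depend on human coders, so reliability is typically reported as chance-corrected agreement between two coders — most commonly Cohen's kappa, the standard treatment of which appears in Altman's Practical Statistics for Medical Research cited above. A minimal pure-Python sketch (the label names are illustrative, not taken from the A3 guideline):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' label sequences of equal length.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the agreement expected by chance from each coder's
    marginal label frequencies.
    """
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("label sequences must be non-empty and equal length")
    n = len(coder_a)
    # Proportion of segments on which the coders agree outright.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from the marginal distribution of each coder's labels.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(freq_a[lbl] * freq_b.get(lbl, 0) for lbl in freq_a) / (n * n)
    if p_expected == 1.0:       # both coders used a single identical label
        return 1.0
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical per-segment engagement codes from two coders:
a = ["engaged", "engaged", "idle", "idle"]
b = ["engaged", "idle", "idle", "idle"]
print(cohens_kappa(a, b))  # 0.5: 75% raw agreement, 50% expected by chance
```

Values above roughly 0.6 are conventionally read as substantial agreement, though the threshold appropriate for a given coding guideline is a judgment call.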