Live music-making using interactive systems is not fully amenable to traditional HCI evaluation metrics such as task-completion rates. In this paper we discuss quantitative and qualitative approaches that make it possible to evaluate the music-making interaction, accounting for aspects which cannot be directly measured or expressed numerically, yet which may be important to participants. We present case studies applying a qualitative method based on Discourse Analysis and a quantitative method based on the Turing Test. We compare and contrast these methods with each other and with other evaluation approaches used in the literature, and discuss the factors that determine which evaluation methods are appropriate in a given context.