Previous research has shown that audio communication is particularly difficult for non-native speakers (NNS) during multilingual collaborations. When audio signals become distorted, NNS are doubly burdened: they must communicate with imperfect language skills while also compensating for the degraded signal. Under such faulty audio conditions, NNS need extra time and effort to understand the conversation. To give NNS more time to process conversations, we tested the insertion of silent gaps (from 0.2 to 0.4 seconds) between conversational turns. First, gaps were inserted into a previously recorded conversation, which significantly improved NNS' comprehension of the conversation. Second, gaps were inserted during a real-time audio conference by adding artificial delay between native speakers. The results show that the added delays had both beneficial and detrimental effects for native and non-native speakers alike. The findings have implications for how audio conferencing can be improved for NNS.
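The first manipulation described above — padding a recorded conversation with short silent gaps between turns — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the turn boundaries are assumed to be known in advance, and the 16 kHz sample rate, gap length, and dummy sample values are illustrative assumptions.

```python
SAMPLE_RATE = 16_000  # samples per second (assumed, not from the paper)

def insert_gaps(turns, gap_seconds=0.3):
    """Concatenate per-turn audio sample lists, inserting a silent gap
    between consecutive turns to give listeners extra processing time."""
    gap = [0] * int(gap_seconds * SAMPLE_RATE)  # silence as zero samples
    out = []
    for i, turn in enumerate(turns):
        if i > 0:
            out.extend(gap)  # gap only *between* turns, not before the first
        out.extend(turn)
    return out

# Two 0.5-second dummy turns with a 0.3-second gap:
# total length = 8000 + 4800 + 8000 = 20800 samples (1.3 s)
turns = [[1] * 8000, [2] * 8000]
padded = insert_gaps(turns, gap_seconds=0.3)
print(len(padded))  # 20800
```

In the real-time condition of the study, the analogous effect was achieved by adding artificial transmission delay between speakers rather than by editing a recording.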