The objective of this paper is to investigate the effects of communication style and culture on people's acceptance of recommendations from robots, with the goal of informing culturally adaptive robot design. The independent variables were communication style (implicit or explicit), the participants' cultural background (Chinese or German), and the robot's language (the participants' native language or English). A laboratory experiment was conducted with 16 Chinese and 16 German college students. Descriptive statistics and t-tests were used to analyze the biographical information, a reliability test was applied to the questionnaire, and MANOVA together with non-parametric tests was used to test the hypotheses. The results showed that the Chinese participants preferred an implicit communication style more than the German participants did: the Chinese participants evaluated the robot as more likable, trustworthy, and credible, and were more likely to accept its implicit recommendations, whereas the German participants evaluated the robot as less likable, trustworthy, and credible, and were less inclined to accept implicit recommendations.
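The analysis pipeline described above can be sketched in code. The following is a minimal illustration, not the authors' actual analysis: it computes Cronbach's alpha (a standard questionnaire reliability test) and a Welch two-sample t statistic (a common choice for comparing two groups of participants) on entirely hypothetical Likert-style data; the sample sizes mirror the 16-per-group design, but the data, variable names, and choice of Welch's variant are assumptions.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)   # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def welch_t(a, b) -> float:
    """Welch's two-sample t statistic (does not assume equal variances)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    return float((a.mean() - b.mean()) / np.sqrt(se2))

# Hypothetical data: 16 respondents answering a 4-item likability scale
# on a 1-5 Likert range, and two groups of 16 trust ratings.
rng = np.random.default_rng(0)
likability_items = rng.integers(1, 6, size=(16, 4))
chinese_trust = rng.normal(4.0, 0.6, size=16)
german_trust = rng.normal(3.2, 0.6, size=16)

print("alpha:", round(cronbach_alpha(likability_items), 3))
print("t:", round(welch_t(chinese_trust, german_trust), 2))
```

In practice one would pair the t statistic with degrees of freedom and a p-value (e.g. via `scipy.stats.ttest_ind(..., equal_var=False)`), and follow the paper in using MANOVA when several dependent measures (likability, trust, credibility) are tested jointly.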