Computer-mediated deception is prevalent and may have serious consequences for individuals, organizations, and society. This article investigates several metrics as predictors of deception in synchronous chat-based environments, where participants must often spontaneously formulate deceptive responses. Based on cognitive load theory, we hypothesize that deception influences response time, word count, lexical diversity, and the number of times a chat message is edited. Using a custom chatbot to conduct interviews in an experiment, we collected 1,572 deceitful and 1,590 truthful chat-based responses. The results of the experiment confirm that deception is positively correlated with response time and the number of edits, and negatively correlated with word count. Contrary to our prediction, we found that deception is not significantly correlated with lexical diversity. Furthermore, the age of the participant moderates the influence of deception on response time. Our results have implications for understanding deceit in chat-based communication and for building deception-detection decision aids in chat-based systems.
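The four cue metrics named in the abstract could be extracted from logged chat responses along the following lines. This is a minimal illustrative sketch, not the authors' instrumentation: the `ChatResponse` fields are hypothetical, and lexical diversity is operationalized here as the type-token ratio, one common choice that the abstract does not itself specify.

```python
from dataclasses import dataclass

@dataclass
class ChatResponse:
    text: str             # final submitted message
    response_time: float  # seconds from prompt display to submission
    edit_count: int       # keystroke-level edits before sending

def cue_metrics(resp: ChatResponse) -> dict:
    """Compute the four candidate deception cues for one chat response."""
    tokens = resp.text.lower().split()
    word_count = len(tokens)
    # Type-token ratio: unique words / total words (0.0 for an empty message).
    lexical_diversity = len(set(tokens)) / word_count if word_count else 0.0
    return {
        "response_time": resp.response_time,
        "word_count": word_count,
        "lexical_diversity": lexical_diversity,
        "edit_count": resp.edit_count,
    }

example = ChatResponse("I was at home all evening yesterday evening", 4.2, 3)
print(cue_metrics(example))
```

A decision aid of the kind the abstract envisions would feed vectors like these (response time and edit count positively weighted, word count negatively, per the reported correlations) into a downstream classifier.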