Deception detection for the tangled web
ACM SIGCAS Computers and Society
In intelligence, law enforcement, and, increasingly, organizational settings, there is interest in detecting deception, for example in intercepted phone calls, emails, and web sites. Humans are not naturally good at detecting deception, but recent work has shown that deception is in fact readily detectable, using markers that humans do not notice but that software can readily compute. Pennebaker's model suggests that deceptive communication is characterized by changes in the frequency of four kinds of words: first-person pronouns, exception words, negative-emotion words, and action words. We investigate what can be learned about the deception model by applying it to a large corpus of Enron emails. We show that each of the four kinds of words in the Pennebaker model acts as a separate latent factor for deception, rather than their effects being mixed together.
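The marker computation the abstract alludes to can be sketched as word-category frequency counting. A minimal illustration is below; note that the word lists are small hypothetical samples chosen for demonstration, not the actual Pennebaker/LIWC dictionaries, which are far larger and more carefully constructed.

```python
import re
from collections import Counter

# Illustrative word lists only -- NOT the real Pennebaker/LIWC
# dictionaries; each real category contains many more entries.
MARKERS = {
    "first_person": {"i", "me", "my", "mine", "myself"},
    "exception":    {"but", "except", "without", "however", "unless"},
    "neg_emotion":  {"hate", "worthless", "afraid", "angry", "sad"},
    "action":       {"go", "make", "take", "run", "do"},
}

def marker_frequencies(text):
    """Return the relative frequency of each marker category in `text`.

    The Pennebaker model scores deception via shifts in these
    frequencies across the four word categories.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return {name: 0.0 for name in MARKERS}
    counts = Counter(tokens)
    return {
        name: sum(counts[w] for w in words) / len(tokens)
        for name, words in MARKERS.items()
    }

email = "I hate this deal, but I will go make it work without complaint."
freqs = marker_frequencies(email)
```

A real pipeline would compute these frequencies for every message in the corpus and look for messages whose profile across the four categories deviates from the population baseline.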