Effects of Automated and Participative Decision Support in Computer-Aided Credibility Assessment
Journal of Management Information Systems
Decision aids have long been an important source of help in making structured decisions. Decision support for more complex problems, however, has been much more difficult to create. Decision aids are now being developed for very complex problems, and their effects among low- and high-task-knowledge individuals are still being explored. One such task is credibility assessment, in which message recipients or observers must determine a message's veracity and trustworthiness. Credibility assessment is made difficult by a lack of constraints, hidden or incomplete information, and mistaken beliefs of the assessor. The theory of technology dominance (TTD) proposes that technology is most effectively applied in intelligent decision aids when an experienced user is paired with a sophisticated decision aid. This work examines TTD in the complex task of credibility assessment. To assist in credibility assessment, we created a decision aid that augments the capabilities of the user, whether novice or professional. Using hypotheses based on TTD, we tested the decision aid on recorded interviews involving high-stakes deception, with both student (novice) and law enforcement (professional) users. Both professionals and novices improved their assessment accuracy by using the decision aid. Consistent with TTD, novices relied on the decision aid more than professionals did. However, contrary to TTD, there was no significant difference in how novices and professionals interacted with the system, and the decision aid was not more beneficial to professionals. Novices and professionals frequently discounted the aid's recommendations, and in many cases professionals did not view explanations when the decision aid contradicted their assessments. Potential reasons for these findings, as well as limitations and future research opportunities, are discussed.