Attacks on computer systems are rapidly becoming more numerous and more sophisticated, and current preventive techniques do not seem able to keep pace. Many successful attacks can be attributed to user error: for example, while focused on other tasks, users may succumb to 'social engineering' attacks such as phishing or Trojan horses. Warnings about these attacks are often vaguely worded and delivered long before the dangers are realized, making them easy to ignore. We hypothesize that users are more likely to be persuaded by messages that (1) leverage mental models to describe the dangers, (2) describe the particular vulnerabilities the user may be exposed to, and (3) are delivered close in time before the danger may actually be realized. We discuss the design and initial implementation of a system that achieves this. The system first shows a video about a potential danger, then creates warnings tailored to the user's environment and delivered at the time they may be most useful, displaying a still frame or snippet from the video to remind the user of the potential danger. It uses templates of user activities as input to a Markov logic network to recognize potentially risky behaviors. This approach can identify likely next steps, which can be used to predict immediate danger and customize warnings.
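To make the idea concrete, the flavor of the Markov logic approach can be sketched as a set of weighted rules over observed user activities, where the probability of a risk hypothesis grows with the total weight of the rules that support it. This is only a toy illustration: the predicate names, rules, and weights below are invented for the example, and a real Markov logic network would perform full grounding and probabilistic inference rather than the naive forward chaining shown here.

```python
import math

# Illustrative weighted rules: (weight, premises, conclusion).
# All predicate names and weights are hypothetical, not from the paper.
RULES = [
    (1.5, {"clicked_email_link"}, "visiting_untrusted_site"),
    (2.0, {"visiting_untrusted_site", "page_has_login_form"}, "phishing_risk"),
    (0.8, {"url_mismatches_anchor_text"}, "phishing_risk"),
]

def risk_probability(observed, target="phishing_risk"):
    """Sum the weights of fired rules supporting `target`, then squash
    the log-linear score into a probability with the logistic function,
    roughly mirroring how MLN weights combine."""
    facts = set(observed)
    fired = set()
    score = 0.0
    changed = True
    while changed:  # forward-chain until no new rule fires
        changed = False
        for i, (weight, premises, conclusion) in enumerate(RULES):
            if i not in fired and premises <= facts:
                fired.add(i)
                facts.add(conclusion)
                if conclusion == target:
                    score += weight
                changed = True
    return 1.0 / (1.0 + math.exp(-score))

# A user clicks a link in an email and lands on a page with a login form:
events = ["clicked_email_link", "page_has_login_form"]
print(round(risk_probability(events), 3))  # -> 0.881
```

In this sketch, the chained rules play the role of the activity templates: once the predicted next step ("phishing_risk") becomes sufficiently probable, the system would trigger the tailored, just-in-time warning described above.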