Speech has clear advantages as an input modality for smartphone applications: in scenarios where touch or keyboard entry is difficult, on increasingly miniaturized devices that cannot accommodate usable keyboards, and when only small amounts of text need to be entered, such as composing an SMS or answering a CAPTCHA challenge. In this paper, we propose two new ways to design CAPTCHAs in which the user speaks the answer instead of typing it, with the output stimuli presented either visually (SeeSay) or auditorily (HearSay). Our user study shows that the SeeSay CAPTCHA takes less time to solve and that users prefer it over current text-based CAPTCHAs.
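The interaction described above can be sketched as a simple challenge/response loop: a challenge string is generated, presented either on screen (SeeSay) or as audio (HearSay), and the transcript of the user's spoken answer is compared against it. This is a minimal hypothetical illustration, not the authors' implementation; the function names and the case-insensitive verification rule are assumptions.

```python
import random
import string

def make_challenge(length=6):
    """Generate a random alphanumeric challenge string (illustrative only)."""
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=length))

def present(challenge, mode):
    """SeeSay shows the challenge on screen; HearSay plays it as audio.

    In a real system the HearSay payload would be synthesized speech;
    here we just tag the stimulus type.
    """
    if mode == "SeeSay":
        return {"stimulus": "visual", "payload": challenge}
    if mode == "HearSay":
        return {"stimulus": "audio", "payload": challenge}
    raise ValueError(f"unknown mode: {mode}")

def verify(challenge, spoken_transcript):
    """Compare the ASR transcript of the user's spoken answer to the challenge.

    Assumed rule: trim whitespace and ignore case, since speech
    recognizers do not preserve the challenge's capitalization.
    """
    return spoken_transcript.strip().lower() == challenge.lower()
```

In practice the `spoken_transcript` would come from a speech recognizer, and a deployed system would also need tolerance for common recognition confusions, which this sketch omits.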