Measuring performance of a predictive keyboard operated by humming
ICCHP'12 Proceedings of the 13th international conference on Computers Helping People with Special Needs - Volume Part II
This paper presents Humsher -- a novel text entry method operated by non-verbal vocal input, specifically the sound of humming. The method uses an adaptive language model for text prediction. Four user interfaces are presented and compared. Three of them use a dynamic layout in which character n-grams are offered to the user according to their probability in the given context. The fourth interface uses a static layout, in which the characters are displayed alphabetically and a modified binary search algorithm enables efficient character selection. All interfaces were compared and evaluated in a user study involving 17 able-bodied subjects. Case studies with four disabled people were also carried out to validate the method's potential for motor-impaired users. The average speed of the fastest interface was 14 characters per minute, while the fastest user reached 30 characters per minute. Disabled participants were able to type at 14 -- 22 characters per minute after seven sessions.
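The static-layout interface can be illustrated with a minimal sketch. The abstract does not specify the exact selection protocol, so the following assumes two distinguishable hum signals that answer "left half" or "right half" at each step of a plain (unmodified) binary search over an alphabetical layout; the alphabet with a `_` space character is likewise an assumption for illustration.

```python
# Sketch: selecting a character from an alphabetical static layout
# by repeated halving, driven by a binary "left/right" hum signal.
ALPHABET = list("abcdefghijklmnopqrstuvwxyz_")  # '_' = space (assumption)

def select_char(target, alphabet=ALPHABET):
    """Simulate selecting `target`: at each step the user hums one of
    two signals to indicate whether the target lies in the left or
    right half of the remaining range. Returns (character, steps)."""
    lo, hi = 0, len(alphabet)
    steps = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # e.g. a short hum = "left half", a long hum = "right half"
        if alphabet.index(target) < mid:
            hi = mid
        else:
            lo = mid
        steps += 1
    return alphabet[lo], steps
```

With a 27-symbol layout, any character is reachable in at most ceil(log2(27)) = 5 hums; the "modified" search described in the paper presumably improves on this by exploiting character frequencies, analogous to how the dynamic layouts rank character n-grams by their language-model probability.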