Vision is one of the main sources through which people obtain information from the world, but visually impaired people are partially or completely deprived of this type of information. With the help of computer technologies, people with visual impairment can independently access digital textual information through text-to-speech and text-to-Braille software. However, a major barrier remains for people who are blind: accessing graphical information independently, in real time, without the help of sighted people. In this paper, we propose a novel multilevel and multimodal approach to this challenging and practical problem. The key idea is semantic-aware visual-to-tactile conversion through semantic image categorization and segmentation, and semantic-driven image simplification. We built an end-to-end prototype system based on this approach. We present the details of the approach and the system, report sample experimental results on realistic data, and compare our approach with current typical practice.
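To make the pipeline concrete, here is a minimal, hypothetical sketch of the multilevel conversion the abstract describes: categorize an image, segment it, then simplify the segmentation to a raised-dot tactile map. This is not the authors' implementation; the function names, the global-threshold segmentation, the mean-intensity categorizer, and the toy 4x4 grayscale grid are all illustrative stand-ins for the semantic-aware components of the real system.

```python
# Hypothetical sketch of a visual-to-tactile pipeline:
# categorize -> segment -> simplify -> tactile dot map.
# Images are plain 2D lists of grayscale values (0-255).

def categorize(image):
    """Assign a coarse category (stub: decide by mean intensity)."""
    pixels = [p for row in image for p in row]
    return "diagram" if sum(pixels) / len(pixels) > 128 else "photo"

def segment(image, threshold=128):
    """Foreground/background mask via global thresholding (stand-in
    for semantic segmentation)."""
    return [[1 if p >= threshold else 0 for p in row] for row in image]

def simplify(mask):
    """Keep only boundary pixels of the foreground region -- a crude
    stand-in for semantic-driven simplification before tactile output."""
    h, w = len(mask), len(mask[0])

    def is_edge(r, c):
        if mask[r][c] == 0:
            return False
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            # A foreground pixel touching background or the image
            # border is part of the outline.
            if not (0 <= rr < h and 0 <= cc < w) or mask[rr][cc] == 0:
                return True
        return False

    return [[1 if is_edge(r, c) else 0 for c in range(w)] for r in range(h)]

def to_tactile(image):
    """End-to-end conversion: returns (category, raised-dot map)."""
    return categorize(image), simplify(segment(image))

# Toy image: a bright ring around a dark 2x2 center.
image = [
    [200, 200, 200, 200],
    [200,  50,  50, 200],
    [200,  50,  50, 200],
    [200, 200, 200, 200],
]
category, dots = to_tactile(image)
```

Running this on the toy image classifies it as a "diagram" and keeps only the outline of the bright ring, illustrating how simplification discards interior detail that would clutter a tactile rendering.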