The See ColOr interface transforms a small portion of a colored video image into sound sources rendered as spatialized musical instruments. The conversion of colors into sounds is achieved by quantizing the HSL color space. Our purpose is to give visually impaired individuals the ability to perceive their environment in real time. In this work we present the design principles of the system and several experiments carried out by blindfolded participants using See ColOr prototypes, first on static pictures displayed on a tablet and then on simple video images. The goal of the first experiment was to identify the colors of the main features of static pictures and then to interpret the image scenes. Although learning all the instrument sounds in a single training session proved too difficult, participants found that colors helped to narrow down the possible interpretations of an image. The static-picture experiments suggested that the slowdown factor incurred by using the auditory channel instead of the visual channel is of the same order of magnitude as the ratio of visual channel capacity to auditory channel capacity. Afterwards, two experiments based on a head-mounted camera were performed. The first, on object manipulation, consisted of pairing colored socks, while the second concerned outdoor navigation with the goal of following a colored serpentine painted on the ground. The "socks" experiment demonstrated that blindfolded individuals were able to match pairs of colored socks accurately. The same participants, joined by a blind individual, successfully followed a red serpentine painted on the ground for more than 80 m. According to the task durations, the slowdown factor for the "socks" and "serpentine" experiments is of order one. From a cognitive perspective this is consistent with the fact that these two tasks are simpler than the interpretation of image scenes.
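The color-to-sound conversion described above quantizes the HSL color space into a small set of instrument sounds. A minimal sketch of such a quantizer follows; the hue bin boundaries, the instrument assignments, the saturation threshold, and the lightness-to-pitch mapping are illustrative assumptions, not the published See ColOr tables.

```python
import colorsys

# Hypothetical hue bins (upper bound of each bin on the [0, 1] hue wheel)
# mapped to instrument timbres. The actual See ColOr quantization is not
# specified in the abstract; these values are placeholders for illustration.
HUE_INSTRUMENTS = [
    (1 / 6, "oboe"),       # reds
    (2 / 6, "viola"),      # yellows
    (3 / 6, "flute"),      # greens
    (4 / 6, "trumpet"),    # cyans
    (5 / 6, "piano"),      # blues
    (1.0,   "saxophone"),  # magentas, wrapping back toward red
]

def color_to_sound(r: int, g: int, b: int) -> tuple[str, str]:
    """Quantize an RGB pixel via HSL into an (instrument, pitch) pair."""
    # colorsys works on [0, 1] floats and returns (hue, lightness, saturation).
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)

    if s < 0.2:
        # Low saturation: treat the pixel as gray and use a single
        # default timbre (an assumed convention, not the paper's).
        instrument = "double bass"
    else:
        instrument = next(name for bound, name in HUE_INSTRUMENTS if h <= bound)

    # Quantize lightness into four coarse pitch levels (also illustrative).
    pitch = ("low", "mid-low", "mid-high", "high")[min(3, int(l * 4))]
    return instrument, pitch
```

A pure red pixel, for instance, falls in the first hue bin at medium lightness: `color_to_sound(255, 0, 0)` returns `("oboe", "mid-high")` under these placeholder tables. In the real interface each sonified pixel would additionally be spatialized so its apparent direction matches its position in the captured image row.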