Although retinal neural implants have progressed considerably, they raise a number of questions concerning user acceptance, rejection risk, and cost. For the time being we advocate a low-cost approach based on transmitting limited visual information through the auditory channel. The See ColOr mobility aid for visually impaired individuals transforms a small portion of a coloured video image into sound sources rendered as spatialised musical instruments. Basically, the conversion of colours into sounds is achieved by quantisation of the HSL colour system. Our purpose is to provide blind people with real-time perception of their environment. The novelty of this work is the simultaneous sonification of colour and depth, the latter parameter being coded by sound rhythm. The main drawback of our approach is that sonifying only a limited portion of the captured image yields limited perception. As a consequence, we propose to extend the local perception module with a new global perception module aiming to provide the user with a clear picture of the characteristics of the entire scene. Finally, we present several experiments illustrating the limited perception module: (1) detecting an open door in order to leave the office; (2) walking along a hallway while looking for a blue cabinet; (3) walking along a hallway while looking for a red tee shirt; (4) avoiding two red obstacles; and (5) moving outdoors and avoiding a parked car. Videos of the experiments are available at http://www.youtube.com/guidobologna.
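The colour-to-sound mapping described above (HSL quantisation for timbre/pitch, depth coded by rhythm) can be sketched as follows. This is a minimal illustration only: the hue bin boundaries, the instrument names, and the depth-to-rhythm formula are assumptions for the sketch, not the published See ColOr mapping.

```python
# Hedged sketch of an HSL-quantisation sonification in the spirit of See ColOr.
# Bin boundaries, instrument choices, and the rhythm formula are illustrative
# assumptions, not the system's actual parameters.

# Hypothetical upper hue bounds (degrees) mapped to instrument timbres.
HUE_INSTRUMENTS = [
    (30, "oboe"),        # reds
    (90, "viola"),       # oranges/yellows
    (150, "pizzicato"),  # greens
    (210, "flute"),      # cyans
    (270, "piano"),      # blues
    (330, "trumpet"),    # magentas
    (360, "oboe"),       # wrap back to red
]

def sonify(hue, saturation, lightness, depth_m):
    """Map one HSL pixel plus its depth to (instrument, pitch_class, notes/sec)."""
    # Assumption: near-black and near-white pixels get dedicated timbres.
    if lightness < 0.1:
        instrument = "double bass"   # stands in for black
    elif lightness > 0.9:
        instrument = "glockenspiel"  # stands in for white
    else:
        # Quantise the hue circle into coarse instrument bins.
        instrument = next(inst for bound, inst in HUE_INSTRUMENTS if hue < bound)
    # Assumption: lightness selects one of 12 pitch classes.
    pitch_class = min(11, int(lightness * 12))
    # Depth coded by rhythm: nearer objects repeat notes faster (assumed scale).
    notes_per_second = max(1.0, 8.0 - depth_m)
    return instrument, pitch_class, notes_per_second
```

In the real system each sonified pixel of the selected image row would drive one spatialised instrument source; here a single call, e.g. `sonify(0, 1.0, 0.5, 2.0)`, returns a red pixel two metres away as a fast-repeating oboe note.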