Platform for flexible integration of multimodal technologies into web application domain
E-ACTIVITIES'09/ISP'09 Proceedings of the 8th WSEAS International Conference on E-Activities and information security and privacy
Web applications are a widespread and widely used means of presenting information. Their underlying architecture and standards, in many cases, limit their presentation and control capabilities to showing pre-recorded audio/video sequences. Highly dynamic text content, for instance, can only be displayed in its native form (as part of the HTML content). This paper provides concepts and answers that enable the transformation of dynamic web-based content into multimodal sequences generated by different multimodal services. By encapsulating the content in a multimodal shell, any text-based data can be transformed, dynamically and at interactive speeds, into multimodal visually synthesized speech. Techniques for integrating multimodal input (e.g. vision and speech recognition) are also included. The concept of multimodality relies on mashup approaches rather than traditional integration; it can therefore extend any type of web-based solution transparently, with no major changes to either the multimodal services or the enhanced web application.
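As a hedged illustration of the mashup-style encapsulation the abstract describes, the sketch below (not from the paper; every name, endpoint, and payload shape is a hypothetical assumption) shows how a thin client-side shim might extract dynamic text from an HTML fragment and package it for an external speech-synthesis service, leaving the host web application itself unchanged:

```javascript
// Hypothetical mashup shim: wraps a web page's dynamic text content
// for hand-off to an external multimodal (speech-synthesis) service.
// Endpoint and payload structure are illustrative assumptions only.

// Strip markup so only the readable text is sent to the speech service.
function extractText(htmlFragment) {
  return htmlFragment
    .replace(/<[^>]*>/g, '') // drop tags
    .replace(/\s+/g, ' ')    // collapse whitespace
    .trim();
}

// Build the request a speech-synthesis service might accept.
function buildSpeechRequest(htmlFragment, voice = 'default') {
  return {
    endpoint: 'https://example.org/multimodal/tts', // hypothetical service
    payload: { text: extractText(htmlFragment), voice }
  };
}

const req = buildSpeechRequest('<p>Breaking <b>news</b>: markets rally.</p>');
console.log(req.payload.text); // "Breaking news: markets rally."
```

Because the shim only consumes the rendered HTML and talks to the service over HTTP, neither the web application nor the multimodal service needs to know about the other, which is the transparency property the mashup approach is meant to provide.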