MAMI: multimodal annotations on a camera phone
Proceedings of the 10th international conference on Human computer interaction with mobile devices and services
Mobile phones are becoming multimedia devices: it is now common to see users capturing photos and videos on their phones on a regular basis. As the amount of digital multimedia content on the device grows, finding specific images becomes increasingly difficult. In this paper, we present a multimodal mobile image retrieval prototype named MAMI (Multimodal Automatic Mobile Indexing). It allows users to annotate, index, and search for digital photos on their phones via speech or image input. Speech annotations can be added at the time of capture or at a later time, and additional metadata such as location, user identification, and the date and time of capture is stored on the phone automatically. A key advantage of MAMI is that it is implemented as a stand-alone application that runs in real time on the phone; users can therefore search their personal photo archives without connectivity to a server. We compare multimodal and monomodal approaches to image retrieval and propose a novel algorithm, the Multimodal Redundancy Reduction (MR2) Algorithm. In addition to describing the proposed approaches in detail, we present experimental results comparing the retrieval accuracy of monomodal and multimodal algorithms.
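To make the indexing idea concrete, a per-photo record of the kind the abstract describes (a speech or image annotation plus automatically captured location, user identity, and capture time) might be modeled as follows. This is only an illustrative sketch; the field names and structure are hypothetical and are not taken from the MAMI implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass
class PhotoRecord:
    """Hypothetical per-photo index entry (illustrative, not MAMI's actual schema)."""
    photo_path: str                                   # photo file on the device
    user_id: str                                      # identifies the phone's owner
    captured_at: datetime                             # date and time of capture
    location: Optional[Tuple[float, float]] = None    # (latitude, longitude), if available
    speech_annotations: List[str] = field(default_factory=list)  # added at capture time or later

# Metadata is recorded automatically at capture time...
record = PhotoRecord(
    photo_path="/photos/img_001.jpg",
    user_id="user42",
    captured_at=datetime(2008, 9, 2, 14, 30),
    location=(41.39, 2.17),
)
# ...while speech annotations can also be attached afterwards.
record.speech_annotations.append("beach trip with friends")
```

Keeping all of this in a single on-device record is what lets search run locally, without a round trip to a server.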