Automatic indexing and retrieval of digital data pose major challenges. The main problem arises from the ever-increasing volume of digital media and the lack of efficient methods for indexing and retrieving such data by semantic content rather than by keywords. To enable intelligent web interactions, or even web filtering, we must be able to interpret the information base in an intelligent manner. For a number of years, research in ontological engineering has aimed to use ontologies to attach such (meta-)knowledge to information.

In this paper, we describe the architecture of DREAM (Dynamic REtrieval Analysis and semantic metadata Management), a system designed to automatically and intelligently index large repositories of special-effects video clips by their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM demonstrator has been evaluated as deployed in the film post-production phase, supporting the storage, indexing, and retrieval of large sets of special-effects video clips as an exemplar application domain. This paper reports its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first, and possibly most challenging, proving ground: film production, where it is already in routine use within our test-bed partners' creative processes.
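To make the idea of ontology-driven indexing concrete, the sketch below shows one plausible shape for such a system: a toy concept ontology with is-a links, clips tagged with concepts, and retrieval that expands a query concept to everything it subsumes. The class names, concepts, and clip identifiers are illustrative assumptions, not the actual DREAM implementation.

```python
# Hypothetical sketch of ontology-based clip indexing and retrieval;
# it is not the DREAM system's actual code or data model.
from collections import defaultdict


class Ontology:
    """A minimal concept hierarchy built from is-a links."""

    def __init__(self):
        self.children = defaultdict(set)  # concept -> direct subconcepts

    def add_is_a(self, child, parent):
        self.children[parent].add(child)

    def subsumed(self, concept):
        """Return the concept plus every concept beneath it (transitive closure)."""
        seen, stack = set(), [concept]
        while stack:
            c = stack.pop()
            if c not in seen:
                seen.add(c)
                stack.extend(self.children[c])
        return seen


class ClipIndex:
    """Maps semantic concepts to the clips tagged with them."""

    def __init__(self, ontology):
        self.ontology = ontology
        self.by_concept = defaultdict(set)  # concept -> clip ids

    def tag(self, clip_id, concept):
        self.by_concept[concept].add(clip_id)

    def retrieve(self, query_concept):
        """Find clips tagged with the query concept or any of its subconcepts."""
        hits = set()
        for c in self.ontology.subsumed(query_concept):
            hits |= self.by_concept[c]
        return hits


# Illustrative data: querying a general concept finds clips tagged
# with its more specific subconcepts.
onto = Ontology()
onto.add_is_a("fire", "pyrotechnics")
onto.add_is_a("explosion", "pyrotechnics")

idx = ClipIndex(onto)
idx.tag("clip_001", "fire")
idx.tag("clip_002", "explosion")

print(sorted(idx.retrieve("pyrotechnics")))  # ['clip_001', 'clip_002']
```

The point of the sketch is the retrieval step: a keyword index would miss "fire" clips for a "pyrotechnics" query, whereas subsumption over the ontology recovers them.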