User-Adapted Image Descriptions from Annotated Knowledge Sources

  • Authors:
  • Maria Teresa Cassotta, Berardina De Carolis, Fiorella de Rosis, Chiara Andreoli, M. Luisa De Cicco

  • Venue:
  • AI*IA 01 Proceedings of the 7th Congress of the Italian Association for Artificial Intelligence on Advances in Artificial Intelligence
  • Year:
  • 2001

Abstract

We present the first results of research aimed at generating user-adapted image descriptions from annotated knowledge sources. The system employs a User Model and several knowledge sources to select the image attributes to include in the description and the level of detail at which to present them. Both 'individual' and 'comparative' descriptions may be generated by taking an appropriate 'reference' image, chosen according to the context and to an ontology of concepts in the domain to which the image refers; the comparison strategy is tailored to the user's background and to the interaction history. All data employed in generating these descriptions (the image, the discourse) are annotated with an XML-like language. Results obtained in the description of radiological images are presented, and the advantages of annotating knowledge sources are discussed.
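
As a rough illustration of the pipeline the abstract describes (an annotated image, a user model that filters attributes by level of detail, and a choice between individual and comparative descriptions), here is a minimal Python sketch. The XML tags, detail levels, and user-model fields below are hypothetical assumptions for illustration, not the annotation scheme or code from the paper.

```python
# Illustrative sketch (not the authors' implementation): select attributes from an
# XML-annotated image according to a simple user model, then produce either an
# individual or a comparative description. All names here are hypothetical.
import xml.etree.ElementTree as ET

ANNOTATED_IMAGE = """
<image id="rx-042" domain="radiology">
  <attribute name="region" detail="low">chest</attribute>
  <attribute name="finding" detail="medium">diffuse opacity in the left lower lobe</attribute>
  <attribute name="technique" detail="high">posteroanterior projection, standard exposure</attribute>
</image>
"""

# Hypothetical user model: the background drives the level of detail,
# the interaction history drives the choice of a 'reference' image.
user_model = {"background": "novice", "seen_images": ["rx-001"]}

DETAIL_FOR_BACKGROUND = {
    "novice": {"low", "medium"},
    "expert": {"low", "medium", "high"},
}


def select_attributes(xml_source: str, user: dict) -> list[str]:
    """Keep only attributes whose detail level suits the user's background."""
    allowed = DETAIL_FOR_BACKGROUND[user["background"]]
    root = ET.fromstring(xml_source)
    return [a.text for a in root.findall("attribute") if a.get("detail") in allowed]


def describe(xml_source: str, user: dict) -> str:
    """Generate a comparative description if a reference image is available,
    otherwise an individual one."""
    body = "; ".join(select_attributes(xml_source, user))
    if user["seen_images"]:
        reference = user["seen_images"][-1]
        return f"Compared with image {reference}, this image shows: {body}."
    return f"This image shows: {body}."


if __name__ == "__main__":
    print(describe(ANNOTATED_IMAGE, user_model))
```

In this toy version the annotation carries both the content and a detail tag per attribute, so adapting the description to the user reduces to filtering on that tag; the paper's XML-like annotation of the image and the discourse presumably supports a richer selection and comparison strategy than this filter.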