An empirically terminological point of view on agentism in the artificial

  • Authors: C. T. A. Schmidt
  • Affiliation: Le Mans University, LIUM, Laval, France
  • Venue: MICAI'07: Proceedings of the 6th Mexican International Conference on Artificial Intelligence, Advances in Artificial Intelligence
  • Year: 2007

Abstract

Many endeavours in Artificial Intelligence work towards recreating the dialogical capabilities of humans in machines, robots and "creatures", in short, information processing systems. This original goal of AI has been left by the wayside by many in order to produce Artificial Life entities in a futuristic vision of 'life-as-it-could-be'; scientists who have not 'abandoned ship' confirm the difficulty of reaching this summit of AI research. As a result, the importance of language generation and understanding components has been reduced. Are the pragmatics of language use too difficult to deal with? According to Shapiro and Rapaport (1991), "the quintessential natural-language competence task is interactive dialogue". Man-made entities are not yet functional in dialogue with humans. The benefits of re-establishing a "proper" relational stance in the Artificial Sciences are twofold: (a) to better understand the communication difficulties encountered, and (b) to bring enhanced meaning to the goals of building artificial agents. Point (a) has consequences for (b) in that it will change the very goals of scientists working on social and conversational agents. In the literature, the notion of agent proves unsuitable for the specification of any higher-order communication tasks; a Tower of Babel problem exists with regard to the very definition of "agent" between scientists and philosophers. In the present article, I clear away the nebulosity currently surrounding the terminology of agency, with the goal of improving understanding when speaking about entities that can mean.