This article addresses a classical question: can a machine use language meaningfully, and if so, how can this be achieved? The first part of the paper is mainly philosophical. Since meaning implies intentionality on the part of the language user, artificial systems that clearly lack intentionality will be 'meaningless' (pace e.g. Dennett). There is, however, no good reason to assume that intentionality is an exclusively biological property (pace e.g. Searle), and thus a robot with bodily structures, interaction patterns and development similar to those of human beings would constitute a system possibly capable of meaning; this conjecture is supported through a Wittgenstein-inspired thought experiment. The second part of the paper focuses on the empirical and constructive questions. Starting from the principle of epigenesis, which states that at every stage of development new structure arises on the basis of existing structure plus various sorts of interaction, a model of human cognitive and linguistic development is proposed according to which physical, social and linguistic interactions between the individual and the environment have their respective peaks in three consecutive stages of development: episodic, mimetic and symbolic. The transitions between these stages are qualitative, and bear a similarity to the stages in phylogenesis proposed by Donald (1991) and Deacon (1997). Following the principle of epigenetic development, robotogenesis could possibly recapitulate ontogenesis, leading to the emergence of intentionality, consciousness and meaning.