We have empirically discovered that the space of human actions has a linguistic structure. This sensorimotor space consists of the evolution over time of the joint angles of the human body in movement. The space of human activity has its own phonemes, morphemes, and sentences, which are formed by a syntax. This has implications for the grounding of concrete motion concepts. We present a Human Activity Language (HAL) for the symbolic, non-arbitrary representation of visual and motor information. In phonology, we define the basic atomic segments from which human activity is composed. We introduce the concept of a kinetological system and propose four basic properties for such a system: compactness, view-invariance, reproducibility, and reconstructivity. In morphology, we extend sequential language learning to incorporate associative learning through our parallel learning approach. Parallel learning solves the overgeneralization problem and is effective in identifying the kinetemes and active joints of a particular action. In syntax, we propose four lexical categories for the Human Activity Language (noun, verb, adjective, and adverb) and combine them into sentences through a syntax specific to human movement.
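To make the phonology idea concrete, the sketch below shows one simple way a joint-angle trajectory could be carved into atomic segments: split the time series wherever the angular velocity changes sign. This is an illustrative assumption, not the authors' implementation; the segment labels (`+`, `-`, `0`) and the example trajectory are hypothetical.

```python
# Hedged sketch: segment a joint-angle time series into runs of constant
# velocity sign. Labels and thresholds are illustrative assumptions only.
import numpy as np

def segment_joint_angles(theta):
    """Split a 1-D joint-angle series into (start, end, label) triples,
    one per maximal run of constant velocity sign."""
    vel = np.diff(theta)          # per-frame angular velocity
    signs = np.sign(vel)
    label = lambda s: "+" if s > 0 else "-" if s < 0 else "0"
    segments = []
    start = 0
    for i in range(1, len(signs)):
        if signs[i] != signs[start]:
            segments.append((start, i, label(signs[start])))
            start = i
    segments.append((start, len(signs), label(signs[start])))
    return segments

# Example: a flexion followed by an extension; the momentary pause at the
# turning point appears as a zero-velocity segment.
theta = np.concatenate([np.linspace(0.0, 1.0, 5), np.linspace(1.0, 0.2, 5)])
print(segment_joint_angles(theta))
```

Each labeled segment plays the role of an atomic symbol; concatenating the labels across joints turns a movement into strings over a small alphabet, which is the kind of input a sequential grammar learner can then operate on.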