Search and rescue operations often require complex coordination of resources, both human and robotic. This paper proposes a new framework that combines agent technology with a virtual environment to give a human controller an effective visualisation of the distribution of a collection of autonomous objects, in our case Unmanned Aerial Vehicles (UAVs), so that they can be managed to complete the task in the minimum possible time. We contend that doing this effectively requires two-way initiation of verbal conversations, but that the system need not completely understand the conversations involved. An example scenario illustrates how such a system would be used in practice: a single human communicates verbally with a swarm of semi-autonomous actors and envisages their activities using the visual cues provided within the virtual environment. An agent-based solution is proposed that meets these requirements and provides a command station capable of managing a search with a collection of UAVs effectively.
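The claim that the system need not completely understand the conversations can be illustrated with a minimal sketch: rather than fully parsing an operator's utterance, the command station only spots task keywords and broadcasts the matched task to the swarm. The class and method names (`UAV`, `Swarm`, `dispatch`) and the keyword table are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of keyword-spotting command dispatch to a UAV swarm.
# The utterance is never fully parsed; a single recognised keyword suffices.

class UAV:
    def __init__(self, uav_id):
        self.uav_id = uav_id
        self.task = "idle"

    def assign(self, task):
        self.task = task


class Swarm:
    """Routes spoken commands to all UAVs by matching keywords,
    reflecting the idea that complete understanding is unnecessary."""

    # Illustrative keyword-to-task mapping (assumed, not from the paper).
    KEYWORDS = {
        "search": "search_area",
        "return": "return_to_base",
        "hold": "hold_position",
    }

    def __init__(self, uavs):
        self.uavs = uavs

    def dispatch(self, utterance):
        # Scan the utterance for the first recognised keyword.
        for word in utterance.lower().split():
            if word in self.KEYWORDS:
                task = self.KEYWORDS[word]
                for uav in self.uavs:
                    uav.assign(task)
                return task
        # No keyword recognised: the system would ask the operator
        # to rephrase (two-way initiation of conversation).
        return None


swarm = Swarm([UAV(i) for i in range(3)])
print(swarm.dispatch("please search the northern grid"))  # search_area
```

In a full system the keyword spotter would sit behind a speech recogniser, and unmatched utterances would trigger a clarification request back to the operator rather than simply returning `None`.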