An intelligent agent situated in an environment needs to know the preferred states it is expected to achieve or maintain so that it can work towards achieving or maintaining them. We refer to all these preferred states as "preferences". The preferences an agent has selected to bring about at a given time are called "goals". This selection of preferences as goals is generally referred to as "goal generation". The basic aim of goal generation is to provide the agent with a way of acquiring new goals. Although goal generation increases the agent's knowledge about its goals, it does not increase the agent's overall autonomy, because its goals are derived from its preferences (which are programmed). We argue that to achieve greater autonomy, an agent must be able to generate new preferences. In this paper we discuss how an agent can generate new preferences based on analogy between new objects and objects for which it has known preferences.
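The analogy-based preference generation described above can be illustrated with a minimal sketch. All names, feature vectors, and preference values below are hypothetical assumptions, not the paper's actual representation: it simply assigns a preference to a new object by averaging the preferences of its most similar known objects, using Euclidean distance over feature vectors as one possible similarity measure.

```python
from math import sqrt

# Hypothetical knowledge base: each known object maps to
# (feature vector, preference strength), where positive values
# denote desirable states and negative values undesirable ones.
known_preferences = {
    "apple": ([1.0, 0.9, 0.2], 0.8),
    "stone": ([0.0, 0.1, 0.9], -0.5),
}

def euclidean(a, b):
    """Distance between two feature vectors (one choice of similarity)."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def generate_preference(new_features, known, k=1):
    """Generate a preference for a new object by analogy:
    average the preferences of the k most similar known objects."""
    ranked = sorted(known.values(),
                    key=lambda entry: euclidean(new_features, entry[0]))
    nearest = ranked[:k]
    return sum(pref for _, pref in nearest) / len(nearest)

# A pear-like object resembles the apple more than the stone,
# so it inherits the apple's positive preference.
print(generate_preference([0.9, 0.8, 0.3], known_preferences))  # -> 0.8
```

A richer implementation could weight neighbours by inverse distance or use a heterogeneous distance function to handle mixed symbolic and numeric features, but the core idea is the same: preferences for unseen objects are derived from similarity to objects already valued.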