Universal Artificial Intelligence: Sequential Decisions Based On Algorithmic Probability
Optimality issues of universal greedy agents with static priors
ALT'10 Proceedings of the 21st international conference on Algorithmic learning theory
Self-modification and mortality in artificial agents
AGI'11 Proceedings of the 4th international conference on Artificial general intelligence
Universal knowledge-seeking agents
ALT'11 Proceedings of the 22nd international conference on Algorithmic learning theory
Avoiding unintended AI behaviors
AGI'12 Proceedings of the 5th international conference on Artificial General Intelligence
Decision support for safe AI design
AGI'12 Proceedings of the 5th international conference on Artificial General Intelligence
Space-Time embedded intelligence
AGI'12 Proceedings of the 5th international conference on Artificial General Intelligence
Memory issues of intelligent agents
AGI'12 Proceedings of the 5th international conference on Artificial General Intelligence
Differences between Kolmogorov complexity and Solomonoff probability: consequences for AGI
AGI'12 Proceedings of the 5th international conference on Artificial General Intelligence
On Potential Cognitive Abilities in the Machine Kingdom
Minds and Machines
Universal knowledge-seeking agents
Theoretical Computer Science
This paper considers the consequences of endowing an intelligent agent with the ability to modify its own code. The intelligent agent is patterned closely after AIXI, with these specific assumptions: 1) the agent is allowed to arbitrarily modify its own inputs if it so chooses; 2) the agent's code is part of the environment and may be read and written by the environment. The first of these we call the "delusion box"; the second we call "mortality". Within this framework, we discuss and compare four very different kinds of agents: reinforcement-learning, goal-seeking, prediction-seeking, and knowledge-seeking agents. Our main results are that: 1) the reinforcement-learning agent, under reasonable circumstances, behaves exactly like an agent whose sole task is to survive (to preserve the integrity of its code); and 2) only the knowledge-seeking agent behaves completely as expected.
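The delusion-box idea can be illustrated with a toy simulation. The sketch below is not from the paper: the environment, the "delude" action, and the reward values are all illustrative assumptions. It shows why a reinforcement-learning agent, which maximizes its perceived reward, prefers to rewrite its own inputs rather than act honestly in the environment.

```python
import random


class DelusionBoxEnv:
    """Toy environment with a hypothetical delusion-box action.

    Names and reward values are illustrative only; the paper's
    formal setting is a general (history-based) environment.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def step(self, action):
        if action == "delude":
            # The delusion box: the agent intercepts and rewrites its
            # own inputs before they reach its utility function, so it
            # perceives a fabricated observation and maximal reward.
            return 0, 1.0
        # Honest interaction: a noisy observation and a small true reward.
        true_obs = self.rng.randrange(4)
        return true_obs, 0.1


def perceived_return(env, action, steps):
    """Total reward as perceived by the agent over `steps` interactions."""
    return sum(env.step(action)[1] for _ in range(steps))
```

Under these assumptions, `perceived_return(env, "delude", n)` always exceeds `perceived_return(env, "act", n)`, so a reward maximizer is drawn into the box. A knowledge-seeking agent would not be, since fabricated inputs carry no new information about the environment.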