In artificial intelligence (AI), a number of criticisms have been raised against the use of probability for dealing with uncertainty. All of these criticisms, except the one we call here the non-adequacy claim, have eventually been refuted. The non-adequacy claim is an exception because, unlike the other criticisms, it is purely philosophical and, possibly for this reason, was never discussed in the technical literature. The lack of clarity and understanding surrounding this claim had a major impact on AI: leaning largely on it, some scientists developed an alternative research direction, and as a result the AI community split into two schools, a probabilistic one and an alternative one. In this article, we argue that the non-adequacy claim has a strongly metaphysical character and, as such, should not be accepted as a conclusive argument against the adequacy of probability.