Jaynes's principle of maximum entropy and Kullback's principle of minimum cross-entropy (minimum directed divergence) are shown to be uniquely correct methods for inductive inference when new information is given in the form of expected values. Previous justifications use intuitive arguments and rely on the properties of entropy and cross-entropy as information measures. The approach here assumes that reasonable methods of inductive inference should lead to consistent results when there are different ways of taking the same information into account (for example, in different coordinate systems). This requirement is formalized as four consistency axioms. These are stated in terms of an abstract information operator and make no reference to information measures. It is proved that the principle of maximum entropy is correct in the following sense: maximizing any function but entropy will lead to inconsistency unless that function and entropy have identical maxima. In other words, given information in the form of constraints on expected values, there is only one distribution satisfying the constraints that can be chosen by a procedure that satisfies the consistency axioms; this unique distribution can be obtained by maximizing entropy. This result is established both directly and as a special case (uniform priors) of an analogous result for the principle of minimum cross-entropy. Results are obtained both for continuous probability densities and for discrete distributions.
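For illustration only (this sketch is not part of the abstract): in the discrete case, with a distribution p over points x_i and constraints on the expected values of functions f_k, the constrained maximization the abstract refers to can be written as

\[
  \max_{p}\; -\sum_i p_i \log p_i
  \quad\text{subject to}\quad
  \sum_i p_i = 1, \qquad \sum_i p_i\, f_k(x_i) = F_k, \; k = 1,\dots,m,
\]

whose unique solution has the exponential form

\[
  p_i \;=\; \frac{1}{Z(\lambda)} \exp\!\Big(-\sum_{k=1}^{m} \lambda_k f_k(x_i)\Big),
  \qquad
  Z(\lambda) \;=\; \sum_i \exp\!\Big(-\sum_{k=1}^{m} \lambda_k f_k(x_i)\Big),
\]

with the Lagrange multipliers \lambda_k chosen so that the expected-value constraints hold. In the minimum cross-entropy case, one instead minimizes \sum_i p_i \log (p_i / q_i) relative to a prior q subject to the same constraints, giving p_i \propto q_i \exp(-\sum_k \lambda_k f_k(x_i)); the maximum entropy result is recovered when q is uniform, matching the special case noted in the abstract.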