For unknown-but-bounded errors, interval estimates are often better than averaging

  • Authors: G. William Walster; Vladik Kreinovich

  • Venue: ACM SIGNUM Newsletter
  • Year: 1996

Abstract

For many measuring devices, the only information we have about them is their largest possible error ε > 0. In other words, we know that the error Δx = x̃ − x (the difference between the measured value x̃ and the actual value x) is random, and that this error can sometimes be as large as ε or −ε, but we have no information about the probabilities of different error values.

Methods of statistics enable us to generate a better estimate for x by making several measurements x̃1, ..., x̃n. For example, if the average error is 0 (E(Δx) = 0), then after n measurements, we can take the average x̃ = (x̃1 + ... + x̃n)/n and get an estimate whose standard deviation (and the corresponding confidence intervals) is √n times smaller.

Another estimate comes from interval analysis: for every measurement x̃i, we know that the actual value x belongs to the interval [x̃i − ε, x̃i + ε]. So, x belongs to the intersection of all these intervals. In one sense, this estimate is better than the one based on traditional engineering statistics (i.e., averaging): the interval estimate is guaranteed. In this paper, we show that in many cases this intersection is also better in the sense that it gives a more accurate estimate for x than averaging: namely, under certain reasonable conditions, the error of the interval estimate decreases faster (as 1/n) than the error of the average (which decreases only as 1/√n).

A similar result is proved for the multi-dimensional case, in which we measure several auxiliary quantities and use the measurement results to estimate the value of the desired quantity y.
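A minimal Python sketch of the two estimates compared above; it is illustrative only, not from the paper. It assumes errors are uniformly distributed on [−ε, ε] (one of the "reasonable conditions" under which the 1/n rate holds), and the function name and parameters are hypothetical. It uses the fact that the intersection of the intervals [x̃i − ε, x̃i + ε] is [max(x̃i) − ε, min(x̃i) + ε].

```python
import random

# Illustrative simulation (an assumption, not the paper's method):
# measurement errors are drawn uniformly from [-eps, eps], so that
# E(error) = 0 and |error| <= eps (the unknown-but-bounded condition).

def estimate_errors(x_true: float, eps: float, n: int, rng: random.Random):
    """Return (averaging error, interval-midpoint error) for n measurements."""
    xs = [x_true + rng.uniform(-eps, eps) for _ in range(n)]

    # Averaging estimate: its error shrinks like 1/sqrt(n).
    avg = sum(xs) / n

    # Interval estimate: the intersection of [xi - eps, xi + eps] over all i
    # is [max(xs) - eps, min(xs) + eps]; it is guaranteed to contain x_true,
    # and its midpoint's error shrinks like 1/n for uniform errors.
    lo, hi = max(xs) - eps, min(xs) + eps
    mid = (lo + hi) / 2

    return abs(avg - x_true), abs(mid - x_true)

rng = random.Random(42)
for n in (10, 100, 1000, 10000):
    avg_err, int_err = estimate_errors(x_true=5.0, eps=0.1, n=n, rng=rng)
    print(f"n={n:6d}  averaging error={avg_err:.2e}  interval error={int_err:.2e}")
```

Running the sketch, the averaging error drops roughly tenfold per hundredfold increase in n, while the interval-midpoint error drops roughly hundredfold, consistent with the 1/√n versus 1/n rates stated in the abstract.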