Computing highly accurate or exact P-values using importance sampling

  • Authors:
  • Chris J. Lloyd

  • Affiliations:
  • -

  • Venue:
  • Computational Statistics & Data Analysis
  • Year:
  • 2012

Quantified Score

Hi-index 0.03

Abstract

Especially for discrete data, standard first-order P-values can suffer from poor accuracy, even for quite large sample sizes. Moreover, different test statistics can give practically different results. There are several approaches to computing P-values which do not suffer these defects, such as parametric bootstrap P-values or the partially maximised P-values of Berger and Boos (1994). Both these methods require computing the exact tail probability of the approximate P-value as a function of the nuisance parameter(s), known as the significance profile. For most practical problems, this is not computationally feasible. I develop an importance sampling approach to this problem. A major advantage is that significance can be estimated simultaneously at a grid of nuisance parameter values, without the need to smooth away the simulation noise. The theory is fully developed for generalised linear models. The importance distribution is selected from the same generalised linear model family but with parameters biased towards an optimal point on the boundary of the tail-set. For logistic regression at least, standard guidelines for selecting the importance distribution can fail quite badly, and a conceptually simple alternative algorithm for selecting these parameters is developed. This may have application to importance sampling more generally.
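
To illustrate the core idea of estimating a significance profile at a grid of nuisance parameter values from a single importance sample, here is a minimal Python sketch for a two-sample binomial test of H0: p1 = p2 = p, with p as the nuisance parameter. The data, the statistic `t_stat`, the importance probability `q`, and the grid are all illustrative assumptions and not the paper's exact construction; the paper selects the importance distribution within the same generalised linear model family at an optimal point on the boundary of the tail-set.

```python
# Illustrative sketch (not the paper's exact algorithm): importance sampling of a
# significance profile g(p) = P_p(T >= t_obs) for a two-sample binomial test.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data: y1 successes out of n1, y2 out of n2.
y1_obs, n1 = 14, 20
y2_obs, n2 = 6, 20

def t_stat(y1, y2):
    """Score-type statistic for testing p1 = p2 (larger => stronger evidence)."""
    p_hat = (y1 + y2) / (n1 + n2)
    var = p_hat * (1 - p_hat) * (1 / n1 + 1 / n2)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (y1 / n1 - y2 / n2) / np.sqrt(var)
    return np.where(var > 0, z, 0.0)

t_obs = t_stat(np.array(y1_obs), np.array(y2_obs))

# Importance distribution: same binomial family, but with the success probability
# biased so that the tail-set {T >= t_obs} is hit more often. The paper derives an
# optimal boundary point; here q = 0.5 is just a plausible guess for illustration.
q = 0.5
B = 100_000
y1_sim = rng.binomial(n1, q, size=B)
y2_sim = rng.binomial(n2, q, size=B)
hit = t_stat(y1_sim, y2_sim) >= t_obs          # indicator of the tail-set

# Estimate the significance profile on a grid of the nuisance parameter p,
# reusing the SAME simulated sample via likelihood-ratio weights f_p / f_q.
p_grid = np.linspace(0.05, 0.95, 19)
s = y1_sim + y2_sim                             # total successes (sufficient for p)
n_tot = n1 + n2
profile = np.empty_like(p_grid)
for i, p in enumerate(p_grid):
    log_w = s * (np.log(p) - np.log(q)) + (n_tot - s) * (np.log1p(-p) - np.log1p(-q))
    profile[i] = np.mean(hit * np.exp(log_w))   # unbiased IS estimate of g(p)

print(np.column_stack([p_grid, profile]))
```

Because every grid value of p reuses the same simulated tail indicators, the estimated profile varies smoothly in p, which is why the approach does not require smoothing away simulation noise across the nuisance parameter grid.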