Random number generation and quasi-Monte Carlo methods
We consider a Lévy process monitored at s (fixed) observation times. The goal is to estimate the expected value of some function of these s observations by (randomized) quasi-Monte Carlo. For the case where the process is a Brownian motion, clever techniques such as Brownian bridge sampling and PCA sampling have been proposed to reduce the effective dimension of the problem. The PCA method uses an eigen-decomposition of the covariance matrix of the vector of observations so that a larger fraction of the variance depends on the first few (quasi)random numbers that are generated. We show how this method can be applied to other Lévy processes, and we examine its effectiveness in improving the quasi-Monte Carlo efficiency on some examples. The basic idea is to simulate a Brownian motion at s observation points using PCA, transform its increments into independent uniforms over (0, 1), then transform these uniforms again by applying the inverse distribution function of the increments of the Lévy process. This PCA sampling technique is quite effective in improving the quasi-Monte Carlo performance when the sampled increments of the Lévy process have a distribution that is not too far from normal, which typically happens when the process is observed at a large time scale, but may turn out to be ineffective in cases where the increments are far from normal.
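The procedure described above — generate a Brownian path by PCA, map its increments back to independent uniforms, then push those uniforms through the inverse distribution function of the Lévy increments — can be sketched as follows. This is an illustrative implementation, not the authors' code; the function name `pca_levy_sample` and the gamma-process example are assumptions for the sketch, and NumPy/SciPy are assumed available.

```python
import numpy as np
from scipy.stats import norm, gamma

def pca_levy_sample(u, t, inv_cdf_increment):
    """Sample a Levy process at observation times t from one point
    u in (0,1)^s, using PCA sampling of an underlying Brownian motion.

    u : array of s (quasi)random uniforms
    t : observation times 0 < t_1 < ... < t_s
    inv_cdf_increment : inv_cdf_increment(p, dt) -> inverse CDF of the
        Levy increment over an interval of length dt
    """
    t = np.asarray(t, dtype=float)
    # Covariance of Brownian motion at the observation times:
    # C[i, j] = min(t_i, t_j).
    C = np.minimum.outer(t, t)
    # Eigen-decomposition with eigenvalues sorted in decreasing order,
    # so the first coordinates of u explain most of the variance.
    w, V = np.linalg.eigh(C)
    order = np.argsort(w)[::-1]
    A = V[:, order] * np.sqrt(w[order])
    # Brownian path at t_1..t_s from standard normals z = Phi^{-1}(u).
    B = A @ norm.ppf(np.asarray(u, dtype=float))
    # Transform the Brownian increments into independent uniforms.
    dt = np.diff(np.concatenate(([0.0], t)))
    dB = np.diff(np.concatenate(([0.0], B)))
    v = norm.cdf(dB / np.sqrt(dt))
    # Apply the inverse CDF of the Levy increments and accumulate.
    return np.cumsum(inv_cdf_increment(v, dt))
```

As a usage example, a gamma process (a Lévy process whose increment over an interval of length `dt` is Gamma-distributed with shape proportional to `dt`) could be sampled with `inv = lambda p, dt: gamma.ppf(p, a=2.0 * dt)` and then `path = pca_levy_sample(u, t, inv)`; in a quasi-Monte Carlo setting, `u` would be one point of a randomized low-discrepancy point set rather than pseudorandom uniforms.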