The calculation of value-at-risk (VAR) for large portfolios of complex derivative securities presents a tradeoff between speed and accuracy. The fastest methods rely on simplifying assumptions about changes in underlying risk factors and about how a portfolio's value responds to those changes. Greater realism in measuring changes in portfolio value generally comes at the price of much longer computing times. The simplest methods - the "variance-covariance" solution popularized by RiskMetrics, and the delta-gamma approximations described by Britten-Jones and Schaefer (1999), Rouvinez (1997), and Wilson (1999) - rely on the assumption that a portfolio's value changes linearly or quadratically with changes in market risk factors. These assumptions limit their accuracy. In contrast, Monte Carlo simulation is applicable with virtually any model of changes in risk factors and any mechanism for determining a portfolio's value in each market scenario. But revaluing a portfolio in each scenario can impose a substantial computational burden, and this motivates research into ways of improving the efficiency of Monte Carlo methods for VAR.

Because the computational bottleneck in Monte Carlo estimation of VAR lies in revaluing the portfolio in each sampled market scenario, accelerating Monte Carlo requires either speeding up each revaluation or sampling fewer scenarios. In this article, we discuss methods for reducing the number of revaluations required through strategic sampling of scenarios. In particular, we review methods developed in Glasserman, Heidelberger, and Shahabuddin (2000a, 2000b) - henceforth GHS2000a and GHS2000b - that combine importance sampling and stratified sampling to generate changes in risk factors.
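To make the tradeoff concrete, the sketch below contrasts a delta-gamma estimate of VAR with plain full-revaluation Monte Carlo for a toy portfolio. The portfolio sensitivities, the covariance of factor changes, and the repricing function are hypothetical placeholders; the sketch illustrates only why per-scenario revaluation is the bottleneck, and does not implement the importance- and stratified-sampling methods of GHS2000a and GHS2000b reviewed below.

```python
# A minimal sketch (not the GHS2000 method): it contrasts the delta-gamma
# VAR approximation with plain full-revaluation Monte Carlo.
# All portfolio inputs below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

m = 3                           # number of market risk factors
n = 100_000                     # number of sampled scenarios
cov = (0.01 ** 2) * np.eye(m)   # covariance of factor changes over the horizon

delta = np.array([100.0, -50.0, 80.0])      # first-order sensitivities
gamma = np.array([[-20.0,   5.0,  0.0],
                  [  5.0, -15.0,  2.0],
                  [  0.0,   2.0, -10.0]])   # second-order sensitivities

def revalue(dS):
    """Stand-in for an expensive exact repricing: the delta-gamma terms
    plus a higher-order term that the quadratic approximation misses."""
    return delta @ dS + 0.5 * dS @ gamma @ dS + 5e4 * (dS ** 3).sum()

# Sample scenarios: jointly normal changes in the risk factors.
dS = rng.multivariate_normal(np.zeros(m), cov, size=n)

# Delta-gamma losses: a single vectorized quadratic form (fast).
loss_dg = -(dS @ delta + 0.5 * np.einsum('ij,jk,ik->i', dS, gamma, dS))

# Full-revaluation losses: one repricing call per scenario (the bottleneck).
loss_full = -np.array([revalue(x) for x in dS])

# VAR at 99% confidence is the 99th percentile of the loss distribution.
print("99% VAR, delta-gamma:      %.4f" % np.percentile(loss_dg, 99))
print("99% VAR, full revaluation: %.4f" % np.percentile(loss_full, 99))
```

In this setup the delta-gamma estimate is cheap because it reduces to vectorized linear algebra, while the exact repricing must be invoked once per scenario; for realistic portfolios that call may itself require numerical pricing routines, which is what makes sampling fewer scenarios attractive.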