On the Value of derivative evaluations and random walk suppression in Markov Chain Monte Carlo algorithms

Abstract

Two strategies that can potentially improve Markov Chain Monte Carlo algorithms are to use derivative evaluations of the target density, and to suppress random walk behaviour in the chain. The use of one or both of these strategies has been investigated in a few specific applications, but neither is used routinely. We undertake a broader evaluation of these techniques, with a view to assessing their utility for routine use. In addition to comparing different algorithms, we also compare two different ways in which the algorithms can be applied to a multivariate target distribution. Specifically, the univariate version of an algorithm can be applied repeatedly to one-dimensional conditional distributions, or the multivariate version can be applied directly to the target distribution.
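One standard way in which derivative evaluations of the target density enter an MCMC proposal is through Langevin-type updates, in which the gradient of the log density supplies a drift term that is then corrected by a Metropolis-Hastings accept/reject step. The following is a minimal illustrative sketch only, not the paper's implementation: the toy Gaussian target, step size, and function names are assumptions made for the example.

```python
# Minimal sketch of a Metropolis-adjusted Langevin (MALA) update.
# Illustrative only: target, step size, and names are assumed, not taken
# from the paper.

import numpy as np

def mala_step(x, log_target, grad_log_target, step, rng):
    """One MALA update: gradient-informed proposal plus MH correction."""
    d = x.size
    # Langevin proposal: drift towards higher density, then Gaussian noise.
    mean_fwd = x + 0.5 * step**2 * grad_log_target(x)
    y = mean_fwd + step * rng.standard_normal(d)

    # Mean of the reverse move, needed for the asymmetric proposal ratio.
    mean_rev = y + 0.5 * step**2 * grad_log_target(y)

    def log_q(point, mean):
        # Log density of the Gaussian proposal, up to a constant.
        return -np.sum((point - mean) ** 2) / (2.0 * step**2)

    log_alpha = (log_target(y) - log_target(x)
                 + log_q(x, mean_rev) - log_q(y, mean_fwd))
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False

if __name__ == "__main__":
    # Assumed toy example: standard bivariate normal target.
    rng = np.random.default_rng(0)
    log_target = lambda x: -0.5 * np.sum(x**2)
    grad_log_target = lambda x: -x

    x = np.zeros(2)
    draws = []
    for _ in range(5000):
        x, _ = mala_step(x, log_target, grad_log_target, step=0.9, rng=rng)
        draws.append(x.copy())
    print("sample mean:", np.mean(draws, axis=0))
```

In the spirit of the comparison described in the abstract, an update of this kind could be applied either to the full parameter vector at once (the multivariate version) or coordinate by coordinate to each one-dimensional full conditional distribution (the univariate version).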

Cite this article

Gustafson, P., MacNab, Y.C. & Wen, S. On the Value of derivative evaluations and random walk suppression in Markov Chain Monte Carlo algorithms. Statistics and Computing 14, 23–38 (2004). https://doi.org/10.1023/B:STCO.0000009413.87656.ef
