
Markov Chain Monte Carlo (MCMC)

A family of algorithms that sample from probability distributions by constructing a Markov chain whose stationary distribution matches the target posterior.

Markov Chain Monte Carlo is a class of algorithms that generate samples from complex probability distributions by constructing a Markov chain that converges to the desired target distribution as its stationary distribution [1].

How It Works

MCMC algorithms propose moves in parameter space and accept or reject them according to a rule that guarantees the chain's stationary distribution is the target posterior. The Metropolis-Hastings algorithm perturbs the current state with a random proposal and, for a symmetric proposal, accepts it with probability min(1, ratio of posterior densities). Over many iterations, the chain visits regions of parameter space in proportion to their posterior probability mass.
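As a concrete sketch (not from the source), the accept/reject loop of random-walk Metropolis can be written in a few lines of NumPy for a one-dimensional target; the function name and target density are illustrative:

```python
import numpy as np

def metropolis_hastings(log_post, init, n_steps, step_size=0.5, rng=None):
    """Random-walk Metropolis sampler (minimal sketch).

    log_post : callable returning the unnormalized log posterior density.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = init
    samples = []
    for _ in range(n_steps):
        proposal = x + step_size * rng.standard_normal()  # symmetric random-walk proposal
        log_ratio = log_post(proposal) - log_post(x)      # log of the density ratio
        if np.log(rng.random()) < log_ratio:              # accept with prob min(1, ratio)
            x = proposal
        samples.append(x)
    return np.array(samples)

# Illustrative target: standard normal, log density -x^2/2 up to a constant
samples = metropolis_hastings(lambda x: -0.5 * x**2, init=0.0, n_steps=20000)
```

After discarding an initial burn-in portion, the empirical mean and standard deviation of `samples` should approximate those of the target distribution.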

Standard random-walk proposals can be inefficient in the high-dimensional, correlated parameter spaces common in biological models. Hamiltonian Monte Carlo (HMC) uses gradient information to propose distant moves along the posterior surface, dramatically improving exploration efficiency. The No-U-Turn Sampler (NUTS) adaptively tunes HMC's trajectory length [2].
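To make the gradient-based idea concrete, here is a minimal single-transition HMC sketch (an assumption of this article, not the source's code): momentum is sampled, Hamiltonian dynamics are simulated with the leapfrog integrator, and a Metropolis step corrects for integration error. Step size and trajectory length are illustrative, not tuned as NUTS would tune them:

```python
import numpy as np

def hmc_step(log_post, grad_log_post, x, rng, step_size=0.1, n_leapfrog=20):
    """One HMC transition: leapfrog integration plus accept/reject."""
    p = rng.standard_normal(x.shape)                      # sample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_post(x_new)       # half momentum step
    for i in range(n_leapfrog):
        x_new += step_size * p_new                        # full position step
        if i < n_leapfrog - 1:
            p_new += step_size * grad_log_post(x_new)     # full momentum step
    p_new += 0.5 * step_size * grad_log_post(x_new)       # final half momentum step
    # Hamiltonian = potential energy (-log posterior) + kinetic energy
    h_old = -log_post(x) + 0.5 * p @ p
    h_new = -log_post(x_new) + 0.5 * p_new @ p_new
    if np.log(rng.random()) < h_old - h_new:              # Metropolis correction
        return x_new
    return x

# Illustrative target: 2-D standard normal
log_post = lambda x: -0.5 * x @ x
grad_log_post = lambda x: -x
rng = np.random.default_rng(1)
x = np.zeros(2)
draws = []
for _ in range(2000):
    x = hmc_step(log_post, grad_log_post, x, rng)
    draws.append(x)
draws = np.array(draws)
```

In practice the gradient `grad_log_post` is supplied by automatic differentiation rather than written by hand, which is what makes HMC practical for complex models.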

Convergence diagnostics such as the Gelman-Rubin statistic (R-hat) and the effective sample size help determine when the chain has explored the posterior sufficiently. Running multiple independent chains from dispersed starting points and checking their agreement is standard practice.
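The Gelman-Rubin statistic compares between-chain and within-chain variance; values near 1.0 indicate agreement. A sketch of the basic (non-split) form, with simulated chains standing in for real sampler output:

```python
import numpy as np

def gelman_rubin(chains):
    """Basic Gelman-Rubin R-hat (minimal, non-split sketch).

    chains : array of shape (n_chains, n_samples) of post-burn-in draws
             of one scalar parameter.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled posterior variance estimate
    return np.sqrt(var_hat / W)

# Four chains drawn from the same distribution: R-hat should be close to 1
rng = np.random.default_rng(2)
chains = rng.standard_normal((4, 1000))
rhat = gelman_rubin(chains)
```

Shifting the chains apart (as happens when chains are stuck in different modes) drives R-hat well above 1, which is the signal to sample longer or reparameterize.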

Computational Considerations

Modern probabilistic programming frameworks like Stan, PyMC, and NumPyro implement HMC/NUTS with automatic differentiation, making gradient-based MCMC accessible for ODE-based biological models. GPU parallelization of independent chains and vectorized likelihood computations enable practical Bayesian inference for models with dozens of parameters [2].



Computational Angle

Gradient-based samplers like Hamiltonian Monte Carlo exploit automatic differentiation for efficient exploration; GPU-parallelized chains enable scaling to large biological models.


References

  1. Gilks, W.R. et al. *Markov Chain Monte Carlo in Practice*. Chapman & Hall/CRC (1996).
  2. Hoffman, M.D. and Gelman, A. "The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo." Journal of Machine Learning Research 15 (2014).