# Metropolis–Hastings algorithm


In statistics and statistical physics, the Metropolis–Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution from which direct sampling is difficult. This sequence can be used to approximate the distribution (e.g., to generate a histogram) or to compute an integral (such as an expected value).


### History

The algorithm is named after Nicholas Metropolis, a co-author (with Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller) of the 1953 paper *Equation of State Calculations by Fast Computing Machines*, which first proposed the algorithm for the specific case of the Boltzmann distribution,[1] and after W. Keith Hastings,[2] who extended it to the more general case in 1970.[3] The Gibbs sampling algorithm is a special case of the Metropolis–Hastings algorithm; it is usually faster and easier to use but is less generally applicable.

### Overview

The Metropolis–Hastings algorithm can draw samples from any probability distribution $\displaystyle P(x)$, requiring only that a function proportional to the density can be calculated. In Bayesian applications, the normalization factor is often extremely difficult to compute, so the ability to generate a sample without knowing this constant of proportionality is a major virtue of the algorithm. The algorithm generates a Markov chain in which each state $\displaystyle x^{t+1}$ depends only on the previous state $\displaystyle x^t$. The algorithm uses a proposal density $\displaystyle Q(x'; x^t )$, which depends on the current state $\displaystyle x^t$, to generate a new proposed sample $\displaystyle x'$. This proposal is "accepted" as the next value ($\displaystyle x^{t+1}=x'$) if $\displaystyle \alpha$ drawn from $\displaystyle U(0,1)$ satisfies
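The loop described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the article: the function name, parameters, and the choice of a symmetric Gaussian random-walk proposal (which makes the $Q$ terms cancel in the acceptance ratio) are assumptions made here for concreteness, and the target is taken to be an unnormalized standard normal.

```python
import math
import random

def metropolis_hastings(log_p, x0, n_steps, step_size, rng):
    """Sample from a density proportional to exp(log_p(x)).

    Hypothetical sketch: uses a symmetric Gaussian random-walk proposal
    Q(x' ; x) = Normal(x, step_size^2), so the proposal terms cancel and
    the acceptance ratio reduces to P(x') / P(x^t).
    """
    x = x0
    samples = []
    for _ in range(n_steps):
        x_prop = x + rng.gauss(0.0, step_size)  # draw proposal x' from Q(x' ; x^t)
        log_alpha = log_p(x_prop) - log_p(x)    # log of the ratio P(x') / P(x^t)
        # Accept with probability min(1, P(x')/P(x^t)), using a U(0,1) draw.
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            x = x_prop                           # accepted: x^{t+1} = x'
        samples.append(x)                        # rejected: x^{t+1} = x^t
    return samples

# Example: target proportional to exp(-x^2 / 2), i.e. a standard normal,
# sampled without ever computing the normalizing constant 1/sqrt(2*pi).
rng = random.Random(0)
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 50000, 1.0, rng)
```

Note that only the ratio of densities ever appears, which is why the unknown constant of proportionality mentioned above is never needed.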