Metropolis-Hastings
Metropolis-Hastings is a Markov chain Monte Carlo method used to obtain a sequence of samples from a target probability distribution π(x) for which direct sampling is difficult. The method constructs a Markov chain whose stationary distribution is π, so that long-run samples approximate π. It generalizes the Metropolis algorithm by allowing asymmetric proposal distributions.
The procedure starts from a current state x and draws a candidate y from a proposal distribution q(y | x). The candidate is accepted with probability α(x, y) = min(1, [π(y) q(x | y)] / [π(x) q(y | x)]); if it is accepted the chain moves to y, otherwise it remains at x, and the resulting state is recorded as the next sample.
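A minimal sketch of one way to implement this procedure in Python, assuming the target is supplied as an unnormalized log-density and the proposal as a sampler plus a log-density (the names metropolis_hastings, log_target, propose, log_q, x0 and n_samples are illustrative, not fixed by this article):

import math
import random

def metropolis_hastings(log_target, propose, log_q, x0, n_samples):
    # log_target(x): log of the (possibly unnormalized) target density pi(x)
    # propose(x):    draws a candidate y from the proposal q(. | x)
    # log_q(a, b):   log of the proposal density q(a | b)
    x = x0
    samples = []
    for _ in range(n_samples):
        y = propose(x)
        # log of the acceptance ratio pi(y) q(x|y) / (pi(x) q(y|x))
        log_alpha = (log_target(y) + log_q(x, y)
                     - log_target(x) - log_q(y, x))
        if math.log(random.random()) < log_alpha:
            x = y                      # accept: move to the candidate
        samples.append(x)              # record the (possibly repeated) state
    return samples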
History and theoretical basis: The method originated with the Metropolis algorithm (Metropolis et al., 1953) for simulating Boltzmann distributions in statistical physics, where the proposal is symmetric and the acceptance ratio reduces to π(y)/π(x). W. K. Hastings (1970) generalized the acceptance rule to arbitrary, possibly asymmetric proposals. The resulting transition kernel satisfies detailed balance with respect to π, which guarantees that π is a stationary distribution of the chain.
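As a brief check (a standard argument, written here in LaTeX using the π and q of this article), the acceptance probability makes the chain reversible with respect to π:

\[
\pi(x)\, q(y \mid x)\, \alpha(x, y)
  = \min\bigl(\pi(x)\, q(y \mid x),\ \pi(y)\, q(x \mid y)\bigr)
  = \pi(y)\, q(x \mid y)\, \alpha(y, x),
\]
where \(\alpha(x, y) = \min\!\left(1, \frac{\pi(y)\, q(x \mid y)}{\pi(x)\, q(y \mid x)}\right)\). Hence \(\pi(x)\, p(x \to y) = \pi(y)\, p(y \to x)\) for the transition density \(p(x \to y) = q(y \mid x)\, \alpha(x, y)\) with \(y \neq x\), and π is stationary.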
Variants and usage: Common choices for q include Gaussian random-walk proposals, where y = x + ε with ε drawn from a zero-mean Gaussian, and independence proposals, where y is drawn without reference to the current state. The algorithm is widely used in Bayesian inference and statistical physics whenever π can be evaluated only up to a normalizing constant; Gibbs sampling can be viewed as a special case in which each proposal is drawn from a full conditional distribution and is always accepted. A random-walk usage sketch follows below.
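A usage sketch under assumed settings (standard normal target, Gaussian random-walk proposal with step size 1.0), reusing the metropolis_hastings sketch above; because the random-walk proposal is symmetric, the q-terms cancel and log_q can return a constant:

import random

def log_target(x):
    return -0.5 * x * x                # unnormalized log-density of N(0, 1)

def propose(x):
    return x + random.gauss(0.0, 1.0)  # Gaussian random-walk step

def log_q(a, b):
    return 0.0                         # symmetric proposal: terms cancel

chain = metropolis_hastings(log_target, propose, log_q, x0=0.0, n_samples=10_000)
print(sum(chain) / len(chain))         # sample mean; should be close to 0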