
Example of Metropolis-Hastings sampling
We can implement this algorithm to find the posterior distribution P(A|B) starting from the product of the likelihood P(B|A) and the prior P(A), without computing the normalizing constant P(B), which would require a complex integration.
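Explicitly, by Bayes' theorem:

P(A|B) = \frac{P(B|A) \, P(A)}{P(B)} \propto P(B|A) \, P(A)

Since P(B) does not depend on A, it can be ignored whenever only the ratio between two posterior values is needed, which is exactly the case in Metropolis-Hastings.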
Let's suppose that the likelihood is an inverse Gaussian distribution with μ = 1 and λ = 0.2, while the prior is an exponential distribution with λ = 0.1 (both defined for x > 0, matching the code below):

P(B|A) = \sqrt{\frac{0.2}{2 \pi x^3}} \exp\left(-\frac{0.2 (x - 1)^2}{2x}\right), \quad P(A) = 0.1 \, e^{-0.1 x}

Therefore, the resulting g(x) is:

g(x) = P(B|A) \cdot P(A) = 0.1 \sqrt{\frac{0.2}{2 \pi x^3}} \exp\left(-\frac{0.2 (x - 1)^2}{2x} - 0.1 x\right)
To solve this problem, we adopt random walk Metropolis-Hastings, which consists of choosing the proposal q(x'|x(t-1)) = Normal(μ=x(t-1), σ=1). This choice simplifies the value of α, because the two terms q(x(t-1)|x') and q(x'|x(t-1)) are equal (thanks to the symmetry of the normal distribution around its mean) and can be canceled out, so α becomes the ratio between g(x') and g(x(t-1)).
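Written out explicitly (with the min(1, ·) clipping that keeps the acceptance probability well defined when the ratio exceeds 1):

\alpha = \min\left(1, \frac{g(x') \, q(x^{(t-1)}|x')}{g(x^{(t-1)}) \, q(x'|x^{(t-1)})}\right) = \min\left(1, \frac{g(x')}{g(x^{(t-1)})}\right)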
The first step is to define the required functions:
import numpy as np

def prior(x):
    # Exponential prior with lambda = 0.1
    return 0.1 * np.exp(-0.1 * x)

def likelihood(x):
    # Inverse Gaussian likelihood with mu = 1.0 and lambda = 0.2
    a = np.sqrt(0.2 / (2.0 * np.pi * np.power(x, 3)))
    b = -(0.2 * np.power(x - 1.0, 2)) / (2.0 * x)
    return a * np.exp(b)

def g(x):
    # Unnormalized posterior: likelihood times prior
    return likelihood(x) * prior(x)

def q(xp):
    # Random walk proposal: normal distribution centered at the previous point (sigma = 1)
    return np.random.normal(xp)
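As an optional check of what the algorithm lets us avoid, the normalizing constant can be approximated numerically on a grid; the integration bounds and grid size below are arbitrary illustrative choices covering the region where g(x) is non-negligible:

# Approximate the normalizing constant P(B) = integral of g(x) for x > 0
# with a simple Riemann sum. Metropolis-Hastings never requires this value;
# it is computed here only to show what the sampling procedure skips.
xs = np.linspace(1e-3, 50.0, 100000)
dx = xs[1] - xs[0]
Z = np.sum(g(xs)) * dx
print('Approximate normalizing constant: {:.5f}'.format(Z))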
Now, we can start our sampling process with 100,000 iterations and x(0) = 1.0:
nb_iterations = 100000
x = 1.0
samples = []

for i in range(nb_iterations):
    # Draw a candidate from the proposal distribution
    xc = q(x)

    # Acceptance ratio (the proposal terms cancel out)
    alpha = g(xc) / g(x)
    if np.isnan(alpha):
        # g() is undefined for non-positive candidates, so the candidate
        # is rejected and the chain stays at the current point
        samples.append(x)
        continue

    if alpha >= 1:
        # Candidates that increase g() are always accepted
        samples.append(xc)
        x = xc
    else:
        if np.random.uniform(0.0, 1.0) < alpha:
            # Accept with probability alpha
            samples.append(xc)
            x = xc
        else:
            # Reject: the chain stays at x, which is recorded again
            samples.append(x)
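Before building the histogram, it can be useful to discard an initial burn-in period and inspect a couple of summary statistics; the burn-in length below is an arbitrary illustrative choice:

# Discard the first 1,000 samples (burn-in, arbitrary choice) and
# compute simple summary statistics of the remaining chain
burn_in = 1000
chain = np.asarray(samples[burn_in:])
print('Posterior mean: {:.3f}'.format(chain.mean()))
print('Posterior standard deviation: {:.3f}'.format(chain.std()))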
To get a representation of the posterior distribution, we need to create a histogram through the NumPy function np.histogram(), which accepts an array of values and the desired number of intervals (bins); in our case, we set 100 intervals. Dividing the counts by the total number of samples yields the relative frequency of each bin:
# Bin counts over 100 intervals
hist, _ = np.histogram(samples, bins=100)

# Relative frequency of each bin (approximation of the posterior probabilities)
hist_p = hist / len(samples)
The resulting plot of p(x) shows the shape of the sampled posterior distribution.
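A minimal sketch to draw this plot, assuming Matplotlib is available (the bar layout and figure styling are illustrative choices, not taken from the original figure):

import matplotlib.pyplot as plt

# Recompute the bin edges to place each bar at the center of its interval
hist, edges = np.histogram(samples, bins=100)
hist_p = hist / len(samples)
centers = 0.5 * (edges[:-1] + edges[1:])

plt.figure(figsize=(10, 5))
plt.bar(centers, hist_p, width=edges[1] - edges[0])
plt.xlabel('x')
plt.ylabel('p(x)')
plt.show()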
