diff --git a/index.html b/index.html index e3c21b0..49492cf 100644 --- a/index.html +++ b/index.html @@ -3,7 +3,7 @@ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> - + Quantum Monte Carlo @@ -329,152 +329,152 @@ for the JavaScript code in this tag.

Table of Contents

-
-

1 Introduction

+
+

1 Introduction

This website contains the QMC tutorial of the 2021 LTTC winter school @@ -514,8 +514,8 @@ coordinates, etc).

-
-

1.1 Energy and local energy

+
+

1.1 Energy and local energy

For a given system with Hamiltonian \(\hat{H}\) and wave function \(\Psi\), we define the local energy as @@ -549,11 +549,11 @@ For few dimensions, one can easily compute \(E\) by evaluating the integrals on

To this end, recall that the expected value of an arbitrary function \(f(x)\) -with respect to a probability density function \(p(x)\) is given by +with respect to a probability density function \(P(x)\) is given by

-\[ \langle f \rangle_p = \int_{-\infty}^\infty p(x)\, f(x)\,dx, \] +\[ \langle f \rangle_P = \int_{-\infty}^\infty P(x)\, f(x)\,dx, \]

@@ -562,16 +562,16 @@ and integrates to one:

-\[ \int_{-\infty}^\infty p(x)\,dx = 1. \] +\[ \int_{-\infty}^\infty P(x)\,dx = 1. \]

Similarly, we can view the energy of a system, \(E\), as the expected value of the local energy with respect to -a probability density \(p(\mathbf{r})\) defined in 3\(N\) dimensions: +a probability density \(P(\mathbf{r})\) defined in 3\(N\) dimensions:

-\[ E = \int E_L(\mathbf{r}) p(\mathbf{r})\,d\mathbf{r} \equiv \langle E_L \rangle_{\Psi^2}\,, \] +\[ E = \int E_L(\mathbf{r}) P(\mathbf{r})\,d\mathbf{r} \equiv \langle E_L \rangle_{\Psi^2}\,, \]

@@ -579,22 +579,22 @@ where the probability density is given by the square of the wave function:

-\[ p(\mathbf{r}) = \frac{|\Psi(\mathbf{r})|^2}{\int |\Psi(\mathbf{r})|^2\, d\mathbf{r}}\,. \] +\[ P(\mathbf{r}) = \frac{|\Psi(\mathbf{r})|^2}{\int |\Psi(\mathbf{r})|^2\, d\mathbf{r}}\,. \]

-If we can sample configurations \(\{\mathbf{r}\}\) distributed as \(p\), we can estimate \(E\) as the average of the local energy computed over these configurations: +If we can sample \(N_{\rm MC}\) configurations \(\{\mathbf{r}\}\) distributed as \(P\), we can estimate \(E\) as the average of the local energy computed over these configurations:

-$$ E \approx \frac{1}{M} \sum_{i=1}^{M} E_L(\mathbf{r}_i)\,. $$ +$$ E \approx \frac{1}{N_{\rm MC}} \sum_{i=1}^{N_{\rm MC}} E_L(\mathbf{r}_i)\,. $$
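As a quick sanity check of this estimator, the following toy Python sketch (not one of the tutorial exercises; all names are illustrative) estimates \(\langle f \rangle_P\) for \(f(x)=x^2\) with \(P\) the standard normal density, whose exact value is 1:

import numpy as np

rng = np.random.default_rng(0)

# Draw samples distributed according to P(x) = standard normal density
x = rng.standard_normal(100_000)

# Monte Carlo estimate of <f>_P for f(x) = x^2 (exact value: 1)
print((x**2).mean())

The estimate approaches 1 as the number of samples grows, which is exactly how the energy estimate above converges with \(N_{\rm MC}\).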

-
-

2 Numerical evaluation of the energy of the hydrogen atom

+
+

2 Numerical evaluation of the energy of the hydrogen atom

In this section, we consider the hydrogen atom with the following @@ -623,8 +623,8 @@ To do that, we will compute the local energy and check whether it is constant.

-
-

2.1 Local energy

+
+

2.1 Local energy

You will now program all quantities needed to compute the local energy of the H atom for the given wave function. @@ -651,8 +651,8 @@ to catch the error.

-
-

2.1.1 Exercise 1

+
+

2.1.1 Exercise 1

@@ -696,8 +696,8 @@ and returns the potential.

-
-
2.1.1.1 Solution   solution
+
+
2.1.1.1 Solution   solution

Python @@ -737,8 +737,8 @@ and returns the potential.
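As a reference point, here is a minimal Python sketch of such a function, assuming atomic units and a nucleus fixed at the origin, so that \(V(\mathbf{r}) = -1/|\mathbf{r}|\); the function name and signature are illustrative, not necessarily those of the tutorial solution:

import numpy as np

def potential(r):
    # Electron-nucleus Coulomb potential at position r (atomic units)
    distance = np.sqrt(np.dot(r, r))
    assert distance > 0., "potential diverges at r = 0"
    return -1. / distance

The assertion mirrors the earlier remark about catching the error when the electron sits exactly on the nucleus.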

-
-

2.1.2 Exercise 2

+
+

2.1.2 Exercise 2

@@ -773,8 +773,8 @@ input arguments, and returns a scalar.

-
-
2.1.2.1 Solution   solution
+
+
2.1.2.1 Solution   solution

Python @@ -801,8 +801,8 @@ input arguments, and returns a scalar.
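A possible sketch, assuming the exponential trial wave function \(\Psi(\mathbf{r}) = e^{-a |\mathbf{r}|}\) (an assumption consistent with the \(a\)-dependence seen in the solutions below; names illustrative):

import numpy as np

def psi(a, r):
    # Trial wave function exp(-a * |r|) evaluated at position r
    return np.exp(-a * np.sqrt(np.dot(r, r)))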

-
-

2.1.3 Exercise 3

+
+

2.1.3 Exercise 3

@@ -883,8 +883,8 @@ Therefore, the local kinetic energy is

-
-
2.1.3.1 Solution   solution
+
+
2.1.3.1 Solution   solution

Python @@ -925,8 +925,8 @@ Therefore, the local kinetic energy is
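Under the same \(\Psi = e^{-a|\mathbf{r}|}\) assumption, \(\frac{\Delta \Psi}{\Psi} = a^2 - \frac{2a}{|\mathbf{r}|}\), so the local kinetic energy \(-\frac{1}{2}\frac{\Delta \Psi}{\Psi}\) can be sketched as:

import numpy as np

def kinetic(a, r):
    # Local kinetic energy -1/2 * (Delta psi)/psi for psi = exp(-a*|r|)
    distance = np.sqrt(np.dot(r, r))
    assert distance > 0., "local kinetic energy diverges at r = 0"
    return a / distance - 0.5 * a**2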

-
-

2.1.4 Exercise 4

+
+

2.1.4 Exercise 4

@@ -969,8 +969,8 @@ local kinetic energy.

-
-
2.1.4.1 Solution   solution
+
+
2.1.4.1 Solution   solution

Python @@ -1000,8 +1000,8 @@ local kinetic energy.
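The local energy is then just the sum of the two previous quantities; a one-line sketch reusing the hypothetical helpers above:

def local_energy(a, r):
    # E_L(r) = local kinetic energy + potential
    return kinetic(a, r) + potential(r)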

-
-

2.1.5 Exercise 5

+
+

2.1.5 Exercise 5

@@ -1011,8 +1011,8 @@ Find the theoretical value of \(a\) for which \(\Psi\) is an eigenfunction of \(

-
-
2.1.5.1 Solution   solution
+
+
2.1.5.1 Solution   solution
\begin{eqnarray*} E &=& \frac{\hat{H} \Psi}{\Psi} = - \frac{1}{2} \frac{\Delta \Psi}{\Psi} - @@ -1032,8 +1032,8 @@ equal to -0.5 atomic units.
-
-

2.2 Plot of the local energy along the \(x\) axis

+
+

2.2 Plot of the local energy along the \(x\) axis

@@ -1044,8 +1044,8 @@ choose a grid which does not contain the origin.

-
-

2.2.1 Exercise

+
+

2.2.1 Exercise

@@ -1128,8 +1128,8 @@ plot './data' index 0 using 1:2 with lines title 'a=0.1', \

-
-
2.2.1.1 Solution   solution
+
+
2.2.1.1 Solution   solution

Python @@ -1204,8 +1204,8 @@ plt.savefig("plot_py.png")
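A matplotlib sketch of such a plot, reusing the hypothetical local_energy helper above; the list of \(a\) values (except a=0.1, which appears in the gnuplot script) and the grid bounds are assumptions:

import numpy as np
import matplotlib.pyplot as plt

# An even number of points symmetric about zero never lands exactly on x = 0
x = np.linspace(-5., 5., 200)

for a in (0.1, 0.5, 1.0, 1.5, 2.0):
    e_l = [local_energy(a, np.array([xi, 0., 0.])) for xi in x]
    plt.plot(x, e_l, label=f"a={a}")

plt.xlabel("x (a.u.)")
plt.ylabel("Local energy (a.u.)")
plt.legend()
plt.savefig("plot_py.png")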

-
-

2.3 Numerical estimation of the energy

+
+

2.3 Numerical estimation of the energy

If the space is discretized in small volume elements \(\mathbf{r}_i\) @@ -1235,8 +1235,8 @@ The energy is biased because:

-
-

2.3.1 Exercise

+
+

2.3.1 Exercise

@@ -1305,8 +1305,8 @@ To compile the Fortran and run it:

-
-
2.3.1.1 Solution   solution
+
+
2.3.1.1 Solution   solution

Python @@ -1421,8 +1421,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002
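A sketch of the corresponding discretized estimator, \(E \approx \sum_i w_i E_L(\mathbf{r}_i) / \sum_i w_i\) with \(w_i = |\Psi(\mathbf{r}_i)|^2\), over a cubic grid; the grid bounds, point count, and names are assumptions, and psi and local_energy are the hypothetical helpers above:

import numpy as np

def grid_energy(a, n=50, box=5.):
    # Psi^2-weighted average of E_L over an n x n x n grid in [-box, box]^3;
    # an even n keeps the origin off the grid
    points = np.linspace(-box, box, n)
    num = denom = 0.
    for x in points:
        for y in points:
            for z in points:
                r = np.array([x, y, z])
                w = psi(a, r) ** 2        # unnormalized weight
                num += w * local_energy(a, r)
                denom += w
    return num / denom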

-
-

2.4 Variance of the local energy

+
+

2.4 Variance of the local energy

The variance of the local energy is a functional of \(\Psi\) @@ -1449,8 +1449,8 @@ energy can be used as a measure of the quality of a wave function.

-
-

2.4.1 Exercise (optional)

+
+

2.4.1 Exercise (optional)

@@ -1461,8 +1461,8 @@ Prove that :

-
-
2.4.1.1 Solution   solution
+
+
2.4.1.1 Solution   solution

\(\bar{E} = \langle E \rangle\) is a constant, so \(\langle \bar{E} @@ -1481,8 +1481,8 @@ Prove that :

-
-

2.4.2 Exercise

+
+

2.4.2 Exercise

@@ -1556,8 +1556,8 @@ To compile and run:

-
-
2.4.2.1 Solution   solution
+
+
2.4.2.1 Solution   solution

Python @@ -1694,31 +1694,31 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002 s2 = 1.8068814
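The variance \(\sigma^2 = \langle E_L^2 \rangle - \langle E_L \rangle^2\) can be accumulated in the same pass; a sketch extending the hypothetical grid loop above:

import numpy as np

def grid_energy_and_variance(a, n=50, box=5.):
    # Psi^2-weighted mean and variance of E_L over a cubic grid
    points = np.linspace(-box, box, n)
    w_sum = e_sum = e2_sum = 0.
    for x in points:
        for y in points:
            for z in points:
                r = np.array([x, y, z])
                w = psi(a, r) ** 2
                e = local_energy(a, r)
                w_sum += w
                e_sum += w * e
                e2_sum += w * e * e
    energy = e_sum / w_sum
    return energy, e2_sum / w_sum - energy ** 2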

-
-

3 Variational Monte Carlo

+
+

3 Variational Monte Carlo

Numerical integration with deterministic methods is very efficient in low dimensions. When the number of dimensions becomes large, instead of computing the average energy as a numerical integration -on a grid, it is usually more efficient to do a Monte Carlo sampling. +on a grid, it is usually more efficient to use Monte Carlo sampling.

-Moreover, a Monte Carlo sampling will allow us to remove the bias due +Moreover, Monte Carlo sampling will allow us to remove the bias due to the discretization of space, and compute a statistical confidence interval.

-
-

3.1 Computation of the statistical error

+
+

3.1 Computation of the statistical error

To compute the statistical error, you need to perform \(M\) independent Monte Carlo calculations. You will obtain \(M\) different estimates of the energy, which are expected to have a Gaussian -distribution according to the Central Limit Theorem. +distribution for large \(M\), according to the Central Limit Theorem.

@@ -1752,8 +1752,8 @@ And the confidence interval is given by

-
-

3.1.1 Exercise

+
+

3.1.1 Exercise

@@ -1791,8 +1791,8 @@ input array.

-
-
3.1.1.1 Solution   solution
+
+
3.1.1.1 Solution   solution

Python @@ -1851,16 +1851,44 @@ input array.
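A minimal sketch of such a function (the name ave_error is an assumption): for \(M\) independent samples, the statistical error on the mean is \(\sqrt{s^2/M}\), with \(s^2\) the unbiased sample variance.

import numpy as np

def ave_error(arr):
    # Mean of the input array and the statistical error on that mean
    M = len(arr)
    average = np.mean(arr)
    if M < 2:
        return average, 0.
    variance = np.var(arr, ddof=1)    # unbiased sample variance
    return average, np.sqrt(variance / M)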

-
-

3.2 Uniform sampling in the box

+
+

3.2 Uniform sampling in the box

-We will now do our first Monte Carlo calculation to compute the -energy of the hydrogen atom. +We will now perform our first Monte Carlo calculation to compute the +energy of the hydrogen atom.

-At every Monte Carlo iteration: +Consider again the expression of the energy +

+ +\begin{eqnarray*} +E & = & \frac{\int E_L(\mathbf{r})\left[\Psi(\mathbf{r})\right]^2\,d\mathbf{r}}{\int \left[\Psi(\mathbf{r}) \right]^2 d\mathbf{r}}\,. +\end{eqnarray*} + +

+Clearly, the square of the wave function is a good choice of probability density to sample from, but we will start with something simpler and rewrite the energy as

+ +\begin{eqnarray*} +E & = & \frac{\int E_L(\mathbf{r})\,\frac{|\Psi(\mathbf{r})|^2}{p(\mathbf{r})}\,p(\mathbf{r})\,d\mathbf{r}}{\int \frac{|\Psi(\mathbf{r})|^2}{p(\mathbf{r})}\,p(\mathbf{r})\,d\mathbf{r}}\,. +\end{eqnarray*} +

+Here, we will sample from a uniform probability density \(p(\mathbf{r})\) in a cube of volume \(L^3\) centered at the origin:

+ +

+\[ p(\mathbf{r}) = \frac{1}{L^3}\,, \] +

+ +

+and zero outside the cube. +

+ +

+One Monte Carlo run will consist of \(N_{\rm MC}\) Monte Carlo iterations. At every Monte Carlo iteration:

    @@ -1873,9 +1901,8 @@ result in a variable energy

-One Monte Carlo run will consist of \(N\) Monte Carlo iterations. Once all the -iterations have been computed, the run returns the average energy -\(\bar{E}_k\) over the \(N\) iterations of the run. +Once all the iterations have been computed, the run returns the average energy +\(\bar{E}_k\) over the \(N_{\rm MC}\) iterations of the run.

@@ -1886,8 +1913,8 @@ compute the statistical error.

-
-

3.2.1 Exercise

+
+

3.2.1 Exercise

@@ -1987,8 +2014,8 @@ well as the index of the current step.

-
-
3.2.1.1 Solution   solution
+
+
3.2.1.1 Solution   solution

Python @@ -2102,30 +2129,29 @@ E = -0.49518773675598715 +/- 5.2391494923686175E-004
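A sketch of one such uniform-sampling run (the cube side L = 5 and all names are assumptions; psi and local_energy are the hypothetical helpers above); since \(p(\mathbf{r}) = 1/L^3\) is constant, it cancels between numerator and denominator:

import numpy as np

rng = np.random.default_rng()

def uniform_run(a, n_mc, L=5.):
    # One Monte Carlo run: points drawn uniformly in a cube of side L
    num = denom = 0.
    for _ in range(n_mc):
        r = rng.uniform(-L / 2., L / 2., size=3)
        w = psi(a, r) ** 2        # the constant p(r) cancels in the ratio
        num += w * local_energy(a, r)
        denom += w
    return num / denom

Running this \(M\) times and passing the \(M\) run averages to ave_error then yields the energy together with its statistical error.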

-
-

3.3 Metropolis sampling with \(\Psi^2\)

+
+

3.3 Metropolis sampling with \(\Psi^2\)

We will now use the square of the wave function to sample random points distributed with the probability density \[ - P(\mathbf{r}) = \left[\Psi(\mathbf{r})\right]^2 + P(\mathbf{r}) = \frac{|\Psi(\mathbf{r})|^2}{\int |\Psi(\mathbf{r})|^2\, d\mathbf{r}} \]

The expression of the average energy is now simplified as the average of the local energies, since the weights are taken care of by the -sampling : +sampling:

\[ - E \approx \frac{1}{M}\sum_{i=1}^M E_L(\mathbf{r}_i) + E \approx \frac{1}{N_{\rm MC}}\sum_{i=1}^{N_{\rm MC}} E_L(\mathbf{r}_i) \]

-

To sample a chosen probability density, an efficient method is the Metropolis-Hastings sampling algorithm. Starting from a random @@ -2191,8 +2217,8 @@ step such that the acceptance rate is close to 0.5 is a good compromise.

-
-

3.3.1 Exercise

+
+

3.3.1 Exercise

@@ -2299,8 +2325,8 @@ Can you observe a reduction in the statistical error?

-
-
3.3.1.1 Solution   solution
+
+
3.3.1.1 Solution   solution

Python @@ -2445,8 +2471,8 @@ A = 0.51695266666666673 +/- 4.0445505648997396E-004
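A sketch of one Metropolis run along these lines (the step size dt and the names are assumptions): propose a uniform displacement, accept with probability \(\min\left(1, \Psi(\mathbf{r}')^2/\Psi(\mathbf{r})^2\right)\), and average \(E_L\) over the visited points.

import numpy as np

rng = np.random.default_rng()

def metropolis_run(a, n_mc, dt=1.0):
    # Sample Psi^2 with Metropolis; return mean E_L and acceptance rate
    r = rng.uniform(-dt, dt, size=3)
    psi2 = psi(a, r) ** 2
    energy, n_accepted = 0., 0
    for _ in range(n_mc):
        r_new = r + rng.uniform(-dt, dt, size=3)   # trial move
        psi2_new = psi(a, r_new) ** 2
        if rng.uniform() <= psi2_new / psi2:       # Metropolis acceptance
            r, psi2 = r_new, psi2_new
            n_accepted += 1
        energy += local_energy(a, r)
    return energy / n_mc, n_accepted / n_mc

Tuning dt so that the returned acceptance rate is close to 0.5, as suggested above, is part of the exercise.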

-
-

3.4 Gaussian random number generator

+
+

3.4 Gaussian random number generator

To obtain Gaussian-distributed random numbers, you can apply the @@ -2508,8 +2534,8 @@ In Python, you can use the -
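A sketch of one common recipe for this, the Box-Muller transform (whether this is the method intended here is an assumption), which maps pairs of uniform random numbers to standard-normal ones:

import numpy as np

rng = np.random.default_rng()

def gaussian(n):
    # n standard-normal samples via the Box-Muller transform
    u1 = 1. - rng.uniform(size=n)    # shift to (0, 1] so log(u1) is finite
    u2 = rng.uniform(size=n)
    return np.sqrt(-2. * np.log(u1)) * np.cos(2. * np.pi * u2)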

3.5 Generalized Metropolis algorithm

+
+

3.5 Generalized Metropolis algorithm

One can use more efficient numerical schemes to move the electrons, @@ -2608,8 +2634,8 @@ The transition probability becomes:

-
-

3.5.1 Exercise 1

+
+

3.5.1 Exercise 1

@@ -2643,8 +2669,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P

-
-
3.5.1.1 Solution   solution
+
+
3.5.1.1 Solution   solution

Python @@ -2677,8 +2703,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P
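For the exponential trial wave function assumed earlier, \(\frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})} = -a\,\frac{\mathbf{r}}{|\mathbf{r}|}\), so a sketch of the requested function is one line:

import numpy as np

def drift(a, r):
    # Drift vector grad(psi)/psi = -a * r/|r| for psi = exp(-a*|r|)
    return -a * r / np.sqrt(np.dot(r, r))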

-
-

3.5.2 Exercise 2

+
+

3.5.2 Exercise 2

@@ -2772,8 +2798,8 @@ Modify the previous program to introduce the drifted diffusion scheme.

-
-
3.5.2.1 Solution   solution
+
+
3.5.2.1 Solution   solution

Python @@ -2959,12 +2985,12 @@ A = 0.78839866666666658 +/- 3.2503783452043152E-004
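A sketch of the drifted-diffusion proposal and its acceptance probability \(\min\left(1, \frac{\Psi(\mathbf{r}')^2\, T(\mathbf{r}' \rightarrow \mathbf{r})}{\Psi(\mathbf{r})^2\, T(\mathbf{r} \rightarrow \mathbf{r}')}\right)\), where \(T\) is the Gaussian transition density of the drifted move; the time step dt and all names are assumptions, and psi and drift are the hypothetical helpers above:

import numpy as np

rng = np.random.default_rng()

def drifted_move(a, r, dt):
    # Propose r' = r + dt*drift(r) + sqrt(dt)*chi and return (r', p_accept)
    chi = rng.standard_normal(3)
    d = drift(a, r)
    r_new = r + dt * d + np.sqrt(dt) * chi
    d_new = drift(a, r_new)
    # log T(r'->r) - log T(r->r'); the Gaussian prefactors cancel
    log_t = (np.sum((r_new - r - dt * d) ** 2)
             - np.sum((r - r_new - dt * d_new) ** 2)) / (2. * dt)
    ratio = (psi(a, r_new) / psi(a, r)) ** 2 * np.exp(log_t)
    return r_new, min(1., ratio)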

-
-

4 Diffusion Monte Carlo   solution

+
+

4 Diffusion Monte Carlo   solution

-
-

4.1 Schrödinger equation in imaginary time

+
+

4.1 Schrödinger equation in imaginary time

Consider the time-dependent Schrödinger equation: @@ -3023,8 +3049,8 @@ system.

-
-

4.2 Diffusion and branching

+
+

4.2 Diffusion and branching

The diffusion equation of particles is given by @@ -3078,8 +3104,8 @@ the combination of a diffusion process and a branching process.

-
-

4.3 Importance sampling

+