From b2d4337e07f37682057096fb8c7329e8eaad28ca Mon Sep 17 00:00:00 2001 From: filippi-claudia Date: Sun, 31 Jan 2021 09:26:21 +0000 Subject: [PATCH] deploy: 1e0ffff4a6471cbd71889bf08d8bacf9fe6482a4 --- index.html | 455 +++++++++++++++++++++++++++++++---------------------- 1 file changed, 265 insertions(+), 190 deletions(-) diff --git a/index.html b/index.html index 49492cf..b2ab75a 100644 --- a/index.html +++ b/index.html @@ -3,7 +3,7 @@ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> - + Quantum Monte Carlo @@ -329,152 +329,152 @@ for the JavaScript code in this tag.

Table of Contents

-
-

1 Introduction

+
+

1 Introduction

This website contains the QMC tutorial of the 2021 LTTC winter school @@ -514,8 +514,8 @@ coordinates, etc).

-
-

1.1 Energy and local energy

+
+

1.1 Energy and local energy

For a given system with Hamiltonian \(\hat{H}\) and wave function \(\Psi\), we define the local energy as @@ -593,8 +593,8 @@ $$ E \approx \frac{1}{N_{\rm MC}} \sum_{i=1}^{N_{\rm MC}} E_L(\mathbf{r}_i) $$

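As a minimal sketch of this estimator (the function name is illustrative, not part of the tutorial):

```python
def monte_carlo_average(local_energies):
    """Estimate E as the arithmetic mean of E_L over the N_MC sampled points."""
    return sum(local_energies) / len(local_energies)
```

With points sampled from \(|\Psi|^2\), this average converges to the energy as \(N_{\rm MC}\) grows.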
-
-

2 Numerical evaluation of the energy of the hydrogen atom

+
+

2 Numerical evaluation of the energy of the hydrogen atom

In this section, we consider the hydrogen atom with the following @@ -623,8 +623,8 @@ To do that, we will compute the local energy and check whether it is constant.

-
-

2.1 Local energy

+
+

2.1 Local energy

You will now program all quantities needed to compute the local energy of the H atom for the given wave function. @@ -651,8 +651,8 @@ to catch the error.

-
-

2.1.1 Exercise 1

+
+

2.1.1 Exercise 1

@@ -696,8 +696,8 @@ and returns the potential.

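A possible Python version of such a function (a sketch; the name `potential` and the NumPy-based signature are assumptions, and the singular point at the origin is guarded explicitly, as the text above suggests):

```python
import numpy as np

def potential(r):
    """Electron-nucleus Coulomb potential of the H atom: V(r) = -1/|r| (atomic units)."""
    distance = np.sqrt(np.dot(r, r))
    if distance == 0.0:
        raise ValueError("potential is singular at r = 0")
    return -1.0 / distance
```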
-
-
2.1.1.1 Solution   solution
+
+
2.1.1.1 Solution   solution

Python @@ -737,8 +737,8 @@ and returns the potential.

-
-

2.1.2 Exercise 2

+
+

2.1.2 Exercise 2

@@ -773,8 +773,8 @@ input arguments, and returns a scalar.

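A matching sketch for the wave function (again, the signature `psi(a, r)` with `r` a 3-component array is an assumption):

```python
import numpy as np

def psi(a, r):
    """Trial wave function Psi(r) = exp(-a |r|)."""
    return np.exp(-a * np.sqrt(np.dot(r, r)))
```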
-
-
2.1.2.1 Solution   solution
+
+
2.1.2.1 Solution   solution

Python @@ -801,8 +801,8 @@ input arguments, and returns a scalar.

-
-

2.1.3 Exercise 3

+
+

2.1.3 Exercise 3

@@ -883,8 +883,8 @@ Therefore, the local kinetic energy is

-
-
2.1.3.1 Solution   solution
+
+
2.1.3.1 Solution   solution

Python @@ -925,8 +925,8 @@ Therefore, the local kinetic energy is

-
-

2.1.4 Exercise 4

+
+

2.1.4 Exercise 4

@@ -969,8 +969,8 @@ local kinetic energy.

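Combining the two pieces, a sketch of the local energy for \(\Psi = e^{-a|\mathbf{r}|}\), for which \(-\frac{1}{2}\frac{\Delta \Psi}{\Psi} = \frac{a}{|\mathbf{r}|} - \frac{a^2}{2}\) (function names are illustrative, not the tutorial's reference solution):

```python
import numpy as np

def kinetic(a, r):
    """Local kinetic energy a/|r| - a^2/2 for Psi = exp(-a|r|)."""
    distance = np.sqrt(np.dot(r, r))
    return a / distance - 0.5 * a**2

def e_loc(a, r):
    """Local energy: local kinetic energy plus Coulomb potential -1/|r|."""
    distance = np.sqrt(np.dot(r, r))
    return kinetic(a, r) - 1.0 / distance
```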
-
-
2.1.4.1 Solution   solution
+
+
2.1.4.1 Solution   solution

Python @@ -1000,8 +1000,8 @@ local kinetic energy.

-
-

2.1.5 Exercise 5

+
+

2.1.5 Exercise 5

@@ -1011,8 +1011,8 @@ Find the theoretical value of \(a\) for which \(\Psi\) is an eigenfunction of \(

-
-
2.1.5.1 Solution   solution
+
+
2.1.5.1 Solution   solution
\begin{eqnarray*} E &=& \frac{\hat{H} \Psi}{\Psi} = - \frac{1}{2} \frac{\Delta \Psi}{\Psi} - \frac{1}{|\mathbf{r}|} = -\frac{1}{2}\,a^2 + \frac{a-1}{|\mathbf{r}|} \end{eqnarray*} @@ -1032,8 +1032,8 @@ equal to -0.5 atomic units.
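A quick numerical cross-check of this solution (a sketch; it uses the local-energy expression \(E_L = -\frac{a^2}{2} + \frac{a-1}{|\mathbf{r}|}\) that follows from the derivation above):

```python
import numpy as np

def e_loc(a, r):
    """Local energy of Psi = exp(-a|r|): -a^2/2 + (a-1)/|r|."""
    return -0.5 * a**2 + (a - 1.0) / np.sqrt(np.dot(r, r))

# At a = 1, E_L is the same at every point and equal to -0.5 hartree
for point in ([0.1, 0.0, 0.0], [1.0, 2.0, -0.5], [3.0, 4.0, 0.0]):
    assert abs(e_loc(1.0, np.array(point)) + 0.5) < 1e-12
```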
-
-

2.2 Plot of the local energy along the \(x\) axis

+
+

2.2 Plot of the local energy along the \(x\) axis

@@ -1044,8 +1044,8 @@ choose a grid which does not contain the origin.

-
-

2.2.1 Exercise

+
+

2.2.1 Exercise

@@ -1128,8 +1128,8 @@ plot './data' index 0 using 1:2 with lines title 'a=0.1', \

-
-
2.2.1.1 Solution   solution
+
+
2.2.1.1 Solution   solution

Python @@ -1204,8 +1204,8 @@ plt.savefig("plot_py.png")

-
-

2.3 Numerical estimation of the energy

+
+

2.3 Numerical estimation of the energy

If the space is discretized in small volume elements \(\mathbf{r}_i\) @@ -1235,8 +1235,8 @@ The energy is biased because:

-
-

2.3.1 Exercise

+
+

2.3.1 Exercise

@@ -1305,8 +1305,8 @@ To compile the Fortran and run it:

-
-
2.3.1.1 Solution   solution
+
+
2.3.1.1 Solution   solution

Python @@ -1421,8 +1421,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002

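The grid estimator can be condensed into a few NumPy lines (a sketch, not the tutorial's reference solution; the grid size and box length are illustrative, and an even number of points keeps the origin off the grid):

```python
import numpy as np

def energy_on_grid(a, n=20, box=10.0):
    """E ~ sum_i Psi^2(r_i) E_L(r_i) / sum_i Psi^2(r_i) on a cubic grid."""
    xs = np.linspace(-box / 2.0, box / 2.0, n)   # even n: origin is not a grid point
    X, Y, Z = np.meshgrid(xs, xs, xs)
    R = np.sqrt(X**2 + Y**2 + Z**2)
    weights = np.exp(-2.0 * a * R)               # Psi^2; the volume element cancels
    e_l = -0.5 * a**2 + (a - 1.0) / R            # local energy of exp(-a|r|)
    return np.sum(weights * e_l) / np.sum(weights)
```

For \(a = 1\) this returns \(-0.5\) because the local energy is constant; for other values of \(a\) the result carries the discretization bias discussed above.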
-
-

2.4 Variance of the local energy

+
+

2.4 Variance of the local energy

The variance of the local energy is a functional of \(\Psi\) @@ -1449,8 +1449,8 @@ energy can be used as a measure of the quality of a wave function.

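A sketch of the corresponding variance estimate over a set of sample points (names are illustrative; uniform weights are used for brevity):

```python
import numpy as np

def e_loc(a, r):
    """Local energy of Psi = exp(-a|r|)."""
    return -0.5 * a**2 + (a - 1.0) / np.sqrt(np.dot(r, r))

def variance_eloc(a, points):
    """sigma^2 = <E_L^2> - <E_L>^2 over the given sample points."""
    e = np.array([e_loc(a, p) for p in points])
    return np.mean(e**2) - np.mean(e) ** 2
```

For \(a = 1\) the local energy is constant, so the variance vanishes; a poorer wave function gives a strictly positive variance.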
-
-

2.4.1 Exercise (optional)

+
+

2.4.1 Exercise (optional)

@@ -1461,8 +1461,8 @@ Prove that :

-
-
2.4.1.1 Solution   solution
+
+
2.4.1.1 Solution   solution

\(\bar{E} = \langle E \rangle\) is a constant, so \(\langle \bar{E} @@ -1481,8 +1481,8 @@ Prove that :

-
-

2.4.2 Exercise

+
+

2.4.2 Exercise

@@ -1556,8 +1556,8 @@ To compile and run:

-
-
2.4.2.1 Solution   solution
+
+
2.4.2.1 Solution   solution

Python @@ -1694,8 +1694,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002 s2 = 1.8068814

-
-

3 Variational Monte Carlo

+
+

3 Variational Monte Carlo

Numerical integration with deterministic methods is very efficient @@ -1711,8 +1711,8 @@ interval.

-
-

3.1 Computation of the statistical error

+
+

3.1 Computation of the statistical error

To compute the statistical error, you need to perform \(M\) @@ -1752,8 +1752,8 @@ And the confidence interval is given by

-
-

3.1.1 Exercise

+
+

3.1.1 Exercise

@@ -1791,8 +1791,8 @@ input array.

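A sketch of such a routine (the name `ave_error` is an assumption; the error is the standard deviation of the mean, \(\sqrt{\sigma^2/M}\), with Bessel's correction for the sample variance):

```python
import numpy as np

def ave_error(samples):
    """Return (mean, statistical error) for an array of independent runs."""
    M = len(samples)
    if M < 2:
        raise ValueError("need at least two samples to estimate the error")
    average = np.mean(samples)
    error = np.sqrt(np.var(samples, ddof=1) / M)  # std deviation of the mean
    return average, error
```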
-
-
3.1.1.1 Solution   solution
+
+
3.1.1.1 Solution   solution

Python @@ -1851,8 +1851,8 @@ input array.

-
-

3.2 Uniform sampling in the box

+
+

3.2 Uniform sampling in the box

We will now perform our first Monte Carlo calculation to compute the @@ -1872,15 +1872,15 @@ Clearly, the square of the wave function is a good choice of probability density

\begin{eqnarray*} -E & = & \frac{\int E_L(\mathbf{r})\frac{|\Psi(\mathbf{r})|^2}{p(\mathbf{r})}p(\mathbf{r})\, \,d\mathbf{r}}{\int \frac{|\Psi(\mathbf{r})|^2 }{p(\mathbf{r})}p(\mathbf{r})d\mathbf{r}}\,. +E & = & \frac{\int E_L(\mathbf{r})\frac{|\Psi(\mathbf{r})|^2}{P(\mathbf{r})}P(\mathbf{r})\, \,d\mathbf{r}}{\int \frac{|\Psi(\mathbf{r})|^2 }{P(\mathbf{r})}P(\mathbf{r})d\mathbf{r}}\,. \end{eqnarray*}

-Here, we will sample a uniform probability \(p(\mathbf{r})\) in a cube of volume \(L^3\) centered at the origin: +Here, we will sample a uniform probability \(P(\mathbf{r})\) in a cube of volume \(L^3\) centered at the origin:

-\[ p(\mathbf{r}) = \frac{1}{L^3}\,, \] +\[ P(\mathbf{r}) = \frac{1}{L^3}\,, \]

@@ -1913,8 +1913,8 @@ compute the statistical error.

-
-

3.2.1 Exercise

+
+

3.2.1 Exercise

@@ -2014,8 +2014,8 @@ well as the index of the current step.

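A compact sketch of one uniform-sampling run for \(\Psi = e^{-a|\mathbf{r}|}\) (names and defaults are illustrative; the constant density \(P(\mathbf{r}) = 1/L^3\) cancels between numerator and denominator):

```python
import numpy as np

def uniform_montecarlo(a, nmax, box=5.0, seed=0):
    """One MC run: sample r uniformly in a cube of side `box`, weight by Psi^2."""
    rng = np.random.default_rng(seed)
    num = 0.0
    den = 0.0
    for _ in range(nmax):
        r = rng.uniform(-box / 2.0, box / 2.0, 3)
        distance = np.sqrt(np.dot(r, r))
        weight = np.exp(-2.0 * a * distance)                  # Psi^2
        num += weight * (-0.5 * a**2 + (a - 1.0) / distance)  # weighted E_L
        den += weight
    return num / den
```

Repeating this for several seeds and passing the results to the averaging routine of the previous section yields the energy together with its statistical error.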
-
-
3.2.1.1 Solution   solution
+
+
3.2.1.1 Solution   solution

Python @@ -2129,8 +2129,8 @@ E = -0.49518773675598715 +/- 5.2391494923686175E-004

-
-

3.3 Metropolis sampling with \(\Psi^2\)

+
+

3.3 Metropolis sampling with \(\Psi^2\)

We will now use the square of the wave function to sample random @@ -2155,29 +2155,75 @@ sampling:

To sample a chosen probability density, an efficient method is the Metropolis-Hastings sampling algorithm. Starting from a random -initial position \(\mathbf{r}_0\), we will realize a random walk as follows: +initial position \(\mathbf{r}_0\), we will perform a random walk: +

+ +

+\[ \mathbf{r}_0 \rightarrow \mathbf{r}_1 \rightarrow \mathbf{r}_2 \ldots \mathbf{r}_{N_{\rm MC}}\,, \] +

+ +

+according to the following algorithm. +

+ +

+At every step, we propose a new move according to a transition probability \(T(\mathbf{r}_{n+1},\mathbf{r}_n)\) of our choice. +

+ +

+For simplicity, let us move the electron in a 3-dimensional box of side \(2\delta L\) centered at the current position +of the electron:

\[ - \mathbf{r}_{n+1} = \mathbf{r}_{n} + \delta t\, \mathbf{u} + \mathbf{r}_{n+1} = \mathbf{r}_{n} + \delta L \, \mathbf{u} \]

-where \(\delta t\) is a fixed constant (the so-called time-step), and +where \(\delta L\) is a fixed constant, and \(\mathbf{u}\) is a uniform random number in a 3-dimensional box -\((-1,-1,-1) \le \mathbf{u} \le (1,1,1)\). We will then add the +\((-1,-1,-1) \le \mathbf{u} \le (1,1,1)\). +

+ +

+After having moved the electron, add the accept/reject step that guarantees that the distribution of the -\(\mathbf{r}_n\) is \(\Psi^2\): +\(\mathbf{r}_n\) is \(\Psi^2\). This amounts to accepting the move with +probability +

+ +

+\[ + A(\mathbf{r}_{n+1},\mathbf{r}_n) = \min\left(1,\frac{T(\mathbf{r}_{n},\mathbf{r}_{n+1}) P(\mathbf{r}_{n+1})}{T(\mathbf{r}_{n+1},\mathbf{r}_n)P(\mathbf{r}_{n})}\right)\,, + \] +

+ +

+which, for our choice of transition probability, becomes +

+ +

+\[ + A(\mathbf{r}_{n+1},\mathbf{r}_n) = \min\left(1,\frac{P(\mathbf{r}_{n+1})}{P(\mathbf{r}_{n})}\right)= \min\left(1,\frac{\Psi(\mathbf{r}_{n+1})^2}{\Psi(\mathbf{r}_{n})^2}\right)\,. + \] +

+ +

+Explain why the transition probability cancels out in the expression of \(A\). Also note that we do not need to compute the norm of the wave function! +

+ +

+The algorithm is summarized as follows:

  1. Compute \(\Psi\) at a new position \(\mathbf{r'} = \mathbf{r}_n + \delta L\, \mathbf{u}\)
  2. Compute the ratio \(A = \frac{\left[\Psi(\mathbf{r'})\right]^2}{\left[\Psi(\mathbf{r}_{n})\right]^2}\)
  3. Draw a uniform random number \(v \in [0,1]\)
  4. if \(v \le A\), accept the move : set \(\mathbf{r}_{n+1} = \mathbf{r'}\)
  5. else, reject the move : set \(\mathbf{r}_{n+1} = \mathbf{r}_n\)
  6. evaluate the local energy at \(\mathbf{r}_{n+1}\)
@@ -2195,30 +2241,30 @@ All samples should be kept, from both accepted and rejected moves.

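The steps above can be sketched as follows (a minimal implementation for \(\Psi = e^{-a|\mathbf{r}|}\); function names and the seeded generator are illustrative):

```python
import numpy as np

def metropolis_psi2(a, nmax, dL=1.0, seed=0):
    """Metropolis sampling of Psi^2; returns (mean local energy, acceptance rate)."""
    rng = np.random.default_rng(seed)
    r = rng.uniform(-dL, dL, 3)                  # random initial position
    psi_cur = np.exp(-a * np.linalg.norm(r))
    e_sum = 0.0
    n_accept = 0
    for _ in range(nmax):
        r_new = r + dL * rng.uniform(-1.0, 1.0, 3)
        psi_new = np.exp(-a * np.linalg.norm(r_new))
        if rng.uniform() <= (psi_new / psi_cur) ** 2:   # accept with probability A
            r, psi_cur = r_new, psi_new
            n_accept += 1
        distance = np.linalg.norm(r)             # keep the sample either way
        e_sum += -0.5 * a**2 + (a - 1.0) / distance
    return e_sum / nmax, n_accept / nmax
```

Varying `dL` changes the acceptance rate, which is how the move size is tuned toward the 0.5 compromise discussed below.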
-If the time step is infinitely small, the ratio will be very close -to one and all the steps will be accepted. But the trajectory will -be infinitely too short to have statistical significance. +If the box is infinitely small, the ratio will be very close +to one and all the steps will be accepted. However, the moves will be +very correlated and you will visit the configurational space very slowly.

-On the other hand, as the time step increases, the number of +On the other hand, if you propose too large moves, the number of accepted steps will decrease because the ratios might become small. If the number of accepted steps is close to zero, then the space is not well sampled either.

-The time step should be adjusted so that it is as large as +The size of the move should be adjusted so that it is as large as possible, keeping the number of accepted steps not too small. To achieve that, we define the acceptance rate as the number of accepted steps over the total number of steps. Adjusting the time -step such that the acceptance rate is close to 0.5 is a good compromise. +step such that the acceptance rate is close to 0.5 is a good compromise for the current problem.

-
-

3.3.1 Exercise

+
+

3.3.1 Exercise

@@ -2228,7 +2274,7 @@ sampled with \(\Psi^2\).

Compute also the acceptance rate, so that you can adapt the time -step in order to have an acceptance rate close to 0.5 . +step in order to have an acceptance rate close to 0.5.

@@ -2325,8 +2371,8 @@ Can you observe a reduction in the statistical error?

-
-
3.3.1.1 Solution   solution
+
+
3.3.1.1 Solution   solution

Python @@ -2471,8 +2517,8 @@ A = 0.51695266666666673 +/- 4.0445505648997396E-004

-
-

3.4 Gaussian random number generator

+
+

3.4 Gaussian random number generator

To obtain Gaussian-distributed random numbers, you can apply the @@ -2534,14 +2580,17 @@ In Python, you can use the -

3.5 Generalized Metropolis algorithm

+ +
+

3.5 Generalized Metropolis algorithm

-One can use more efficient numerical schemes to move the electrons, -but the Metropolis accepation step has to be adapted accordingly: -the acceptance -probability \(A\) is chosen so that it is consistent with the +One can use more efficient numerical schemes to move the electrons by choosing a smarter expression for the transition probability. +

+ +

+The Metropolis acceptance step has to be adapted accordingly to ensure that the detailed balance condition is satisfied. This means that +the acceptance probability \(A\) is chosen so that it is consistent with the probability of leaving \(\mathbf{r}_n\) and the probability of entering \(\mathbf{r}_{n+1}\):

@@ -2565,7 +2614,7 @@ numbers. Hence, the transition probability was

\[ T(\mathbf{r}_{n} \rightarrow \mathbf{r}_{n+1}) = - \text{constant} + \text{constant}\,, \]

@@ -2584,7 +2633,7 @@ choose to draw Gaussian random numbers with zero mean and variance \[ T(\mathbf{r}_{n} \rightarrow \mathbf{r}_{n+1}) = \frac{1}{(2\pi\,\delta t)^{3/2}} \exp \left[ - \frac{\left( - \mathbf{r}_{n+1} - \mathbf{r}_{n} \right)^2}{2\delta t} \right] + \mathbf{r}_{n+1} - \mathbf{r}_{n} \right)^2}{2\delta t} \right]\,. \]

@@ -2597,23 +2646,23 @@ acceptance ratios and improve the sampling.

-To do this, we can add the drift vector +To do this, we can use the gradient of the probability density

\[ - \frac{\nabla [ \Psi^2 ]}{\Psi^2} = 2 \frac{\nabla \Psi}{\Psi}. + \frac{\nabla [ \Psi^2 ]}{\Psi^2} = 2 \frac{\nabla \Psi}{\Psi}\,, \]

-The numerical scheme becomes a drifted diffusion: +and add the so-called drift vector, so that the numerical scheme becomes a drifted diffusion:

\[ \mathbf{r}_{n+1} = \mathbf{r}_{n} + \delta t\, \frac{\nabla - \Psi(\mathbf{r})}{\Psi(\mathbf{r})} + \chi + \Psi(\mathbf{r})}{\Psi(\mathbf{r})} + \chi \,, \]

@@ -2628,14 +2677,40 @@ The transition probability becomes: T(\mathbf{r}_{n} \rightarrow \mathbf{r}_{n+1}) = \frac{1}{(2\pi\,\delta t)^{3/2}} \exp \left[ - \frac{\left( \mathbf{r}_{n+1} - \mathbf{r}_{n} - \delta t\, \frac{\nabla - \Psi(\mathbf{r}_n)}{\Psi(\mathbf{r}_n)} \right)^2}{2\,\delta t} \right] + \Psi(\mathbf{r}_n)}{\Psi(\mathbf{r}_n)} \right)^2}{2\,\delta t} \right]\,. \]

+ +

+The algorithm of the previous exercise is only slightly modified, as summarized below: +

+ +
    +
  1. For the starting position, compute \(\Psi\) and the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})}\)
  2. Compute a new position \(\mathbf{r'} = \mathbf{r}_n + \delta t\, \frac{\nabla \Psi(\mathbf{r}_n)}{\Psi(\mathbf{r}_n)} + \chi\)
  3. Evaluate \(\Psi\) and \(\frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})}\) at the new position
  4. Compute the ratio \(A = \frac{T(\mathbf{r}_{n+1} \rightarrow \mathbf{r}_{n})\, P(\mathbf{r}_{n+1})}{T(\mathbf{r}_{n} \rightarrow \mathbf{r}_{n+1})\, P(\mathbf{r}_{n})}\)
  5. Draw a uniform random number \(v \in [0,1]\)
  6. if \(v \le A\), accept the move : set \(\mathbf{r}_{n+1} = \mathbf{r'}\)
  7. else, reject the move : set \(\mathbf{r}_{n+1} = \mathbf{r}_n\)
  8. evaluate the local energy at \(\mathbf{r}_{n+1}\)
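One step of this drifted scheme can be sketched as follows (assuming \(\Psi = e^{-a|\mathbf{r}|}\), whose drift vector is \(-a\,\mathbf{r}/|\mathbf{r}|\); function names are illustrative):

```python
import numpy as np

def drift(a, r):
    """Drift vector grad(Psi)/Psi = -a r/|r| for Psi = exp(-a |r|)."""
    return -a * r / np.linalg.norm(r)

def drifted_metropolis_step(a, r, dt, rng):
    """One drifted-diffusion move with the generalized Metropolis acceptance."""
    d_old = drift(a, r)
    chi = rng.normal(0.0, np.sqrt(dt), 3)           # Gaussian step, variance dt
    r_new = r + dt * d_old + chi
    d_new = drift(a, r_new)
    # Squared arguments of the forward and backward Gaussian transitions
    q_fwd = np.dot(r_new - r - dt * d_old, r_new - r - dt * d_old)
    q_bwd = np.dot(r - r_new - dt * d_new, r - r_new - dt * d_new)
    t_ratio = np.exp((q_fwd - q_bwd) / (2.0 * dt))  # T(r'->r) / T(r->r')
    psi_ratio = np.exp(-2.0 * a * (np.linalg.norm(r_new) - np.linalg.norm(r)))
    if rng.uniform() <= min(1.0, t_ratio * psi_ratio):
        return r_new, True                           # accepted
    return r, False                                  # rejected: keep old position
```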
-
-

3.5.1 Exercise 1

+
+

3.5.1 Exercise 1

@@ -2669,8 +2744,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P

-
-
3.5.1.1 Solution   solution
+
+
3.5.1.1 Solution   solution

Python @@ -2703,8 +2778,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P

-
-

3.5.2 Exercise 2

+
+

3.5.2 Exercise 2

@@ -2798,8 +2873,8 @@ Modify the previous program to introduce the drifted diffusion scheme.

-
-
3.5.2.1 Solution   solution
+
+
3.5.2.1 Solution   solution

Python @@ -2985,12 +3060,12 @@ A = 0.78839866666666658 +/- 3.2503783452043152E-004

-
-

4 Diffusion Monte Carlo   solution

+
+

4 Diffusion Monte Carlo   solution

-
-

4.1 Schrödinger equation in imaginary time

+
+

4.1 Schrödinger equation in imaginary time

Consider the time-dependent Schrödinger equation: @@ -3049,8 +3124,8 @@ system.

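The role of imaginary time can be made explicit by expanding the initial state in eigenfunctions \(\Phi_n\) of \(\hat{H}\) after the substitution \(t \rightarrow -i\tau\) (a standard derivation, summarized here):

```latex
\begin{align*}
\Psi(\mathbf{r},\tau) \;=\; e^{-\tau \hat{H}}\, \Psi(\mathbf{r},0)
\;=\; \sum_n c_n\, e^{-\tau E_n}\, \Phi_n(\mathbf{r})
\;\xrightarrow{\;\tau \to \infty\;}\; c_0\, e^{-\tau E_0}\, \Phi_0(\mathbf{r})\,,
\end{align*}
```

so at long imaginary times the lowest-energy state with nonzero overlap dominates, up to normalization.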
-
-

4.2 Diffusion and branching

+
+

4.2 Diffusion and branching

The diffusion equation of particles is given by @@ -3104,8 +3179,8 @@ the combination of a diffusion process and a branching process.

-
-

4.3 Importance sampling

+