diff --git a/index.html b/index.html index b2ab75a..8a30474 100644 --- a/index.html +++ b/index.html @@ -3,7 +3,7 @@ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> - + Quantum Monte Carlo @@ -329,152 +329,152 @@ for the JavaScript code in this tag.

Table of Contents

-
-

1 Introduction

+
+

1 Introduction

This website contains the QMC tutorial of the 2021 LTTC winter school @@ -514,8 +514,8 @@ coordinates, etc).

-
-

1.1 Energy and local energy

+
+

1.1 Energy and local energy

For a given system with Hamiltonian \(\hat{H}\) and wave function \(\Psi\), we define the local energy as @@ -593,8 +593,8 @@ $$ E \approx \frac{1}{N_{\rm MC}} \sum_{i=1}^{N_{\rm MC}} E_L(\mathbf{r}_i) $$
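As a sketch, the local energy for the hydrogen-atom trial function \(\Psi(\mathbf{r}) = e^{-a|\mathbf{r}|}\) used throughout this tutorial can be written as below (the function names `psi` and `e_loc` are illustrative and need not match the provided solutions):

```python
import numpy as np

def psi(a, r):
    """Trial wave function Psi(r) = exp(-a |r|)."""
    return np.exp(-a * np.linalg.norm(r))

def e_loc(a, r):
    """Local energy E_L(r) = (H Psi)/Psi = -a^2/2 + (a - 1)/|r|
    for the hydrogen atom with Psi = exp(-a |r|), in atomic units."""
    d = np.linalg.norm(r)
    return -0.5 * a ** 2 + (a - 1.0) / d
```

For \(a = 1\), `e_loc` returns \(-0.5\) for every \(\mathbf{r}\), as expected for the exact ground state.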

-
-

2 Numerical evaluation of the energy of the hydrogen atom

+
+

2 Numerical evaluation of the energy of the hydrogen atom

In this section, we consider the hydrogen atom with the following @@ -623,8 +623,8 @@ To do that, we will compute the local energy and check whether it is constant.

-
-

2.1 Local energy

+
+

2.1 Local energy

You will now program all quantities needed to compute the local energy of the H atom for the given wave function. @@ -651,8 +651,8 @@ to catch the error.

-
-

2.1.1 Exercise 1

+
+

2.1.1 Exercise 1

@@ -696,8 +696,8 @@ and returns the potential.

-
-
2.1.1.1 Solution   solution
+
+
2.1.1.1 Solution   solution

Python @@ -737,8 +737,8 @@ and returns the potential.

-
-

2.1.2 Exercise 2

+
+

2.1.2 Exercise 2

@@ -773,8 +773,8 @@ input arguments, and returns a scalar.

-
-
2.1.2.1 Solution   solution
+
+
2.1.2.1 Solution   solution

Python @@ -801,8 +801,8 @@ input arguments, and returns a scalar.

-
-

2.1.3 Exercise 3

+
+

2.1.3 Exercise 3

@@ -883,8 +883,8 @@ Therefore, the local kinetic energy is

-
-
2.1.3.1 Solution   solution
+
+
2.1.3.1 Solution   solution

Python @@ -925,8 +925,8 @@ Therefore, the local kinetic energy is

-
-

2.1.4 Exercise 4

+
+

2.1.4 Exercise 4

@@ -969,8 +969,8 @@ local kinetic energy.

-
-
2.1.4.1 Solution   solution
+
+
2.1.4.1 Solution   solution

Python @@ -1000,8 +1000,8 @@ local kinetic energy.

-
-

2.1.5 Exercise 5

+
+

2.1.5 Exercise 5

@@ -1011,8 +1011,8 @@ Find the theoretical value of \(a\) for which \(\Psi\) is an eigenfunction of \(

-
-
2.1.5.1 Solution   solution
+
+
2.1.5.1 Solution   solution
\begin{eqnarray*} E &=& \frac{\hat{H} \Psi}{\Psi} = - \frac{1}{2} \frac{\Delta \Psi}{\Psi} - \frac{1}{|\mathbf{r}|} = -\frac{a^2}{2} + \frac{a-1}{|\mathbf{r}|} \end{eqnarray*} @@ -1032,8 +1032,8 @@ equal to -0.5 atomic units.
-
-

2.2 Plot of the local energy along the \(x\) axis

+
+

2.2 Plot of the local energy along the \(x\) axis

@@ -1044,8 +1044,8 @@ choose a grid which does not contain the origin.

-
-

2.2.1 Exercise

+
+

2.2.1 Exercise

@@ -1128,8 +1128,8 @@ plot './data' index 0 using 1:2 with lines title 'a=0.1', \

-
-
2.2.1.1 Solution   solution
+
+
2.2.1.1 Solution   solution

Python @@ -1204,8 +1204,8 @@ plt.savefig("plot_py.png")

-
-

2.3 Numerical estimation of the energy

+
+

2.3 Numerical estimation of the energy

If the space is discretized in small volume elements \(\mathbf{r}_i\) @@ -1235,8 +1235,8 @@ The energy is biased because:

-
-

2.3.1 Exercise

+
+

2.3.1 Exercise

@@ -1305,8 +1305,8 @@ To compile the Fortran and run it:

-
-
2.3.1.1 Solution   solution
+
+
2.3.1.1 Solution   solution

Python @@ -1421,8 +1421,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002

-
-

2.4 Variance of the local energy

+
+

2.4 Variance of the local energy

The variance of the local energy is a functional of \(\Psi\) @@ -1449,8 +1449,8 @@ energy can be used as a measure of the quality of a wave function.
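As a sketch of this idea, the variance \(\sigma^2(E_L) = \langle E_L^2 \rangle - \langle E_L \rangle^2\) can be estimated on a grid with weights \(\Psi^2\), reusing the local energy \(E_L = -a^2/2 + (a-1)/|\mathbf{r}|\) derived earlier (names illustrative):

```python
import numpy as np

def e_loc(a, r):
    # local energy of H with Psi = exp(-a|r|), derived in section 2.1
    d = np.linalg.norm(r)
    return -0.5 * a ** 2 + (a - 1.0) / d

def variance_eloc(a, points):
    """sigma^2(E_L) = <E_L^2> - <E_L>^2 with weights Psi^2 on a grid."""
    w = np.array([np.exp(-2.0 * a * np.linalg.norm(r)) for r in points])
    el = np.array([e_loc(a, r) for r in points])
    mean = np.sum(w * el) / np.sum(w)
    return np.sum(w * (el - mean) ** 2) / np.sum(w)
```

For \(a = 1\) the local energy is constant, so the variance vanishes; any other value of \(a\) gives a strictly positive variance.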

-
-

2.4.1 Exercise (optional)

+
+

2.4.1 Exercise (optional)

@@ -1461,8 +1461,8 @@ Prove that :

-
-
2.4.1.1 Solution   solution
+
+
2.4.1.1 Solution   solution

\(\bar{E} = \langle E \rangle\) is a constant, so \(\langle \bar{E} @@ -1481,8 +1481,8 @@ Prove that :

-
-

2.4.2 Exercise

+
+

2.4.2 Exercise

@@ -1556,8 +1556,8 @@ To compile and run:

-
-
2.4.2.1 Solution   solution
+
+
2.4.2.1 Solution   solution

Python @@ -1694,8 +1694,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002 s2 = 1.8068814

-
-

3 Variational Monte Carlo

+
+

3 Variational Monte Carlo

Numerical integration with deterministic methods is very efficient @@ -1711,8 +1711,8 @@ interval.

-
-

3.1 Computation of the statistical error

+
+

3.1 Computation of the statistical error

To compute the statistical error, you need to perform \(M\) @@ -1752,8 +1752,8 @@ And the confidence interval is given by

-
-

3.1.1 Exercise

+
+

3.1.1 Exercise

@@ -1791,8 +1791,8 @@ input array.
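One possible sketch of such an averaging function (the exact name and signature in the provided solutions may differ):

```python
import numpy as np

def ave_error(arr):
    """Mean and statistical error of M independent estimates.

    The error is the standard deviation of the mean,
    sqrt(sample_variance / M), using the unbiased (ddof=1) variance.
    """
    arr = np.asarray(arr, dtype=float)
    m = arr.size
    if m < 2:
        return arr.mean(), 0.0
    return arr.mean(), np.sqrt(arr.var(ddof=1) / m)
```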

-
-
3.1.1.1 Solution   solution
+
+
3.1.1.1 Solution   solution

Python @@ -1851,8 +1851,8 @@ input array.

-
-

3.2 Uniform sampling in the box

+
+

3.2 Uniform sampling in the box

We will now perform our first Monte Carlo calculation to compute the @@ -1913,8 +1913,8 @@ compute the statistical error.
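A minimal sketch of one such run, assuming points drawn uniformly in a cube \([-L,L]^3\) and the weighted estimator \(E \approx \sum_i \Psi^2(\mathbf{r}_i)\,E_L(\mathbf{r}_i) / \sum_i \Psi^2(\mathbf{r}_i)\); the box size \(L = 5\) and the names are illustrative:

```python
import numpy as np

def e_loc(a, r):
    # local energy of H with Psi = exp(-a|r|), derived in section 2.1
    d = np.linalg.norm(r)
    return -0.5 * a ** 2 + (a - 1.0) / d

def uniform_montecarlo(a, nmax, L=5.0, rng=None):
    """One MC estimate of E: uniform points in [-L, L]^3, weights Psi^2."""
    if rng is None:
        rng = np.random.default_rng()
    num = den = 0.0
    for _ in range(nmax):
        r = rng.uniform(-L, L, 3)
        w = np.exp(-2.0 * a * np.linalg.norm(r))
        num += w * e_loc(a, r)
        den += w
    return num / den
```

Repeating this \(M\) times and feeding the results to the averaging function of section 3.1 yields the energy with its statistical error.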

-
-

3.2.1 Exercise

+
+

3.2.1 Exercise

@@ -2014,8 +2014,8 @@ well as the index of the current step.

-
-
3.2.1.1 Solution   solution
+
+
3.2.1.1 Solution   solution

Python @@ -2129,8 +2129,8 @@ E = -0.49518773675598715 +/- 5.2391494923686175E-004

-
-

3.3 Metropolis sampling with \(\Psi^2\)

+
+

3.3 Metropolis sampling with \(\Psi^2\)

We will now use the square of the wave function to sample random @@ -2163,15 +2163,15 @@ initial position \(\mathbf{r}_0\), we will realize a random walk:

-according to the following algorithm. +following the algorithm below.

-At every step, we propose a new move according to a transition probability \(T(\mathbf{r}_{n+1},\mathbf{r}_n)\) of our choice. +At every step, we propose a new move according to a transition probability \(T(\mathbf{r}_{n}\rightarrow\mathbf{r}_{n+1})\) of our choice.

-For simplicity, let us move the electron in a 3-dimensional box of side \(2\delta L\) centered at the current position +For simplicity, we will move the electron in a 3-dimensional box of side \(2\delta L\) centered at the current position of the electron:

@@ -2188,7 +2188,7 @@ where \(\delta L\) is a fixed constant, and

-After having moved the electron, add the +After having moved the electron, we add the accept/reject step that guarantees that the distribution of the \(\mathbf{r}_n\) is \(\Psi^2\). This amounts to accepting the move with probability @@ -2196,7 +2196,7 @@ probability

\[ - A{\mathbf{r}_{n+1},\mathbf{r}_n) = \min\left(1,\frac{T(\mathbf{r}_{n},\mathbf{r}_{n+1}) P(\mathbf{r}_{n+1})}{T(\mathbf{r}_{n+1},\mathbf{r}_n)P(\mathbf{r}_{n})}\right)\,, + A(\mathbf{r}_{n}\rightarrow\mathbf{r}_{n+1}) = \min\left(1,\frac{T(\mathbf{r}_{n+1}\rightarrow\mathbf{r}_{n})\,P(\mathbf{r}_{n+1})}{T(\mathbf{r}_{n}\rightarrow\mathbf{r}_{n+1})\,P(\mathbf{r}_{n})}\right)\,, \]

@@ -2206,7 +2206,7 @@ which, for our choice of transition probability, becomes

\[ - A{\mathbf{r}_{n+1},\mathbf{r}_n) = \min\left(1,\frac{P(\mathbf{r}_{n+1})}{P(\mathbf{r}_{n})}\right)= \min\left(1,\frac{\Psi(\mathbf{r}_{n+1})^2}{\Psi(\mathbf{r}_{n})^2} + A(\mathbf{r}_{n}\rightarrow\mathbf{r}_{n+1}) = \min\left(1,\frac{P(\mathbf{r}_{n+1})}{P(\mathbf{r}_{n})}\right)= \min\left(1,\frac{\Psi(\mathbf{r}_{n+1})^2}{\Psi(\mathbf{r}_{n})^2}\right)\,. \]

@@ -2258,13 +2258,14 @@ The size of the move should be adjusted so that it is as large as possible, keeping the number of accepted steps not too small. To achieve that, we define the acceptance rate as the number of accepted steps over the total number of steps. Adjusting the time -step such that the acceptance rate is close to 0.5 is a good compromise for the current problem. +step such that the acceptance rate is close to 0.5 is a good +compromise for the current problem.
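The accept/reject step described above can be sketched as follows (`dL` plays the role of \(\delta L\); names are illustrative):

```python
import numpy as np

def psi(a, r):
    """Trial wave function Psi(r) = exp(-a |r|)."""
    return np.exp(-a * np.linalg.norm(r))

def metropolis_step(a, r, dL, rng):
    """Uniform move in a box of side 2*dL centered at r, accepted
    with probability min(1, Psi(r')^2 / Psi(r)^2)."""
    r_new = r + dL * rng.uniform(-1.0, 1.0, 3)
    if rng.uniform() < (psi(a, r_new) / psi(a, r)) ** 2:
        return r_new, True      # move accepted
    return r, False             # move rejected: stay in place
```

Counting the accepted moves over the total number of steps gives the acceptance rate used to tune \(\delta L\) toward 0.5.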

-
-

3.3.1 Exercise

+
+

3.3.1 Exercise

@@ -2371,8 +2372,8 @@ Can you observe a reduction in the statistical error?

-
-
3.3.1.1 Solution   solution
+
+
3.3.1.1 Solution   solution

Python @@ -2517,8 +2518,8 @@ A = 0.51695266666666673 +/- 4.0445505648997396E-004

-
-

3.4 Gaussian random number generator

+
+

3.4 Gaussian random number generator

To obtain Gaussian-distributed random numbers, you can apply the @@ -2581,8 +2582,8 @@ In Python, you can use the -
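One standard way to turn pairs of uniform random numbers into Gaussian ones is the Box-Muller transform; a sketch, assuming NumPy (the function name is illustrative):

```python
import numpy as np

def gauss_box_muller(n, rng=None):
    """Generate n standard-normal samples from uniform pairs (Box-Muller)."""
    if rng is None:
        rng = np.random.default_rng(0)
    m = (n + 1) // 2
    u1 = 1.0 - rng.uniform(size=m)   # shift to (0, 1] so log() stays finite
    u2 = rng.uniform(size=m)
    rad = np.sqrt(-2.0 * np.log(u1))
    g1 = rad * np.cos(2.0 * np.pi * u2)
    g2 = rad * np.sin(2.0 * np.pi * u2)
    return np.concatenate([g1, g2])[:n]
```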

3.5 Generalized Metropolis algorithm

+
+

3.5 Generalized Metropolis algorithm

One can use more efficient numerical schemes to move the electrons by choosing a smarter expression for the transition probability. @@ -2607,13 +2608,13 @@ probability of transition from \(\mathbf{r}_n\) to

-In the previous example, we were using uniform random -numbers. Hence, the transition probability was +In the previous example, we were using uniform sampling in a box centered +at the current position. Hence, the transition probability was symmetric

\[ - T(\mathbf{r}_{n} \rightarrow \mathbf{r}_{n+1}) = + T(\mathbf{r}_{n} \rightarrow \mathbf{r}_{n+1}) = T(\mathbf{r}_{n+1} \rightarrow \mathbf{r}_{n}) = \text{constant}\,, \]

@@ -2624,7 +2625,7 @@ wave functions.

-Now, if instead of drawing uniform random numbers we +Now, if instead of drawing uniform random numbers, we choose to draw Gaussian random numbers with zero mean and variance \(\delta t\), the transition probability becomes:

@@ -2639,9 +2640,9 @@ choose to draw Gaussian random numbers with zero mean and variance

-To sample even better the density, we can "push" the electrons +Furthermore, to sample the density even better, we can "push" the electrons into the regions of high probability, and "pull" them away from -the low-probability regions. This will mechanically increase the +the low-probability regions. This will increase the acceptance ratios and improve the sampling.

@@ -2656,20 +2657,8 @@ To do this, we can use the gradient of the probability density

-and add the so-called drift vector, so that the numerical scheme becomes a drifted diffusion: -

- -

-\[ - \mathbf{r}_{n+1} = \mathbf{r}_{n} + \delta t\, \frac{\nabla - \Psi(\mathbf{r})}{\Psi(\mathbf{r})} + \chi \,, - \] -

- -

-where \(\chi\) is a Gaussian random variable with zero mean and -variance \(\delta t\). -The transition probability becomes: +and add the so-called drift vector, so that the numerical scheme becomes a +drifted diffusion with transition probability:

@@ -2682,11 +2671,28 @@ The transition probability becomes:

-The algorithm of the previous exercise is only slighlty modified summarized: +and the corresponding move is proposed as +

+ +

+\[ + \mathbf{r}_{n+1} = \mathbf{r}_{n} + \delta t\, \frac{\nabla + \Psi(\mathbf{r})}{\Psi(\mathbf{r})} + \chi \,, + \] +

+ +

+where \(\chi\) is a Gaussian random variable with zero mean and +variance \(\delta t\). +

+ + + +

+The algorithm of the previous exercise is only slightly modified as:

    -
  1. For the starting position, compute \(\Psi\) and the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})}\)
  2. Compute a new position \(\mathbf{r'} = \mathbf{r}_n + \delta t\, \frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})} + \chi\) @@ -2709,8 +2715,8 @@ Evaluate \(\Psi\) and \(\frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})}\) at th
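The first two steps above can be sketched using the closed-form drift \(\frac{\nabla \Psi}{\Psi} = -a\,\frac{\mathbf{r}}{|\mathbf{r}|}\) of the hydrogen trial function \(\Psi = e^{-a|\mathbf{r}|}\) (names illustrative):

```python
import numpy as np

def drift(a, r):
    """Drift vector grad(Psi)/Psi = -a * r/|r| for Psi = exp(-a|r|)."""
    return -a * r / np.linalg.norm(r)

def drifted_move(a, r, dt, rng):
    """Propose r' = r + dt*drift(r) + chi, with chi ~ N(0, dt) per component."""
    chi = rng.normal(0.0, np.sqrt(dt), 3)
    return r + dt * drift(a, r) + chi
```

The proposed position is then accepted or rejected with the generalized Metropolis probability involving the ratio of transition probabilities.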

-
-

3.5.1 Exercise 1

+
+

3.5.1 Exercise 1

@@ -2744,8 +2750,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P

-
-
3.5.1.1 Solution   solution
+
+
3.5.1.1 Solution   solution

Python @@ -2778,13 +2784,13 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P

-
-

3.5.2 Exercise 2

+
+

3.5.2 Exercise 2

-Modify the previous program to introduce the drifted diffusion scheme. -(This is a necessary step for the next section). +Modify the previous program to introduce the drift-diffusion scheme. +(This is a necessary step for the next section on diffusion Monte Carlo).

@@ -2873,8 +2879,8 @@ Modify the previous program to introduce the drifted diffusion scheme.
-
-
3.5.2.1 Solution   solution
+
+
3.5.2.1 Solution   solution

Python @@ -3060,12 +3066,12 @@ A = 0.78839866666666658 +/- 3.2503783452043152E-004

-
-

4 Diffusion Monte Carlo   solution

+
+

4 Diffusion Monte Carlo   solution

-
-

4.1 Schrödinger equation in imaginary time

+
+

4.1 Schrödinger equation in imaginary time

Consider the time-dependent Schrödinger equation: @@ -3073,12 +3079,12 @@ Consider the time-dependent Schrödinger equation:

\[ - i\frac{\partial \Psi(\mathbf{r},t)}{\partial t} = \hat{H} \Psi(\mathbf{r},t) + i\frac{\partial \Psi(\mathbf{r},t)}{\partial t} = \hat{H} \Psi(\mathbf{r},t)\,. \]

-We can expand \(\Psi(\mathbf{r},0)\), in the basis of the eigenstates +We can expand a given starting wave function, \(\Psi(\mathbf{r},0)\), in the basis of the eigenstates of the time-independent Hamiltonian:

@@ -3099,7 +3105,7 @@ The solution of the Schrödinger equation at time \(t\) is

-Now, let's replace the time variable \(t\) by an imaginary time variable +Now, if we replace the time variable \(t\) by an imaginary time variable \(\tau=i\,t\), we obtain

@@ -3110,10 +3116,10 @@ Now, let's replace the time variable \(t\) by an imaginary time variable

-where \(\psi(\mathbf{r},\tau) = \Psi(\mathbf{r},-i\tau) = \Psi(\mathbf{r},t)\) +where \(\psi(\mathbf{r},\tau) = \Psi(\mathbf{r},-i\,\tau)\) and \[ - \psi(\mathbf{r},\tau) = \sum_k a_k \exp( -E_k\, \tau) \phi_k(\mathbf{r}). + \psi(\mathbf{r},\tau) = \sum_k a_k \exp( -E_k\, \tau) \phi_k(\mathbf{r}). \] For large positive values of \(\tau\), \(\psi\) is dominated by the \(k=0\) term, namely the lowest eigenstate. @@ -3124,8 +3130,8 @@ system.
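The dominance of the \(k=0\) term can be checked numerically with a hypothetical two-level spectrum (the energies and coefficients below are chosen only for illustration):

```python
import numpy as np

# hypothetical two-state expansion: psi(tau) = sum_k a_k exp(-E_k tau) phi_k
E = np.array([-0.5, 0.1])   # E_0 < E_1 (illustrative energies)
a = np.array([0.3, 0.7])    # coefficients of psi at tau = 0
tau = 50.0

c = a * np.exp(-E * tau)    # coefficients at imaginary time tau
c /= np.linalg.norm(c)      # normalize to read off the relative weights
# the ground-state weight c[0] is now essentially 1
```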

-
-

4.2 Diffusion and branching

+
+

4.2 Diffusion and branching

The diffusion equation of particles is given by @@ -3179,8 +3185,8 @@ the combination of a diffusion process and a branching process.

-
-

4.3 Importance sampling

+