diff --git a/index.html b/index.html
index bf63222..1efa19b 100644
--- a/index.html
+++ b/index.html
@@ -3,7 +3,7 @@
 "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
-
+
 Quantum Monte Carlo
@@ -329,151 +329,151 @@ for the JavaScript code in this tag.

Table of Contents

-
-

1 Introduction

+
+

1 Introduction

-This web site contains the QMC tutorial of the 2021 LTTC winter school
+This website contains the QMC tutorial of the 2021 LTTC winter school
 Tutorials in Theoretical Chemistry.

@@ -487,7 +487,8 @@ computes a statistical estimate of the expectation value of the energy
 associated with a given wave function, and apply this approach to the
 hydrogen atom. Finally, we present the diffusion Monte Carlo (DMC) method which
-we use here to estimate the exact energy of the hydrogen atom and of the H2 molecule.
+we use here to estimate the exact energy of the hydrogen atom and of the H2 molecule,
+starting from an approximate wave function.

@@ -510,11 +511,11 @@ coordinates, etc).

-
-

2 Numerical evaluation of the energy

+
+

2 Numerical evaluation of the energy

-In this section we consider the Hydrogen atom with the following
+In this section, we consider the hydrogen atom with the following
 wave function:

@@ -525,7 +526,7 @@ wave function:

-We will first verify that, for a given value of \(a\), \(\Psi\) is an
+We will first verify that, for a particular value of \(a\), \(\Psi\) is an
 eigenfunction of the Hamiltonian

@@ -536,7 +537,7 @@ eigenfunction of the Hamiltonian

-To do that, we will check if the local energy, defined as
+To do that, we will compute the local energy, defined as

@@ -546,31 +547,11 @@ To do that, we will check if the local energy, defined as

-is constant. -

- - -

-The probabilistic expected value of an arbitrary function \(f(x)\)
-with respect to a probability density function \(p(x)\) is given by
+and check whether it is constant.
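This constancy check can be done numerically. A minimal sketch, assuming the hydrogenic trial form \(\Psi(\mathbf{r}) = e^{-a|\mathbf{r}|}\) (the actual formula for \(\Psi\) is elided from this diff), for which \(\Delta\Psi/\Psi = a^2 - 2a/|\mathbf{r}|\):

```python
import numpy as np

def e_loc(a, r):
    """Local energy E_L(r) = -1/2 ΔΨ/Ψ - 1/|r| for Ψ = exp(-a|r|),
    using ΔΨ/Ψ = a² - 2a/|r|."""
    d = np.sqrt(np.dot(r, r))                 # distance from the nucleus
    kinetic = -0.5 * (a**2 - 2.0 * a / d)
    potential = -1.0 / d
    return kinetic + potential

# At a = 1 the local energy is the same at every point (up to rounding):
for x in (0.1, 1.0, 5.0):
    print(e_loc(1.0, np.array([x, 0.0, 0.0])))   # each ≈ -0.5
```

For other values of \(a\), `e_loc` varies with the position, signalling that \(\Psi\) is then not an eigenfunction.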

-\[ \langle f \rangle_p = \int_{-\infty}^\infty p(x)\, f(x)\,dx. \] -

- -

-Recall that a probability density function \(p(x)\) is non-negative
-and integrates to one:
-

- -

-\[ \int_{-\infty}^\infty p(x)\,dx = 1. \] -

- - -

-The electronic energy of a system is the expectation value of the
+In general, the electronic energy of a system, \(E\), can be rewritten as the expectation value of the
 local energy \(E_L(\mathbf{r})\) with respect to the 3N-dimensional electron
 density given by the square of the wave function:

@@ -580,12 +561,40 @@
 \begin{eqnarray*}
 E & = & \frac{\langle \Psi| \hat{H} | \Psi\rangle}{\langle \Psi |\Psi \rangle}
       = \frac{\int \Psi(\mathbf{r})\, \hat{H} \Psi(\mathbf{r})\, d\mathbf{r}}{\int \left[\Psi(\mathbf{r}) \right]^2 d\mathbf{r}} \\
   & = & \frac{\int \left[\Psi(\mathbf{r})\right]^2\, \frac{\hat{H} \Psi(\mathbf{r})}{\Psi(\mathbf{r})}\,d\mathbf{r}}{\int \left[\Psi(\mathbf{r}) \right]^2 d\mathbf{r}}
       = \frac{\int \left[\Psi(\mathbf{r})\right]^2\, E_L(\mathbf{r})\,d\mathbf{r}}{\int \left[\Psi(\mathbf{r}) \right]^2 d\mathbf{r}}
-  = \langle E_L \rangle_{\Psi^2}
+  = \langle E_L \rangle_{\Psi^2}\,,
 \end{eqnarray*}
+

+where \(\mathbf{r}\) is the vector of the 3N-dimensional electronic coordinates (\(N=1\) for the hydrogen atom). +

+ +

+For a small number of dimensions, one can compute \(E\) by evaluating the integrals on a grid. However, +

+ +

+The probabilistic expected value of an arbitrary function \(f(x)\)
+with respect to a probability density function \(p(x)\) is given by
+

+ +

+\[ \langle f \rangle_p = \int_{-\infty}^\infty p(x)\, f(x)\,dx, \] +

+ +

+where the probability density function \(p(x)\) is non-negative
+and integrates to one:

+ +

+\[ \int_{-\infty}^\infty p(x)\,dx = 1. \] +
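In practice, \(\langle f \rangle_p\) is estimated by averaging \(f\) over samples drawn from \(p\). A small illustration (the choice of \(p\) as the standard normal density and \(f(x)=x^2\) is purely illustrative; for it, \(\langle f \rangle_p = 1\) exactly):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate <f>_p = ∫ p(x) f(x) dx by averaging f over samples drawn from p.
# Here p is the standard normal density and f(x) = x², so <f>_p = 1.
samples = rng.normal(size=100_000)
estimate = np.mean(samples**2)
print(estimate)   # close to 1
```

The statistical error of such an estimate shrinks as \(1/\sqrt{M}\) with the number of samples \(M\), which is the theme of section 3.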

-
-

2.1 Local energy

+ + + +
+

2.1 Local energy

Write all the functions of this section in a single file: @@ -608,8 +617,8 @@ to catch the error.

-
-

2.1.1 Exercise 1

+
+

2.1.1 Exercise 1

@@ -653,8 +662,8 @@ and returns the potential.

-
-
2.1.1.1 Solution   solution
+
+
2.1.1.1 Solution   solution

Python @@ -694,8 +703,8 @@ and returns the potential.

-
-

2.1.2 Exercise 2

+
+

2.1.2 Exercise 2

@@ -730,8 +739,8 @@ input arguments, and returns a scalar.

-
-
2.1.2.1 Solution   solution
+
+
2.1.2.1 Solution   solution

Python @@ -758,8 +767,8 @@ input arguments, and returns a scalar.

-
-

2.1.3 Exercise 3

+
+

2.1.3 Exercise 3

@@ -840,8 +849,8 @@ So the local kinetic energy is

-
-
2.1.3.1 Solution   solution
+
+
2.1.3.1 Solution   solution

Python @@ -882,8 +891,8 @@ So the local kinetic energy is

-
-

2.1.4 Exercise 4

+
+

2.1.4 Exercise 4

@@ -926,8 +935,8 @@ local kinetic energy.

-
-
2.1.4.1 Solution   solution
+
+
2.1.4.1 Solution   solution

Python @@ -957,8 +966,8 @@ local kinetic energy.

-
-

2.1.5 Exercise 5

+
+

2.1.5 Exercise 5

@@ -968,8 +977,8 @@ Find the theoretical value of \(a\) for which \(\Psi\) is an eigenfunction of \(

-
-
2.1.5.1 Solution   solution
+
+
2.1.5.1 Solution   solution
\begin{eqnarray*} E &=& \frac{\hat{H} \Psi}{\Psi} = - \frac{1}{2} \frac{\Delta \Psi}{\Psi} - @@ -989,8 +998,8 @@ equal to -0.5 atomic units.
-
-

2.2 Plot of the local energy along the \(x\) axis

+
+

2.2 Plot of the local energy along the \(x\) axis

@@ -1001,8 +1010,8 @@ choose a grid which does not contain the origin.

-
-

2.2.1 Exercise

+
+

2.2.1 Exercise

@@ -1085,8 +1094,8 @@ plot './data' index 0 using 1:2 with lines title 'a=0.1', \

-
-
2.2.1.1 Solution   solution
+
+
2.2.1.1 Solution   solution

Python @@ -1161,8 +1170,8 @@ plt.savefig("plot_py.png")

-
-

2.3 Numerical estimation of the energy

+
+

2.3 Numerical estimation of the energy

If the space is discretized in small volume elements \(\mathbf{r}_i\) @@ -1192,8 +1201,8 @@ The energy is biased because:

-
-

2.3.1 Exercise

+
+

2.3.1 Exercise

@@ -1262,8 +1271,8 @@ To compile the Fortran and run it:

-
-
2.3.1.1 Solution   solution
+
+
2.3.1.1 Solution   solution

Python @@ -1378,8 +1387,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002
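The discretized estimate described in this section can be sketched as follows, assuming \(\Psi = e^{-a|\mathbf{r}|}\) and an illustrative grid (the grid size `n = 50` and box half-width `L = 5.0` are assumptions, not the tutorial's elided values):

```python
import numpy as np

def energy_on_grid(a, n=50, L=5.0):
    """E ≈ Σ Ψ(r_i)² E_L(r_i) / Σ Ψ(r_i)² on an n³ grid over [-L, L]³.
    This linspace never places a point exactly at the origin, where
    E_L diverges."""
    x = np.linspace(-L, L, n)
    X, Y, Z = np.meshgrid(x, x, x)
    d = np.sqrt(X**2 + Y**2 + Z**2)
    w = np.exp(-2.0 * a * d)                       # Ψ(r)²
    el = -0.5 * (a**2 - 2.0 * a / d) - 1.0 / d     # local energy
    return np.sum(w * el) / np.sum(w)

print(energy_on_grid(1.0))   # ≈ -0.5 : E_L is constant at a = 1
```

For \(a \neq 1\) the result carries the grid bias discussed above, so the number depends on the chosen grid.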

-
-

2.4 Variance of the local energy

+
+

2.4 Variance of the local energy

The variance of the local energy is a functional of \(\Psi\) @@ -1406,8 +1415,8 @@ energy can be used as a measure of the quality of a wave function.
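The variance \(\sigma^2 = \langle E_L^2 \rangle_{\Psi^2} - \langle E_L \rangle_{\Psi^2}^2\) can be evaluated on a radial grid, assuming the spherically symmetric trial form \(\Psi = e^{-a|\mathbf{r}|}\) (so the angular integrals cancel and a weight \(r^2 \Psi(r)^2\) suffices; the grid parameters are illustrative):

```python
import numpy as np

def e_loc(a, r):
    # local energy for the assumed trial function Ψ = exp(-a r)
    return -0.5 * (a**2 - 2.0 * a / r) - 1.0 / r

def variance_e_loc(a, rmax=20.0, n=200_000):
    """<E_L²> - <E_L>² with weight r² Ψ(r)² on a radial grid."""
    r = np.linspace(1e-6, rmax, n)
    w = r**2 * np.exp(-2.0 * a * r)      # r² Ψ² weight
    el = e_loc(a, r)
    mean = np.sum(w * el) / np.sum(w)
    return np.sum(w * (el - mean)**2) / np.sum(w)

print(variance_e_loc(1.0))   # ~0 : Ψ is an eigenfunction at a = 1
print(variance_e_loc(1.2))   # > 0 : an approximate Ψ gives a fluctuating E_L
```

The vanishing variance at \(a=1\) is exactly the "zero-variance" property the text uses as a quality measure.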

-
-

2.4.1 Exercise (optional)

+
+

2.4.1 Exercise (optional)

@@ -1418,8 +1427,8 @@ Prove that :

-
-
2.4.1.1 Solution   solution
+
+
2.4.1.1 Solution   solution

\(\bar{E} = \langle E \rangle\) is a constant, so \(\langle \bar{E} @@ -1438,8 +1447,8 @@ Prove that :

-
-

2.4.2 Exercise

+
+

2.4.2 Exercise

@@ -1513,8 +1522,8 @@ To compile and run:

-
-
2.4.2.1 Solution   solution
+
+
2.4.2.1 Solution   solution

Python @@ -1651,8 +1660,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002 s2 = 1.8068814

-
-

3 Variational Monte Carlo

+
+

3 Variational Monte Carlo

Numerical integration with deterministic methods is very efficient @@ -1668,8 +1677,8 @@ interval.

-
-

3.1 Computation of the statistical error

+
+

3.1 Computation of the statistical error

To compute the statistical error, you need to perform \(M\) @@ -1709,8 +1718,8 @@ And the confidence interval is given by

-
-

3.1.1 Exercise

+
+

3.1.1 Exercise

@@ -1748,8 +1757,8 @@ input array.
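The requested helper can be sketched as follows: the error of the mean of \(M\) independent estimates is \(\sqrt{s^2/M}\), with \(s^2\) the unbiased sample variance (function name `ave_error` is an assumption; the tutorial's own solution is elided from this diff):

```python
import numpy as np

def ave_error(arr):
    """Mean of M independent estimates and its statistical error,
    err = sqrt(s²/M) with the unbiased sample variance s²."""
    m = len(arr)
    assert m > 1
    average = np.mean(arr)
    variance = np.var(arr, ddof=1)     # unbiased: divides by M-1
    error = np.sqrt(variance / m)
    return average, error

ave, err = ave_error([1.0, 2.0, 3.0, 4.0])
print(ave, err)
```

For this toy input the mean is 2.5 and the error \(\sqrt{(5/3)/4} \approx 0.645\).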

-
-
3.1.1.1 Solution   solution
+
+
3.1.1.1 Solution   solution

Python @@ -1808,8 +1817,8 @@ input array.

-
-

3.2 Uniform sampling in the box

+
+

3.2 Uniform sampling in the box

We will now do our first Monte Carlo calculation to compute the @@ -1843,8 +1852,8 @@ compute the statistical error.
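A sketch of this first Monte Carlo estimate, assuming \(\Psi = e^{-a|\mathbf{r}|}\) sampled uniformly in a cube (the value \(a=1.2\), the box half-width, and the run sizes are illustrative; for this trial form, \(\langle E_L \rangle_{\Psi^2} = a^2/2 - a\) analytically):

```python
import numpy as np

def e_loc(a, d):
    # local energy at distance d from the nucleus, for Ψ = exp(-a d)
    return -0.5 * (a**2 - 2.0 * a / d) - 1.0 / d

def mc_uniform(a, nmax, rng, box=5.0):
    """One Monte Carlo estimate of <E_L>_{Ψ²}: draw nmax points
    uniformly in the cube [-box, box]³, average E_L with weight Ψ²."""
    r = rng.uniform(-box, box, size=(nmax, 3))
    d = np.linalg.norm(r, axis=1)
    w = np.exp(-2.0 * a * d)               # Ψ(r)²
    return np.sum(w * e_loc(a, d)) / np.sum(w)

rng = np.random.default_rng(2)
estimates = [mc_uniform(1.2, 10_000, rng) for _ in range(30)]
print(np.mean(estimates))   # ≈ a²/2 - a = -0.48 for a = 1.2
```

Feeding the list of independent `estimates` to the averaging function of section 3.1 gives the statistical error bar.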

-
-

3.2.1 Exercise

+
+

3.2.1 Exercise

@@ -1944,8 +1953,8 @@ well as the index of the current step.

-
-
3.2.1.1 Solution   solution
+
+
3.2.1.1 Solution   solution

Python @@ -2059,8 +2068,8 @@ E = -0.49518773675598715 +/- 5.2391494923686175E-004

-
-

3.3 Metropolis sampling with \(\Psi^2\)

+
+

3.3 Metropolis sampling with \(\Psi^2\)

We will now use the square of the wave function to sample random @@ -2148,8 +2157,8 @@ step such that the acceptance rate is close to 0.5 is a good compromise.
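The Metropolis loop described above can be sketched as follows, again assuming \(\Psi = e^{-a|\mathbf{r}|}\) (the values \(a=0.9\) and \(dt=1.0\) are illustrative, not the tutorial's elided parameters):

```python
import numpy as np

def metropolis_psi2(a, nmax, dt, rng):
    """Sample Ψ² with symmetric moves r' = r + dt·u, u uniform in [-1,1]³.
    Returns the running average of E_L and the acceptance rate."""
    r = rng.uniform(-1.0, 1.0, 3)
    d = np.linalg.norm(r)
    e_sum, n_accept = 0.0, 0
    for _ in range(nmax):
        r_new = r + dt * rng.uniform(-1.0, 1.0, 3)
        d_new = np.linalg.norm(r_new)
        # acceptance ratio Ψ(r')² / Ψ(r)² = exp(-2a(|r'| - |r|))
        if rng.uniform() < np.exp(-2.0 * a * (d_new - d)):
            r, d = r_new, d_new
            n_accept += 1
        e_sum += -0.5 * (a**2 - 2.0 * a / d) - 1.0 / d   # local energy
    return e_sum / nmax, n_accept / nmax

rng = np.random.default_rng(0)
energy, rate = metropolis_psi2(0.9, 50_000, 1.0, rng)
print(energy, rate)   # energy ≈ a²/2 - a = -0.495 for a = 0.9
```

In line with the text, `dt` should be tuned so the printed acceptance rate lands near 0.5.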

-
-

3.3.1 Exercise

+
+

3.3.1 Exercise

@@ -2256,8 +2265,8 @@ Can you observe a reduction in the statistical error?

-
-
3.3.1.1 Solution   solution
+
+
3.3.1.1 Solution   solution

Python @@ -2402,8 +2411,8 @@ A = 0.51695266666666673 +/- 4.0445505648997396E-004

-
-

3.4 Gaussian random number generator

+
+

3.4 Gaussian random number generator

To obtain Gaussian-distributed random numbers, you can apply the @@ -2465,8 +2474,8 @@ In Python, you can use the -
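The name of the transform and of the Python routine are cut off in this diff; one standard choice is the Box-Muller transform (assumption: this is the transform the text applies), sketched here:

```python
import numpy as np

def gaussian_pair(rng):
    """Box-Muller transform: two uniform variates mapped to two
    independent standard normal variates."""
    u1 = 1.0 - rng.uniform()            # in (0, 1], avoids log(0)
    u2 = rng.uniform()
    radius = np.sqrt(-2.0 * np.log(u1))
    return (radius * np.cos(2.0 * np.pi * u2),
            radius * np.sin(2.0 * np.pi * u2))

rng = np.random.default_rng(0)
z = np.array([gaussian_pair(rng) for _ in range(20_000)]).ravel()
print(z.mean(), z.var())   # mean ≈ 0, variance ≈ 1
```

In Python, `numpy.random.Generator.normal` provides the same result directly.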

3.5 Generalized Metropolis algorithm

+
+

3.5 Generalized Metropolis algorithm

One can use more efficient numerical schemes to move the electrons, @@ -2565,8 +2574,8 @@ The transition probability becomes:

-
-

3.5.1 Exercise 1

+
+

3.5.1 Exercise 1

@@ -2600,8 +2609,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P
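Assuming the hydrogenic trial form \(\Psi = e^{-a|\mathbf{r}|}\) used throughout, the drift vector has the closed form \(\frac{\nabla \Psi}{\Psi} = -a\,\mathbf{r}/|\mathbf{r}|\), which a minimal sketch can implement as:

```python
import numpy as np

def drift(a, r):
    """Drift vector ∇Ψ/Ψ = -a·r/|r| for Ψ = exp(-a|r|)."""
    d = np.linalg.norm(r)
    return -a * r / d

print(drift(1.0, np.array([3.0, 0.0, 4.0])))   # [-0.6  0. -0.8]
```

The vector always points toward the nucleus, pulling the walkers into the high-\(\Psi^2\) region.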

-
-
3.5.1.1 Solution   solution
+
+
3.5.1.1 Solution   solution

Python @@ -2634,8 +2643,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P

-
-

3.5.2 Exercise 2

+
+

3.5.2 Exercise 2

@@ -2729,8 +2738,8 @@ Modify the previous program to introduce the drifted diffusion scheme.
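A hedged sketch of the drifted-diffusion modification, assuming \(\Psi = e^{-a|\mathbf{r}|}\) so that \(\mathbf{d}(\mathbf{r}) = -a\,\mathbf{r}/|\mathbf{r}|\) (the parameters \(a=0.9\) and \(dt=1.0\) are illustrative): the move is \(\mathbf{r}' = \mathbf{r} + dt\,\mathbf{d}(\mathbf{r}) + \chi\) with \(\chi \sim \mathcal{N}(0, dt)\), and the acceptance includes the ratio of Gaussian transition densities.

```python
import numpy as np

def vmc_drifted(a, nmax, dt, rng):
    """Generalized Metropolis with drifted diffusion for Ψ = exp(-a|r|)."""
    def drift(r):
        return -a * r / np.linalg.norm(r)

    def log_t(rp, r):
        # log T(r -> r'), Gaussian of variance dt, up to its normalization
        return -np.sum((rp - r - dt * drift(r))**2) / (2.0 * dt)

    r = rng.uniform(-1.0, 1.0, 3)
    d = np.linalg.norm(r)
    e_sum, n_accept = 0.0, 0
    for _ in range(nmax):
        chi = rng.normal(0.0, np.sqrt(dt), 3)
        r_new = r + dt * drift(r) + chi
        d_new = np.linalg.norm(r_new)
        # log q = log[Ψ(r')² T(r'->r)] - log[Ψ(r)² T(r->r')]
        log_q = -2.0 * a * (d_new - d) + log_t(r, r_new) - log_t(r_new, r)
        if np.log(rng.uniform()) < log_q:
            r, d = r_new, d_new
            n_accept += 1
        e_sum += -0.5 * (a**2 - 2.0 * a / d) - 1.0 / d   # local energy
    return e_sum / nmax, n_accept / nmax

rng = np.random.default_rng(1)
energy, rate = vmc_drifted(0.9, 50_000, 1.0, rng)
print(energy, rate)   # energy ≈ -0.495 for a = 0.9
```

The estimated energy is unchanged with respect to the symmetric scheme, but the acceptance rate at the same `dt` is higher, which is the point of the drifted moves.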

-
-
3.5.2.1 Solution   solution
+
+
3.5.2.1 Solution   solution

Python @@ -2916,12 +2925,12 @@ A = 0.78839866666666658 +/- 3.2503783452043152E-004

-
-

4 Diffusion Monte Carlo   solution

+
+

4 Diffusion Monte Carlo   solution

-
-

4.1 Schrödinger equation in imaginary time

+
+

4.1 Schrödinger equation in imaginary time

Consider the time-dependent Schrödinger equation: @@ -2980,8 +2989,8 @@ system.

-
-

4.2 Diffusion and branching

+
+

4.2 Diffusion and branching

The diffusion equation of particles is given by @@ -3035,8 +3044,8 @@ the combination of a diffusion process and a branching process.

-
-

4.3 Importance sampling

+