diff --git a/index.html b/index.html index 56f2990..7694664 100644 --- a/index.html +++ b/index.html @@ -3,7 +3,7 @@ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> - + Quantum Monte Carlo @@ -329,151 +329,151 @@ for the JavaScript code in this tag.

Table of Contents

-
-

1 Introduction

+
+

1 Introduction

This website contains the QMC tutorial of the 2021 LTTC winter school @@ -513,8 +513,8 @@ coordinates, etc).

-
-

1.1 Energy and local energy

+
+

1.1 Energy and local energy

For a given system with Hamiltonian \(\hat{H}\) and wave function \(\Psi\), we define the local energy as @@ -578,7 +578,7 @@ where the probability density is given by the square of the wave function:

-\[ P(\mathbf{r}) = \frac{|Psi(\mathbf{r}|^2)}{\int |\Psi(\mathbf{r})|^2 d\mathbf{r}}\,. \] +\[ P(\mathbf{r}) = \frac{|\Psi(\mathbf{r})|^2}{\int |\Psi(\mathbf{r})|^2 d\mathbf{r}}\,. \]

@@ -592,8 +592,8 @@ If we can sample \(N_{\rm MC}\) configurations \(\{\mathbf{r}\}\) distributed as
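As a minimal sketch of this sampling idea (illustrative code, not the tutorial's required functions): for the exponential trial function \(\Psi = e^{-a|\mathbf{r}|}\), the radial density \(r^2 e^{-2ar}\) is a Gamma(3, 1/(2a)) distribution, so we can draw \(|\mathbf{r}|\) directly and average the local energy over the sample.

```python
import numpy as np

def mc_energy(a, n_mc, rng):
    """If configurations are distributed as P(r) = |Psi|^2 / int |Psi|^2,
    the energy is the plain average of the local energy over the sample.
    For Psi = exp(-a|r|), |r| follows a Gamma(3, 1/(2a)) law."""
    r = rng.gamma(3.0, 1.0 / (2.0 * a), size=n_mc)  # |r| ~ r^2 exp(-2ar)
    e_loc = (a - 1.0) / r - 0.5 * a * a             # local energy of H
    return e_loc.mean()
```

For \(a=1\) the local energy is constant (\(-1/2\)), so the average is exact with any sample size; for other \(a\) the average tends to \(a^2/2 - a\).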

-
-

2 Numerical evaluation of the energy of the hydrogen atom

+
+

2 Numerical evaluation of the energy of the hydrogen atom

In this section, we consider the hydrogen atom with the following @@ -622,8 +622,8 @@ To do that, we will compute the local energy and check whether it is constant.

-
-

2.1 Local energy

+
+

2.1 Local energy

You will now program all quantities needed to compute the local energy of the H atom for the given wave function. @@ -650,8 +650,8 @@ to catch the error.
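The quantities needed in the exercises below can be sketched as follows, assuming the exponential trial function \(\Psi(\mathbf{r}) = e^{-a|\mathbf{r}|}\) used in this section (function names are illustrative; the exercises specify the exact required signatures):

```python
import numpy as np

def potential(r):
    """Coulomb potential -1/|r| felt by the electron (diverges at the nucleus)."""
    d = np.sqrt(np.dot(r, r))
    if d == 0.0:
        raise ValueError("potential diverges at r = 0")
    return -1.0 / d

def psi(a, r):
    """Trial wave function Psi(r) = exp(-a|r|)."""
    return np.exp(-a * np.sqrt(np.dot(r, r)))

def kinetic(a, r):
    """Local kinetic energy -1/2 (Delta Psi)/Psi = a/|r| - a^2/2."""
    d = np.sqrt(np.dot(r, r))
    return a / d - 0.5 * a * a

def e_loc(a, r):
    """Local energy: local kinetic energy plus potential."""
    return kinetic(a, r) + potential(r)
```

A quick sanity check: for \(a=1\), the \(1/|\mathbf{r}|\) terms cancel and the local energy is \(-1/2\) everywhere.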

-
-

2.1.1 Exercise 1

+
+

2.1.1 Exercise 1

@@ -695,8 +695,8 @@ and returns the potential.

-
-
2.1.1.1 Solution   solution
+
+
2.1.1.1 Solution   solution

Python @@ -736,8 +736,8 @@ and returns the potential.

-
-

2.1.2 Exercise 2

+
+

2.1.2 Exercise 2

@@ -772,8 +772,8 @@ input arguments, and returns a scalar.

-
-
2.1.2.1 Solution   solution
+
+
2.1.2.1 Solution   solution

Python @@ -800,8 +800,8 @@ input arguments, and returns a scalar.

-
-

2.1.3 Exercise 3

+
+

2.1.3 Exercise 3

@@ -882,8 +882,8 @@ Therefore, the local kinetic energy is

-
-
2.1.3.1 Solution   solution
+
+
2.1.3.1 Solution   solution

Python @@ -924,8 +924,8 @@ Therefore, the local kinetic energy is

-
-

2.1.4 Exercise 4

+
+

2.1.4 Exercise 4

@@ -968,8 +968,8 @@ local kinetic energy.

-
-
2.1.4.1 Solution   solution
+
+
2.1.4.1 Solution   solution

Python @@ -999,8 +999,8 @@ local kinetic energy.

-
-

2.1.5 Exercise 5

+
+

2.1.5 Exercise 5

@@ -1010,8 +1010,8 @@ Find the theoretical value of \(a\) for which \(\Psi\) is an eigenfunction of \(

-
-
2.1.5.1 Solution   solution
+
+
2.1.5.1 Solution   solution
\begin{eqnarray*} E &=& \frac{\hat{H} \Psi}{\Psi} = - \frac{1}{2} \frac{\Delta \Psi}{\Psi} - @@ -1031,8 +1031,8 @@ equal to -0.5 atomic units.
-
-

2.2 Plot of the local energy along the \(x\) axis

+
+

2.2 Plot of the local energy along the \(x\) axis

@@ -1043,8 +1043,8 @@ choose a grid which does not contain the origin.

-
-

2.2.1 Exercise

+
+

2.2.1 Exercise

@@ -1127,8 +1127,8 @@ plot './data' index 0 using 1:2 with lines title 'a=0.1', \

-
-
2.2.1.1 Solution   solution
+
+
2.2.1.1 Solution   solution

Python @@ -1203,8 +1203,8 @@ plt.savefig("plot_py.png")

-
-

2.3 Numerical estimation of the energy

+
+

2.3 Numerical estimation of the energy

If the space is discretized in small volume elements \(\mathbf{r}_i\) @@ -1234,8 +1234,8 @@ The energy is biased because:
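A minimal NumPy sketch of this discretized estimator, assuming the hydrogen setup above (\(\Psi = e^{-a|\mathbf{r}|}\), a cube \([-5,5]^3\)); the constant volume elements cancel between numerator and denominator, and an even number of grid points keeps the origin off the grid:

```python
import numpy as np

def grid_energy(a, n=40):
    """Weighted average of the local energy on a uniform grid:
    E ~ sum_i |Psi(r_i)|^2 E_L(r_i) / sum_i |Psi(r_i)|^2."""
    axis = np.linspace(-5.0, 5.0, n)       # n even: grid avoids the origin
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    d = np.sqrt(x**2 + y**2 + z**2)
    w = np.exp(-2.0 * a * d)               # |Psi|^2
    e_l = a / d - 0.5 * a * a - 1.0 / d    # local energy of H
    return float(np.sum(w * e_l) / np.sum(w))
```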

-
-

2.3.1 Exercise

+
+

2.3.1 Exercise

@@ -1304,8 +1304,8 @@ To compile the Fortran and run it:

-
-
2.3.1.1 Solution   solution
+
+
2.3.1.1 Solution   solution

Python @@ -1420,8 +1420,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002

-
-

2.4 Variance of the local energy

+
+

2.4 Variance of the local energy

The variance of the local energy is a functional of \(\Psi\) @@ -1448,8 +1448,8 @@ energy can be used as a measure of the quality of a wave function.
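As a sketch (reusing the grid estimator of the previous section, which is an assumption about the setup rather than the tutorial's prescribed code), the variance \(\sigma^2 = \langle E_L^2 \rangle - \langle E_L \rangle^2\) can be estimated with the same \(|\Psi|^2\) weights:

```python
import numpy as np

def variance_eloc(a, n=40):
    """Variance of the local energy on a uniform grid over [-5,5]^3
    for Psi = exp(-a|r|); it vanishes for the exact wave function (a=1)."""
    axis = np.linspace(-5.0, 5.0, n)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    d = np.sqrt(x**2 + y**2 + z**2)
    w = np.exp(-2.0 * a * d)                    # |Psi|^2
    e_l = a / d - 0.5 * a * a - 1.0 / d         # local energy of H
    mean = np.sum(w * e_l) / np.sum(w)
    return float(np.sum(w * (e_l - mean) ** 2) / np.sum(w))
```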

-
-

2.4.1 Exercise (optional)

+
+

2.4.1 Exercise (optional)

@@ -1460,8 +1460,8 @@ Prove that :

-
-
2.4.1.1 Solution   solution
+
+
2.4.1.1 Solution   solution

\(\bar{E} = \langle E \rangle\) is a constant, so \(\langle \bar{E} @@ -1480,8 +1480,8 @@ Prove that :

-
-

2.4.2 Exercise

+
+

2.4.2 Exercise

@@ -1555,8 +1555,8 @@ To compile and run:

-
-
2.4.2.1 Solution   solution
+
+
2.4.2.1 Solution   solution

Python @@ -1693,8 +1693,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002 s2 = 1.8068814

-
-

3 Variational Monte Carlo

+
+

3 Variational Monte Carlo

Numerical integration with deterministic methods is very efficient @@ -1710,8 +1710,8 @@ interval.

-
-

3.1 Computation of the statistical error

+
+

3.1 Computation of the statistical error

To compute the statistical error, you need to perform \(M\) @@ -1751,8 +1751,8 @@ And the confidence interval is given by
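A compact sketch of the average-and-error computation over \(M\) independent runs (one common convention, using the unbiased sample variance; the exercise specifies the exact interface expected here):

```python
import numpy as np

def ave_error(arr):
    """Mean of M independent estimates and its statistical error,
    err = sqrt(sample variance / M)."""
    arr = np.asarray(arr, dtype=float)
    m = len(arr)
    ave = arr.mean()
    err = np.sqrt(arr.var(ddof=1) / m) if m > 1 else 0.0
    return ave, err
```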

-
-

3.1.1 Exercise

+
+

3.1.1 Exercise

@@ -1790,8 +1790,8 @@ input array.

-
-
3.1.1.1 Solution   solution
+
+
3.1.1.1 Solution   solution

Python @@ -1850,8 +1850,8 @@ input array.

-
-

3.2 Uniform sampling in the box

+
+

3.2 Uniform sampling in the box

We will now perform our first Monte Carlo calculation to compute the @@ -1912,8 +1912,8 @@ compute the statistical error.
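Since uniformly drawn points are not distributed as \(\Psi^2\), the local energy must be reweighted by \(|\Psi|^2\). A hedged sketch of one such run, assuming the hydrogen setup and a box \([-L,L]^3\) (names and box size are illustrative):

```python
import numpy as np

def uniform_mc(a, n_mc, rng, L=5.0):
    """One MC run with uniform sampling in the box [-L,L]^3:
    E ~ sum_i w_i E_L(r_i) / sum_i w_i with w_i = |Psi(r_i)|^2."""
    num, den = 0.0, 0.0
    for _ in range(n_mc):
        r = rng.uniform(-L, L, 3)
        d = np.sqrt(r @ r)
        w = np.exp(-2.0 * a * d)                    # |Psi|^2
        num += w * (a / d - 0.5 * a**2 - 1.0 / d)   # weighted local energy
        den += w
    return num / den
```

Repeating such runs \(M\) times and feeding the results to an averaging routine gives the statistical error.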

-
-

3.2.1 Exercise

+
+

3.2.1 Exercise

@@ -2013,8 +2013,8 @@ well as the index of the current step.

-
-
3.2.1.1 Solution   solution
+
+
3.2.1.1 Solution   solution

Python @@ -2128,8 +2128,8 @@ E = -0.49518773675598715 +/- 5.2391494923686175E-004

-
-

3.3 Metropolis sampling with \(\Psi^2\)

+
+

3.3 Metropolis sampling with \(\Psi^2\)

We will now use the square of the wave function to sample random @@ -2262,14 +2262,14 @@ compromise for the current problem.

-NOTE: below, we use the symbol dt to denote dL since we will use +NOTE: below, we use the symbol \(\delta t\) to denote \(\delta L\) since we will use the same variable later on to store a time step.
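A single symmetric Metropolis step for \(P \propto \Psi^2\) can be sketched as follows (an illustrative sketch with \(\Psi = e^{-a|\mathbf{r}|}\); the exercise defines the exact program structure):

```python
import numpy as np

def metropolis_step(a, r, dt, rng):
    """Propose r' = r + dt*u with u uniform in [-1,1]^3, and accept
    with probability min(1, Psi(r')^2 / Psi(r)^2)."""
    r_new = r + dt * rng.uniform(-1.0, 1.0, 3)
    # For Psi = exp(-a|r|): Psi(r')^2 / Psi(r)^2 = exp(-2a(|r'| - |r|))
    ratio = np.exp(-2.0 * a * (np.linalg.norm(r_new) - np.linalg.norm(r)))
    if rng.uniform() <= ratio:
        return r_new, True
    return r, False
```

Tracking the fraction of accepted moves lets you tune \(\delta t\) toward the acceptance rate recommended above.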

-
-

3.3.1 Exercise

+
+

3.3.1 Exercise

@@ -2376,8 +2376,8 @@ Can you observe a reduction in the statistical error?

-
-
3.3.1.1 Solution   solution
+
+
3.3.1.1 Solution   solution

Python @@ -2522,8 +2522,8 @@ A = 0.51695266666666673 +/- 4.0445505648997396E-004

-
-

3.4 Gaussian random number generator

+
+

3.4 Gaussian random number generator

To obtain Gaussian-distributed random numbers, you can apply the @@ -2586,8 +2586,8 @@ In Python, you can use the -
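One classic recipe (presumably the transform the truncated sentence above refers to) is the Box-Muller transform, which maps two independent uniforms to two independent standard normals; in Python one can equally use NumPy's built-in normal generator.

```python
import numpy as np

def box_muller(rng, n):
    """Box-Muller transform: (u1, u2) uniform in (0,1] x [0,1)
    map to two independent N(0,1) samples."""
    u1 = 1.0 - rng.uniform(size=n)   # shift to (0,1] so log(u1) is finite
    u2 = rng.uniform(size=n)
    g1 = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)
    g2 = np.sqrt(-2.0 * np.log(u1)) * np.sin(2.0 * np.pi * u2)
    return g1, g2
```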

3.5 Generalized Metropolis algorithm

+
+

3.5 Generalized Metropolis algorithm

One can use more efficient numerical schemes to move the electrons by choosing a smarter expression for the transition probability. @@ -2705,12 +2705,7 @@ Compute a new position \(\mathbf{r'} = \mathbf{r}_n +

Evaluate \(\Psi\) and \(\frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})}\) at the new position

-
  • Compute the ratio $A = \frac{T(\mathbf{r}n+1 → \mathbf{r}n) P(\mathbf{r}n+1)}
  • - -

    -{T(\mathbf{r}n → \mathbf{r}n+1) P(\mathbf{r}n)}$ -

    -
      +
    1. Compute the ratio \(A = \frac{T(\mathbf{r}_{n+1} \rightarrow \mathbf{r}_{n}) P(\mathbf{r}_{n+1})}{T(\mathbf{r}_{n} \rightarrow \mathbf{r}_{n+1}) P(\mathbf{r}_{n})}\)
    2. Draw a uniform random number \(v \in [0,1]\)
    3. if \(v \le A\), accept the move : set \(\mathbf{r}_{n+1} = \mathbf{r'}\)
    4. else, reject the move : set \(\mathbf{r}_{n+1} = \mathbf{r}_n\)
    5. @@ -2719,8 +2714,8 @@ Evaluate \(\Psi\) and \(\frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})}\) at th
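The drift vector and the generalized Metropolis ratio of the steps above can be sketched as follows (a sketch assuming \(\Psi = e^{-a|\mathbf{r}|}\) and the drifted-Gaussian transition \(T(\mathbf{r}\rightarrow\mathbf{r}') \propto \exp[-|\mathbf{r}'-\mathbf{r}-\delta t\,\mathbf{d}(\mathbf{r})|^2/(2\delta t)]\)):

```python
import numpy as np

def drift(a, r):
    """Drift vector grad(Psi)/Psi = -a r/|r| for Psi = exp(-a|r|)."""
    return -a * r / np.linalg.norm(r)

def acceptance_ratio(a, r, r_new, dt):
    """A = T(r'->r) P(r') / (T(r->r') P(r)) for the drift-diffusion move."""
    d, d_new = drift(a, r), drift(a, r_new)
    # Difference of the Gaussian exponents of T(r'->r) and T(r->r')
    arg = (np.sum((r - r_new - dt * d_new) ** 2)
           - np.sum((r_new - r - dt * d) ** 2)) / (2.0 * dt)
    # P(r')/P(r) = Psi(r')^2 / Psi(r)^2
    p_ratio = np.exp(-2.0 * a * (np.linalg.norm(r_new) - np.linalg.norm(r)))
    return np.exp(-arg) * p_ratio
```

Note the detailed-balance property: the ratio for the reverse move is the reciprocal of the forward one.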
    -
    -

    3.5.1 Exercise 1

    +
    +

    3.5.1 Exercise 1

    @@ -2754,8 +2749,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P

    -
    -
    3.5.1.1 Solution   solution
    +
    +
    3.5.1.1 Solution   solution

    Python @@ -2788,8 +2783,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P

    -
    -

    3.5.2 Exercise 2

    +
    +

    3.5.2 Exercise 2

    @@ -2883,8 +2878,8 @@ Modify the previous program to introduce the drift-diffusion scheme.

    -
    -
    3.5.2.1 Solution   solution
    +
    +
    3.5.2.1 Solution   solution

    Python @@ -3070,12 +3065,12 @@ A = 0.78839866666666658 +/- 3.2503783452043152E-004

    -
    -

    4 Diffusion Monte Carlo   solution

    +
    +

    4 Diffusion Monte Carlo   solution

    -
    -

    4.1 Schrödinger equation in imaginary time

    +
    +

    4.1 Schrödinger equation in imaginary time

    Consider the time-dependent Schrödinger equation: @@ -3088,7 +3083,7 @@ Consider the time-dependent Schrödinger equation:

    -where we introduced a shift in the energy, \(E_{\rm ref}\), which will come useful below. +where we introduced a shift in the energy, \(E_{\rm ref}\), for reasons which will become apparent below.

    @@ -3124,13 +3119,13 @@ Now, if we replace the time variable \(t\) by an imaginary time variable

    -where \(\psi(\mathbf{r},\tau) = \Psi(\mathbf{r},-i\,t)\) +where \(\psi(\mathbf{r},\tau) = \Psi(\mathbf{r},-i\,\tau)\) and

    \begin{eqnarray*} -\psi(\mathbf{r},\tau) &=& \sum_k a_k \exp( -(E_k-E_{\rm ref})\, \tau) \phi_k(\mathbf{r})\\ - &=& \exp(-(E_0-E_{\rm ref})\, \tau)\sum_k a_k \exp( -(E_k-E_0)\, \tau) \phi_k(\mathbf{r})\,. +\psi(\mathbf{r},\tau) &=& \sum_k a_k \exp( -(E_k-E_{\rm ref})\, \tau) \Phi_k(\mathbf{r})\\ + &=& \exp(-(E_0-E_{\rm ref})\, \tau)\sum_k a_k \exp( -(E_k-E_0)\, \tau) \Phi_k(\mathbf{r})\,. \end{eqnarray*}

    @@ -3143,8 +3138,8 @@ system.

    -
    -

    4.2 Diffusion and branching

    +
    +

    4.2 Diffusion and branching

    The imaginary-time Schrödinger equation can be explicitly written in terms of the kinetic and @@ -3213,8 +3208,8 @@ so-called branching process).

-Diffusion Monte Carlo (DMC) consists in obtaining the ground state of a -system by simulating the Schrödinger equation in imaginary time, by +In Diffusion Monte Carlo (DMC), one obtains the ground state of a +system by simulating the Schrödinger equation in imaginary time via the combination of a diffusion process and a branching process.

    @@ -3241,12 +3236,12 @@ Therefore, in both cases, you are dealing with a "Bosonic" ground state.
    -
    -

    4.3 Importance sampling

    +
    +

    4.3 Importance sampling

    In a molecular system, the potential is far from being constant -and diverges at inter-particle coalescence points. Hence, when the +and, in fact, diverges at the inter-particle coalescence points. Hence, when the rate equation is simulated, it results in very large fluctuations in the numbers of particles, making the calculations impossible in practice. @@ -3279,7 +3274,7 @@ Defining \(\Pi(\mathbf{r},\tau) = \psi(\mathbf{r},\tau) \Psi_T(\mathbf{r})\), (s The new "kinetic energy" can be simulated by the drift-diffusion scheme presented in the previous section (VMC). The new "potential" is the local energy, which has smaller fluctuations -when \(\Psi_T\) gets closer to the exact wave function. It can be simulated by +when \(\Psi_T\) gets closer to the exact wave function. This term can be simulated by changing the number of particles according to \(\exp\left[ -\delta t\, \left(E_L(\mathbf{r}) - E_{\rm ref}\right)\right]\) where \(E_{\rm ref}\) is the constant we had introduced above, which is adjusted to @@ -3338,8 +3333,8 @@ energies computed with the trial wave function.

    -
    -

    4.3.1 Appendix : Details of the Derivation

    +
    +

    4.3.1 Appendix : Details of the Derivation

    \[ @@ -3400,8 +3395,8 @@ Defining \(\Pi(\mathbf{r},t) = \psi(\mathbf{r},\tau)

    -
    -

    4.4 Pure Diffusion Monte Carlo (PDMC)

    +
    +

    4.4 Pure Diffusion Monte Carlo (PDMC)

    Instead of having a variable number of particles to simulate the @@ -3437,12 +3432,7 @@ Compute a new position \(\mathbf{r'} = \mathbf{r}_n +

    Evaluate \(\Psi\) and \(\frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})}\) at the new position

    -
  • Compute the ratio $A = \frac{T(\mathbf{r}n+1 → \mathbf{r}n) P(\mathbf{r}n+1)}
  • - -

    -{T(\mathbf{r}n → \mathbf{r}n+1) P(\mathbf{r}n)}$ -

    -
      +
    1. Compute the ratio \(A = \frac{T(\mathbf{r}_{n+1} \rightarrow \mathbf{r}_{n}) P(\mathbf{r}_{n+1})}{T(\mathbf{r}_{n} \rightarrow \mathbf{r}_{n+1}) P(\mathbf{r}_{n})}\)
    2. Draw a uniform random number \(v \in [0,1]\)
    3. if \(v \le A\), accept the move : set \(\mathbf{r}_{n+1} = \mathbf{r'}\)
    4. else, reject the move : set \(\mathbf{r}_{n+1} = \mathbf{r}_n\)
5. @@ -3460,37 +3450,29 @@ Some comments are needed: \begin{eqnarray*} -E = \frac{\sum_{i=1}{N_{\rm MC}} E_L(\mathbf{r}_i) W(\mathbf{r}_i, i\delta t)}{\sum_{i=1}{N_{\rm MC}} W(\mathbf{r}_i, i\delta t)} -\end{eqnarray} - -- The result will be affected by a time-step error (the finite size of $\delta t$) and one -has in principle to extrapolate to the limit $\delta t \rightarrow 0$. This amounts to fitting -the energy computed for multiple values of $\delta t$. -- The accept/reject step (steps 2-5 in the algorithm) is not in principle needed for the correctness of -the DMC algorithm. However, its use reduces si - - - - - - -The wave function becomes - -\[ -\psi(\mathbf{r},\tau) = \Psi_T(\mathbf{r}) W(\mathbf{r},\tau) -\] - -and the expression of the fixed-node DMC energy is - -\begin{eqnarray*} -E(\tau) & = & \frac{\int \psi(\mathbf{r},\tau) \Psi_T(\mathbf{r}) E_L(\mathbf{r}) d\mathbf{r}} - {\int \psi(\mathbf{r},\tau) \Psi_T(\mathbf{r}) d\mathbf{r}} \\ - & = & \frac{\int \left[ \Psi_T(\mathbf{r}) \right]^2 W(\mathbf{r},\tau) E_L(\mathbf{r}) d\mathbf{r}} - {\int \left[ \Psi_T(\mathbf{r}) \right]^2 W(\mathbf{r},\tau) d\mathbf{r}} \\ +E = \frac{\sum_{k=1}^{N_{\rm MC}} E_L(\mathbf{r}_k) W(\mathbf{r}_k, k\delta t)}{\sum_{k=1}^{N_{\rm MC}} W(\mathbf{r}_k, k\delta t)} \end{eqnarray*} +
        +
      • The result will be affected by a time-step error (the finite size of \(\delta t\)) and one
      • +

      -This algorithm is less stable than the branching algorithm: it +has in principle to extrapolate to the limit \(\delta t \rightarrow 0\). This amounts to fitting +the energy computed for multiple values of \(\delta t\). +

      + +

+Here, you will be using a small enough time step, so you should not worry about the extrapolation. +

      +
        +
      • The accept/reject step (steps 2-5 in the algorithm) is in principle not needed for the correctness of
      • +
      +

+the DMC algorithm. However, its use significantly reduces the time-step error. +

      + +

+The PDMC algorithm is less stable than the branching algorithm: it requires a value of \(E_\text{ref}\) close to the fixed-node energy, and a good trial wave function. Its big advantage is that it is very easy to program starting from a VMC @@ -3499,13 +3481,13 @@ code, so this is what we will do in the next section.
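The cumulative-weight formula above can be sketched along a single trajectory as follows (an illustrative sketch; the hydrogen exercise below builds this into a full VMC-style program):

```python
import numpy as np

def pdmc_energy(e_loc_traj, e_ref, dt):
    """PDMC estimate along one trajectory: cumulative weights
    W(r_k, k*dt) = prod_{l<=k} exp(-dt*(E_L(r_l) - E_ref)),
    then the weighted average of the local energy."""
    el = np.asarray(e_loc_traj, dtype=float)
    w = np.cumprod(np.exp(-dt * (el - e_ref)))
    return float(np.sum(w * el) / np.sum(w))
```

Configurations with low local energy accumulate larger weights, which is how the projection toward the ground state shows up in the average.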

    -
    -

    4.5 Hydrogen atom

    +
    +

    4.5 Hydrogen atom

    -
    -

    4.5.1 Exercise

    +
    +

    4.5.1 Exercise

    @@ -3604,8 +3586,8 @@ energy of H for any value of \(a\).

    -
    -
    4.5.1.1 Solution   solution
    +
    +
    4.5.1.1 Solution   solution

    Python @@ -3821,8 +3803,8 @@ A = 0.98788066666666663 +/- 7.2889356133441110E-005

    -
    -

    4.6 TODO H2

    +
    +

    4.6 TODO H2

    We will now consider the H2 molecule in a minimal basis composed of the @@ -3843,8 +3825,8 @@ the nuclei.

    -
    -

    5 TODO [0/3] Last things to do

    +
    +

    5 TODO [0/3] Last things to do

    • [ ] Give some hints of how much time is required for each section
    • @@ -3860,7 +3842,7 @@ the H\(_2\) molecule at $R$=1.4010 bohr. Answer: 0.17406 a.u.

    Author: Anthony Scemama, Claudia Filippi

    -

    Created: 2021-02-01 Mon 12:52

    +

    Created: 2021-02-01 Mon 20:57
