From f52cf22c9b0058326d116d8cfd5ff7ca10b1e9e2 Mon Sep 17 00:00:00 2001 From: scemama Date: Tue, 2 Feb 2021 22:42:33 +0000 Subject: [PATCH] deploy: c10c7697b9db9dd276d794592ca707768659d3ba --- index.html | 417 ++++++++++++++++++++++++++++------------------------- 1 file changed, 219 insertions(+), 198 deletions(-) diff --git a/index.html b/index.html index b11ceef..ebc4d9f 100644 --- a/index.html +++ b/index.html @@ -3,7 +3,7 @@ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> - + Quantum Monte Carlo @@ -329,153 +329,153 @@ for the JavaScript code in this tag.

Table of Contents

-
-

1 Introduction

+
+

1 Introduction

This website contains the QMC tutorial of the 2021 LTTC winter school @@ -515,8 +515,8 @@ coordinates, etc).

-
-

1.1 Energy and local energy

+
+

1.1 Energy and local energy

For a given system with Hamiltonian \(\hat{H}\) and wave function \(\Psi\), we define the local energy as @@ -599,8 +599,8 @@ energy computed over these configurations:
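As a concrete illustration of this definition (a sketch, not part of the tutorial's exercises: the wave function \(\Psi = e^{-a|\mathbf{r}|}\) anticipates the hydrogen-atom example treated below, and the function name is ours), the local energy can be evaluated pointwise:

```python
import numpy as np

def e_loc(a, r):
    """Local energy E_L(r) = (H Psi)(r) / Psi(r) for Psi = exp(-a |r|).

    In 3D, Delta Psi / Psi = a^2 - 2a/|r|, so
    E_L(r) = -a^2/2 + (a - 1)/|r|.
    """
    d = np.sqrt(np.dot(r, r))
    return -0.5 * a**2 + (a - 1.0) / d

# With a = 1 (the exact 1s orbital), E_L is the same at every point:
# Psi is an eigenfunction, and the local energy equals the energy, -0.5.
print(e_loc(1.0, np.array([0.3, -0.2, 1.5])))  # -0.5
```

When \(\Psi\) is not an eigenfunction, `e_loc` varies from point to point, which is exactly what the following sections exploit.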

-
-

2 Numerical evaluation of the energy of the hydrogen atom

+
+

2 Numerical evaluation of the energy of the hydrogen atom

In this section, we consider the hydrogen atom with the following @@ -629,8 +629,8 @@ To do that, we will compute the local energy and check whether it is constant.

-
-

2.1 Local energy

+
+

2.1 Local energy

You will now program all quantities needed to compute the local energy of the H atom for the given wave function. @@ -657,8 +657,8 @@ to catch the error.
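A minimal sketch of the kind of guarded function this warning refers to (the interface and the choice of returning \(-\infty\) are ours, not the tutorial's prescribed solution):

```python
import numpy as np

def potential(r):
    """Electron-nucleus Coulomb potential V(r) = -1/|r| of the H atom."""
    d = np.sqrt(np.dot(r, r))
    if d == 0.0:
        # V diverges at the nucleus: return -infinity explicitly rather
        # than letting the division raise ZeroDivisionError.
        return float("-inf")
    return -1.0 / d

print(potential(np.array([1.0, 0.0, 0.0])))  # -1.0
```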

-
-

2.1.1 Exercise 1

+
+

2.1.1 Exercise 1

@@ -703,8 +703,8 @@ and returns the potential.

-
-
2.1.1.1 Solution   solution
+
+
2.1.1.1 Solution   solution

Python @@ -745,8 +745,8 @@ and returns the potential.

-
-

2.1.2 Exercise 2

+
+

2.1.2 Exercise 2

@@ -781,8 +781,8 @@ input arguments, and returns a scalar.

-
-
2.1.2.1 Solution   solution
+
+
2.1.2.1 Solution   solution

Python @@ -809,8 +809,8 @@ input arguments, and returns a scalar.

-
-

2.1.3 Exercise 3

+
+

2.1.3 Exercise 3

@@ -891,8 +891,8 @@ Therefore, the local kinetic energy is

-
-
2.1.3.1 Solution   solution
+
+
2.1.3.1 Solution   solution

Python @@ -933,8 +933,8 @@ Therefore, the local kinetic energy is

-
-

2.1.4 Exercise 4

+
+

2.1.4 Exercise 4

@@ -993,8 +993,8 @@ are calling is yours.

-
-
2.1.4.1 Solution   solution
+
+
2.1.4.1 Solution   solution

Python @@ -1025,8 +1025,8 @@ are calling is yours.

-
-

2.1.5 Exercise 5

+
+

2.1.5 Exercise 5

@@ -1036,8 +1036,8 @@ Find the theoretical value of \(a\) for which \(\Psi\) is an eigenfunction of \(

-
-
2.1.5.1 Solution   solution
+
+
2.1.5.1 Solution   solution
\begin{eqnarray*} E &=& \frac{\hat{H} \Psi}{\Psi} = - \frac{1}{2} \frac{\Delta \Psi}{\Psi} - @@ -1057,8 +1057,8 @@ equal to -0.5 atomic units.
-
-

2.2 Plot of the local energy along the \(x\) axis

+
+

2.2 Plot of the local energy along the \(x\) axis

The program you will write in this section will be written in @@ -1089,8 +1089,8 @@ In Fortran, you will need to compile all the source files together:

-
-

2.2.1 Exercise

+
+

2.2.1 Exercise

@@ -1184,8 +1184,8 @@ plot './data' index 0 using 1:2 with lines title 'a=0.1', \

-
-
2.2.1.1 Solution   solution
+
+
2.2.1.1 Solution   solution

Python @@ -1262,8 +1262,8 @@ plt.savefig("plot_py.png")

-
-

2.3 Numerical estimation of the energy

+
+

2.3 Numerical estimation of the energy

If the space is discretized in small volume elements \(\mathbf{r}_i\) @@ -1293,8 +1293,8 @@ The energy is biased because:
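The discretized weighted average can be sketched as follows, assuming the \([-5,5]^3\) box and \(\Psi = e^{-a|\mathbf{r}|}\) used elsewhere in the tutorial (grid size is arbitrary):

```python
import numpy as np

a = 1.0
x = np.linspace(-5.0, 5.0, 40)       # even point count: the grid avoids r = 0
X, Y, Z = np.meshgrid(x, x, x)
r = np.sqrt(X**2 + Y**2 + Z**2)

w = np.exp(-2.0 * a * r)             # |Psi(r_i)|^2, weight of each volume element
E_loc = -0.5 * a**2 + (a - 1.0) / r  # local energy of Psi = exp(-a r)

E = np.sum(E_loc * w) / np.sum(w)
print(E)  # ~ -0.5: for a = 1 the local energy is constant
```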

-
-

2.3.1 Exercise

+
+

2.3.1 Exercise

@@ -1365,8 +1365,8 @@ To compile the Fortran and run it:

-
-
2.3.1.1 Solution   solution
+
+
2.3.1.1 Solution   solution

Python @@ -1483,8 +1483,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002

-
-

2.4 Variance of the local energy

+
+

2.4 Variance of the local energy

The variance of the local energy is a functional of \(\Psi\) @@ -1511,8 +1511,8 @@ energy can be used as a measure of the quality of a wave function.
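On a discrete set of points, the variance is just two weighted averages; a sketch (the function name and the weighted interface are ours):

```python
import numpy as np

def variance_eloc(energies, weights):
    """sigma^2(E_L) = <E_L^2> - <E_L>^2 with weights |Psi(r_i)|^2."""
    w = np.asarray(weights, dtype=float)
    e = np.asarray(energies, dtype=float)
    mean = np.sum(w * e) / np.sum(w)
    return np.sum(w * e**2) / np.sum(w) - mean**2

# An eigenfunction gives a strictly constant local energy: zero variance.
print(variance_eloc([-0.5, -0.5, -0.5], [1.0, 2.0, 3.0]))  # 0.0
```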

-
-

2.4.1 Exercise (optional)

+
+

2.4.1 Exercise (optional)

@@ -1523,8 +1523,8 @@ Prove that :

-
-
2.4.1.1 Solution   solution
+
+
2.4.1.1 Solution   solution

\(\bar{E} = \langle E \rangle\) is a constant, so \(\langle \bar{E} @@ -1543,8 +1543,8 @@ Prove that :

-
-

2.4.2 Exercise

+
+

2.4.2 Exercise

@@ -1620,8 +1620,8 @@ To compile and run:

-
-
2.4.2.1 Solution   solution
+
+
2.4.2.1 Solution   solution

Python @@ -1760,8 +1760,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002 s2 = 1.8068814

-
-

3 Variational Monte Carlo

+
+

3 Variational Monte Carlo

Numerical integration with deterministic methods is very efficient @@ -1777,8 +1777,8 @@ interval.

-
-

3.1 Computation of the statistical error

+
+

3.1 Computation of the statistical error

To compute the statistical error, you need to perform \(M\) @@ -1818,8 +1818,8 @@ And the confidence interval is given by
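A function of this kind can be sketched as below (the exact interface the exercise expects may differ; this is one reasonable shape):

```python
import numpy as np

def ave_error(arr):
    """Average of M independent estimates and the statistical error
    sqrt(sigma^2 / M), using the unbiased sample variance sigma^2."""
    M = len(arr)
    average = np.mean(arr)
    error = np.sqrt(np.var(arr, ddof=1) / M) if M > 1 else 0.0
    return average, error

ave, err = ave_error([0.0, 2.0])
print(ave, err)  # 1.0 1.0
```

Note the `ddof=1`: with only \(M\) samples, the unbiased estimator of the variance divides by \(M-1\).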

-
-

3.1.1 Exercise

+
+

3.1.1 Exercise

@@ -1859,8 +1859,8 @@ input array.

-
-
3.1.1.1 Solution   solution
+
+
3.1.1.1 Solution   solution

Python @@ -1921,8 +1921,8 @@ input array.

-
-

3.2 Uniform sampling in the box

+
+

3.2 Uniform sampling in the box

We will now perform our first Monte Carlo calculation to compute the @@ -1983,8 +1983,8 @@ compute the statistical error.
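The whole uniform-sampling estimator fits in a short loop; a sketch assuming the \([-5,5]^3\) box and \(\Psi = e^{-a|\mathbf{r}|}\) (sample count and seed are arbitrary):

```python
import numpy as np

a = 1.2
rng = np.random.default_rng(0)
num = den = 0.0
for _ in range(50000):
    r = rng.uniform(-5.0, 5.0, 3)             # uniform point in the box
    d = np.sqrt(np.dot(r, r))
    w = np.exp(-2.0 * a * d)                  # |Psi(r)|^2
    num += w * (-0.5 * a**2 + (a - 1.0) / d)  # weighted local energy
    den += w
print(num / den)  # fluctuates around E(a) = a^2/2 - a = -0.48
```

Repeating this with independent seeds and feeding the results to the averaging function of the previous section gives the statistical error bar.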

-
-

3.2.1 Exercise

+
+

3.2.1 Exercise

@@ -2086,8 +2086,8 @@ well as the index of the current step.

-
-
3.2.1.1 Solution   solution
+
+
3.2.1.1 Solution   solution

Python @@ -2193,8 +2193,8 @@ E = -0.48084122147238995 +/- 2.4983775878329355E-003

-
-

3.3 Metropolis sampling with \(\Psi^2\)

+
+

3.3 Metropolis sampling with \(\Psi^2\)

We will now use the square of the wave function to sample random @@ -2313,8 +2313,8 @@ All samples should be kept, from both accepted and rejected moves.
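One Metropolis sweep can be sketched as below (the uniform trial move and its step size are assumptions patterned on the text; a production run would also accumulate the local energy at every step, accepted or rejected):

```python
import numpy as np

a, dt, nmax = 1.2, 1.0, 20000   # dt: size of the trial move (an assumption)
rng = np.random.default_rng(0)

def psi(r):
    return np.exp(-a * np.sqrt(np.dot(r, r)))

r_cur = np.array([1.0, 0.0, 0.0])
psi_cur = psi(r_cur)
n_accept = 0
for _ in range(nmax):
    r_new = r_cur + dt * rng.uniform(-1.0, 1.0, 3)  # symmetric trial move
    psi_new = psi(r_new)
    # Metropolis: accept with probability min(1, psi_new^2 / psi_cur^2)
    if rng.uniform() <= (psi_new / psi_cur) ** 2:
        n_accept += 1
        r_cur, psi_cur = r_new, psi_new
    # whether accepted or rejected, r_cur is a valid sample of psi^2 here

print(n_accept / nmax)  # acceptance ratio; ~0.5 is the usual target
```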

-
-

3.3.1 Optimal step size

+
+

3.3.1 Optimal step size

If the box is infinitely small, the ratio will be very close @@ -2349,8 +2349,8 @@ the same variable later on to store a time step.

-
-

3.3.2 Exercise

+
+

3.3.2 Exercise

@@ -2459,8 +2459,8 @@ Can you observe a reduction in the statistical error?

-
-
3.3.2.1 Solution   solution
+
+
3.3.2.1 Solution   solution

Python @@ -2607,8 +2607,8 @@ A = 0.50762633333333318 +/- 3.4601756760043725E-004

-
-

3.4 Generalized Metropolis algorithm

+
+

3.4 Generalized Metropolis algorithm

One can use more efficient numerical schemes to move the electrons by choosing a smarter expression for the transition probability. @@ -2729,8 +2729,8 @@ The algorithm of the previous exercise is only slightly modified as:

-
-

3.4.1 Gaussian random number generator

+
+

3.4.1 Gaussian random number generator

To obtain Gaussian-distributed random numbers, you can apply the @@ -2794,8 +2794,8 @@ In Python, you can use the -
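Whichever library routine you end up using, the transform itself is short; a sketch (NumPy's generators already provide normal deviates directly, so this is only to show the transform):

```python
import numpy as np

def box_muller(rng, n):
    """Turn 2n uniform deviates into 2n independent standard normals."""
    u1 = 1.0 - rng.uniform(size=n)   # in (0, 1]: keeps log() finite
    u2 = rng.uniform(size=n)
    radius = np.sqrt(-2.0 * np.log(u1))
    return np.concatenate((radius * np.cos(2.0 * np.pi * u2),
                           radius * np.sin(2.0 * np.pi * u2)))

rng = np.random.default_rng(0)
chi = box_muller(rng, 50000)
print(chi.mean(), chi.std())  # close to 0 and 1, respectively
```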

3.4.2 Exercise 1

+
+

3.4.2 Exercise 1

@@ -2837,8 +2837,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P

-
-
3.4.2.1 Solution   solution
+
+
3.4.2.1 Solution   solution

Python @@ -2871,8 +2871,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P

-
-

3.4.3 Exercise 2

+
+

3.4.3 Exercise 2

@@ -2968,8 +2968,8 @@ Modify the previous program to introduce the drift-diffusion scheme.

-
-
3.4.3.1 Solution   solution
+
+
3.4.3.1 Solution   solution

Python @@ -3157,8 +3157,8 @@ A = 0.62037333333333333 +/- 4.8970160591451110E-004

-
-

4 Diffusion Monte Carlo   solution

+
+

4 Diffusion Monte Carlo   solution

As we have seen, Variational Monte Carlo is a powerful method to @@ -3175,8 +3175,8 @@ finding a near-exact numerical solution to the Schrödinger equation.

-
-

4.1 Schrödinger equation in imaginary time

+
+

4.1 Schrödinger equation in imaginary time

Consider the time-dependent Schrödinger equation: @@ -3244,8 +3244,8 @@ system.

-
-

4.2 Relation to diffusion

+
+

4.2 Relation to diffusion

The diffusion equation of particles is given by @@ -3290,12 +3290,13 @@ The diffusion equation can be simulated by a Brownian motion: where \(\chi\) is a Gaussian random variable, and the potential term can be simulated by creating or destroying particles over time (a so-called branching process) or by simply considering it as a -cumulative multiplicative weight along the diffusion trajectory: +cumulative multiplicative weight along the diffusion trajectory +(pure Diffusion Monte Carlo):

\[ - \exp \left( \int_0^\tau - (E_L(\mathbf{r}_t) - E_{\text{ref}}) dt \right) + \exp \left( \int_0^\tau - (V(\mathbf{r}_t) - E_{\text{ref}}) dt \right). \]
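The Brownian step itself is one line of code; as a sanity check of the diffusion picture (the diffusion constant, step count, and normalization below are arbitrary choices for the sketch), the variance of a population of walkers grows linearly in time as \(2Dt\):

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt, nstep, nwalk = 0.5, 0.01, 200, 20000
x = np.zeros(nwalk)                            # all walkers start at the origin
for _ in range(nstep):
    x += np.sqrt(2.0 * D * dt) * rng.standard_normal(nwalk)

print(x.var(), 2.0 * D * nstep * dt)  # the sample variance tracks 2 D t = 2.0
```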

@@ -3324,13 +3325,13 @@ Therefore, in both cases, you are dealing with a "Bosonic" ground state.
-
-

4.3 Importance sampling

+
+

4.3 Importance sampling

In a molecular system, the potential is far from being constant and, in fact, diverges at the inter-particle coalescence points. Hence, -it results in very large fluctuations of the term associated with +it results in very large fluctuations of the weight associated with the potential, making the calculations impossible in practice. Fortunately, if we multiply the Schrödinger equation by a chosen trial wave function \(\Psi_T(\mathbf{r})\) (Hartree-Fock, Kohn-Sham @@ -3362,21 +3363,23 @@ The new "kinetic energy" can be simulated by the drift-diffusion scheme presented in the previous section (VMC). The new "potential" is the local energy, which has smaller fluctuations when \(\Psi_T\) gets closer to the exact wave function. -This term can be simulated by t particles according to \(\exp\left[ -\delta t\, - \left(E_L(\mathbf{r}) - E_{\rm ref}\right)\right]\) +This term can be simulated by + \[ + \exp \left( \int_0^\tau - (E_L(\mathbf{r}_t) - E_{\text{ref}}) dt \right). + \] where \(E_{\rm ref}\) is the constant we had introduced above, which is adjusted to -the running average energy to keep the number of particles -reasonably constant. +an estimate of the average energy to keep the weights close to one.

This equation generates the N-electron density \(\Pi\), which is the -product of the ground state with the trial wave function. You may then ask: how -can we compute the total energy of the system? +product of the ground state solution with the trial wave +function. You may then ask: how can we compute the total energy of +the system?

-To this aim, we use the mixed estimator of the energy: +To this aim, we use the mixed estimator of the energy:

\begin{eqnarray*} @@ -3398,7 +3401,8 @@ For large \(\tau\), we have that

-and, using that \(\hat{H}\) is Hermitian and that \(\Phi_0\) is an eigenstate of the Hamiltonian, we obtain for large \(\tau\) +and, using that \(\hat{H}\) is Hermitian and that \(\Phi_0\) is an +eigenstate of the Hamiltonian, we obtain for large \(\tau\)

@@ -3420,8 +3424,8 @@ energies computed with the trial wave function.

-
-

4.3.1 Appendix : Details of the Derivation

+
+

4.3.1 Appendix : Details of the Derivation

\[ @@ -3482,14 +3486,14 @@ Defining \(\Pi(\mathbf{r},t) = \psi(\mathbf{r},\tau)

-
-

4.4 Pure Diffusion Monte Carlo (PDMC)

+
+

4.4 Pure Diffusion Monte Carlo

Instead of having a variable number of particles to simulate the -branching process, one can consider the term -\(\exp \left( -\delta t\,( E_L(\mathbf{r}) - E_{\rm ref}) \right)\) as a -cumulative product of weights: +branching process as in the Diffusion Monte Carlo (DMC) algorithm, we +use variant called pure Diffusion Monte Carlo (PDMC) where +the potential term is considered as a cumulative product of weights:

\begin{eqnarray*} @@ -3501,22 +3505,42 @@ W(\mathbf{r}_n, \tau) \end{eqnarray*}

-where \(\mathbf{r}_i\) are the coordinates along the trajectory and we introduced a time-step \(\delta t\). +where \(\mathbf{r}_i\) are the coordinates along the trajectory and +we introduced a time-step variable \(\delta t\) to discretize the +integral.

-The algorithm can be rather easily built on top of your VMC code: +The PDMC algorithm is less stable than the DMC algorithm: it +requires a value of \(E_\text{ref}\) close to the +fixed-node energy, and a good trial wave function. Moreover, we +can't let \(\tau\) become too large because the weights would +either explode or vanish: we need to fix a finite value of \(\tau\) +(the projection time). +The big advantage of PDMC is that it is rather simple to implement +starting from a VMC code:

    -
  1. Start with \(W=1\)
  2. -
  3. Evaluate the local energy at \(\mathbf{r}_{n}\) and accumulate it
  4. -
  5. Compute the weight \(w(\mathbf{r}_n)\)
  6. -
  7. Update \(W\)
  8. +
  9. Start with \(W(\mathbf{r}_0)=1, \tau_0 = 0\)
  10. +
  11. Evaluate the local energy at \(\mathbf{r}_{n}\)
  12. +
  13. Compute the contribution to the weight \(w(\mathbf{r}_n) = + \exp(-\delta t(E_L(\mathbf{r}_n)-E_\text{ref}))\)
  14. +
  15. Update \(W(\mathbf{r}_{n}) = W(\mathbf{r}_{n-1}) \times w(\mathbf{r}_n)\)
  16. +
  17. Accumulate the weighted energy \(W(\mathbf{r}_n) \times + E_L(\mathbf{r}_n)\), +and the weight \(W(\mathbf{r}_n)\) for the normalization
  18. +
  19. Update \(\tau_n = \tau_{n-1} + \delta t\)
  20. +
  21. If \(\tau_{n} > \tau_\text{max}\), the long projection time has +been reached and we can start a new trajectory from the current +position: reset \(W(\mathbf{r}_n) = 1\) and \(\tau_n + = 0\)
  22. Compute a new position \(\mathbf{r'} = \mathbf{r}_n + \delta t\, \frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})} + \chi\)
  23. Evaluate \(\Psi(\mathbf{r}')\) and \(\frac{\nabla \Psi(\mathbf{r'})}{\Psi(\mathbf{r'})}\) at the new position
  24. Compute the ratio \(A = \frac{T(\mathbf{r}' \rightarrow \mathbf{r}_{n}) P(\mathbf{r}')}{T(\mathbf{r}_{n} \rightarrow \mathbf{r}') P(\mathbf{r}_{n})}\)
  25. +
+
  1. Draw a uniform random number \(v \in [0,1]\)
  2. if \(v \le A\), accept the move : set \(\mathbf{r}_{n+1} = \mathbf{r'}\)
  3. else, reject the move : set \(\mathbf{r}_{n+1} = \mathbf{r}_n\)
  4. @@ -3537,41 +3561,37 @@ E = \frac{\sum_{k=1}^{N_{\rm MC}} E_L(\mathbf{r}_k) W(\mathbf{r}_k, k\delta t)}{ \end{eqnarray*}
  5. -The result will be affected by a time-step error (the finite size of \(\delta t\)) and one -has in principle to extrapolate to the limit \(\delta t \rightarrow 0\). This amounts to fitting -the energy computed for multiple values of \(\delta t\). +The result will be affected by a time-step error +(the finite size of \(\delta t\)) due to the discretization of the +integral, and one has in principle to extrapolate to the limit +\(\delta t \rightarrow 0\). This amounts to fitting the energy +computed for multiple values of \(\delta t\).

    Here, you will be using a small enough time-step and you should not worry about the extrapolation.

  6. -
  7. The accept/reject step (steps 2-5 in the algorithm) is in principle not needed for the correctness of +
  8. The accept/reject step (steps 9-12 in the algorithm) is in principle not needed for the correctness of the DMC algorithm. However, its use reduces significantly the time-step error.
  9. - -

    -The PDMC algorithm is less stable than the branching algorithm: it -requires to have a value of \(E_\text{ref}\) which is close to the -fixed-node energy, and a good trial wave function. Its big -advantage is that it is very easy to program starting from a VMC -code, so this is what we will do in the next section. -
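The weight bookkeeping of the algorithm above fits in a few lines; in this sketch the trajectory of local energies is fabricated purely to exercise the update/reset logic (in the real program each value comes from the drift-diffusion walk):

```python
import numpy as np

dt, E_ref, tau_max = 0.05, -0.5, 0.1
E_loc_traj = [-0.48, -0.53, -0.49, -0.51, -0.50, -0.47]  # made-up local energies

W, tau = 1.0, 0.0                    # start with W = 1, tau = 0
num = den = 0.0
for E_L in E_loc_traj:
    w = np.exp(-dt * (E_L - E_ref))  # contribution of this step
    W *= w                           # cumulative weight W(r_n)
    num += W * E_L                   # accumulate the weighted local energy
    den += W                         # ... and the weight for normalization
    tau += dt
    if tau > tau_max:                # projection time reached:
        W, tau = 1.0, 0.0            # restart the weight for a new trajectory

print(num / den)  # weighted PDMC estimate, here between -0.53 and -0.47
```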

-
-

4.5 Hydrogen atom

+ +
+

4.5 Hydrogen atom

-
-

4.5.1 Exercise

+
+

4.5.1 Exercise

-Modify the Metropolis VMC program to introduce the PDMC weight. +Modify the Metropolis VMC program into a PDMC program. In the limit \(\delta t \rightarrow 0\), you should recover the exact -energy of H for any value of \(a\). +energy of H for any value of \(a\), as long as the simulation is stable. +We choose here a fixed projection time \(\tau=10\) a.u.

@@ -3590,6 +3610,7 @@ energy of H for any value of \(a\). a = 1.2 nmax = 100000 dt = 0.01 +tau = 10. E_ref = -0.5 X0 = [ MonteCarlo(a, nmax, dt, E_ref) for i in range(30)] @@ -3664,8 +3685,8 @@ energy of H for any value of \(a\).
-
-
4.5.1.1 Solution   solution
+
+
4.5.1.1 Solution   solution

Python @@ -3883,8 +3904,8 @@ A = 0.98788066666666663 +/- 7.2889356133441110E-005

-
-

4.6 TODO H2

+
+

4.6 TODO H2

We will now consider the H2 molecule in a minimal basis composed of the @@ -3905,8 +3926,8 @@ the nuclei.

-
-

5 TODO [0/3] Last things to do

+
+

5 TODO [0/3] Last things to do

  • [ ] Give some hints of how much time is required for each section
  • @@ -3920,8 +3941,8 @@ the H\(_2\) molecule at $R$=1.4010 bohr. Answer: 0.17406 a.u.
-
-

6 Schedule

+
+

6 Schedule

@@ -3985,7 +4006,7 @@ the H\(_2\) molecule at $R$=1.4010 bohr. Answer: 0.17406 a.u.

Author: Anthony Scemama, Claudia Filippi

-

Created: 2021-02-02 Tue 22:08

+

Created: 2021-02-02 Tue 22:42

Validate