diff --git a/index.html b/index.html
index 5012b44..d7f44f1 100644
--- a/index.html
+++ b/index.html
@@ -3,7 +3,7 @@
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
This website contains the QMC tutorial of the 2021 LTTC winter school.

@@ -514,8 +515,8 @@ coordinates, etc).
For a given system with Hamiltonian \(\hat{H}\) and wave function \(\Psi\), we define the local energy as

\[ E_L(\mathbf{r}) = \frac{\hat{H} \Psi(\mathbf{r})}{\Psi(\mathbf{r})}\,. \]

@@ -598,8 +599,8 @@ energy computed over these configurations:
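For the hydrogen atom used throughout this tutorial, with \(\Psi = e^{-a |\mathbf{r}|}\), the local energy has a closed form. A minimal sketch (the function name and the sample point are my choices, not the tutorial's solution code):

```python
import numpy as np

def e_loc(a, r):
    """Local energy E_L(r) = (H psi)(r) / psi(r) for psi = exp(-a|r|).

    For this wave function, -1/2 * laplacian(psi)/psi = -a**2/2 + a/|r|,
    and the Coulomb potential is -1/|r|, so E_L = -a**2/2 + (a - 1)/|r|.
    """
    d = np.linalg.norm(r)
    return -0.5 * a**2 + (a - 1.0) / d

# When a = 1, psi is the exact ground state and E_L is constant:
print(e_loc(1.0, np.array([0.3, -0.2, 1.1])))  # -0.5 for any r
```

When \(a \ne 1\), \(E_L\) depends on \(\mathbf{r}\) — this is exactly the property the next sections exploit to test whether \(\Psi\) is an eigenfunction.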
In this section, we consider the hydrogen atom with the following @@ -628,8 +629,8 @@ To do that, we will compute the local energy and check whether it is constant.
You will now program all quantities needed to compute the local energy of the H atom for the given wave function. @@ -656,8 +657,8 @@ to catch the error.
@@ -702,8 +703,8 @@ and returns the potential.
Python @@ -744,8 +745,8 @@ and returns the potential.
@@ -780,8 +781,8 @@ input arguments, and returns a scalar.
Python @@ -808,8 +809,8 @@ input arguments, and returns a scalar.
@@ -890,8 +891,8 @@ Therefore, the local kinetic energy is
Python @@ -932,8 +933,8 @@ Therefore, the local kinetic energy is
@@ -992,8 +993,8 @@ are calling is yours.
Python @@ -1024,8 +1025,8 @@ are calling is yours.
@@ -1035,8 +1036,8 @@ Find the theoretical value of \(a\) for which \(\Psi\) is an eigenfunction of \(
The program you will write in this section will be written in @@ -1088,8 +1089,8 @@ In Fortran, you will need to compile all the source files together:
@@ -1183,8 +1184,8 @@ plot './data' index 0 using 1:2 with lines title 'a=0.1', \
Python @@ -1261,8 +1262,8 @@ plt.savefig("plot_py.png")
If the space is discretized in small volume elements \(\mathbf{r}_i\) @@ -1292,8 +1293,8 @@ The energy is biased because:
@@ -1364,8 +1365,8 @@ To compile the Fortran and run it:
Python @@ -1482,8 +1483,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002
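The grid-based estimate described above can be sketched compactly in Python. This is my own condensation, not the tutorial's solution program; the grid size, box extent, and helper names are assumptions:

```python
import numpy as np

def psi(a, r):
    """Trial wave function exp(-a|r|)."""
    return np.exp(-a * np.linalg.norm(r))

def e_loc(a, r):
    """Local energy of psi = exp(-a|r|): -a^2/2 + (a-1)/|r|."""
    d = np.linalg.norm(r)
    return -0.5 * a**2 + (a - 1.0) / d

def energy_on_grid(a, lim=5.0, n=40):
    """Riemann-sum estimate  <E> = sum_i w_i E_L(r_i) / sum_i w_i
    with weights w_i = psi(r_i)^2 on a regular grid in [-lim, lim]^3.
    (The constant volume element cancels in the ratio.)"""
    xs = np.linspace(-lim, lim, n)  # even n: the singular point r=0 is avoided
    num = den = 0.0
    for x in xs:
        for y in xs:
            for z in xs:
                r = np.array([x, y, z])
                w = psi(a, r)**2
                num += w * e_loc(a, r)
                den += w
    return num / den

print(energy_on_grid(1.0))  # -0.5: E_L is constant when a = 1
```

For \(a \ne 1\) the result carries the discretization and finite-box bias discussed above, on top of the variational error.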
The variance of the local energy is a functional of \(\Psi\) @@ -1510,8 +1511,8 @@ energy can be used as a measure of the quality of a wave function.
@@ -1522,8 +1523,8 @@ Prove that :
\(\bar{E} = \langle E \rangle\) is a constant, so \(\langle \bar{E} \rangle = \bar{E}\). @@ -1542,8 +1543,8 @@ Prove that :
@@ -1619,8 +1620,8 @@ To compile and run:
Python @@ -1759,8 +1760,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002 s2 = 1.8068814
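The identity \(\sigma^2(E_L) = \langle E_L^2 \rangle - \langle E_L \rangle^2\) from the exercise above is easy to check numerically. A sketch under the same hydrogen-atom setup (the weighted-sample interface is my choice):

```python
import numpy as np

def e_loc(a, r):
    """Local energy of psi = exp(-a|r|)."""
    d = np.linalg.norm(r)
    return -0.5 * a**2 + (a - 1.0) / d

def variance_eloc(a, rs, weights):
    """sigma^2(E_L) = <E_L^2> - <E_L>^2 over weighted sample points rs."""
    el = np.array([e_loc(a, r) for r in rs])
    w = np.asarray(weights, dtype=float)
    e_ave  = np.sum(w * el)    / np.sum(w)
    e2_ave = np.sum(w * el**2) / np.sum(w)
    return e2_ave - e_ave**2

# For a = 1 the local energy is constant, so the variance is exactly zero --
# the zero-variance property of an exact eigenfunction.
```

This is why the variance of the local energy measures the quality of a wave function: it vanishes only when \(\Psi\) is an eigenfunction of \(\hat{H}\).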
Numerical integration with deterministic methods is very efficient @@ -1776,8 +1777,8 @@ interval.
To compute the statistical error, you need to perform \(M\) @@ -1817,8 +1818,8 @@ And the confidence interval is given by
@@ -1858,8 +1859,8 @@ input array.
Python @@ -1920,8 +1921,8 @@ input array.
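The requested helper takes the array of \(M\) independent estimates and returns the average together with the error bar \(\sqrt{s^2/M}\), where \(s^2\) is the unbiased sample variance. A minimal NumPy version (the name `ave_error` is my choice):

```python
import numpy as np

def ave_error(arr):
    """Return (average, statistical error) of independent samples.

    The error bar sqrt(s^2 / M) uses the unbiased sample variance s^2
    (ddof=1) and gives the 68% confidence half-width."""
    arr = np.asarray(arr, dtype=float)
    M = arr.size
    ave = arr.mean()
    err = np.sqrt(arr.var(ddof=1) / M) if M > 1 else 0.0
    return ave, err

# ave_error([1.0, 2.0, 3.0]) -> average 2.0, error sqrt(1/3) ~ 0.577
```

With a single sample the error is undefined; returning 0 there is a pragmatic convention, not a statistical statement.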
We will now perform our first Monte Carlo calculation to compute the @@ -1982,12 +1983,12 @@ compute the statistical error.
-Parameterize the wave function with \(a=0.9\). Perform 30 +Parameterize the wave function with \(a=1.2\). Perform 30 independent Monte Carlo runs, each with 100 000 Monte Carlo steps. Store the final energies of each run and use this array to compute the average energy and the associated error bar. @@ -2015,7 +2016,7 @@ the def MonteCarlo(a, nmax): # TODO -a = 0.9 +a = 1.2 nmax = 100000 #TODO @@ -2062,7 +2063,7 @@ well as the index of the current step. program qmc implicit none - double precision, parameter :: a = 0.9 + double precision, parameter :: a = 1.2d0 integer*8 , parameter :: nmax = 100000 integer , parameter :: nruns = 30 @@ -2085,8 +2086,8 @@ well as the index of the current step.
Python @@ -2112,7 +2113,7 @@ well as the index of the current step. return energy / normalization -a = 0.9 +a = 1.2 nmax = 100000 X = [MonteCarlo(a,nmax) for i in range(30)] @@ -2123,23 +2124,13 @@ well as the index of the current step.
-E = -0.4956255109300764 +/- 0.0007082875482711226 +E = -0.48363807880008725 +/- 0.002330876047368999
Fortran
-
-When running Monte Carlo calculations, the number of steps is
-usually very large. We expect nmax to be possibly larger than 2
-billion, so we use 8-byte integers (integer*8) to represent it, as
-well as the index of the current step.
-
subroutine uniform_montecarlo(a,nmax,energy) implicit none @@ -2174,7 +2165,7 @@ well as the index of the current step. program qmc implicit none - double precision, parameter :: a = 0.9 + double precision, parameter :: a = 1.2d0 integer*8 , parameter :: nmax = 100000 integer , parameter :: nruns = 30 @@ -2194,7 +2185,7 @@ well as the index of the current step.
-E = -0.49518773675598715 +/- 5.2391494923686175E-004 +E = -0.48084122147238995 +/- 2.4983775878329355E-003
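Condensing the uniform-sampling estimator above into one self-contained Python function (my own sketch; the 5-bohr cube is an assumption about the sampling box, and the helper names are mine):

```python
import numpy as np

def psi(a, r):
    """Trial wave function exp(-a|r|)."""
    return np.exp(-a * np.linalg.norm(r))

def e_loc(a, r):
    """Local energy of psi = exp(-a|r|)."""
    d = np.linalg.norm(r)
    return -0.5 * a**2 + (a - 1.0) / d

def uniform_montecarlo(a, nmax, lim=5.0, seed=None):
    """Draw nmax points uniformly in [-lim, lim]^3 and average the
    local energy with weights psi^2; the normalization of psi and the
    constant box volume both cancel in the ratio."""
    rng = np.random.default_rng(seed)
    energy = normalization = 0.0
    for _ in range(nmax):
        r = rng.uniform(-lim, lim, 3)
        w = psi(a, r)**2
        energy += w * e_loc(a, r)
        normalization += w
    return energy / normalization
```

Running this 30 times and feeding the results to an averaging helper reproduces the structure of the exercise: one energy per run, then mean and error bar over the runs.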
We will now use the square of the wave function to sample random @@ -2279,12 +2270,19 @@ which, for our choice of transition probability, becomes
\[ - A(\mathbf{r}_{n}\rightarrow\mathbf{r}_{n+1}) = \min\left(1,\frac{P(\mathbf{r}_{n+1})}{P(\mathbf{r}_{n})}\right)= \min\left(1,\frac{\Psi(\mathbf{r}_{n+1})^2}{\Psi(\mathbf{r}_{n})^2}\right)\,. + A(\mathbf{r}_{n}\rightarrow\mathbf{r}_{n+1}) = \min\left(1,\frac{P(\mathbf{r}_{n+1})}{P(\mathbf{r}_{n})}\right)= \min\left(1,\frac{|\Psi(\mathbf{r}_{n+1})|^2}{|\Psi(\mathbf{r}_{n})|^2}\right)\,. \]
-Explain why the transition probability cancels out in the expression of \(A\). Also note that we do not need to compute the norm of the wave function!
+Explain why the transition probability cancels out in the
+expression of \(A\).
+
+Also note that we do not need to compute the norm of the wave function!
@@ -2308,11 +2306,16 @@ calculation of the average. Don't do it!
-All samples should be kept, from both accepted and rejected moves. +All samples should be kept, from both accepted and rejected moves.
If the box is infinitely small, the ratio will be very close to one and all the steps will be accepted. However, the moves will be @@ -2335,16 +2338,20 @@ step such that the acceptance rate is close to 0.5 is a good compromise for the current problem.
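A single symmetric-box Metropolis step, with the acceptance bookkeeping needed to tune \(\delta t\), might look like the sketch below (my own illustration of the rule stated above, not the tutorial's solution; `dt` is the box half-width):

```python
import numpy as np

def psi(a, r):
    """Trial wave function exp(-a|r|)."""
    return np.exp(-a * np.linalg.norm(r))

def metropolis_step(a, r, dt, rng):
    """Propose r' = r + dt*u with u uniform in [-1,1]^3, and accept
    with probability min(1, psi(r')^2 / psi(r)^2).
    Returns (new position, accepted?)."""
    r_new = r + dt * rng.uniform(-1.0, 1.0, 3)
    ratio = (psi(a, r_new) / psi(a, r))**2
    if rng.uniform() < ratio:
        return r_new, True
    # Rejected: stay at r -- and this repeated sample must still be
    # counted in the averages, as emphasized above.
    return r, False
```

Averaging the boolean flag over a run gives the acceptance rate; `dt` is then adjusted until that rate is near 0.5.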
-NOTE: below, we use the symbol \(\delta t\) to denote \(\delta L\) since we will use
+Below, we use the symbol \(\delta t\) to denote \(\delta L\) since we will use
 the same variable later on to store a time step.
+ +Modify the program of the previous section to compute the energy, @@ -2379,7 +2386,7 @@ Can you observe a reduction in the statistical error? # Run simulation -a = 0.9 +a = 1.2 nmax = 100000 dt = #TODO @@ -2422,7 +2429,7 @@ Can you observe a reduction in the statistical error? program qmc implicit none - double precision, parameter :: a = 0.9d0 + double precision, parameter :: a = 1.2d0 double precision, parameter :: dt = ! TODO integer*8 , parameter :: nmax = 100000 integer , parameter :: nruns = 30 @@ -2452,9 +2459,9 @@ Can you observe a reduction in the statistical error?
Python
@@ -2488,9 +2495,9 @@ Can you observe a reduction in the statistical error? return energy/nmax, N_accep/nmax # Run simulation -a = 0.9 +a = 1.2 nmax = 100000 -dt = 1.3 +dt = 1.0 X0 = [ MonteCarlo(a,nmax,dt) for i in range(30)] @@ -2507,8 +2514,8 @@ Can you observe a reduction in the statistical error?-E = -0.4950720838131573 +/- 0.00019089638602238043 -A = 0.5172960000000001 +/- 0.0003443446549306529 +E = -0.4802595860693983 +/- 0.0005124420418289207 +A = 0.5074913333333334 +/- 0.000350889422714878@@ -2567,8 +2574,8 @@ A = 0.5172960000000001 +/- 0.0003443446549306529 program qmc implicit none - double precision, parameter :: a = 0.9d0 - double precision, parameter :: dt = 1.3d0 + double precision, parameter :: a = 1.2d0 + double precision, parameter :: dt = 1.0d0 integer*8 , parameter :: nmax = 100000 integer , parameter :: nruns = 30 @@ -2591,8 +2598,8 @@ A = 0.5172960000000001 +/- 0.0003443446549306529
-E = -0.49503130891988767 +/- 1.7160104275040037E-004 -A = 0.51695266666666673 +/- 4.0445505648997396E-004 +E = -0.47948142754168033 +/- 4.8410177863919307E-004 +A = 0.50762633333333318 +/- 3.4601756760043725E-004
-To obtain Gaussian-distributed random numbers, you can apply the
-Box-Muller transform to uniform random numbers:
-
- -\begin{eqnarray*} -z_1 &=& \sqrt{-2 \ln u_1} \cos(2 \pi u_2) \\ -z_2 &=& \sqrt{-2 \ln u_1} \sin(2 \pi u_2) -\end{eqnarray*} - --Below is a Fortran implementation returning a Gaussian-distributed -n-dimensional vector \(\mathbf{z}\). This will be useful for the -following sections. -
- --Fortran -
-subroutine random_gauss(z,n) - implicit none - integer, intent(in) :: n - double precision, intent(out) :: z(n) - double precision :: u(n+1) - double precision, parameter :: two_pi = 2.d0*dacos(-1.d0) - integer :: i - - call random_number(u) - - if (iand(n,1) == 0) then - ! n is even - do i=1,n,2 - z(i) = dsqrt(-2.d0*dlog(u(i))) - z(i+1) = z(i) * dsin( two_pi*u(i+1) ) - z(i) = z(i) * dcos( two_pi*u(i+1) ) - end do - - else - ! n is odd - do i=1,n-1,2 - z(i) = dsqrt(-2.d0*dlog(u(i))) - z(i+1) = z(i) * dsin( two_pi*u(i+1) ) - z(i) = z(i) * dcos( two_pi*u(i+1) ) - end do - - z(n) = dsqrt(-2.d0*dlog(u(n))) - z(n) = z(n) * dcos( two_pi*u(n+1) ) - - end if - -end subroutine random_gauss --
-In Python, you can use the random.normal
function of Numpy.
-
One can use more efficient numerical schemes to move the electrons by choosing a smarter expression for the transition probability.
@@ -2697,7 +2640,7 @@ at the current position. Hence, the transition probability was symmetric\[ T(\mathbf{r}_{n} \rightarrow \mathbf{r}_{n+1}) = T(\mathbf{r}_{n+1} \rightarrow \mathbf{r}_{n}) - \text{constant}\,, + = \text{constant}\,, \]
@@ -2791,10 +2734,74 @@ Evaluate \(\Psi\) and \(\frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})}\) at th+To obtain Gaussian-distributed random numbers, you can apply the +Box Muller transform to uniform random numbers: +
-+Below is a Fortran implementation returning a Gaussian-distributed +n-dimensional vector \(\mathbf{z}\). This will be useful for the +following sections. +
+ ++Fortran +
+subroutine random_gauss(z,n) + implicit none + integer, intent(in) :: n + double precision, intent(out) :: z(n) + double precision :: u(n+1) + double precision, parameter :: two_pi = 2.d0*dacos(-1.d0) + integer :: i + + call random_number(u) + + if (iand(n,1) == 0) then + ! n is even + do i=1,n,2 + z(i) = dsqrt(-2.d0*dlog(u(i))) + z(i+1) = z(i) * dsin( two_pi*u(i+1) ) + z(i) = z(i) * dcos( two_pi*u(i+1) ) + end do + + else + ! n is odd + do i=1,n-1,2 + z(i) = dsqrt(-2.d0*dlog(u(i))) + z(i+1) = z(i) * dsin( two_pi*u(i+1) ) + z(i) = z(i) * dcos( two_pi*u(i+1) ) + end do + + z(n) = dsqrt(-2.d0*dlog(u(n))) + z(n) = z(n) * dcos( two_pi*u(n+1) ) + + end if + +end subroutine random_gauss ++
+In Python, you can use the random.normal
function of Numpy.
+
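For completeness, the transform itself is short enough to transcribe directly into Python (a sketch mirroring the even-\(n\) branch of the Fortran routine above; in practice `numpy.random.Generator.normal` is the better choice):

```python
import numpy as np

def random_gauss(n, rng=None):
    """Box-Muller: turn pairs of uniforms u1, u2 into independent
    standard normals via
      z1 = sqrt(-2 ln u1) cos(2 pi u2)
      z2 = sqrt(-2 ln u1) sin(2 pi u2)."""
    rng = rng or np.random.default_rng()
    m = (n + 1) // 2
    u1 = 1.0 - rng.uniform(size=m)   # in (0, 1]: avoids log(0)
    u2 = rng.uniform(size=m)
    rho = np.sqrt(-2.0 * np.log(u1))
    z = np.concatenate([rho * np.cos(2.0 * np.pi * u2),
                        rho * np.sin(2.0 * np.pi * u2)])
    return z[:n]                     # drop the spare value when n is odd
```

The `1.0 - u` shift plays the same role as the Fortran code's implicit reliance on `random_number` never returning exactly 0 together with 1: it keeps the logarithm finite.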
Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})}\). @@ -2827,9 +2834,9 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P
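For the hydrogen wave function \(\Psi = e^{-a|\mathbf{r}|}\) used throughout, the drift vector has a closed form, since \(\nabla \Psi = -a \,(\mathbf{r}/|\mathbf{r}|)\, \Psi\). A minimal version (the function name is mine):

```python
import numpy as np

def drift(a, r):
    """Drift vector grad(psi)/psi = -a * r/|r| for psi = exp(-a|r|):
    it always points toward the nucleus at the origin, with norm a."""
    return -a * r / np.linalg.norm(r)
```

The drift pushes the walkers toward regions where \(|\Psi|\) is large, which is precisely what makes the drifted-Gaussian transition probability more efficient than the symmetric box.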
Python
@@ -2861,9 +2868,9 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\PModify the previous program to introduce the drift-diffusion scheme. @@ -2885,7 +2892,7 @@ Modify the previous program to introduce the drift-diffusion scheme. # TODO # Run simulation -a = 0.9 +a = 1.2 nmax = 100000 dt = # TODO @@ -2928,7 +2935,7 @@ Modify the previous program to introduce the drift-diffusion scheme. program qmc implicit none - double precision, parameter :: a = 0.9 + double precision, parameter :: a = 1.2d0 double precision, parameter :: dt = ! TODO integer*8 , parameter :: nmax = 100000 integer , parameter :: nruns = 30 @@ -2958,9 +2965,9 @@ Modify the previous program to introduce the drift-diffusion scheme.
Python
@@ -3010,7 +3017,7 @@ Modify the previous program to introduce the drift-diffusion scheme. # Run simulation -a = 0.9 +a = 1.2 nmax = 100000 dt = 1.3 @@ -3113,8 +3120,8 @@ A = 0.7200673333333333 +/- 0.00045942791345632793 program qmc implicit none - double precision, parameter :: a = 0.9 - double precision, parameter :: dt = 1.0 + double precision, parameter :: a = 1.2d0 + double precision, parameter :: dt = 1.0d0 integer*8 , parameter :: nmax = 100000 integer , parameter :: nruns = 30 @@ -3147,12 +3154,12 @@ A = 0.78839866666666658 +/- 3.2503783452043152E-004Consider the time-dependent Schrödinger equation: @@ -3220,8 +3227,8 @@ system.
The imaginary-time Schrödinger equation can be explicitly written in terms of the kinetic and @@ -3318,8 +3325,8 @@ Therefore, in both cases, you are dealing with a "Bosonic" ground state.
In a molecular system, the potential is far from being constant @@ -3415,8 +3422,8 @@ energies computed with the trial wave function.
\[ @@ -3477,8 +3484,8 @@ Defining \(\Pi(\mathbf{r},t) = \psi(\mathbf{r},\tau)
Instead of having a variable number of particles to simulate the @@ -3559,13 +3566,13 @@ code, so this is what we will do in the next section.
@@ -3587,7 +3594,7 @@ energy of H for any value of \(a\). # TODO # Run simulation -a = 0.9 +a = 1.2 nmax = 100000 dt = 0.01 E_ref = -0.5 @@ -3632,7 +3639,7 @@ energy of H for any value of \(a\). program qmc implicit none - double precision, parameter :: a = 0.9 + double precision, parameter :: a = 1.2d0 double precision, parameter :: dt = 0.1d0 double precision, parameter :: E_ref = -0.5d0 double precision, parameter :: tau = 10.d0 @@ -3664,8 +3671,8 @@ energy of H for any value of \(a\).
Python @@ -3730,7 +3737,7 @@ energy of H for any value of \(a\). # Run simulation -a = 0.9 +a = 1.2 nmax = 100000 dt = 0.1 tau = 10. @@ -3847,7 +3854,7 @@ energy of H for any value of \(a\). program qmc implicit none - double precision, parameter :: a = 0.9 + double precision, parameter :: a = 1.2d0 double precision, parameter :: dt = 0.1d0 double precision, parameter :: E_ref = -0.5d0 double precision, parameter :: tau = 10.d0 @@ -3883,8 +3890,8 @@ A = 0.98788066666666663 +/- 7.2889356133441110E-005
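In the pure-diffusion scheme of this section, branching is replaced by a cumulative weight carried along each trajectory. The sketch below shows one common form of that bookkeeping — multiplying at each step by \(e^{-\delta t\,(E_L(\mathbf{r}) - E_{\text{ref}})}\) — as my own illustration, not necessarily the exact accumulation used in the solution programs above:

```python
import numpy as np

def e_loc(a, r):
    """Local energy of psi = exp(-a|r|)."""
    d = np.linalg.norm(r)
    return -0.5 * a**2 + (a - 1.0) / d

def pdmc_weights(a, trajectory, dt, e_ref):
    """Cumulative PDMC-style weights along a fixed trajectory:
    W_n = prod_{k<=n} exp(-dt * (E_L(r_k) - e_ref)).
    Accumulated in log space to avoid overflow for long projections."""
    logw = 0.0
    weights = []
    for r in trajectory:
        logw += -dt * (e_loc(a, r) - e_ref)
        weights.append(np.exp(logw))
    return weights

# For a = 1 and e_ref = -0.5, E_L - e_ref = 0 everywhere, so every
# weight is exactly 1: the projector leaves the exact ground state alone.
```

Choosing \(E_{\text{ref}}\) close to the true energy keeps the weights near 1, which is why the solution programs take `E_ref = -0.5` for hydrogen.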
We will now consider the H2 molecule in a minimal basis composed of the @@ -3905,8 +3912,8 @@ the nuclei.
Last things to do [0/3]

[ ] Give some hints of how much time is required for each section
    (sections 3.1, 3.2, 3.3)