diff --git a/index.html b/index.html index b2ab75a..8a30474 100644 --- a/index.html +++ b/index.html @@ -3,7 +3,7 @@ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
-
+[0/3] Last things to do
This website contains the QMC tutorial of the 2021 LTTC winter school @@ -514,8 +514,8 @@ coordinates, etc).
For a given system with Hamiltonian \(\hat{H}\) and wave function \(\Psi\), we define the local energy as @@ -593,8 +593,8 @@ $$ E \approx \frac{1}{N_{\rm MC}} \sum_{i=1}^{N_{\rm MC}} E_L(\mathbf{r}_i) $$
In this section, we consider the hydrogen atom with the following @@ -623,8 +623,8 @@ To do that, we will compute the local energy and check whether it is constant.
You will now program all quantities needed to compute the local energy of the H atom for the given wave function. @@ -651,8 +651,8 @@ to catch the error.
@@ -696,8 +696,8 @@ and returns the potential.
Python @@ -737,8 +737,8 @@ and returns the potential.
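A sketch of what such a potential function could look like (assuming, as elsewhere in this tutorial, atomic units and a single electron attracted by the proton, so \(V(\mathbf{r}) = -1/|\mathbf{r}|\)):

```python
import numpy as np

def potential(r):
    # Coulomb attraction between the electron and the nucleus (atomic units)
    distance = np.sqrt(np.sum(r**2))
    assert distance > 0., "The potential diverges at the origin"
    return -1.0 / distance
```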
@@ -773,8 +773,8 @@ input arguments, and returns a scalar.
Python @@ -801,8 +801,8 @@ input arguments, and returns a scalar.
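A minimal version of such a function, assuming the exponential trial wave function \(\Psi(\mathbf{r}) = \exp(-a |\mathbf{r}|)\) used in this section, could be:

```python
import numpy as np

def psi(a, r):
    # Value of the trial wave function exp(-a |r|) at position r
    return np.exp(-a * np.sqrt(np.sum(r**2)))
```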
@@ -883,8 +883,8 @@ Therefore, the local kinetic energy is
Python @@ -925,8 +925,8 @@ Therefore, the local kinetic energy is
@@ -969,8 +969,8 @@ local kinetic energy.
Python @@ -1000,8 +1000,8 @@ local kinetic energy.
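Putting the pieces together, a possible sketch (assuming \(\Psi = \exp(-a|\mathbf{r}|)\), for which \(\frac{\Delta \Psi}{\Psi} = a^2 - \frac{2a}{|\mathbf{r}|}\)) is:

```python
import numpy as np

def kinetic(a, r):
    # Local kinetic energy -(1/2) ΔΨ/Ψ; for Ψ = exp(-a|r|) this is a/|r| - a²/2
    distance = np.sqrt(np.sum(r**2))
    assert distance > 0., "The local kinetic energy diverges at the origin"
    return a * (1.0 / distance - 0.5 * a)

def e_loc(a, r):
    # Local energy: local kinetic energy plus the Coulomb potential -1/|r|
    distance = np.sqrt(np.sum(r**2))
    return kinetic(a, r) - 1.0 / distance
```

For \(a = 1\), the local energy evaluates to \(-1/2\) independently of \(\mathbf{r}\), which is the constancy check described above.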
@@ -1011,8 +1011,8 @@ Find the theoretical value of \(a\) for which \(\Psi\) is an eigenfunction of \(
@@ -1044,8 +1044,8 @@ choose a grid which does not contain the origin.
@@ -1128,8 +1128,8 @@ plot './data' index 0 using 1:2 with lines title 'a=0.1', \
Python @@ -1204,8 +1204,8 @@ plt.savefig("plot_py.png")
If the space is discretized in small volume elements \(\mathbf{r}_i\) @@ -1235,8 +1235,8 @@ The energy is biased because:
@@ -1305,8 +1305,8 @@ To compile the Fortran and run it:
Python @@ -1421,8 +1421,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002
The variance of the local energy is a functional of \(\Psi\) @@ -1449,8 +1449,8 @@ energy can be used as a measure of the quality of a wave function.
@@ -1461,8 +1461,8 @@ Prove that :
\(\bar{E} = \langle E \rangle\) is a constant, so \(\langle \bar{E} @@ -1481,8 +1481,8 @@ Prove that :
@@ -1556,8 +1556,8 @@ To compile and run:
Python @@ -1694,8 +1694,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002 s2 = 1.8068814
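As a rough sketch of how such a grid-based estimate of the energy and of the variance of the local energy might be written (hypothetical grid and box parameters; the printed values depend strongly on the grid spacing and box size, which is exactly the bias discussed above):

```python
import numpy as np

a, L, n = 2.0, 5.0, 40        # exponent, half box size, grid points per axis

def e_loc(a, d):
    # Local energy for Ψ = exp(-a|r|): -a²/2 + (a-1)/|r|, with d = |r|
    return -0.5 * a * a + (a - 1.0) / d

xs = np.linspace(-L, L, n)    # even n: the grid does not contain the origin
num_e = num_e2 = den = 0.0
for x in xs:
    for y in xs:
        for z in xs:
            d = np.sqrt(x*x + y*y + z*z)
            w = np.exp(-2.0 * a * d)        # Ψ² weight
            e = e_loc(a, d)
            num_e  += w * e
            num_e2 += w * e * e
            den    += w

E  = num_e / den
s2 = num_e2 / den - E * E     # variance of the local energy
print(E, s2)
```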
Numerical integration with deterministic methods is very efficient @@ -1711,8 +1711,8 @@ interval.
To compute the statistical error, you need to perform \(M\) @@ -1752,8 +1752,8 @@ And the confidence interval is given by
@@ -1791,8 +1791,8 @@ input array.
Python @@ -1851,8 +1851,8 @@ input array.
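A minimal sketch of such a function, returning the sample mean of the input array together with the error of the mean (using the unbiased sample variance):

```python
import numpy as np

def ave_error(arr):
    # Sample mean and statistical error (error of the mean) of the input array
    M = len(arr)
    assert M > 1
    average = np.mean(arr)
    variance = np.var(arr, ddof=1)        # unbiased sample variance
    error = np.sqrt(variance / M)
    return average, error
```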
We will now perform our first Monte Carlo calculation to compute the @@ -1913,8 +1913,8 @@ compute the statistical error.
@@ -2014,8 +2014,8 @@ well as the index of the current step.
Python @@ -2129,8 +2129,8 @@ E = -0.49518773675598715 +/- 5.2391494923686175E-004
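A minimal sketch of such a uniform-sampling estimate (hypothetical parameters; it uses the weighted average \(E \approx \sum_i w_i E_L(\mathbf{r}_i)/\sum_i w_i\) with \(w = \Psi^2\) and \(\Psi = \exp(-a|\mathbf{r}|)\)):

```python
import numpy as np

rng = np.random.default_rng(0)
a, L, nmax = 1.0, 5.0, 10000          # exponent, half box size, MC steps

def e_loc(a, d):
    # Local energy for Ψ = exp(-a|r|): -a²/2 + (a-1)/|r|, with d = |r|
    return -0.5 * a * a + (a - 1.0) / d

num = den = 0.0
for _ in range(nmax):
    r = rng.uniform(-L, L, 3)         # uniform random point in the box
    d = np.sqrt(np.sum(r**2))
    w = np.exp(-2.0 * a * d)          # Ψ² weight
    num += w * e_loc(a, d)
    den += w

energy = num / den
print(energy)
```

With \(a = 1\) the local energy is constant, so the estimate is exactly \(-1/2\); for other values of \(a\) it fluctuates from run to run, which is why the statistical error is needed.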
We will now use the square of the wave function to sample random @@ -2163,15 +2163,15 @@ initial position \(\mathbf{r}_0\), we will realize a random walk:
-according to the following algorithm. +following this algorithm.
-At every step, we propose a new move according to a transition probability \(T(\mathbf{r}_{n+1},\mathbf{r}_n)\) of our choice. +At every step, we propose a new move according to a transition probability \(T(\mathbf{r}_{n}\rightarrow\mathbf{r}_{n+1})\) of our choice.
-For simplicity, let us move the electron in a 3-dimensional box of side \(2\delta L\) centered at the current position +For simplicity, we will move the electron in a 3-dimensional box of side \(2\delta L\) centered at the current position of the electron:
@@ -2188,7 +2188,7 @@ where \(\delta L\) is a fixed constant, and-After having moved the electron, add the +After having moved the electron, we add the accept/reject step that guarantees that the distribution of the \(\mathbf{r}_n\) is \(\Psi^2\). This amounts to accepting the move with probability @@ -2196,7 +2196,7 @@ probability
\[ - A{\mathbf{r}_{n+1},\mathbf{r}_n) = \min\left(1,\frac{T(\mathbf{r}_{n},\mathbf{r}_{n+1}) P(\mathbf{r}_{n+1})}{T(\mathbf{r}_{n+1},\mathbf{r}_n)P(\mathbf{r}_{n})}\right)\,, + A(\mathbf{r}_{n}\rightarrow\mathbf{r}_{n+1}) = \min\left(1,\frac{T(\mathbf{r}_{n+1}\rightarrow\mathbf{r}_{n})\, P(\mathbf{r}_{n+1})}{T(\mathbf{r}_{n}\rightarrow\mathbf{r}_{n+1})\, P(\mathbf{r}_{n})}\right)\,, \]
@@ -2206,7 +2206,7 @@ which, for our choice of transition probability, becomes\[ - A{\mathbf{r}_{n+1},\mathbf{r}_n) = \min\left(1,\frac{P(\mathbf{r}_{n+1})}{P(\mathbf{r}_{n})}\right)= \min\left(1,\frac{\Psi(\mathbf{r}_{n+1})^2}{\Psi(\mathbf{r}_{n})^2} + A(\mathbf{r}_{n}\rightarrow\mathbf{r}_{n+1}) = \min\left(1,\frac{P(\mathbf{r}_{n+1})}{P(\mathbf{r}_{n})}\right)= \min\left(1,\frac{\Psi(\mathbf{r}_{n+1})^2}{\Psi(\mathbf{r}_{n})^2}\right) \]
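The random walk with this accept/reject step can be sketched as follows (hypothetical parameters, and \(\Psi = \exp(-a|\mathbf{r}|)\) as in the previous sections):

```python
import numpy as np

rng = np.random.default_rng(0)
a, dL, nsteps = 1.2, 1.0, 10000       # exponent, half move size, MC steps

def psi2(r):
    # Ψ² for the trial wave function exp(-a|r|)
    return np.exp(-2.0 * a * np.sqrt(np.sum(r**2)))

r = np.array([1.0, 0.0, 0.0])
n_accept = 0
for _ in range(nsteps):
    # propose a uniform move in a box of side 2 dL centered at r
    r_new = r + dL * rng.uniform(-1.0, 1.0, 3)
    # accept with probability min(1, Ψ(r')²/Ψ(r)²)
    if rng.uniform() < psi2(r_new) / psi2(r):
        r = r_new
        n_accept += 1

acceptance = n_accept / nsteps
print(acceptance)
```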
@@ -2258,13 +2258,14 @@ The size of the move should be adjusted so that it is as large as possible, keeping the number of accepted steps not too small. To achieve that, we define the acceptance rate as the number of accepted steps over the total number of steps. Adjusting the time -step such that the acceptance rate is close to 0.5 is a good compromise for the current problem. +step such that the acceptance rate is close to 0.5 is a good +compromise for the current problem.@@ -2371,8 +2372,8 @@ Can you observe a reduction in the statistical error?
Python @@ -2517,8 +2518,8 @@ A = 0.51695266666666673 +/- 4.0445505648997396E-004
To obtain Gaussian-distributed random numbers, you can apply the
@@ -2581,8 +2582,8 @@ In Python, you can use the
-
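For instance, a sketch of drawing a Gaussian displacement with zero mean and variance \(\delta t\) using NumPy's generator interface (hypothetical value of \(\delta t\)):

```python
import numpy as np

rng = np.random.default_rng(0)        # seeded generator, for reproducibility
dt = 0.1                              # variance δt of the move (assumed value)
chi = rng.normal(loc=0.0, scale=np.sqrt(dt), size=3)   # Gaussian displacement
print(chi)
```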
One can use more efficient numerical schemes to move the electrons by choosing a smarter expression for the transition probability.
@@ -2607,13 +2608,13 @@ probability of transition from \(\mathbf{r}_n\) to
-In the previous example, we were using uniform random
-numbers. Hence, the transition probability was
+In the previous example, we were using uniform sampling in a box centered
+at the current position. Hence, the transition probability was symmetric
\[
- T(\mathbf{r}_{n} \rightarrow \mathbf{r}_{n+1}) =
+ T(\mathbf{r}_{n} \rightarrow \mathbf{r}_{n+1}) = T(\mathbf{r}_{n+1} \rightarrow \mathbf{r}_{n}) =
\text{constant}\,,
\]
-Now, if instead of drawing uniform random numbers we
+Now, if instead of drawing uniform random numbers, we
choose to draw Gaussian random numbers with zero mean and variance
\(\delta t\), the transition probability becomes:
-To sample even better the density, we can "push" the electrons
+Furthermore, to sample the density even better, we can "push" the electrons
into the regions of high probability, and "pull" them away from
-the low-probability regions. This will mechanically increase the
+the low-probability regions. This will increase the
acceptance ratios and improve the sampling.
-and add the so-called drift vector, so that the numerical scheme becomes a drifted diffusion:
-
-\[
- \mathbf{r}_{n+1} = \mathbf{r}_{n} + \delta t\, \frac{\nabla
- \Psi(\mathbf{r})}{\Psi(\mathbf{r})} + \chi \,,
- \]
-
-where \(\chi\) is a Gaussian random variable with zero mean and
-variance \(\delta t\).
-The transition probability becomes:
+and add the so-called drift vector, so that the numerical scheme becomes a
+drifted diffusion with transition probability:
@@ -2682,11 +2671,28 @@ The transition probability becomes:
-The algorithm of the previous exercise is only slighlty modified summarized:
+and the corresponding move is proposed as
+
+\[
+ \mathbf{r}_{n+1} = \mathbf{r}_{n} + \delta t\, \frac{\nabla
+ \Psi(\mathbf{r})}{\Psi(\mathbf{r})} + \chi \,,
+ \]
+
+where \(\chi\) is a Gaussian random variable with zero mean and
+variance \(\delta t\).
+
+The algorithm of the previous exercise is only slightly modified as:
Compute a new position \(\mathbf{r'} = \mathbf{r}_n +
\delta t\, \frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})} + \chi\)
@@ -2709,8 +2715,8 @@ Evaluate \(\Psi\) and \(\frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})}\) at th
@@ -2744,8 +2750,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P
Python
@@ -2778,13 +2784,13 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P
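For the trial wave function \(\Psi = \exp(-a|\mathbf{r}|)\) used here, the drift vector is \(\frac{\nabla \Psi}{\Psi} = -a\, \mathbf{r}/|\mathbf{r}|\), so a minimal sketch of such a function could be:

```python
import numpy as np

def drift(a, r):
    # ∇Ψ/Ψ for Ψ = exp(-a|r|): -a r/|r|, pointing toward the nucleus
    distance = np.sqrt(np.sum(r**2))
    assert distance > 0., "The drift vector is undefined at the origin"
    return -a * r / distance
```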
-Modify the previous program to introduce the drifted diffusion scheme.
-(This is a necessary step for the next section).
+Modify the previous program to introduce the drift-diffusion scheme.
+(This is a necessary step for the next section on diffusion Monte Carlo).
Python
@@ -3060,12 +3066,12 @@ A = 0.78839866666666658 +/- 3.2503783452043152E-004
Consider the time-dependent Schrödinger equation:
@@ -3073,12 +3079,12 @@ Consider the time-dependent Schrödinger equation:
\[
- i\frac{\partial \Psi(\mathbf{r},t)}{\partial t} = \hat{H} \Psi(\mathbf{r},t)
+ i\frac{\partial \Psi(\mathbf{r},t)}{\partial t} = \hat{H} \Psi(\mathbf{r},t)\,.
\]
-We can expand \(\Psi(\mathbf{r},0)\), in the basis of the eigenstates
+We can expand a given starting wave function, \(\Psi(\mathbf{r},0)\), in the basis of the eigenstates
of the time-independent Hamiltonian:
-Now, let's replace the time variable \(t\) by an imaginary time variable
+Now, if we replace the time variable \(t\) by an imaginary time variable
\(\tau=i\,t\), we obtain
-where \(\psi(\mathbf{r},\tau) = \Psi(\mathbf{r},-i\tau) = \Psi(\mathbf{r},t)\)
+where \(\psi(\mathbf{r},\tau) = \Psi(\mathbf{r},-i\,\tau)\)
and
\[
- \psi(\mathbf{r},\tau) = \sum_k a_k \exp( -E_k\, \tau) \phi_k(\mathbf{r}).
+ \psi(\mathbf{r},\tau) = \sum_k a_k \exp( -E_k\, \tau) \phi_k(\mathbf{r}).
\]
For large positive values of \(\tau\), \(\psi\) is dominated by the
\(k=0\) term, namely the lowest eigenstate.
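This dominance of the lowest eigenstate can be illustrated with a two-state toy expansion (the energies and coefficients below are assumed, purely for illustration):

```python
import numpy as np

# Toy expansion ψ(τ) = a0 exp(-E0 τ) φ0 + a1 exp(-E1 τ) φ1
E0, E1 = -0.5, 0.1        # assumed eigenvalues, E0 < E1
a0, a1 = 0.7, 0.3         # assumed expansion coefficients

for tau in (0.0, 10.0, 50.0):
    # relative weight of the excited state decays as exp(-(E1 - E0) τ)
    ratio = (a1 * np.exp(-E1 * tau)) / (a0 * np.exp(-E0 * tau))
    print(tau, ratio)
```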
@@ -3124,8 +3130,8 @@ system.
The diffusion equation of particles is given by
@@ -3179,8 +3185,8 @@ the combination of a diffusion process and a branching process.
In a molecular system, the potential is far from being constant,
@@ -3237,8 +3243,8 @@ error known as the fixed node error.
\[
@@ -3300,8 +3306,8 @@ Defining \(\Pi(\mathbf{r},t) = \psi(\mathbf{r},\tau)
Now that we have a process to sample \(\Pi(\mathbf{r},\tau) =
@@ -3353,8 +3359,8 @@ energies computed with the trial wave function.
Instead of having a variable number of particles to simulate the
@@ -3406,13 +3412,13 @@ code, so this is what we will do in the next section.
@@ -3511,8 +3517,8 @@ energy of H for any value of \(a\).
Python
@@ -3728,8 +3734,8 @@ A = 0.98788066666666663 +/- 7.2889356133441110E-005
We will now consider the H2 molecule in a minimal basis composed of the
@@ -3750,8 +3756,8 @@ the nuclei.
3.5 Generalized Metropolis algorithm
3.5.1 Exercise 1
3.5.1.1 Solution
3.5.2 Exercise 2
3.5.2.1 Solution
4 Diffusion Monte Carlo
4.1 Schrödinger equation in imaginary time
4.2 Diffusion and branching
4.3 Importance sampling
4.3.1 Appendix: Details of the Derivation
4.4 Fixed-node DMC energy
4.5 Pure Diffusion Monte Carlo (PDMC)
4.6 Hydrogen atom
4.6.1 Exercise
4.6.1.1 Solution
4.7 TODO H2
5 TODO Last things to do [0/3]
[ ] Give some hints of how much time is required for each section