diff --git a/index.html b/index.html index e3c21b0..49492cf 100644 --- a/index.html +++ b/index.html @@ -3,7 +3,7 @@ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
This website contains the QMC tutorial of the 2021 LTTC winter school @@ -514,8 +514,8 @@ coordinates, etc).
For a given system with Hamiltonian \(\hat{H}\) and wave function \(\Psi\), we define the local energy as @@ -549,11 +549,11 @@ For few dimensions, one can easily compute \(E\) by evaluating the integrals on
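For reference, the definition referred to here is the standard local-energy ratio:

```latex
\[ E_L(\mathbf{r}) = \frac{\hat{H}\Psi(\mathbf{r})}{\Psi(\mathbf{r})}\,. \]
```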
To this aim, recall that the probabilistic expected value of an arbitrary function \(f(x)\) -with respect to a probability density function \(p(x)\) is given by +with respect to a probability density function \(P(x)\) is given by
-\[ \langle f \rangle_p = \int_{-\infty}^\infty p(x)\, f(x)\,dx, \] +\[ \langle f \rangle_p = \int_{-\infty}^\infty P(x)\, f(x)\,dx, \]
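As a quick numerical illustration of this expected value, a Monte Carlo estimate of \(\langle f \rangle_p\) for the uniform density on \([0,1]\) (the function name and sample count are illustrative, not part of the tutorial code):

```python
import random

def expectation_uniform(f, n_samples=100_000, seed=1):
    """Monte Carlo estimate of <f>_p for p uniform on [0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += f(rng.random())
    return total / n_samples

# For f(x) = x^2 the exact value is 1/3.
estimate = expectation_uniform(lambda x: x * x)
```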
@@ -562,16 +562,16 @@ and integrates to one:
-\[ \int_{-\infty}^\infty p(x)\,dx = 1. \] +\[ \int_{-\infty}^\infty P(x)\,dx = 1. \]
Similarly, we can view the energy of a system, \(E\), as the expected value of the local energy with respect to -a probability density \(p(\mathbf{r})\) defined in 3\(N\) dimensions: +a probability density \(P(\mathbf{r})\) defined in 3\(N\) dimensions:
-\[ E = \int E_L(\mathbf{r}) p(\mathbf{r})\,d\mathbf{r} \equiv \langle E_L \rangle_{\Psi^2}\,, \] +\[ E = \int E_L(\mathbf{r}) P(\mathbf{r})\,d\mathbf{r} \equiv \langle E_L \rangle_{\Psi^2}\,, \]
@@ -579,22 +579,22 @@ where the probability density is given by the square of the wave function:
-\[ p(\mathbf{r}) = \frac{|\Psi(\mathbf{r})|^2}{\int |\Psi(\mathbf{r})|^2\, d\mathbf{r}}\,. \] +\[ P(\mathbf{r}) = \frac{|\Psi(\mathbf{r})|^2}{\int |\Psi(\mathbf{r})|^2\, d\mathbf{r}}\,. \]
-If we can sample configurations \(\{\mathbf{r}\}\) distributed as \(p\), we can estimate \(E\) as the average of the local energy computed over these configurations: +If we can sample \(N_{\rm MC}\) configurations \(\{\mathbf{r}\}\) distributed as \(P\), we can estimate \(E\) as the average of the local energy computed over these configurations:
-$$ E \approx \frac{1}{M} \sum_{i=1}^{M} E_L(\mathbf{r}_i)\,. $$ +$$ E \approx \frac{1}{N_{\rm MC}} \sum_{i=1}^{N_{\rm MC}} E_L(\mathbf{r}_i)\,. $$
In this section, we consider the hydrogen atom with the following @@ -623,8 +623,8 @@ To do that, we will compute the local energy and check whether it is constant.
You will now program all quantities needed to compute the local energy of the H atom for the given wave function. @@ -651,8 +651,8 @@ to catch the error.
@@ -696,8 +696,8 @@ and returns the potential.
Python @@ -737,8 +737,8 @@ and returns the potential.
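A minimal Python sketch of such a function, in atomic units, taking a 3-component position array and returning the electron-nucleus Coulomb potential (the name `potential` is illustrative):

```python
import numpy as np

def potential(r):
    """Coulomb potential of the H atom, V(r) = -1/|r| (illustrative sketch).

    Raises at the origin, where the potential diverges.
    """
    distance = np.sqrt(np.dot(r, r))
    if distance == 0.0:
        raise ValueError("potential is singular at r = 0")
    return -1.0 / distance
```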
@@ -773,8 +773,8 @@ input arguments, and returns a scalar.
Python @@ -801,8 +801,8 @@ input arguments, and returns a scalar.
@@ -883,8 +883,8 @@ Therefore, the local kinetic energy is
Python @@ -925,8 +925,8 @@ Therefore, the local kinetic energy is
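A matching sketch for the local kinetic energy, assuming the wave function has the form \(\Psi(\mathbf{r}) = e^{-a|\mathbf{r}|}\) (the tutorial's explicit form is not shown in this excerpt), for which \(-\frac{1}{2}\frac{\Delta\Psi}{\Psi} = -\frac{1}{2}\left(a^2 - \frac{2a}{|\mathbf{r}|}\right)\):

```python
import numpy as np

def kinetic(a, r):
    """Local kinetic energy for the assumed Psi = exp(-a|r|):
    T_L(r) = -(1/2) * (a^2 - 2a/|r|).  Illustrative sketch."""
    distance = np.sqrt(np.dot(r, r))
    return -0.5 * (a * a - 2.0 * a / distance)
```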
@@ -969,8 +969,8 @@ local kinetic energy.
Python @@ -1000,8 +1000,8 @@ local kinetic energy.
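Putting the kinetic and potential parts together gives the local energy. Under the same assumed \(\Psi = e^{-a|\mathbf{r}|}\), it reduces to \(E_L(\mathbf{r}) = -\frac{1}{2}\left(a^2 - \frac{2a}{|\mathbf{r}|}\right) - \frac{1}{|\mathbf{r}|}\); for \(a=1\) it is constant and equal to \(-1/2\) everywhere:

```python
import numpy as np

def local_energy(a, r):
    """E_L = T_L + V for the assumed Psi = exp(-a|r|) (illustrative sketch)."""
    distance = np.sqrt(np.dot(r, r))
    return -0.5 * (a * a - 2.0 * a / distance) - 1.0 / distance
```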
@@ -1011,8 +1011,8 @@ Find the theoretical value of \(a\) for which \(\Psi\) is an eigenfunction of \(
@@ -1044,8 +1044,8 @@ choose a grid which does not contain the origin.
@@ -1128,8 +1128,8 @@ plot './data' index 0 using 1:2 with lines title 'a=0.1', \
Python @@ -1204,8 +1204,8 @@ plt.savefig("plot_py.png")
If the space is discretized in small volume elements \(\mathbf{r}_i\) @@ -1235,8 +1235,8 @@ The energy is biased because:
@@ -1305,8 +1305,8 @@ To compile the Fortran and run it:
Python @@ -1421,8 +1421,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002
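The grid-based estimate described above (a \(\Psi^2\)-weighted average of the local energy over volume elements) can be sketched in Python as follows, assuming \(\Psi = e^{-a|\mathbf{r}|}\); the grid is shifted slightly so no point falls on the nucleus, and all names and parameters are illustrative:

```python
import numpy as np

def grid_energy(a, n=40, L=10.0):
    """Deterministic estimate E ~ sum(Psi^2 * E_L) / sum(Psi^2) on an
    n x n x n grid in a cube of side L (illustrative sketch)."""
    xs = np.linspace(-L / 2, L / 2, n) + 1e-6   # avoid the r = 0 singularity
    X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
    d = np.sqrt(X**2 + Y**2 + Z**2)
    psi_sq = np.exp(-2.0 * a * d)               # Psi^2 for Psi = exp(-a|r|)
    e_loc = -0.5 * (a * a - 2.0 * a / d) - 1.0 / d
    return float(np.sum(psi_sq * e_loc) / np.sum(psi_sq))
```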
The variance of the local energy is a functional of \(\Psi\) @@ -1449,8 +1449,8 @@ energy can be used as a measure of the quality of a wave function.
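A sketch of the weighted variance \(\sigma^2 = \langle E_L^2\rangle - \langle E_L\rangle^2\) over a set of sample points, with weights such as the \(\Psi^2\) values at those points (names illustrative):

```python
import numpy as np

def variance_local_energy(local_energies, weights):
    """Weighted variance <E_L^2> - <E_L>^2 (illustrative sketch)."""
    e = np.asarray(local_energies, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                 # normalize the weights
    mean = np.sum(w * e)
    return float(np.sum(w * e * e) - mean * mean)
```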
@@ -1461,8 +1461,8 @@ Prove that :
\(\bar{E} = \langle E \rangle\) is a constant, so \(\langle \bar{E} \rangle = \bar{E}\) @@ -1481,8 +1481,8 @@ Prove that :
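Using the hint that \(\bar{E}\) is a constant, the expansion is a single chain of equalities:

```latex
\begin{eqnarray*}
\langle (E - \bar{E})^2 \rangle & = & \langle E^2 - 2\bar{E}E + \bar{E}^2 \rangle \\
& = & \langle E^2 \rangle - 2\bar{E}\langle E \rangle + \bar{E}^2 \\
& = & \langle E^2 \rangle - 2\bar{E}^2 + \bar{E}^2
\;=\; \langle E^2 \rangle - \langle E \rangle^2\,.
\end{eqnarray*}
```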
@@ -1556,8 +1556,8 @@ To compile and run:
Python @@ -1694,31 +1694,31 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002 s2 = 1.8068814
Numerical integration with deterministic methods is very efficient in low dimensions. When the number of dimensions becomes large, instead of computing the average energy as a numerical integration -on a grid, it is usually more efficient to do a Monte Carlo sampling. +on a grid, it is usually more efficient to use Monte Carlo sampling.
-Moreover, a Monte Carlo sampling will alow us to remove the bias due +Moreover, Monte Carlo sampling will allow us to remove the bias due to the discretization of space, and compute a statistical confidence interval.
To compute the statistical error, you need to perform \(M\) independent Monte Carlo calculations. You will obtain \(M\) different estimates of the energy, which are expected to have a Gaussian -distribution according to the Central Limit Theorem. +distribution for large \(M\), according to the Central Limit Theorem.
@@ -1752,8 +1752,8 @@ And the confidence interval is given by
@@ -1791,8 +1791,8 @@ input array.
Python @@ -1851,16 +1851,44 @@ input array.
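A sketch of such a function: it computes the mean of the input array and the statistical error \(\sqrt{\sigma^2/M}\), using the unbiased variance estimator \(\sigma^2 = \frac{1}{M-1}\sum_i (x_i - \bar{x})^2\) (the name `ave_error` is illustrative):

```python
import numpy as np

def ave_error(samples):
    """Mean and statistical error of M independent estimates
    (illustrative sketch)."""
    x = np.asarray(samples, dtype=float)
    m = x.size
    ave = float(x.mean())
    if m < 2:
        return ave, 0.0
    var = float(np.sum((x - ave) ** 2) / (m - 1))   # unbiased variance
    return ave, float(np.sqrt(var / m))             # error of the mean
```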
-We will now do our first Monte Carlo calculation to compute the -energy of the hydrogen atom. +We will now perform our first Monte Carlo calculation to compute the +energy of the hydrogen atom.
-At every Monte Carlo iteration: +Consider again the expression of the energy +
+ +\begin{eqnarray*} +E & = & \frac{\int E_L(\mathbf{r})\left[\Psi(\mathbf{r})\right]^2\,d\mathbf{r}}{\int \left[\Psi(\mathbf{r}) \right]^2 d\mathbf{r}}\,. +\end{eqnarray*} + ++Clearly, the square of the wave function is a good choice of probability density to sample, but we will start with something simpler and rewrite the energy as +
+ +\begin{eqnarray*} +E & = & \frac{\int E_L(\mathbf{r})\frac{|\Psi(\mathbf{r})|^2}{p(\mathbf{r})}p(\mathbf{r})\, \,d\mathbf{r}}{\int \frac{|\Psi(\mathbf{r})|^2 }{p(\mathbf{r})}p(\mathbf{r})d\mathbf{r}}\,. +\end{eqnarray*} + ++Here, we will sample a uniform probability \(p(\mathbf{r})\) in a cube of volume \(L^3\) centered at the origin: +
+ ++\[ p(\mathbf{r}) = \frac{1}{L^3}\,, \] +
+ ++and zero outside the cube. +
+ ++One Monte Carlo run will consist of \(N_{\rm MC}\) Monte Carlo iterations. At every Monte Carlo iteration:
draw a random point \(\mathbf{r}_i\) uniformly inside the cube;
accumulate the weight \(w_i = |\Psi(\mathbf{r}_i)|^2\);
accumulate the weighted local energy \(w_i\, E_L(\mathbf{r}_i)\).
-One Monte Carlo run will consist of \(N\) Monte Carlo iterations. Once all the -iterations have been computed, the run returns the average energy -\(\bar{E}_k\) over the \(N\) iterations of the run. +Once all the iterations have been computed, the run returns the average energy +\(\bar{E}_k\) over the \(N_{\rm MC}\) iterations of the run.
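A run of this kind (uniform sampling in the cube, \(\Psi^2\)-weighted average of the local energy) can be sketched in Python, again assuming \(\Psi = e^{-a|\mathbf{r}|}\); names and defaults are illustrative:

```python
import numpy as np

def monte_carlo(a, n_mc=10_000, L=5.0, rng=None):
    """One MC run with uniform sampling in a cube of side L:
    E ~ sum(w_i * E_L(r_i)) / sum(w_i), w_i = Psi(r_i)^2 (sketch)."""
    rng = np.random.default_rng(rng)
    energy_sum = 0.0
    weight_sum = 0.0
    for _ in range(n_mc):
        r = rng.uniform(-L / 2, L / 2, size=3)
        d = np.sqrt(np.dot(r, r))
        w = np.exp(-2.0 * a * d)                      # Psi^2
        e_loc = -0.5 * (a * a - 2.0 * a / d) - 1.0 / d
        energy_sum += w * e_loc
        weight_sum += w
    return energy_sum / weight_sum
```

Running several independent runs with different seeds and feeding the results to an averaging routine gives the energy with its statistical error.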
@@ -1886,8 +1913,8 @@ compute the statistical error.
@@ -1987,8 +2014,8 @@ well as the index of the current step.
Python @@ -2102,30 +2129,29 @@ E = -0.49518773675598715 +/- 5.2391494923686175E-004
We will now use the square of the wave function to sample random points distributed with the probability density \[ - P(\mathbf{r}) = \left[\Psi(\mathbf{r})\right]^2 + P(\mathbf{r}) = \frac{|\Psi(\mathbf{r})|^2}{\int |\Psi(\mathbf{r})|^2\, d\mathbf{r}} \]
The expression of the average energy is now simplified as the average of the local energies, since the weights are taken care of by the -sampling : +sampling:
\[ - E \approx \frac{1}{M}\sum_{i=1}^M E_L(\mathbf{r}_i) + E \approx \frac{1}{N_{\rm MC}}\sum_{i=1}^{N_{\rm MC}} E_L(\mathbf{r}_i) \]
-To sample a chosen probability density, an efficient method is the Metropolis-Hastings sampling algorithm. Starting from a random @@ -2191,8 +2217,8 @@ step such that the acceptance rate is close to 0.5 is a good compromise.
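A sketch of a symmetric Metropolis run for this density, assuming \(\Psi = e^{-a|\mathbf{r}|}\); the step size `dt` controls the acceptance rate, and all names are illustrative:

```python
import numpy as np

def metropolis_run(a, n_mc=20_000, dt=1.0, seed=0):
    """Metropolis sampling of Psi^2 for Psi = exp(-a|r|); returns the
    average local energy and the acceptance rate (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    r = rng.uniform(-1.0, 1.0, size=3)
    psi_sq = np.exp(-2.0 * a * np.sqrt(np.dot(r, r)))
    energy = 0.0
    accepted = 0
    for _ in range(n_mc):
        r_new = r + dt * rng.uniform(-1.0, 1.0, size=3)   # symmetric move
        psi_sq_new = np.exp(-2.0 * a * np.sqrt(np.dot(r_new, r_new)))
        if rng.uniform() < psi_sq_new / psi_sq:           # accept/reject
            r, psi_sq = r_new, psi_sq_new
            accepted += 1
        d = np.sqrt(np.dot(r, r))
        energy += -0.5 * (a * a - 2.0 * a / d) - 1.0 / d  # local energy
    return energy / n_mc, accepted / n_mc
```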
@@ -2299,8 +2325,8 @@ Can you observe a reduction in the statistical error?
Python @@ -2445,8 +2471,8 @@ A = 0.51695266666666673 +/- 4.0445505648997396E-004
To obtain Gaussian-distributed random numbers, you can apply the
@@ -2508,8 +2534,8 @@ In Python, you can use the
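One standard way to turn pairs of uniform random numbers into independent Gaussian ones is the Box-Muller transform; a minimal sketch, not tied to any particular library:

```python
import math
import random

def gauss_pair(rng=random):
    """Box-Muller transform: two independent N(0, 1) numbers from two
    uniforms on (0, 1) (illustrative sketch)."""
    u1 = 1.0 - rng.random()          # in (0, 1], avoids log(0)
    u2 = rng.random()
    radius = math.sqrt(-2.0 * math.log(u1))
    return (radius * math.cos(2.0 * math.pi * u2),
            radius * math.sin(2.0 * math.pi * u2))
```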
One can use more efficient numerical schemes to move the electrons,
@@ -2608,8 +2634,8 @@ The transition probability becomes:
@@ -2643,8 +2669,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P
Python
@@ -2677,8 +2703,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P
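For the assumed \(\Psi = e^{-a|\mathbf{r}|}\), the drift vector has the closed form \(\frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})} = -a\,\frac{\mathbf{r}}{|\mathbf{r}|}\); a sketch:

```python
import numpy as np

def drift(a, r):
    """Drift vector grad(Psi)/Psi = -a * r/|r| for Psi = exp(-a|r|)
    (illustrative sketch)."""
    distance = np.sqrt(np.dot(r, r))
    return -a * r / distance
```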
@@ -2772,8 +2798,8 @@ Modify the previous program to introduce the drifted diffusion scheme.
Python
@@ -2959,12 +2985,12 @@ A = 0.78839866666666658 +/- 3.2503783452043152E-004
Consider the time-dependent Schrödinger equation:
@@ -3023,8 +3049,8 @@ system.
The diffusion equation of particles is given by
@@ -3078,8 +3104,8 @@ the combination of a diffusion process and a branching process.
In a molecular system, the potential is far from being constant,
@@ -3136,8 +3162,8 @@ error known as the fixed node error.
\[
@@ -3199,8 +3225,8 @@ Defining \(\Pi(\mathbf{r},t) = \psi(\mathbf{r},\tau)
Now that we have a process to sample \(\Pi(\mathbf{r},\tau) =
@@ -3252,8 +3278,8 @@ energies computed with the trial wave function.
Instead of having a variable number of particles to simulate the
@@ -3305,13 +3331,13 @@ code, so this is what we will do in the next section.
@@ -3410,8 +3436,8 @@ energy of H for any value of \(a\).
Python
@@ -3627,8 +3653,8 @@ A = 0.98788066666666663 +/- 7.2889356133441110E-005
We will now consider the H2 molecule in a minimal basis composed of the
@@ -3649,8 +3675,8 @@ the nuclei.
3.5 Generalized Metropolis algorithm
3.5.1 Exercise 1
3.5.1.1 Solution solution
3.5.2 Exercise 2
3.5.2.1 Solution solution
4 Diffusion Monte Carlo solution
4.1 Schrödinger equation in imaginary time
4.2 Diffusion and branching
4.3 Importance sampling
4.3.1 Appendix : Details of the Derivation
4.4 Fixed-node DMC energy
4.5 Pure Diffusion Monte Carlo (PDMC)
4.6 Hydrogen atom
4.6.1 Exercise
4.6.1.1 Solution solution
4.7 TODO H2
5 TODO [0/3] Last things to do
[ ] Give some hints of how much time is required for each section