diff --git a/index.html b/index.html
index 2c435d2..3ae2a1c 100644
--- a/index.html
+++ b/index.html
@@ -3,7 +3,7 @@
 "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
-
+Last things to do [0/3]

This website contains the QMC tutorial of the 2021 LTTC winter school
@@ -514,8 +514,8 @@ coordinates, etc).
For a given system with Hamiltonian \(\hat{H}\) and wave function \(\Psi\), we define the local energy as
\[ E_L(\mathbf{r}) = \frac{\hat{H}\,\Psi(\mathbf{r})}{\Psi(\mathbf{r})}. \]
@@ -593,8 +593,8 @@
$$ E \approx \frac{1}{N_{\rm MC}} \sum_{i=1}^{N_{\rm MC}} E_L(\mathbf{r}_i) $$
In this section, we consider the hydrogen atom with the following
@@ -623,8 +623,8 @@ To do that, we will compute the local energy and check whether it is constant.
You will now program all quantities needed to compute the local energy of the H atom for the given wave function.
@@ -651,8 +651,8 @@ to catch the error.
@@ -696,8 +696,8 @@ and returns the potential.
Python
@@ -737,8 +737,8 @@ and returns the potential.
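As an illustration, a minimal sketch of such a function (assuming atomic units, a single electron and the nucleus fixed at the origin, so that the potential is \(-1/|\mathbf{r}|\); the function name is illustrative, not necessarily the one used in the original solution):

Python
import numpy as np

def potential(r):
    # Electron-nucleus Coulomb potential of the H atom, -1/|r|, in atomic units.
    distance = np.sqrt(np.dot(r, r))
    assert distance > 0., "The potential diverges at the origin"
    return -1. / distance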
@@ -773,8 +773,8 @@ input arguments, and returns a scalar.
Python
@@ -801,8 +801,8 @@ input arguments, and returns a scalar.
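A sketch of the trial wave function, assuming the hydrogenic form \(\Psi(\mathbf{r}) = \exp(-a|\mathbf{r}|)\) (the exponential form is an assumption of this sketch, since the exact expression is not reproduced here):

Python
import numpy as np

def psi(a, r):
    # Trial wave function, assumed here to be exp(-a*|r|).
    return np.exp(-a * np.sqrt(np.dot(r, r)))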
@@ -883,8 +883,8 @@ Therefore, the local kinetic energy is
Python
@@ -925,8 +925,8 @@ Therefore, the local kinetic energy is
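Under the same \(\exp(-a|\mathbf{r}|)\) assumption, \(\frac{\Delta \Psi(\mathbf{r})}{\Psi(\mathbf{r})} = a^2 - \frac{2a}{|\mathbf{r}|}\), so the local kinetic energy \(-\frac{1}{2}\frac{\Delta \Psi}{\Psi}\) can be sketched as:

Python
import numpy as np

def kinetic(a, r):
    # Local kinetic energy -1/2 * (Laplacian psi)/psi for psi = exp(-a*|r|),
    # i.e. a/|r| - a^2/2.
    distance = np.sqrt(np.dot(r, r))
    assert distance > 0., "The local kinetic energy diverges at the origin"
    return a / distance - 0.5 * a**2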
@@ -969,8 +969,8 @@ local kinetic energy.
Python
@@ -1000,8 +1000,8 @@ local kinetic energy.
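Combining the two ingredients above (still under the exponential assumption), the local energy is simply their sum; with that form, \(a=1\) gives the exact 1s function and \(E_L = -1/2\) hartree at every point:

Python
def e_loc(a, r):
    # Local energy E_L(r) = kinetic + potential, reusing the sketches above.
    return kinetic(a, r) + potential(r)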
@@ -1011,8 +1011,8 @@ Find the theoretical value of \(a\) for which \(\Psi\) is an eigenfunction of \(\hat{H}\).
@@ -1044,8 +1044,8 @@ choose a grid which does not contain the origin.
@@ -1128,8 +1128,8 @@ plot './data' index 0 using 1:2 with lines title 'a=0.1', \
Python
@@ -1204,8 +1204,8 @@ plt.savefig("plot_py.png")
If the space is discretized in small volume elements \(\mathbf{r}_i\)
@@ -1235,8 +1235,8 @@ The energy is biased because:
@@ -1305,8 +1305,8 @@ To compile the Fortran and run it:
Python
@@ -1421,8 +1421,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002
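A sketch of such a grid estimate (the grid extent and number of points are illustrative choices; an even number of points per axis keeps the origin off the grid; reuses `psi` and `e_loc` from the sketches above):

Python
import numpy as np

def grid_energy(a, xmax=5., npts=50):
    # Weighted average of E_L over a uniform cubic grid, with weights psi(r)**2.
    # An even npts avoids placing a grid point exactly at the origin.
    x = np.linspace(-xmax, xmax, npts)
    num, denom = 0., 0.
    for xi in x:
        for yi in x:
            for zi in x:
                r = np.array([xi, yi, zi])
                w = psi(a, r) ** 2
                num += w * e_loc(a, r)
                denom += w
    return num / denom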
The variance of the local energy is a functional of \(\Psi\)
@@ -1449,8 +1449,8 @@ energy can be used as a measure of the quality of a wave function.
@@ -1461,8 +1461,8 @@ Prove that :
\(\bar{E} = \langle E \rangle\) is a constant, so \(\langle \bar{E} \rangle = \bar{E}\).
@@ -1481,8 +1481,8 @@ Prove that :
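If the identity to prove is the usual variance relation \(\sigma^2(E_L) = \langle E_L^2 \rangle - \langle E_L \rangle^2\), the hint above gives it in two lines:

\begin{eqnarray*}
\sigma^2(E_L) &=& \left\langle \left(E_L - \bar{E}\right)^2 \right\rangle
              = \left\langle E_L^2 - 2\,\bar{E}\,E_L + \bar{E}^2 \right\rangle \\
              &=& \langle E_L^2 \rangle - 2\,\bar{E}\,\langle E_L \rangle + \bar{E}^2
              = \langle E_L^2 \rangle - \bar{E}^2 ,
\end{eqnarray*}

using \(\langle E_L \rangle = \bar{E}\).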
@@ -1556,8 +1556,8 @@ To compile and run:
Python
@@ -1694,8 +1694,8 @@ a = 2.0000000000000000 E = -8.0869806678448772E-002 s2 = 1.8068814
Numerical integration with deterministic methods is very efficient
@@ -1711,8 +1711,8 @@ interval.
To compute the statistical error, you need to perform \(M\)
@@ -1752,8 +1752,8 @@ And the confidence interval is given by
@@ -1791,8 +1791,8 @@ input array.
Python
@@ -1851,8 +1851,8 @@ input array.
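A sketch of such a helper, taking the array of the \(M\) independent estimates and returning their average together with the standard error of the mean (the name `ave_error` is illustrative):

Python
import numpy as np

def ave_error(arr):
    # Average and statistical error of M independent samples.
    x = np.asarray(arr, dtype=float)
    m = x.size
    average = x.mean()
    if m < 2:
        return average, 0.0
    variance = x.var(ddof=1)        # unbiased estimate, 1/(M-1) normalization
    error = np.sqrt(variance / m)   # standard error of the mean
    return average, error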
We will now perform our first Monte Carlo calculation to compute the
@@ -1913,8 +1913,8 @@ compute the statistical error.
@@ -2014,8 +2014,8 @@ well as the index of the current step.
Python
@@ -2129,8 +2129,8 @@ E = -0.49518773675598715 +/- 5.2391494923686175E-004
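A sketch of one such run: points are drawn uniformly in a cube centred on the nucleus (the cube half-width, the number of steps and the value of \(a\) in the usage comment are illustrative assumptions), and each sample is weighted by \(\Psi^2\) as in the grid version above:

Python
import numpy as np

def uniform_mc_energy(a, nmax=100000, half_width=5.):
    # One Monte Carlo estimate of E: r drawn uniformly in [-L, L]^3,
    # E = sum(w * E_L) / sum(w) with w = psi(r)**2.
    num, denom = 0., 0.
    for _ in range(nmax):
        r = np.random.uniform(-half_width, half_width, 3)
        w = psi(a, r) ** 2
        num += w * e_loc(a, r)
        denom += w
    return num / denom

# Combine M independent runs with ave_error, e.g.:
# E, dE = ave_error([uniform_mc_energy(1.2) for _ in range(30)])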
We will now use the square of the wave function to sample random
@@ -2269,8 +2269,8 @@ become clear later.
@@ -2377,8 +2377,8 @@ Can you observe a reduction in the statistical error?
Python
@@ -2523,8 +2523,8 @@ A = 0.51695266666666673 +/- 4.0445505648997396E-004
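A sketch of a symmetric-move Metropolis sampler of \(\Psi^2\): a uniform trial move of size `dt`, acceptance with probability \(\min\left(1, \Psi(\mathbf{r}_{\rm new})^2/\Psi(\mathbf{r}_{\rm old})^2\right)\), and the acceptance rate \(A\) returned alongside the energy (step size and step count are illustrative; reuses `psi` and `e_loc` from above):

Python
import numpy as np

def metropolis_mc_energy(a, nmax=100000, dt=1.0):
    # Sample r from psi**2 with symmetric uniform moves; return <E_L> and
    # the acceptance rate A.
    r_old = np.random.uniform(-dt, dt, 3)
    psi_old = psi(a, r_old)
    energy, n_accept = 0., 0
    for _ in range(nmax):
        r_new = r_old + np.random.uniform(-dt, dt, 3)
        psi_new = psi(a, r_new)
        if np.random.uniform() <= (psi_new / psi_old) ** 2:
            r_old, psi_old = r_new, psi_new
            n_accept += 1
        energy += e_loc(a, r_old)
    return energy / nmax, n_accept / nmax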
To obtain Gaussian-distributed random numbers, you can apply the
@@ -2587,8 +2587,8 @@ In Python, you can use the
-
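One common way to do this is the Box-Muller transform, which turns two uniform random numbers into two independent normal ones; NumPy also provides a normal generator directly. A sketch of both (whether these are the methods meant above is an assumption):

Python
import numpy as np

def gaussian_pair():
    # Box-Muller transform: two independent N(0,1) samples from two uniforms.
    u1 = 1.0 - np.random.uniform()   # shift to (0, 1] so that log(u1) is finite
    u2 = np.random.uniform()
    radius = np.sqrt(-2. * np.log(u1))
    return radius * np.cos(2. * np.pi * u2), radius * np.sin(2. * np.pi * u2)

# Or, using NumPy's built-in generator:
# chi = np.random.normal(loc=0., scale=1., size=3)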
One can use more efficient numerical schemes to move the electrons by choosing a smarter expression for the transition probability.
@@ -2720,8 +2720,8 @@ Evaluate \(\Psi\) and \(\frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})}\) at th
@@ -2755,8 +2755,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P
Python
@@ -2789,8 +2789,8 @@ Write a function to compute the drift vector \(\frac{\nabla \Psi(\mathbf{r})}{\P
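Under the \(\exp(-a|\mathbf{r}|)\) assumption used in the sketches above, \(\frac{\nabla \Psi(\mathbf{r})}{\Psi(\mathbf{r})} = -a\,\frac{\mathbf{r}}{|\mathbf{r}|}\), so the drift vector can be sketched as:

Python
import numpy as np

def drift(a, r):
    # Drift vector grad(psi)/psi; for psi = exp(-a*|r|) this is -a * r/|r|.
    distance = np.sqrt(np.dot(r, r))
    assert distance > 0., "The drift diverges at the origin"
    return -a * r / distance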
@@ -2884,8 +2884,8 @@ Modify the previous program to introduce the drift-diffusion scheme.
Python
@@ -3071,12 +3071,12 @@ A = 0.78839866666666658 +/- 3.2503783452043152E-004
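A sketch of the generalized Metropolis step: the move is a drifted Gaussian, \(\mathbf{r}_{\rm new} = \mathbf{r}_{\rm old} + \delta t\, \mathbf{d}(\mathbf{r}_{\rm old}) + \sqrt{\delta t}\,\chi\), and the acceptance probability combines the ratio of \(\Psi^2\) with the ratio of the backward and forward Gaussian transition probabilities (step size, step count and variable names are illustrative; reuses `psi`, `e_loc` and `drift` from above):

Python
import numpy as np

def vmc_drift_diffusion(a, nmax=100000, dt=1.0):
    # Drift-diffusion (generalized Metropolis) sampling of psi**2.
    r_old = np.random.normal(0., 1., 3)
    d_old = drift(a, r_old)
    psi_old = psi(a, r_old)
    energy, n_accept = 0., 0
    for _ in range(nmax):
        chi = np.random.normal(0., 1., 3)
        r_new = r_old + dt * d_old + np.sqrt(dt) * chi
        d_new = drift(a, r_new)
        psi_new = psi(a, r_new)
        # |r' - r - dt*d(r)|^2 for the forward move, and the reverse for the backward one
        forward = r_new - r_old - dt * d_old
        backward = r_old - r_new - dt * d_new
        t_ratio = np.exp(-(np.dot(backward, backward) - np.dot(forward, forward)) / (2. * dt))
        if np.random.uniform() <= t_ratio * (psi_new / psi_old) ** 2:
            r_old, d_old, psi_old = r_new, d_new, psi_new
            n_accept += 1
        energy += e_loc(a, r_old)
    return energy / nmax, n_accept / nmax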
Consider the time-dependent Schrödinger equation:
@@ -3127,12 +3127,13 @@ Now, if we replace the time variable \(t\) by an imaginary time variable
where \(\psi(\mathbf{r},\tau) = \Psi(\mathbf{r},-i\,\tau)\)
and
\begin{eqnarray*}
\psi(\mathbf{r},\tau) &=& \sum_k a_k \exp\left( -(E_k-E_T)\, \tau\right)\, \phi_k(\mathbf{r}) \\
 &=& \exp\left(-(E_0-E_T)\, \tau\right) \sum_k a_k \exp\left( -(E_k-E_0)\, \tau\right)\, \phi_k(\mathbf{r})\,.
\end{eqnarray*}
For large positive values of \(\tau\), \(\psi\) is dominated by the
\(k=0\) term, namely, the lowest eigenstate. If we adjust \(E_T\) to the running estimate of \(E_0\),
@@ -3143,8 +3144,8 @@ system.
The imaginary-time Schrödinger equation can be explicitly written in terms of the kinetic and
@@ -3163,7 +3164,7 @@ We can simulate this differential equation as a diffusion-branching process.
-To this this, recall that the diffusion equation of particles is given by
+To see this, recall that the diffusion equation of particles is given by
@@ -3217,11 +3218,20 @@ so-called branching process).
system by simulating the Schrödinger equation in imaginary time, by
the combination of a diffusion process and a branching process.
+We note here that the ground-state wave function of a Fermionic system is
+antisymmetric and changes sign.
+
In a molecular system, the potential is far from being constant
@@ -3250,20 +3260,20 @@ Defining \(\Pi(\mathbf{r},\tau) = \psi(\mathbf{r},\tau) \Psi_T(\mathbf{r})\), (s
-\frac{\partial \Pi(\mathbf{r},\tau)}{\partial \tau}
= -\frac{1}{2} \Delta \Pi(\mathbf{r},\tau) +
\nabla \left[ \Pi(\mathbf{r},\tau) \frac{\nabla \Psi_T(\mathbf{r})}{\Psi_T(\mathbf{r})}
- \right] + E_L(\mathbf{r}) \Pi(\mathbf{r},\tau)
+ \right] + (E_L(\mathbf{r})-E_T)\Pi(\mathbf{r},\tau)
\]
-The new "kinetic energy" can be simulated by the drifted diffusion
+The new "kinetic energy" can be simulated by the drift-diffusion
scheme presented in the previous section (VMC).
The new "potential" is the local energy, which has smaller fluctuations
when \(\Psi_T\) gets closer to the exact wave function. It can be simulated by
changing the number of particles according to \(\exp\left[ -\delta t\,
- \left(E_L(\mathbf{r}) - E_\text{ref}\right)\right]\)
-where \(E_{\text{ref}}\) is a constant introduced so that the average
-of this term is close to one, keeping the number of particles rather
-constant.
+ \left(E_L(\mathbf{r}) - E_T\right)\right]\)
+where \(E_T\) is the constant introduced above; it is adjusted to
+the running average energy so as to keep the number of particles
+reasonably constant.
@@ -3278,8 +3288,8 @@ error known as the fixed node error.
\[
@@ -3341,8 +3351,8 @@ Defining \(\Pi(\mathbf{r},\tau) = \psi(\mathbf{r},\tau)\,\Psi_T(\mathbf{r})\)
Now that we have a process to sample \(\Pi(\mathbf{r},\tau) = \psi(\mathbf{r},\tau)\,\Psi_T(\mathbf{r})\),
@@ -3394,8 +3404,8 @@ energies computed with the trial wave function.
Instead of having a variable number of particles to simulate the
@@ -3447,13 +3457,13 @@ code, so this is what we will do in the next section.
@@ -3552,8 +3562,8 @@ energy of H for any value of \(a\).
Python
@@ -3769,8 +3779,8 @@ A = 0.98788066666666663 +/- 7.2889356133441110E-005
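A compressed sketch of a PDMC-style estimate for the H atom, reusing the drift-diffusion move above: each step multiplies a cumulative weight by \(\exp\left[-\delta t\,\left(E_L(\mathbf{r}) - E_{\rm ref}\right)\right]\), the energy is the weight-averaged local energy, and the weight is reset after a fixed projection time (all parameter values, and the exact bookkeeping of the weight, are illustrative assumptions):

Python
import numpy as np

def pdmc_energy(a, nmax=100000, dt=0.05, tau_max=10., e_ref=-0.5):
    # PDMC-style estimate: drift-diffusion moves plus a cumulative weight
    # w *= exp(-dt*(E_L - e_ref)), reset every tau_max of imaginary time.
    r_old = np.random.normal(0., 1., 3)
    d_old = drift(a, r_old)
    psi_old = psi(a, r_old)
    w, tau = 1., 0.
    num, denom = 0., 0.
    for _ in range(nmax):
        el = e_loc(a, r_old)
        w *= np.exp(-dt * (el - e_ref))
        num += w * el
        denom += w
        tau += dt
        if tau >= tau_max:          # restart the projection
            w, tau = 1., 0.
        chi = np.random.normal(0., 1., 3)
        r_new = r_old + dt * d_old + np.sqrt(dt) * chi
        d_new, psi_new = drift(a, r_new), psi(a, r_new)
        forward = r_new - r_old - dt * d_old
        backward = r_old - r_new - dt * d_new
        t_ratio = np.exp(-(np.dot(backward, backward) - np.dot(forward, forward)) / (2. * dt))
        if np.random.uniform() <= t_ratio * (psi_new / psi_old) ** 2:
            r_old, d_old, psi_old = r_new, d_new, psi_new
    return num / denom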
We will now consider the H2 molecule in a minimal basis composed of the
@@ -3791,8 +3801,8 @@ the nuclei.
3.5 Generalized Metropolis algorithm
3.5.1 Exercise 1
3.5.1.1 Solution
3.5.2 Exercise 2
3.5.2.1 Solution
4 Diffusion Monte Carlo
4.1 Schrödinger equation in imaginary time
4.2 Diffusion and branching
4.3 Importance sampling
4.3.1 Appendix : Details of the Derivation
4.4 Fixed-node DMC energy
4.5 Pure Diffusion Monte Carlo (PDMC)
4.6 Hydrogen atom
4.6.1 Exercise
4.6.1.1 Solution
4.7 TODO H2
5 TODO Last things to do [0/3]

[ ] Give some hints of how much time is required for each section