1st cleanup of Sec III

This commit is contained in:
Pierre-Francois Loos 2022-09-19 17:11:30 +02:00
parent 462c33c1b0
commit 5e7edd8b37

g.tex

@ -52,6 +52,7 @@
\newcommand{\cT}{\mathcal{T}}
\newcommand{\cC}{\mathcal{C}}
\newcommand{\cD}{\mathcal{D}}
\newcommand{\cE}{\mathcal{E}}
\newcommand{\EPT}{E_{\PT}}
\newcommand{\laPT}{\lambda_{\PT}}
@ -443,7 +444,7 @@ and this defines a Poisson law with an average number of trapping events
\ee
Introducing the continuous time $t_i = n \tau$, the average trapping time is thus given by
\be
\bar{t_i}= \frac{1}{H^+_{ii}-(\EL^+)_{i}},
\bar{t_i}= \qty[ H^+_{ii}-(\EL^+)_{i} ]^{-1},
\ee
and, in the limit $\tau \to 0$, the Poisson probability takes the usual form
\be
@ -541,19 +542,19 @@ we have
\ee
and the average trapping time is
\be
t_{I}={\bar n}_{I} \tau = \frac{1}{\PsiG_{I}} \mel{ I }{ P_{I} \frac{1}{H^+ - \EL^+ \Id} P_{I} }{ \PsiG }.
t_{I}={\bar n}_{I} \tau = \frac{1}{\PsiG_{I}} \mel{ I }{ P_{I} \qty( H^+ - \EL^+ \Id )^{-1} P_{I} }{ \PsiG }.
\ee
In practice, the various quantities restricted to the domain are computed by diagonalizing the matrix $(H^+-\EL^+ \Id)$ in $\cD_{I}$.
Note that this is possible only if the dimension of the domains is not too large (say, less than a few thousand).
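As a minimal illustration, the average trapping time given above can also be obtained from a linear solve restricted to the domain rather than a full diagonalization; the Python sketch below assumes that $H^+$ is available as a dense array and that the local energies are given by $(\EL^+)_i = (H^+ \PsiG)_i / \PsiG_i$ (an assumption about conventions, not a statement of the implementation actually used).
\begin{verbatim}
import numpy as np

def average_trapping_time(Hp, psi_g, I, domain):
    # t_I = <I| P_I (H^+ - E_L^+ Id)^{-1} P_I |PsiG> / PsiG_I,
    # computed by solving the linear system restricted to D_I.
    # Hp: dense (n, n) array for H^+; psi_g: componentwise-positive
    # guiding vector; domain: list of basis-state indices of D_I
    # (which must contain I).
    d = np.asarray(domain)
    e_loc = (Hp @ psi_g) / psi_g              # local energies (E_L^+)_i
    A = Hp[np.ix_(d, d)] - np.diag(e_loc[d])  # (H^+ - E_L^+ Id) in D_I
    x = np.linalg.solve(A, psi_g[d])          # P_I (...)^{-1} P_I |PsiG>
    return x[list(domain).index(I)] / psi_g[I]
\end{verbatim}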
%=======================================%
\subsection{Expressing the Green's matrix using domains}
\subsection{Time-dependent Green's matrix using domains}
\label{sec:Green}
%=======================================%
%--------------------------------------------%
\subsubsection{Time-dependent Green's matrix}
\label{sec:time}
%\subsubsection{Time-dependent Green's matrix}
%\label{sec:time}
%--------------------------------------------%
In this section, we generalize the path-integral expression of the Green's matrix, Eqs.~\eqref{eq:G} and \eqref{eq:cn_stoch}, to the case where domains are used.
To do so, we introduce the Green's matrix associated with each domain
@ -588,10 +589,10 @@ where $\delta_{i,j}$ is a Kronecker delta.
This expression is the path-integral representation of the Green's matrix using only the variables $(\ket{I_k},n_k)$ of the effective dynamics defined over the set of domains.
The standard formula derived above [see Eq.~\eqref{eq:G}] may be considered as the particular case where the domain associated with each state is empty.
In that case, $p=N$ and $n_k=1$ for $0 \le k \le N$ and we are left only with the $p$th component of the sum, that is, $G^{(N)}_{I_0 I_N}
= \prod_{k=0}^{N-1} \mel{ I_k }{ F_{I_k} }{ I_{k+1} } $ where $F=T$.
In this case, $p=N$ and $n_k=1$ (for $0 \le k \le N$) and we are left only with the $p$th component of the sum, that is, $G^{(N)}_{I_0 I_N}
= \prod_{k=0}^{N-1} \mel{ I_k }{ F_{I_k} }{ I_{k+1} }$, with $F=T$.
To express the fundamental equation for $G$ under the form of a probabilistic average, we write the importance-sampled version of the equation
To express the fundamental equation of $G$ under the form of a probabilistic average, we rely on the importance-sampled version of Eq.~\eqref{eq:G}, \ie,
\begin{multline}
\label{eq:Gbart}
\bar{G}^{(N)}_{I_0 I_N}=\bar{G}^{(N),\cD}_{I_0 I_N} +
@ -603,9 +604,9 @@ To express the fundamental equation for $G$ under the form of a probabilistic av
\delta_{\sum_k n_k,N+1} \qty{ \prod_{k=0}^{p-1} \qty[ \frac{\PsiG_{I_{k+1}}}{\PsiG_{I_k}} \mel{ I_k }{ \qty(T_{I_k})^{n_k-1} F_{I_k} }{ I_{k+1} } ] }
\bar{G}^{(n_p-1),\cD}_{I_p I_N}.
\end{multline}
Introducing the weight
Introducing the weights
\be
W_{I_k I_{k+1}} = \frac{\mel{ I_k }{ \qty(T_{I_k})^{n_k-1} F_{I_k} }{ I_{k+1} }}{\mel{ I_k }{ \qty(T^{+}_{I_k})^{n_k-1} F^+_{I_k} }{ I_{k+1} }}
W_{I_k I_{k+1}} = \frac{\mel{ I_k }{ \qty(T_{I_k})^{n_k-1} F_{I_k} }{ I_{k+1} }}{\mel{ I_k }{ \qty(T^{+}_{I_k})^{n_k-1} F^+_{I_k} }{ I_{k+1} }},
\ee
and using the effective transition probability defined in Eq.~\eqref{eq:eq3C}, we get
\be
@ -613,10 +614,9 @@ and using the effective transition probability defined in Eq.~\eqref{eq:eq3C}, w
\bar{G}^{(N)}_{I_0 I_N} = \bar{G}^{(N),\cD}_{I_0 I_N} + \sum_{p=1}^{N}
\expval{
\qty( \prod_{k=0}^{p-1} W_{I_k I_{k+1}} )
\bar{G}^{(n_p-1), {\cal D}}_{I_p I_N}
}
\bar{G}^{(n_p-1), {\cal D}}_{I_p I_N} },
\ee
where the average is defined as
where the average of a given function $F$ is defined as
\begin{multline}
\expval{F}
= \sum_{\ket{I_1} \notin \cD_{I_0}, \ldots , \ket{I_p} \notin \cD_{I_{p-1}}}
@ -624,7 +624,7 @@ where the average is defined as
\delta_{\sum_k n_k,N+1}
\\
\times
\prod_{k=0}^{N-1}\cP_{I_k \to I_{k+1}}(n_k-1) F(I_0,n_0;\ldots.;I_N,n_N)
\prod_{k=0}^{N-1}\cP_{I_k \to I_{k+1}}(n_k-1) F(I_0,n_0;\ldots;I_N,n_N).
\end{multline}
In practice, a schematic DMC algorithm to compute the average is as follows.\\
i) Choose some initial vector $\ket{I_0}$\\
@ -637,161 +637,158 @@ iv) Go to step ii) until some maximum number of paths is reached.\\
At the end of the simulation, an estimate of the average for a few values of $N$ greater than, but close to, $N_\text{max}$ is obtained.
At large $N_\text{max}$, where the convergence of the average as a function of $p$ is reached, such values can be averaged.
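Steps ii) and iii) depend on the particular choice of domains; purely as a sketch of the loop structure, and with \texttt{sample\_pair} (drawing the exit state and trapping time with probability $\cP_{I \to J}(n-1)$) and \texttt{accumulate} (updating the running averages with the corresponding weights) treated as user-supplied routines, the algorithm reads:
\begin{verbatim}
def run_dmc(I0, N_max, n_paths, sample_pair, accumulate):
    # Schematic DMC loop; sample_pair and accumulate stand in for
    # steps ii) and iii), which depend on the domains employed.
    for _ in range(n_paths):                 # step iv): loop over paths
        path, I, total_time = [], I0, 0      # step i): initial state
        while total_time <= N_max:
            J, n = sample_pair(I)            # (J, n) ~ P_{I -> J}(n - 1)
            path.append((I, n))
            total_time += n
            I = J
        accumulate(path)                     # update the averages over p
\end{verbatim}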
%--------------------------------------------%
\subsubsection{Integrating out the trapping times : The Domain Green's Function Monte Carlo approach}
%==============================================%
\subsection{Domain Green's Function Monte Carlo}
\label{sec:energy}
%--------------------------------------------%
%==============================================%
Now, let us show that it is possible to go further by integrating out the trapping times, $n_k$, of the preceding expressions, thus defining a new effective
stochastic dynamics involving now only the exit states. Physically, it means that we are going to compute exactly within the time-evolution of all
stochastic paths trapped within each domain. We shall present two different ways to derive the new dynamics and renormalized probabilistic averages.
The first one, called the pedestrian way, consists in starting from the preceding time-expression for $G$ and make the explicit integration over the
$n_k$. The second, more direct and elegant, is based on the Dyson equation.\\
\\
{\it $\bullet$ The pedestrian way}. Let us define the quantity\\
The aim of this section is to show that it is possible to go further by integrating out the trapping times, $n_k$, of the preceding expressions, thus defining a new effective stochastic dynamics involving only the exit states.
Physically, it means that the time evolution of the stochastic paths trapped within each domain is computed exactly.
We shall present two different ways of deriving the new dynamics and the renormalized probabilistic averages.
The first one, called the pedestrian way, consists in starting from the preceding time-dependent expression of $G$ and performing the explicit integration over the $n_k$'s.
The second one, more direct and elegant, is based on the Dyson equation.
%--------------------------------------------%
\subsubsection{The pedestrian way}
%--------------------------------------------%
Let us define the quantity
\be
G^E_{ij}= \tau \sum_{N=0}^{\infty} \mel{ i }{ T^N }{ j }.
\ee
By summing over $N$ we obtain
\be
G^E_{ij}= \mel{i}{\frac{1}{H-E \Id}}{j}.
G^E_{ij}= \mel{i}{ \qty( H-E \Id )^{-1} }{j}.
\ee
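Indeed, assuming as in the previous sections that $T = \Id - \tau \qty(H - E\Id)$, the geometric series gives
\be
\tau \sum_{N=0}^{\infty} T^N = \tau \qty(\Id - T)^{-1} = \qty(H - E\Id)^{-1},
\ee
the series being convergent for $\tau$ small enough and $E$ below the spectrum of $H$.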
This quantity, which no longer depends on the time-step, is referred to as the energy-dependent Green's matrix. Note that in the continuum this quantity is
essentially the Laplace transform of the time-dependent Green's function. Here, we then use the same denomination. The remarkable property
is that, thanks to the summation over $N$ up to the
infinity the constrained multiple sums appearing in Eq.(\ref{Gt}) can be factorized in terms of a product of unconstrained single sums as follows
This quantity, which no longer depends on the time step, is referred to as the energy-dependent Green's matrix.
Note that, \titou{in a continuous space}, this quantity is essentially the \titou{Laplace transform of the time-dependent Green's function}; hence, we use the same name here.
The remarkable property is that, thanks to the summation over $N$ up to infinity, the constrained multiple sums appearing in Eq.~\eqref{eq:Gt} can be factorized into a product of unconstrained single sums as follows
\be
\sum_{N=1}^\infty \sum_{p=1}^N \sum_{n_0 \ge 1} ...\sum_{n_p \ge 1} \delta_{n_0+...+n_p,N+1}
= \sum_{p=1}^{\infty} \sum_{n_0=1}^{\infty} ...\sum_{n_p=1}^{\infty}.
\sum_{N=1}^\infty \sum_{p=1}^N \sum_{n_0 \ge 1} \cdots \sum_{n_p \ge 1} \delta_{n_0+\cdots+n_p,N+1}
= \sum_{p=1}^{\infty} \sum_{n_0=1}^{\infty} \cdots \sum_{n_p=1}^{\infty}.
\ee
It is then a trivial matter to integrate out exactly the $n_k$ variables, leading to
$$
\langle I_0|\frac{1}{H-E}|I_N\rangle = \langle I_0|P_0\frac{1}{H-E} P_0|I_N\rangle
+ \sum_{p=1}^{\infty}
\sum_{I_1 \notin {\cal D}_0, \hdots , I_p \notin {\cal D}_{p-1}}
$$
\begin{multline}
\label{eq:eqfond}
\mel{ I_0 }{ \qty(H-E \Id)^{-1} }{ I_N }
= \mel{ I_0 }{ P_0 \qty(H-E \Id)^{-1} P_0 }{ I_N}
\\
+ \sum_{p=1}^{\infty} \sum_{I_1 \notin \cD_0, \hdots , I_p \notin \cD_{p-1}}
\qty[ \prod_{k=0}^{p-1} \mel{ I_k }{ P_k \qty( H-E \Id )^{-1} P_k (-H)(1-P_k) }{ I_{k+1} } ]
\mel{ I_p }{ P_p \qty( H-E \Id)^{-1} P_p }{ I_N }.
\end{multline}
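The elementary step behind this integration is a geometric series. Assuming, as in the time-dependent case, that $T_{I} = P_{I} T P_{I}$ and $F_{I} = P_{I} T \qty(1-P_{I}) = -\tau P_{I} H \qty(1-P_{I})$, one has, for each trapping time,
\be
\sum_{n=1}^{\infty} \qty(T_{I})^{n-1} F_{I}
= \qty[ \tau P_{I} \qty(H-E\Id) P_{I} ]^{-1} \qty[ -\tau P_{I} H \qty(1-P_{I}) ]
= P_{I} \qty(H-E\Id)^{-1} P_{I} (-H) \qty(1-P_{I}),
\ee
where the inverses are understood within $\cD_{I}$; the time step $\tau$ cancels out, consistently with the $\tau$-independence of $G^E$.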
As an illustration, Appendix \ref{app:A} reports the exact derivation of this formula in the case of a \titou{two-level} system.
%----------------------------%
\subsubsection{Dyson equation}
%----------------------------%
In fact, there is a more direct way to derive the same equation by resorting to the Dyson equation. Starting from the well-known equality
\be
\Big[ \prod_{k=0}^{p-1} \langle I_k| P_k \frac{1}{H-E} P_k (-H)(1-P_k)|I_{k+1} \rangle \Big]
\langle I_p| P_p \frac{1} {H-E} P_p|I_N\rangle
\label{eqfond}
\qty(H-E\Id)^{-1} = \qty(H_0-E\Id)^{-1} + \qty(H_0-E\Id)^{-1} (H_0-H) \qty(H-E\Id)^{-1},
\ee
\noindent
As an illustration, Appendix \ref{A} reports the exact derivation of this formula in the case of a two-state system.\\
\\
{\it $\bullet$ Dyson equation.} In fact, there is a more direct way to derive the same equation by resorting to the Dyson equation. Starting from the
well-known equality
where $H_0$ is some arbitrary reference Hamiltonian, we have the Dyson equation
\be
\frac{1}{H-E} = \frac{1}{H_0-E}
+ \frac{1}{H_0-E} (H_0-H)\frac{1}{H-E}
\ee
where $H_0$ is some arbitrary reference Hamiltonian, we have
the Dyson equation
$$
G^E_{ij}= G^E_{0,ij} + \sum_{k,l} G^{E}_{0,ik} (H_0-H)_{kl} G^E_{lj}
$$
Let us choose as $H_0$
$$
\langle i |H_0|j\rangle= \langle i|P_i H P_i|j\rangle \;\;\; {\rm for \; all \; i \;j}.
$$
The Dyson equation becomes
$$
\langle i| \frac{1}{H-E}|j\rangle
= \langle i| P_i \frac{1}{H-E} P_i|j\rangle
$$
\be
+ \sum_k \langle i| P_i \frac{1}{H-E} P_i(H_0-H)|k\rangle \langle k|\frac{1}{H-E}|j\rangle
\label{eq:GE}
G^E_{ij}= G^E_{0,ij} + \sum_{kl} G^{E}_{0,ik} (H_0-H)_{kl} G^E_{lj},
\ee
where $G^E_{0,ij} = \mel{ i }{ \qty(H_0-E\Id)^{-1} }{ j }$ is the Green's matrix associated with the reference Hamiltonian $H_0$.
Let us choose $H_0$ such that $\mel{ i }{ H_0 }{ j } = \mel{ i }{ P_i H P_i }{ j }$ for all $i$ and $j$.
Then, the Dyson equation \eqref{eq:GE} becomes
\begin{multline}
\mel{ i }{ \qty(H-E \Id)^{-1} }{ j }
= \mel{ i }{ P_i \qty(H-E \Id)^{-1} P_i }{ j }
\\
+ \sum_k \mel{ i }{ P_i \qty(H-E \Id)^{-1} P_i(H_0-H) }{ k } \mel{ k }{ \qty(H-E \Id)^{-1} }{ j }.
\end{multline}
Now, we have
$$
P_i \frac{1}{H-E} P_i(H_0-H) = P_i \frac{1}{H-E} P_i (P_i H P_i - H)
$$
$$
= P_i \frac{1}{H-E} P_i (-H) (1-P_i)
$$
\be
\begin{split}
P_i \qty(H-E \Id)^{-1} P_i(H_0-H)
& = P_i \qty(H-E \Id)^{-1} P_i (P_i H P_i - H)
\\
& = P_i \qty(H-E \Id)^{-1} P_i (-H) (1-P_i),
\end{split}
\ee
and the Dyson equation may be written in the form
$$
\langle i| \frac{1}{H-E}|j\rangle
= \langle i| P_i \frac{1}{H-E} P_i|j\rangle
$$
$$
+ \sum_{k \notin {\cal D}_i} \langle i| P_i \frac{1}{H-E} P_i (-H)(1-P_i)|k\rangle \langle k|\frac{1}{H-E}|j\rangle
$$
which is identical to Eq.(\ref{eqfond}) when $G$ is expanded iteratively.\\
\begin{multline}
\mel{ i }{ \qty(H-E \Id)^{-1} }{ j }
= \mel{ i }{ P_i \qty(H-E \Id)^{-1} P_i }{ j}
\\
+ \sum_{k \notin \cD_i} \mel{ i }{ P_i \qty(H-E \Id)^{-1} P_i (-H)(1-P_i) }{ k } \mel{k}{\qty(H-E \Id)^{-1}}{j},
\end{multline}
which is identical to Eq.~\eqref{eq:eqfond} when $G$ is expanded iteratively.
Let us now introduce the effective transition probability density
\be
P(I \to J) = \frac{1} {\PsiG(I)} \langle I| P_I \frac{1}{H^+-E^+_L} P_I (-H^+) (1-P_I)|J\rangle \PsiG(J)
P_{I \to J} = \frac{\PsiG_{J}}{\PsiG_{I}} \mel{ I }{ P_I \qty(H^+ - \EL^+ \Id)^{-1} P_I (-H^+) (1-P_I) }{ J },
\ee
and the weight
\be
W^E_{IJ} =
\frac{\langle I|\frac{1}{H-E} P_I (-H)(1-P_I) |J\rangle }{\langle I|\frac{1}{H^+-E^+_L} P_I (-H^+)(1-P_I) |J\rangle}
W^E_{IJ} =
\frac{ \mel{ I }{ P_I \qty(H-E \Id)^{-1} P_I (-H)(1-P_I) }{ J } }
{ \mel{ I }{ P_I \qty(H^+ - \EL^+ \Id)^{-1} P_I (-H^+)(1-P_I) }{ J } }.
\ee
Using Eqs.~\eqref{eq:eq1C}, \eqref{eq:eq3C} and \eqref{eq:relation}, we verify that $P_{I \to J} \ge 0$ and $\sum_J P_{I \to J}=1$.
Using Eqs.~\eqref{eq:eq1C}, \eqref{eq:eq3C} and \eqref{eq:relation}, one can easily verify that $P_{I \to J} \ge 0$ and $\sum_J P_{I \to J}=1$.
Finally, the probabilistic expression reads
$$
\langle I_0| \frac{1}{H-E}|I_N\rangle
= \langle I_0| P_{I_0} \frac{1}{H-E} P_{I_0}|I_N\rangle
$$
\begin{multline}
\label{eq:final_E}
\mel{ I_0 }{ \qty(H-E \Id)^{-1} }{ I_N }
= \mel{ I_0 }{ P_{I_0} \qty(H-E \Id)^{-1} P_{I_0} }{ I_N }
\\
+ \sum_{p=1}^{\infty} \expval{ \qty( \prod_{k=0}^{p-1} W^E_{I_k I_{k+1}} ) \mel{ I_p }{ P_{I_p} \qty(H-E \Id)^{-1} P_{I_p} }{ I_N } }.
\end{multline}
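To make the sampling structure of Eq.~\eqref{eq:final_E} concrete, the following Python sketch estimates $\mel{ I_0 }{ \qty(H-E\Id)^{-1} }{ \Psi }$ for a general final vector $\ket{\Psi}$ and dense matrices, all domain quantities being obtained from linear solves restricted to the current domain. The inputs (in particular $H^+$, the local energies $(\EL^+)_i$, and $\PsiG$) are assumed to be available explicitly; this illustrates the equations above and is not the implementation used for the applications below.
\begin{verbatim}
import numpy as np

def domain_gfmc(H, Hp, e_loc, psi_g, E, domains, I0, psi,
                n_walk=10000, p_max=30, seed=0):
    # Estimate <I0| (H - E Id)^{-1} |psi>: trapped contributions are
    # computed exactly in each domain, exit states are drawn with the
    # transition probability P_{I->J}, and the weights W^E accumulate.
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    A, Ap = H - E * np.eye(n), Hp - np.diag(e_loc)
    total = 0.0
    for _ in range(n_walk):
        I, w = I0, 1.0
        for _ in range(p_max + 1):
            d = np.asarray(domains[I])
            e_I = (d == I).astype(float)         # <I| within D_I
            row  = np.linalg.solve(A[np.ix_(d, d)].T,  e_I)
            rowp = np.linalg.solve(Ap[np.ix_(d, d)].T, e_I)
            total += w * (row @ psi[d])          # <I_p|P(H-E)^{-1}P|psi>
            out = np.setdiff1d(np.arange(n), d)  # exit states J outside D_I
            num = rowp @ (-Hp[np.ix_(d, out)])   # assumed >= 0 (sign of H^+)
            p_IJ = num * psi_g[out] / psi_g[I]   # P_{I->J}; sums to ~1
            j = rng.choice(len(out), p=p_IJ / p_IJ.sum())
            w *= (row @ (-H[np.ix_(d, out)]))[j] / num[j]   # W^E_{I J}
            I = out[j]
    return total / n_walk
\end{verbatim}
The truncation at \texttt{p\_max} plays the role of the finite number of $p$-components retained in the averages discussed below.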
%----------------------------%
\subsubsection{Energy estimator}
%----------------------------%
To calculate the energy, we introduce the following quantity
\be
+ \sum_{p=0}^{\infty} \Bigg \langle \Big( \prod_{k=0}^{p-1} W^E_{I_k I_{k+1}} \Big) \langle I_p| P_{I_p} \frac{1}{H-E} P_{I_p}|I_N\rangle \Bigg \rangle
\label{final_E}
\ee
{\it Energy estimator.} To calculate the energy we introduce the following quantity
\be
{\cal E}(E) = \frac{ \langle I_0|\frac{1}{H-E}|H\PsiT\rangle} {\langle I_0|\frac{1}{H-E}|\PsiT\rangle}.
\cE(E) = \frac{ \mel{ I_0 }{ \qty(H-E \Id)^{-1} }{ H\PsiT } } {\mel{ I_0 }{ \qty(H-E \Id)^{-1} }{ \PsiT } },
\label{calE}
\ee
and search for the solution $E=E_0$ of
\be
{\cal E}(E)= E
\ee
and search for the solution $E=E_0$ of $\cE(E)= E$.
Using the spectral decomposition of $H$ we have
\be
{\cal E}(E) = \frac{ \sum_i \frac{E_i c_i}{E_i-E}}{\sum_i \frac{c_i}{E_i-E}}
\label{calE}
\label{eq:calE}
\cE(E) = \frac{ \sum_i \frac{E_i c_i}{E_i-E}}{\sum_i \frac{c_i}{E_i-E}},
\ee
with
\be
c_i = \langle I_0| \Phi_0\rangle \langle \Phi_0| \PsiT\rangle
\ee
It is easy to check that in the vicinity of $E=E_0$, ${\cal E}(E)$ is a linear function of $E-E_0$.
So, in practice we compute a few values of ${\cal E}(E^{k})$ and fit the data using some function of $E$ close to the linearity
to extrapolate the exact value of $E_0$. Let us describe the 3 functions used here for the fit.\\
with $c_i = \braket{ I_0 }{ \Phi_i } \braket{ \Phi_i }{ \PsiT }$, where $\Phi_i$ denotes the eigenvector of $H$ associated with the eigenvalue $E_i$.
It is easy to check that, in the vicinity of $E = E_0$, $\cE(E)$ is a linear function of $E - E_0$.
Therefore, in practice, we compute $\cE(E^{(k)})$ for a small set of values $E^{(k)}$ and fit the data with a nearly linear function of $E$ in order to extrapolate the exact value of $E_0$.
%Let us describe the 3 functions used here for the fit.
i) Linear fit\\
We write
\be
{\cal E}(E)= a_0 + a_1 E
\ee
and search the best value of $(a_0,a_1)$ by fitting the data.
Then, ${\cal E}(E)=E$ leads to
$$
E_0= \frac{a_0}{1-a_1}
$$
ii) Quadratic fit\\
At the quadratic level we write
$$
{\cal E}(E)= a_0 + a_1 E + a_2 E^2
$$
leading to
$$
E_0 = \frac{1 - a_1 \pm \sqrt{ (a_1 -1)^2 - 4 a_0 a_2}}{2 a_2}
$$
iii) Two-component fit\\
We take the advantage that the exact expression of ${\cal E}(E)$ is known as an infinite series, Eq.(\ref{calE}).
If we limit ourselves to the first two-component, we write
\be
{\cal E}(E) = \frac{ \frac{E_0 c_0}{E_0-E} + \frac{E_1 c_1}{E_1-E}}{\frac{c_0}{E_0-E} + \frac{c_1}{E_1-E} }
\ee
Here, the variational parameters used for the fit are ($c_0,E_0,c_1,E_1)$.\\
%i) Linear fit\\
%We write
%\be
%\cE(E)= a_0 + a_1 E
%\ee
%and search the best value of $(a_0,a_1)$ by fitting the data.
%Then, $\cE(E)=E$ leads to
%$$
%E_0= \frac{a_0}{1-a_1}
%$$
%ii) Quadratic fit\\
%At the quadratic level we write
%$$
%\cE(E)= a_0 + a_1 E + a_2 E^2
%$$
%leading to
%$$
%E_0 = \frac{1 - a_1 \pm \sqrt{ (a_1 -1)^2 - 4 a_0 a_2}}{2 a_2}
%$$
%iii) Two-component fit\\
%We take the advantage that the exact expression of $\cE(E)$ is known as an infinite series, Eq.(\ref{calE}).
%If we limit ourselves to the first two-component, we write
%\be
%\cE(E) = \frac{ \frac{E_0 c_0}{E_0-E} + \frac{E_1 c_1}{E_1-E}}{\frac{c_0}{E_0-E} + \frac{c_1}{E_1-E} }
%\ee
%Here, the variational parameters used for the fit are ($c_0,E_0,c_1,E_1)$.\\
In order to have a precise extrapolation of the energy, it is interesting to compute the ratio
${\cal E}(E)$ for values of $E$ as close as possible to the exact energy. However, in that case
the numerators and denominators computed diverge. This is reflected by the fact that we need to compute
more and more $p$-components with an important increase of statistical fluctuations. So, in practice
a tradoff has to be found between the possible bias in the extrapolation and the amount of simulation time
required.
In order to obtain a precise extrapolation of the energy, it is desirable to compute the ratio $\cE(E)$ for values of $E$ as close as possible to the exact energy.
However, in that case, the computed numerators and denominators diverge.
This is reflected by the fact that more and more $p$-components must be computed, with a significant increase of the statistical fluctuations.
In practice, a trade-off has thus to be found between the possible bias in the extrapolation and the amount of simulation time required.
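As a sketch of this extrapolation step, using the linear and quadratic fits described in the commented-out material above and assuming the pairs $\qty(E^{(k)}, \cE(E^{(k)}))$ have already been computed:
\begin{verbatim}
import numpy as np

def extrapolate_E0_linear(E_vals, cE_vals):
    # Fit cE(E) = a0 + a1*E; the fixed point cE(E) = E then gives
    # E0 = a0 / (1 - a1).
    a1, a0 = np.polyfit(E_vals, cE_vals, 1)
    return a0 / (1.0 - a1)

def extrapolate_E0_quadratic(E_vals, cE_vals):
    # Fit cE(E) = a0 + a1*E + a2*E^2 and solve the fixed-point
    # equation a2*E^2 + (a1 - 1)*E + a0 = 0; of the two roots, the
    # physical one is the root closest to the fitting window.
    a2, a1, a0 = np.polyfit(E_vals, cE_vals, 2)
    return np.roots([a2, a1 - 1.0, a0])
\end{verbatim}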
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Numerical application to the Hubbard model}
@ -805,17 +802,17 @@ Let us consider the one-dimensional Hubbard Hamiltonian for a chain of $N$ sites
\ee
where $\langle i j\rangle$ denotes the summation over pairs of neighboring sites,
$\hat{a}^+_{i\sigma}$ ($\hat{a}_{i\sigma}$) is the fermionic creation (annihilation) operator of
an electron of spin $\sigma$ ($=\uparrow$ or $\downarrow$) at site $i$, $\hat{n}_{i\sigma} = \hat{a}^+_{i\sigma} \hat{a}_{i\sigma}$ the
a spin-$\sigma$ electron (with $\sigma$ = $\uparrow$ or $\downarrow$) on site $i$, $\hat{n}_{i\sigma} = \hat{a}^+_{i\sigma} \hat{a}_{i\sigma}$ the
number operator, $t$ the hopping amplitude and $U$ the on-site Coulomb repulsion.
We consider a chain with an even number of sites and open boundary conditions (OBC)
at half-filling, that is, $N_{\uparrow}=N_{\downarrow}=\frac{N}{2}$.
We consider a chain with an even number of sites and open boundary conditions
at half-filling, that is, $N_{\uparrow}=N_{\downarrow}=N/2$.
In the site-representation, a general vector of the Hilbert space will be written as
\be
\ket{n} = \ket{n_{1 \uparrow},\ldots,n_{N \uparrow},n_{1 \downarrow},\ldots,n_{N \downarrow}},
\ee
where $n_{i \sigma}=0,1$ is the number of electrons of spin $\sigma$ at site $i$.
For the 1D Hubbard model and OBC the components of the ground-state vector have the same sign (say, $c_i \ge 0$).
For the one-dimensional Hubbard model with open boundary conditions, the components of the ground-state vector have the same sign (say, $c_i \ge 0$).
It is then possible to identify the guiding and trial vectors, that is, $|c^+\rangle=|c_T\rangle$.
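For illustration purposes (and not as the implementation used here), a compact way of constructing this Hamiltonian in the occupation-number basis above, encoding each spin sector as a bit mask, is the following sketch:
\begin{verbatim}
import numpy as np
from itertools import combinations

def hubbard_chain(N, t, U):
    # Dense Hamiltonian of the 1D open-chain Hubbard model at
    # half-filling (N_up = N_down = N/2); bit i of each mask gives
    # the occupation n_{i sigma} of site i.
    occs = [sum(1 << i for i in c) for c in combinations(range(N), N // 2)]
    basis = [(u, d) for u in occs for d in occs]
    index = {s: k for k, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for k, (u, d) in enumerate(basis):
        H[k, k] = U * bin(u & d).count("1")   # doubly occupied sites
        for spin, s in ((0, u), (1, d)):
            for i in range(N - 1):            # nearest-neighbor hopping
                if (s >> i & 1) != (s >> (i + 1) & 1):
                    s2 = s ^ (0b11 << i)      # move one electron
                    pair = (s2, d) if spin == 0 else (u, s2)
                    # no fermionic sign arises for nearest-neighbor
                    # hops with open boundary conditions
                    H[index[pair], k] = -t
    return H, basis
\end{verbatim}
For $N=4$ at half-filling this gives a $\binom{4}{2}^{2} = 36$-dimensional Hilbert space with all off-diagonal elements equal to $-t$, consistent with the sign structure invoked above.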
As the trial wave function, we shall employ a generalization of the Gutzwiller wave function \cite{Gutzwiller_1963} written in the form
\be
@ -915,7 +912,7 @@ we shall restrict ourselves to the case of the Green's Function Monte Carlo appr
Following Eqs.~\eqref{eq:final_E} and \eqref{calE}, the practical formula used for computing the QMC energy is written as
\be
{\cal E}_{QMC}(E,p_{ex},p_{max})= \frac{H_0 +...+ H_{p_{ex}} + \sum_{p=p_{ex}+1}^{p_{max}} H^{QMC}_p }
\cE_{QMC}(E,p_{ex},p_{max})= \frac{H_0 +\cdots+ H_{p_{ex}} + \sum_{p=p_{ex}+1}^{p_{max}} H^{QMC}_p }
{S_0 +\cdots+ S_{p_{ex}} + \sum_{p=p_{ex}+1}^{p_{max}} S^{QMC}_p }
\label{eq:calE_QMC}
\ee
@ -946,9 +943,9 @@ a slower convergence is observed, as expected from the divergence of the matrix
Green's matrix at $E=E_0$, where the expansion does not converge at all. Note the oscillations of the curves as a function of $p$, due to a
parity effect specific to this system. In practice, this is not much of a problem since
a smoothly convergent behavior is nevertheless observed for the even- and odd-parity curves.
The ratio, ${\cal E}_{QMC}(E,p_{ex}=1,p_{max})$ as a function of $E$ is presented in figure \ref{fig2}. Here, $p_{max}$ is taken sufficiently large
The ratio $\cE_{QMC}(E,p_{ex}=1,p_{max})$ as a function of $E$ is presented in Fig.~\ref{fig2}. Here, $p_{max}$ is taken sufficiently large
so that convergence at large $p$ is reached. The values of $E$ are $-0.780$, $-0.790$, $-0.785$, $-0.780$, and $-0.775$. For smaller $E$ the curve is extrapolated using
the two-component expression. The estimate of the energy obtained from ${\cal E}(E)=E$ is $-0.76807(5)$ in full agreement with the exact value of $-0.768068...$.
the two-component expression. The estimate of the energy obtained from $\cE(E)=E$ is $-0.76807(5)$, in full agreement with the exact value of $-0.768068\ldots$.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{fig1}
@ -981,7 +978,7 @@ the average trapping time becomes infinite. Let us emphasize that the rate of co
For example, the domain ${\cal D}(0,3) \cup {\cal D}(1,0)$ has an average trapping time for the N\'eel state of 6.2, while
the domain ${\cal D}(0,3) \cup {\cal D}(1,1)$, which has almost the same number of states (28 states), has an average trapping time about six times longer. Finally, the last column gives the energy obtained for
$E=-1$. The energy is expected to be independent of the domain and to converge to a common value, which is indeed the case here.
The exact value, ${\cal E}(E=-1)=-0.75272390...$, can be found at the last row of the Table for the case of a domain corresponding to the full space.
The exact value, $\cE(E=-1)=-0.75272390\ldots$, can be found in the last row of the table for the case of a domain corresponding to the full space.
In sharp contrast, the statistical error depends strongly on the type of domains used. As expected, the largest error of $3 \times 10^{-5}$ is obtained in the case of
a single-state domain for all states. The smallest statistical error is obtained for the ``best'' domain having the largest average
trapping time. Using this domain leads to a reduction of the statistical error by about three orders of magnitude, nicely illustrating the
@ -992,11 +989,11 @@ critical importance of the domains employed.
\caption{$N=4$, $U=12$, $E=-1$, $\alpha=1.292$, $\beta=0.552$, $p_{ex}=4$. Simulation with 20 independent blocks and $10^5$ stochastic paths
starting from the N\'eel state. $\bar{t}_{I_0}$
is the average trapping time for the
N\'eel state. $p_{\rm conv}$ is a measure of the convergence of ${\cal E}_{QMC}(p)$ as a function of $p$, see text.}
N\'eel state. $p_{\rm conv}$ is a measure of the convergence of $\cE_{QMC}(p)$ as a function of $p$, see text.}
\label{tab1}
\begin{ruledtabular}
\begin{tabular}{lcccl}
Domain & Size & $\bar{t}_{I_0}$ & $p_{\rm conv}$ & $\;\;\;\;\;\;{\cal E}_{QMC}$ \\
Domain & Size & $\bar{t}_{I_0}$ & $p_{\rm conv}$ & $\;\;\;\;\;\;\cE_{QMC}$ \\
\hline
Single & 1 & 0.026 & 88 &$\;\;\;\;$-0.75276(3)\\
${\cal D}(0,3)$ & 2 & 2.1 & 110 &$\;\;\;\;$-0.75276(3)\\
@ -1028,7 +1025,7 @@ case where eight components for $H_p$ and $S_p$ are exactly computed.
For the best domain, the impact is much more important, with a huge reduction of about three orders of
magnitude in the statistical error. Table \ref{tab3} reports
the energies converged as a function of $p$ with their statistical error on the last digit for $E=
-0.8, -0.795, -0.79, -0.785$, and $-0.78$. The values are displayed on Fig.\ref{fig2}. As seen on the figure the behavior of ${\cal E}$ as a function of
-0.8, -0.795, -0.79, -0.785$, and $-0.78$. The values are displayed in Fig.~\ref{fig2}. As seen in the figure, the behavior of $\cE$ as a function of
$E$ is very close to linear. The extrapolated values obtained from the five values of the energy with the three fitting functions are reported.
Using the linear fitting function
leads to an energy of $-0.7680282(5)$, to be compared with the exact value of $-0.768068\ldots$. A small bias of about $4 \times 10^{-5}$ is observed. This bias vanishes within the
@ -1196,7 +1193,6 @@ $$
\be
\qty[ \prod_{k=0}^{p-1} \mel{ I_k }{ P_k \qty(H-E\Id)^{-1} P_k (-H)(1-P_k) }{ I_{k+1} } ]
\mel{ I_p }{ P_p \qty(H-E\Id)^{-1} P_p }{ \Psi }.
\label{eqfond}
\ee
To treat simultaneously the two possible final states, $\ket{I_N} = \ket{1}$ or $\ket{2}$,
the equation has been slightly generalized to an arbitrary final-state vector written as