saving work

This commit is contained in:
Pierre-Francois Loos 2022-09-21 16:26:52 +02:00
parent 2b76d91354
commit e6f3649482


0, & \text{otherwise}.
\end{cases}
\ee
Physically, $F$ may be seen as a flux operator through the boundary of $\cD_{I}$.
\titou{Now}, the probability of being trapped $n$ times in $\cD_{I}$ is given by
\be
\label{eq:PiN}
P_{I}(n) = \frac{1}{\PsiG_{I}} \mel{ I }{ \qty(T^+_{I})^{n-1} F^+_{I} }{ \PsiG }.
\ee
It follows that
\begin{multline}
\label{eq:Gt}
G^{(N)}_{I_0 I_N}= G^{(N),\cD}_{I_0 I_N} +
\sum_{p=1}^{N}
\sum_{\ket{I_1} \notin \cD_{I_0}, \ldots , \ket{I_p} \notin \cD_{I_{p-1}} }
\sum_{n_0 \ge 1} \cdots \sum_{n_p \ge 1}
\delta_{n_0+\cdots+n_p,N+1}
\\
\times
\qty[ \prod_{k=0}^{p-1} \mel{ I_k }{ \qty(T_{I_k})^{n_k-1} F_{I_k} }{ I_{k+1} } ]
G^{(n_p-1),\cD}_{I_p I_N},
\end{multline}
where $\delta_{i,j}$ is a Kronecker delta.
To express the fundamental equation of $G$ under the form of a probabilistic average, we rely on the importance-sampled version of Eq.~\eqref{eq:G}, \ie,
\begin{multline}
\label{eq:Gbart}
\bar{G}^{(N)}_{I_0 I_N}=\bar{G}^{(N),\cD}_{I_0 I_N} +
\sum_{p=1}^{N}
\sum_{\ket{I_1} \notin \cD_{I_0}, \ldots , \ket{I_p} \notin \cD_{I_{p-1}}}
\end{multline}
Using the effective transition probability defined in Eq.~\eqref{eq:eq3C}, we obtain
\be
\bar{G}^{(N)}_{I_0 I_N} = \bar{G}^{(N),\cD}_{I_0 I_N} + \sum_{p=1}^{N}
\expval{
\qty( \prod_{k=0}^{p-1} W_{I_k I_{k+1}} )
\bar{G}^{(n_p-1), \cD}_{I_p I_N} },
\ee
where, in this context, the average of a given function $F$ is taken over the stochastic paths $(I_0,n_0;\ldots;I_p,n_p)$ generated as described below.
In practice, a schematic DMC algorithm to compute the average is as follows.\\
i) Initialize the path at the state $\ket{I_0}$ and set $k=0$.\\
ii) Generate a stochastic path by running over $k$ (starting at $k=0$) as follows.\\
$\;\;\;\bullet$ Draw $n_k$ using the probability $P_{I_k}(n)$ [see Eq.~\eqref{eq:PiN}]\\
$\;\;\;\bullet$ Draw the exit state, $\ket{I_{k+1}}$, using the conditional probability $$\frac{\cP_{I_k \to I_{k+1}}(n_k)}{P_{I_k}(n_k)}$$\\
iii) Terminate the path when $\sum_k n_k=N$ is greater than some target value $N_\text{max}$ and compute $F(I_0,n_0;\ldots;I_N,n_N)$\\
iv) Go to step ii) until some maximum number of paths is reached.\\
\\
At the end of the simulation, an estimate of the average for a few values of $N$ greater than but close to $N_\text{max}$ is obtained.
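The sampling loop in steps i)-iv) can be sketched as follows.
This is a toy sketch only: the trapping time is drawn from a geometric distribution with continuation probability \texttt{p\_cont} instead of the true $P_{I_k}(n)$, the exit state is drawn uniformly instead of from the conditional probability, and the evaluation of $F$ is omitted.

```python
import random

def sample_path(n_states, n_max, p_cont, rng):
    """Generate one stochastic path following steps i)-iii).

    Toy stand-ins: the trapping time n_k is geometric with continuation
    probability p_cont (instead of the true P_I(n)), and the exit state
    is drawn uniformly (instead of the conditional exit probability)."""
    state = 0                             # step i): start from |I_0>
    path, total = [], 0
    while total < n_max:                  # step iii): stop once sum_k n_k >= N_max
        n_k = 1                           # step ii): draw the trapping time n_k...
        while rng.random() < p_cont:
            n_k += 1
        path.append((state, n_k))
        total += n_k
        state = rng.randrange(n_states)   # ...then draw the exit state |I_{k+1}>
    return path, total

rng = random.Random(42)
paths = [sample_path(6, 20, 0.5, rng) for _ in range(1000)]   # step iv)
```

In the real algorithm, $F(I_0,n_0;\ldots;I_N,n_N)$ would be accumulated for each path before averaging over the ensemble.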
To calculate the energy, we introduce the following estimator
\be
\cE(E) = \frac{ \mel{ I_0 }{ \qty(H-E \Id)^{-1} }{ H\PsiT } } {\mel{ I_0 }{ \qty(H-E \Id)^{-1} }{ \PsiT } },
\ee
and search for the solution of the non-linear equation $\cE(E)= E$.
Using the spectral decomposition of $H$, we have
\be
\label{eq:calE}
\cE(E) = \frac{ \sum_i \frac{E_i c_i}{E_i - E} }{ \sum_i \frac{c_i}{E_i - E} },
\ee
where $c_i = \braket{ I_0 }{ \Phi_i } \braket{ \Phi_i}{ \PsiT }$ and $\Phi_i$ are eigenstates of $H$, \ie, $H \ket{\Phi_i} = E_i \ket{\Phi_i}$.
It is easy to check that, in the vicinity of the exact energy $E_0$, $\cE(E)$ is a linear function of $E - E_0$.
Therefore, in practice, we compute the value of $\cE(E)$ for several values of $E$, and fit these data using a linear, quadratic, or more complicated function of $E$ in order to obtain, via extrapolation, an estimate of $E_0$.
In order to have a precise extrapolation of the energy, it is best to compute $\cE(E)$ for values of $E$ as close as possible to the exact energy.
However, as $E \to E_0$, both the numerators and denominators of Eq.~\eqref{eq:calE} diverge.
\titou{This is reflected by the fact that one needs to compute more and more $p$-components with an important increase of statistical fluctuations.}
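The fit-and-extrapolate procedure can be illustrated on a two-level toy version of the spectral expression of $\cE(E)$; the values of $c_i$ and $E_i$ below are illustrative stand-ins, not data from the simulations.

```python
# Two-level toy model of the spectral expression of cE(E):
#   cE(E) = sum_i c_i E_i/(E_i - E) / sum_i c_i/(E_i - E).
# The (c_i, E_i) pairs below are illustrative, not from the paper.
levels = [(0.7, -0.768068), (0.3, 0.5)]     # (c_i, E_i); here E_0 = -0.768068

def cal_e(E):
    num = sum(c * e / (e - E) for c, e in levels)
    den = sum(c / (e - E) for c, e in levels)
    return num / den

# Evaluate cE at several reference energies just below E_0...
xs = [-0.800, -0.795, -0.790, -0.785, -0.780]
ys = [cal_e(E) for E in xs]

# ...fit a straight line cE(E) ~ a + b*E by least squares...
xb, yb = sum(xs) / len(xs), sum(ys) / len(ys)
b = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / sum((x - xb) ** 2 for x in xs)
a = yb - b * xb

# ...and solve the fixed-point condition a + b*E = E to extrapolate E_0.
E_extrapolated = a / (1.0 - b)
```

With this toy input the linearly extrapolated energy lands within about $10^{-4}$ of $E_0$, illustrating why evaluating $\cE(E)$ close to (but not at) the exact energy gives an accurate extrapolation.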
\label{sec:numerical}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%======================================
\subsection{Hamiltonian and trial wave function}
%======================================
Let us consider the one-dimensional Hubbard Hamiltonian for a chain of $N$ sites
\be
H= -t \sum_{\expval{ i j } \sigma} \hat{a}^+_{i\sigma} \hat{a}_{j\sigma}
+ U \sum_i \hat{n}_{i\uparrow} \hat{n}_{i\downarrow},
\ee
where $\hat{n}_{i\sigma} = \hat{a}^+_{i\sigma} \hat{a}_{i\sigma}$.
The parameters $\alpha$ and $\beta$ are optimized by minimizing the variational energy
E_\text{T}(\alpha,\beta) = \frac{\mel{ c_\text{T} }{H }{c_\text{T}}} {\braket{ c_\text{T} }{ c_\text{T} }}.
\ee
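As a cross-check of this setup, here is a minimal sketch that builds the Hamiltonian matrix in the $(2\uparrow,2\downarrow)$ sector of a 4-site chain.
It assumes open boundary conditions and $t=1$; for nearest-neighbor hops no same-spin orbital lies between the two sites, so the fermionic sign is always $+1$ and no Jordan-Wigner string bookkeeping is needed.

```python
from itertools import combinations

N, t, U = 4, 1.0, 12.0      # chain length, hopping, on-site repulsion

def masks(n_elec):
    """All placements of n_elec same-spin electrons on N sites, as bit masks."""
    return [sum(1 << i for i in occ) for occ in combinations(range(N), n_elec)]

basis = [(u, d) for u in masks(2) for d in masks(2)]    # 2 up + 2 down electrons
index = {s: k for k, s in enumerate(basis)}

dim = len(basis)
H = [[0.0] * dim for _ in range(dim)]
for k, (u, d) in enumerate(basis):
    H[k][k] = U * bin(u & d).count("1")   # U times the number of double occupations
    for i in range(N - 1):                # open chain: bonds (i, i+1)
        for a, b in ((i, i + 1), (i + 1, i)):
            if u >> a & 1 and not u >> b & 1:        # spin-up hop a -> b
                H[k][index[(u ^ (1 << a) ^ (1 << b), d)]] -= t
            if d >> a & 1 and not d >> b & 1:        # spin-down hop a -> b
                H[k][index[(u, d ^ (1 << a) ^ (1 << b))]] -= t
```

The basis has ${\binom{4}{2}}^2 = 36$ states, and the resulting matrix is Hermitian by construction; the Néel state carries no double occupation, so its diagonal element vanishes.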
%======================================
\subsection{Domains}
%======================================
As discussed above, the efficiency of the method depends on the choice of states forming each domain.
As a general guiding principle, it is advantageous to build domains associated with a large average trapping time in order to integrate out the most important part of the Green's matrix.
To define $\cD$, let us introduce the following set of states
\be
\titou{\cS_{ij} = \qty{ \ket{n} : n_D(n)=i \land n_A(n)=j }}.
\ee
which means that $\cD$ contains the set of states having up to $n_D^\text{max}$ double occupations and, for each state with a number of double occupations equal to $m$, a number of nearest-neighbor antiparallel pairs between $n_A^\text{min}(m)$ and $n_A^\text{max}(m)$.
Here, $n_A^\text{max}(m)$ will not be varied and is taken to be the maximum possible value for a given $m$, \ie, $n_A^\text{max}(m)= \max(N-1-2m,0)$.
Using these definitions, the main domain is taken as the union of some elementary domains
\be
\cD = \bigcup_{n_D=0}^{n_D^\text{max}}\cD(n_D,n_A^\text{min}(n_D)),
\ee
where the elementary domain is defined as
\be
\cD(n_D,n_A^\text{min}(n_D))=\bigcup_{ j = n_A^\text{min}(n_D)}^{n_A^\text{max}(n_D)} \cS_{n_D j}.
\ee
The two quantities defining the main domain are thus $n_D^\text{max}$ and $n_A^\text{min}(m)$.
To give an illustrative example, let us consider the 4-site case.
There are 6 possible elementary domains
For the three last cases, the dots indicate the remaining states obtained by permuting the position of the pairs.
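The classification of states into the classes $\cS_{ij}$ and the assembly of elementary domains can be sketched as follows for the 4-site case.
The counting of nearest-neighbor antiparallel pairs (two adjacent, singly occupied sites with opposite spins) is our assumed reading of the definition; it reproduces the domain sizes quoted in Tables \ref{tab1} and \ref{tab5} (2 states for $\cD(0,3)$, 30 for $\cD(0,1) \cup \cD(1,0)$, and 36 for the full space).

```python
from collections import defaultdict
from itertools import combinations

N = 4

def masks(n_elec):
    return [sum(1 << i for i in occ) for occ in combinations(range(N), n_elec)]

basis = [(u, d) for u in masks(2) for d in masks(2)]

def n_double(u, d):
    """Number of doubly occupied sites, n_D."""
    return bin(u & d).count("1")

def n_antiparallel(u, d):
    """Number of nearest-neighbor antiparallel pairs, n_A (assumed definition:
    adjacent sites that are both singly occupied, with opposite spins)."""
    pairs = 0
    for i in range(N - 1):
        site_i = (u >> i & 1, d >> i & 1)
        site_j = (u >> (i + 1) & 1, d >> (i + 1) & 1)
        if {site_i, site_j} == {(1, 0), (0, 1)}:
            pairs += 1
    return pairs

S = defaultdict(set)                     # the classes S_ij
for state in basis:
    S[(n_double(*state), n_antiparallel(*state))].add(state)

def elementary_domain(n_d, n_a_min):
    """D(n_D, n_A^min): union of S_{n_D, j} for n_A^min <= j <= n_A^max."""
    n_a_max = max(N - 1 - 2 * n_d, 0)
    return set().union(*(S[(n_d, j)] for j in range(n_a_min, n_a_max + 1)))
```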
%======================================
\subsection{DMC simulations}
%======================================
Let us now present our DMC calculations for the Hubbard model.
In what follows, we shall restrict ourselves to the case of the Green's function Monte Carlo approach where trapping times are integrated out exactly.
Following Eqs.~\eqref{eq:final_E} and \eqref{eq:calE}, the practical formula used for computing the DMC energy is written as
\be
\cE^\text{DMC}(E,p_{ex},p_{max})= \frac{H_0 + \cdots + H_{p_{ex}} + \sum_{p=p_{ex}+1}^{p_{max}} H^\text{DMC}_p }
{S_0 + \cdots + S_{p_{ex}} + \sum_{p=p_{ex}+1}^{p_{max}} S^\text{DMC}_p },
\ee
where $p_{ex}+1$ is the number of pairs $(H_p, S_p)$ computed analytically.
For $p_{ex} < p \le p_{max}$, the Monte Carlo estimates are written as
\be
H^\text{QMC}_p= \expval{ \qty( \prod_{k=0}^{p-1} W^E_{I_k I_{k+1}} ) \mel{ I_p }{ P_{I_p} \frac{1}{H-E} P_{I_p} }{ H\PsiT } }
\ee
and
\be
S^\text{QMC}_p= \expval{ \qty(\prod_{k=0}^{p-1} W^E_{I_k I_{k+1}} ) \mel{ I_p }{ P_{I_p} \frac{1}{H-E} P_{I_p} }{ \PsiT } }.
\ee
\begin{align}
H^\text{DMC}_p & = \expval{ \qty( \prod_{k=0}^{p-1} W^E_{I_k I_{k+1}} ) \mel{ I_p }{ P_{I_p} \frac{1}{H-E} P_{I_p} }{ H\PsiT } }
\\
S^\text{DMC}_p & = \expval{ \qty(\prod_{k=0}^{p-1} W^E_{I_k I_{k+1}} ) \mel{ I_p }{ P_{I_p} \frac{1}{H-E} P_{I_p} }{ \PsiT } }.
\end{align}
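The assembly of the estimator is a simple ratio of partial sums.
In the sketch below both sets of components are deterministic toy numbers (purely illustrative), which also makes explicit that the split point $p_{ex}$ only decides which terms would be computed analytically; it does not change the value of the ratio, only the statistical noise in a real simulation.

```python
def energy_estimator(H, S, p_ex, p_max):
    """Ratio of partial sums of the components (H_p, S_p).  The first
    p_ex + 1 terms stand for the analytically computed pairs, the rest
    for the Monte Carlo estimates; here all are deterministic toy numbers."""
    num = sum(H[: p_ex + 1]) + sum(H[p_ex + 1 : p_max + 1])
    den = sum(S[: p_ex + 1]) + sum(S[p_ex + 1 : p_max + 1])
    return num / den

# Toy geometrically decaying components (illustrative, not from the paper).
H_p = [2.0 * 0.5 ** p for p in range(41)]
S_p = [3.0 * 0.6 ** p for p in range(41)]
estimate = energy_estimator(H_p, S_p, p_ex=4, p_max=40)
```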
Let us begin with a small chain of 4 sites with $U=12$.
From now on, we shall take $t=1$.
The size of the linear space is ${\binom{4}{2}}^2 = 36$ and the ground-state energy obtained by exact diagonalization is $E_0=-0.768068\ldots$.
The two variational parameters of the trial vector have been optimized and fixed at $\alpha=1.292$ and $\beta=0.552$, with a variational energy of $E_\text{T}=-0.495361\ldots$.
In what follows, $\ket{I_0}$ is systematically chosen as one of the two N\'eel states, \eg, $\ket{I_0} = \ket{\uparrow,\downarrow, \uparrow,\ldots}$.
Figure \ref{fig1} shows the convergence of $H_p$ as a function of $p$ for different values of the reference energy $E$.
We consider the simplest case where a single-state domain is associated with each state.
Five different values of $E$ have been chosen, namely $E=-1.6$, $-1.2$, $-1.0$, $-0.9$, and $-0.8$.
Only $H_0$ is computed analytically ($p_{ex}=0$).
At the scale of the figure, error bars are too small to be seen.
When $E$ is far from the exact value of $E_0=-0.768068\ldots$, the convergence is very rapid and only a few terms of the $p$-expansion are necessary.
In contrast, when $E$ approaches the exact energy, a slower convergence is observed, as expected from the divergence of the matrix elements of the Green's matrix at $E=E_0$ where the expansion does not converge at all.
Note the oscillations of the curves as a function of $p$ due to a parity effect specific to this system.
In practice, it is not too much of a problem since a smoothly convergent behavior is nevertheless observed for the even- and odd-parity curves.
The ratio $\cE^\text{DMC}(E,p_{ex}=1,p_{max})$ as a function of $E$ is presented in Fig.~\ref{fig2}.
Here, $p_{max}$ is taken sufficiently large so that the convergence at large $p$ is reached.
The values of $E$ are $-0.800$, $-0.795$, $-0.790$, $-0.785$, and $-0.780$.
For small $E$, the curve is extrapolated using the so-called two-component expression:
\be
\label{eq:2comp}
\cE(E) = \frac{ \frac{\epsilon_0 c_0}{\epsilon_0-E} + \frac{\epsilon_1 c_1}{\epsilon_1-E}}{\frac{c_0}{\epsilon_0-E} + \frac{c_1}{\epsilon_1-E} },
\ee
which considers only the first two terms of the exact expression of $\cE(E)$ [see Eq.~\eqref{eq:calE}].
Here, the fitting parameters that need to be determined are $c_0$, $\epsilon_0$, $c_1$, and $\epsilon_1$.
The estimate of the energy obtained from $\cE(E)=E$ is $-0.76807(5)$ in full agreement with the exact value of $-0.768068\ldots$.
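One simple way to solve $\cE(E)=E$ with the two-component form is fixed-point iteration, which contracts near $\epsilon_0$ whenever $c_1 < c_0$.
The parameter values below are illustrative stand-ins for fitted ones, not actual fit results.

```python
def cal_e_2c(E, c0, e0, c1, e1):
    """Two-component expression for cE(E)."""
    num = e0 * c0 / (e0 - E) + e1 * c1 / (e1 - E)
    den = c0 / (e0 - E) + c1 / (e1 - E)
    return num / den

# Illustrative stand-ins for fitted parameters (not actual fit results).
c0, e0, c1, e1 = 0.8, -0.768068, 0.2, 0.4

E = -0.80                                # start from a reference energy below e0
for _ in range(200):                     # fixed-point iteration E <- cE(E)
    E_next = cal_e_2c(E, c0, e0, c1, e1)
    converged = abs(E_next - E) < 1e-12
    E = E_next
    if converged:
        break
```

Near $\epsilon_0$ the iteration multiplier is $-c_1/c_0$, so the successive estimates alternate around, and converge geometrically toward, $\epsilon_0$.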
%%% FIG 1 %%%
\begin{figure}
\includegraphics[width=\columnwidth]{fig1}
\caption{One-dimensional Hubbard model with $N=4$ and $U=12$.
$H_p$ as a function of $p$ for $E=-1.6$, $-1.2$, $-1.0$, $-0.9$, and $-0.8$.
$H_0$ is computed analytically and $H_p$ ($p > 0$) is computed stochastically.
Error bars are smaller than the symbol size.}
\label{fig1}
\end{figure}
%%% FIG 2 %%%
\begin{figure}
\includegraphics[width=\columnwidth]{fig2}
\caption{One-dimensional Hubbard model with $N=4$ and $U=12$.
$\cE(E)$ as a function of $E$.
The horizontal and vertical lines are at $\cE(E_0)=E_0$ and $E=E_0$, respectively.
$E_0 = -0.768068\ldots$ is the exact energy.
The dotted line is the two-component extrapolation defined in Eq.~\eqref{eq:2comp}.
Error bars are smaller than the symbol size.}
\label{fig2}
\end{figure}
Table \ref{tab1} illustrates the dependence of the Monte Carlo results upon the type of domains used.
The reference energy is $E=-1$.
The first column indicates the various domains consisting of the union of some elementary domains as explained above.
The first line of the table gives the results when using a minimal single-state domain for all states, and the last one for the maximal domain containing the full linear space.
The size of the various domains is given in the second column, the average trapping time for the state $\ket{I_0}$ in the third column, and an estimate of the speed of convergence of the $p$-expansion for the energy in the fourth column.
To quantify the rate of convergence, we report the quantity $p_\text{conv}$, defined as the smallest value of $p$ for which the energy is converged to six decimal places.
The smaller $p_\text{conv}$, the better the convergence.
Although this is a rough estimate, it is sufficient here for our purpose.
As clearly seen, the speed of convergence is directly related to the magnitude of the average trapping time.
The longer the stochastic trajectories remain trapped within the domain, the better the convergence.
Of course, when the domain is chosen to be the full space, the average trapping time becomes infinite.
Let us emphasize that the rate of convergence has no reason to be related to the size of the domain.
For example, the domain $\cD(0,3) \cup \cD(1,0)$ has a trapping time for the N\'eel state of $6.2$, while the domain $\cD(0,3) \cup \cD(1,1)$, which has almost the same number of states (28), has an average trapping time about six times longer.
Finally, the last column gives the energy obtained for $E=-1$.
The energy is expected to be independent of the domain and to converge to a common value, which is indeed the case here.
The exact value, $\cE(E=-1)=-0.75272390\ldots$, can be found at the last row of Table \ref{tab1} for the case of a domain corresponding to the full space.
In sharp contrast, the statistical error depends strongly on the type of domains used.
As expected, the largest error of $3 \times 10^{-5}$ is obtained in the case of a single-state domain for all states.
The smallest statistical error is obtained for the ``best'' domain having the largest average trapping time.
Using this domain leads to a reduction in the statistical error as large as about three orders of magnitude, nicely illustrating the critical importance of the domains employed.
%%% TABLE I %%%
\begin{table}
\centering
\caption{One-dimensional Hubbard model with $N=4$, $U=12$, $E=-1$, $\alpha=1.292$, $\beta=0.552$, and $p_{ex}=4$.
The simulation is performed with 20 independent blocks and $10^5$ stochastic paths starting from the N\'eel state.
$\bar{t}_{I_0}$ is the average trapping time for the N\'eel state.
$p_\text{conv}$ is a measure of the convergence of $\cE^\text{DMC}(p)$ as a function of $p$.
See text for more details.}
\label{tab1}
\begin{ruledtabular}
\begin{tabular}{lrrrl}
Domain & Size & $\bar{t}_{I_0}$ & $p_\text{conv}$ & \mcc{$\cE^\text{DMC}$} \\
\hline
Single & 1 & 0.026 & 88 &$-0.75276(3)$\\
$\cD(0,3)$ & 2 & 2.1 & 110 &$-0.75276(3)$\\
$\cD(0,1)\cup\cD(1,0)\cup\cD(2,0)$ & 36 & $\infty$ & 1 & $-0.75272390$\\
\end{ruledtabular}
\end{table}
As a general rule, it is always good to avoid the Monte Carlo calculation of a quantity which is computable analytically.
Here, we apply this idea to the case of the energy, Eq.~\eqref{eq:calE}, where the first $p$-components can be evaluated exactly up to some maximal value of $p$, $p_{ex}$.
Table \ref{tab2} reports the statistical fluctuations of the energy for the simulation of Table \ref{tab1}, both for a single-state main domain and for the domain having the largest average trapping time, namely $\cD(0,1) \cup \cD(1,0)$.
The results show that it is indeed worthwhile to compute exactly as many components as possible.
For the single-state domain, a factor-of-two reduction of the statistical error is obtained when passing from no analytical computation ($p_{ex}=0$) to the case where eight components of $H_p$ and $S_p$ are computed exactly.
For the best domain, the impact is much more important, with a reduction of about three orders of magnitude in the statistical error.
Table \ref{tab3} reports the energies converged as a function of $p$, with their statistical error on the last digit, for $E=-0.8$, $-0.795$, $-0.79$, $-0.785$, and $-0.78$.
The values are displayed in Fig.~\ref{fig2}.
As seen in the figure, the behavior of $\cE$ as a function of $E$ is very close to linear.
The extrapolated values obtained from these five energies with the three fitting functions are also reported.
The linear fit leads to an energy of $-0.7680282(5)$, to be compared with the exact value of $-0.768068\ldots$.
A small bias of about $4 \times 10^{-5}$ is observed.
This bias vanishes, within the statistical error, when using the quadratic fit.
The same holds for the two-component function.
Our final value agrees with the exact value to about six decimal places.
Table \ref{tab4} shows the evolution of the average trapping times and extrapolated energies as a function of $U$ when using $\cD(0,1) \cup \cD(1,0)$ as the main domain.
We also report the variational and exact energies together with the values of the optimized parameters of the trial wave function.
As $U$ increases, the configurations with zero or one double occupation become more and more predominant, and the average trapping time increases.
For very large values of $U$ (say, $U \ge 12$), the increase of $\bar{t}_{I_0}$ becomes particularly steep.
Finally, in Table \ref{tab5}, we report the results obtained for larger systems at $U=12$, for a chain size ranging from $N=4$ (36 states)
to $N=12$ ($\sim 10^6$ states).
No careful construction of domains maximizing the average trapping time has been performed; we have merely chosen domains of reasonable size (not more than 2682 states) by taking a not-too-large number of double occupations (only $n_D=0,1$) and a not-too-small number of nearest-neighbor antiparallel pairs.
As the number of sites increases, the average trapping time for the chosen domains decreases.
This trend is undesirable, since the efficiency of the approach may gradually deteriorate for larger systems.
A more elaborate way of constructing domains is clearly called for.
The DMC energies extrapolated using the two-component function are also reported.
As for $N=4$, the extrapolation is performed using about five values of the reference energy.
The extrapolated DMC energies are in full agreement with the exact value within error bars.
However, an increase of the statistical error is observed when the size increases.
To get lower error bars, we need to use better trial wave functions, better domains, and larger simulation times.
Of course, it will also be particularly interesting to take advantage of the fully parallelizable character of the algorithm to get much lower error bars.
All these aspects will be considered in a forthcoming work.
%%% TABLE II %%%
\begin{table}
\caption{One-dimensional Hubbard model with $N=4$, $U=12$, and $E=-1$.
Dependence of the statistical error on the energy with the number of $p$-components calculated analytically.
The simulation is performed in the same way as for Table \ref{tab1}.
Results are presented when a single-state domain is used for all states and when $\cD(0,1) \cup \cD(1,0)$ is chosen as the main domain.}
\label{tab2}
\begin{ruledtabular}
\begin{tabular}{lcc}
$p_{ex}$ & single-state & $\cD(0,1) \cup \cD(1,0)$ \\
\hline
$0$ & $4.3 \times 10^{-5}$ & $347 \times 10^{-8}$ \\
$1$ & $4.0 \times 10^{-5}$ & $377 \times 10^{-8}$\\
$8$ & $2.2 \times 10^{-5}$ & $0.05 \times 10^{-8}$\\
\end{ruledtabular}
\end{table}
%%% TABLE III %%%
\begin{table}
\caption{One-dimensional Hubbard model with $N=4$, $U=12$, $\alpha=1.292$, $\beta=0.552$, and $p_{ex}=4$.
The main domain is $\cD(0,1) \cup \cD(1,0)$.
The simulation is performed with 20 independent blocks and $10^6$ stochastic paths \titou{starting from the N\'eel state}.
The fits are performed with the five values of $E$.}
\label{tab3}
\begin{ruledtabular}
\begin{tabular}{lc}
$E$ & $E^\text{DMC}$ \\
\hline
$-0.8$ &$-0.7654686(2)$\\
$-0.795$&$-0.7658622(2)$\\
$-0.79$ &$-0.7662607(3)$\\
$-0.785$&$-0.7666642(4)$\\
$-0.78$ &$-0.7670729(5)$\\
$E_0$ linear fit & $-0.7680282(5)$\\
$E_0$ quadratic fit & $-0.7680684(5)$\\
$E_0$ two-component fit & $-0.7680676(5)$\\
$E_0$ exact & $-0.768068\ldots$\\
\end{tabular}
\end{ruledtabular}
\end{table}
%%% TABLE IV %%%
\begin{table}
\caption{One-dimensional Hubbard model with $N=4$.
The main domain is $\cD(0,1) \cup \cD(1,0)$.}
\label{tab4}
\begin{ruledtabular}
\begin{tabular}{rcccr}
$U$ & $(\alpha,\beta)$ & $E_\text{T}$ & $E_{ex}$ & $\bar{t}_{I_0}$ \\
\hline
8 & (0.908,\,0.520) & $-0.770342\ldots$ &$-1.117172\ldots$ & 33.5\\
10 & (1.116,\,0.539) & $-0.604162\ldots$ &$-0.911497\ldots$ & 63.3\\
12 & (1.292,\,0.552) & $-0.495361\ldots$ &$-0.768068\ldots$ & 108.7\\
14 & (1.438,\,0.563) & $-0.419163\ldots$ &$-0.662871\ldots$ & 171.7 \\
20 & (1.786,\,0.582) & $-0.286044\ldots$ &$-0.468619\ldots$ & 504.5 \\
50 & (2.690,\,0.609) & $-0.110013\ldots$ &$-0.188984\ldots$ & 8040.2 \\
200 & (4.070,\,0.624)& $-0.026940\ldots$ &$-0.047315\ldots$ & 523836.0 \\
\end{tabular}
\end{ruledtabular}
\end{table}
%%% TABLE V %%%
\begin{table*}
\caption{One-dimensional Hubbard model with $U=12$.
The fits to extrapolate the DMC energies are performed using the two-component function.}
\label{tab5}
\begin{ruledtabular}
\begin{tabular}{crcrccccr}
$N$ & Hilbert space size & Domain & Domain size & $(\alpha,\beta)$ &$\bar{t}_{I_0}$ & $E_\text{T}$ & $E_{ex}$ & $E^\text{DMC}$\\
\hline
4 & 36 & $\cD(0,1) \cup \cD(1,0)$ & 30 &(1.292,\,0.552)& 108.7 & $-0.495361$ & $-0.768068$ & $-0.7680676(5)$\\
6 & 400 & $\cD(0,1) \cup \cD(1,0)$ &200 &(1.124,\,0.689)&57.8 & $-0.633297$ & $-1.215395$& $-1.215389(9)$\\
8 & 4 900 & $\cD(0,1) \cup \cD(1,0)$ & 1 190 &(0.984,\,0.788)& 42.8 & $-0.750995 $& $-1.66395$& $-1.6637(2)$\\
10 & 63 504 & $\cD(0,5) \cup \cD(1,4)$ & 2 682 &(0.856,\,0.869)& 31.0 & $-0.855958$ & $-2.113089$& $-2.1120(7)$\\
12 & 853 776 & $\cD(0,8) \cup \cD(1,7)$ & 1 674 &(0.739,\,0.938)& 16.7 & $-0.952127$ & $-2.562529$& $-2.560(6)$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
In order to consider larger systems, an elaborate implementation of the present method is required.
This is outside the scope of the present study and will be presented in a forthcoming work.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\acknowledgments{
P.F.L., A.S., and M.C. acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No.~863481).
This work was supported by the European Centre of Excellence in Exascale Computing TREX --- Targeting Real Chemical Accuracy at the Exascale.
This project has received funding from the European Union's Horizon 2020 --- Research and Innovation program --- under grant agreement no.~952165.