saving work

This commit is contained in:
Pierre-Francois Loos 2022-09-14 17:31:00 +02:00
parent 4d1119a3a9
commit 39a49eb6cc

g.tex

@@ -11,7 +11,7 @@
\newcommand{\roland}[1]{\textcolor{cyan}{\bf #1}}
\newcommand{\manu}[1]{\textcolor{blue}{\bf #1}}
\newcommand{\vijay}[1]{\textcolor{green}{\bf #1}}
\newcommand{\titou}[1]{\textcolor{red}{#1}}
\newcommand{\toto}[1]{\textcolor{purple}{\bf #1}}
\newcommand{\mimi}[1]{\textcolor{orange}{\bf #1}}
\newcommand{\be}{\begin{equation}}
@@ -60,6 +62,8 @@
\newcommand{\EEP}{E_\text{EP}}
\newcommand{\laEP}{\lambda_\text{EP}}
\newcommand{\PsiT}{\Psi_\text{T}}
\newcommand{\Ne}{N} % Number of electrons
\newcommand{\Nn}{M} % Number of nuclei
@@ -74,7 +76,7 @@
\newcommand{\n}[1]{n_{#1}}
\newcommand{\Dv}{\Delta v}
\newcommand{\ra}{\to}
% Center tabularx columns
\newcolumntype{Y}{>{\centering\arraybackslash}X}
@@ -150,6 +152,7 @@ Although this work presents the method for finite linear spaces, it can be gener
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Diffusion Monte Carlo (DMC) is a class of stochastic methods for evaluating the ground-state properties of quantum systems.
They have been extensively used in virtually all domains of physics and chemistry where the many-body quantum problem plays a central role (condensed-matter physics,\cite{Foulkes_2001,Kolorenc_2011} quantum liquids,\cite{Holzmann_2006} nuclear physics,\cite{Carlson_2015,Carlson_2007} theoretical chemistry,\cite{Austin_2012} etc.).
DMC can be used either for systems defined in a continuous configuration space (typically, a set of particles moving in space), for which the Hamiltonian operator is defined in a Hilbert space of infinite dimension, or for systems defined in a discrete configuration space, where the Hamiltonian reduces to a matrix.
@@ -195,6 +198,7 @@ The general case is then treated in Subsec.~\ref{sec:general_domains}.
In Subsec.~\ref{sec:Green}, both the time- and energy-dependent Green's functions using domains are derived.
Section \ref{sec:numerical} presents the application of the approach to the one-dimensional Hubbard model.
Finally, in Sec.~\ref{sec:conclu}, some conclusions and perspectives are given.
Atomic units are used throughout.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Diffusion Monte Carlo}
@@ -205,109 +209,108 @@
\subsection{Path-integral representation}
\label{sec:path}
%=======================================%
As previously mentioned, DMC is a stochastic implementation of the power method defined by the following operator:
\be
T = \mathds{1} -\tau (H-E\mathds{1}),
\ee
where $\mathds{1}$ is the identity operator, $\tau$ a small positive parameter playing the role of a time step, $E$ some arbitrary reference energy, and $H$ the Hamiltonian operator. Starting from some initial vector, $\ket{\Psi_0}$, we have
\be
\lim_{N \to \infty} T^N \ket{\Psi_0} = \ket{\Phi_0},
\ee
where $\ket{\Phi_0}$ is the ground-state wave function.
The equality is up to a global phase factor playing no role in physical quantum averages.
This result is true for any $\ket{\Psi_0}$ provided that $\braket{\Phi_0}{\Psi_0} \ne 0$ and for $\tau$ sufficiently small.
At large but finite $N$, the vector $T^N \ket{\Psi_0}$ differs from $\ket{\Phi_0}$ only by an exponentially small correction, making it easy to extrapolate the finite-$N$ results to $N \to \infty$.
Likewise, ground-state properties may be obtained at large $N$.
For example, in the important case of the energy, one can rely on the following formula
\be
\label{eq:E0}
E_0 = \lim_{N \to \infty } \frac{\mel{\PsiT}{H T^N}{\Psi_0}}{\mel{\PsiT}{T^N}{\Psi_0}},
\ee
where $\ket{\PsiT}$ is some trial vector (an approximation of the true ground-state wave function) onto which $T^N \ket{\Psi_0}$ is projected.
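As an illustration of the power method and of the mixed estimator of Eq.~\eqref{eq:E0}, here is a minimal deterministic sketch on an illustrative $2\times2$ matrix (all numerical values are arbitrary choices):
\begin{verbatim}
# Minimal sketch: deterministic power method, T = 1 - tau (H - E 1),
# with the mixed energy estimator of Eq. (E0). Illustrative values.
import numpy as np

H = np.array([[1.0, -0.5], [-0.5, 2.0]])    # toy Hamiltonian matrix
E, tau = 0.0, 0.1                           # reference energy and time step
T = np.eye(2) - tau * (H - E * np.eye(2))

psi0 = np.array([1.0, 1.0])                 # initial vector |Psi_0>
psiT = np.array([1.0, 0.5])                 # trial vector |Psi_T>

v = psi0.copy()
for _ in range(2000):                       # v -> T^N |Psi_0>
    v = T @ v
    v /= np.linalg.norm(v)                  # global factor is irrelevant

print(psiT @ H @ v / (psiT @ v))            # mixed estimator, ~ 0.7929
print(np.linalg.eigvalsh(H)[0])             # exact E_0 for comparison
\end{verbatim}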
To proceed further, we introduce the time-dependent Green's matrix $G^{(N)}$ defined as
\be
G^{(N)}_{ij}=\mel{j}{T^N}{i}.
\ee
The denomination ``time-dependent Green's matrix'' is used here since $G$ may be viewed as a short-time approximation of the imaginary-time evolution operator, $e^{-N\tau H}$, which is usually referred to as the imaginary-time dependent Green's function.
\titou{Introducing the set of $N-1$ intermediate states, $\{ i_k \}_{1 \le k \le N-1}$, in the $N$th product of $T$,} $G^{(N)}$ can be written in the following expanded form
\be
\label{eq:cn}
G^{(N)}_{i_0 i_N} = \sum_{i_1} \sum_{i_2} \cdots \sum_{i_{N-1}} \prod_{k=0}^{N-1} T_{i_{k} i_{k+1}},
\ee
\titou{where $T_{i_{k} i_{k+1}} = \mel{i_{k+1}}{T}{i_k}$}.
Here, each index $i_k$ runs over all basis vectors.
In quantum physics, Eq.~\eqref{eq:cn} is referred to as the path-integral representation of the Green's matrix (function).
The series of states $\ket{i_0}, \ldots, \ket{i_N}$ is interpreted as a ``path'' in the Hilbert space starting at vector $\ket{i_0}$ and ending at vector $\ket{i_N}$, where $k$ plays the role of a time index.
Each path is associated with a weight $\prod_{k=0}^{N-1} T_{i_{k} i_{k+1}}$ and the path-integral expression of $G$ can be recast in the more suggestive form
\be
\label{eq:G}
G^{(N)}_{i_0 i_N}= \sum_{\text{all paths $\ket{i_1},\ldots,\ket{i_{N-1}}$}} \prod_{k=0}^{N-1} T_{i_{k} i_{k+1}}.
\ee
This expression allows a simple and vivid interpretation of the solution.
In the limit $N \to \infty$, the $i$th component of the ground-state wave function (obtained as $\lim_{N \to \infty} G^{(N)}_{\titou{i_0 i_N}}$) is the weighted sum over all possible paths arriving at vector \titou{$\ket{i_N}$}.
This result is independent of the initial vector $\ket{i_0}$, apart from some irrelevant global phase factor.
When the size of the linear space is small, the explicit calculation of the full sums involving $M^N$ terms (where $M$ is the size of the Hilbert space) can be performed.
We are in the realm of what one would call the ``deterministic'' power methods, such as the Lanczos or Davidson approaches.
If not, probabilistic techniques for generating only the paths contributing significantly to the sums are to be preferred.
This is the strategy followed in DMC.
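For concreteness, here is a brute-force sketch of the full path sum, checking on an illustrative $3$-state matrix that the sum over all paths of the products of matrix elements reproduces the matrix power:
\begin{verbatim}
# Minimal sketch: brute-force path sum. The sum over all intermediate
# states of prod_k T[i_k, i_{k+1}] equals (T^N)[i_0, i_N].
import itertools
import numpy as np

rng = np.random.default_rng(0)
M, N = 3, 4                                  # space size, number of steps
T = rng.random((M, M))                       # any matrix works here

i0, iN = 0, 2
path_sum = 0.0
for inner in itertools.product(range(M), repeat=N - 1):
    path = (i0, *inner, iN)
    w = 1.0                                  # weight of this path
    for k in range(N):
        w *= T[path[k], path[k + 1]]
    path_sum += w

print(path_sum)
print(np.linalg.matrix_power(T, N)[i0, iN])  # identical
\end{verbatim}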
%=======================================%
\subsection{Probabilistic framework}
\label{sec:proba}
%=======================================%
In order to derive a probabilistic expression for the Green's matrix, we introduce a so-called guiding vector, $\ket{\Psi^+}$, having strictly positive components, \ie, $\Psi^+_i > 0$, and apply a similarity transformation to the operators $G^{(N)}$ and $T$:
\begin{align}
\label{eq:defT}
\bar{T}_{ij} & = \frac{\Psi^+_j}{\Psi^+_i} T_{ij},
\\
\bar{G}^{(N)}_{ij} & = \frac{\Psi^+_j}{\Psi^+_i} G^{(N)}_{ij}.
\end{align}
Note that, under the similarity transformation, the path-integral expression, Eq.~\eqref{eq:G}, relating $G^{(N)}$ and $T$ remains unchanged for the similarity-transformed operators, $\bar{G}^{(N)}$ and $\bar{T}$.
Next, the matrix elements of $\bar{T}$ are expressed as those of a stochastic matrix multiplied by some residual weight, namely
\be
\label{eq:defTij}
\bar{T}_{ij} = p_{i \to j} w_{ij}.
\ee
Here, we recall that a stochastic matrix is defined as a matrix with positive entries obeying
\be
\label{eq:sumup}
\sum_j p_{i \to j}=1.
\ee
To build the transition probability matrix, the following operator is introduced
%As known, there is a natural way of associating a stochastic matrix to a matrix having a positive ground-state vector (here, a positive vector is defined as
%a vector with all components positive).
\be
T^+=\mathds{1} - \tau [ H^+ - E_L^+\mathds{1}],
\ee
where $H^+$ is the matrix obtained from $H$ by imposing the off-diagonal elements to be negative:
\be
H^+_{ij}=
\begin{cases}
\phantom{-}H_{ij}, & \text{if $i=j$},
\\
-\abs{H_{ij}}, & \text{if $i \neq j$}.
\end{cases}
\ee
Here, $E_L^+ \mathds{1}$ denotes the diagonal matrix whose diagonal elements are defined as
\be
E^+_{Li}= \frac{\sum_j H^+_{ij}\Psi^+_j}{\Psi^+_i}.
\ee
The vector $\ket{E^+_L}$ is known as the local energy vector associated with $\ket{\Psi^+}$.
Actually, the operator $H^+-E^+_L \mathds{1}$ in the definition of $T^+$ has been chosen so that, by construction, it admits $\ket{\Psi^+}$ as its ground state with zero eigenvalue,
\be
\label{eq:defTplus}
[H^+ - E_L^+ \mathds{1}]\ket{\Psi^+}=0,
\ee
leading to the relation
\be
\label{relT+}
T^+ \ket{\Psi^+}=\ket{\Psi^+}.
\ee
@@ -317,62 +320,61 @@
The stochastic matrix is now defined as
\be
\label{eq:pij}
p_{i \to j} = \frac{\Psi^+_j}{\Psi^+_i} T^+_{ij}.
\ee
The diagonal matrix elements of the stochastic matrix read
\be
p_{i \to i} = 1 -\tau (H^+_{ii}- E^+_{Li}),
\ee
while, for $i \ne j$,
\be
p_{i \to j} = \tau \frac{\Psi^+_{j}}{\Psi^+_{i}} \abs{H_{ij}} \ge 0.
\ee
As seen, the off-diagonal terms, $p_{i \to j}$, are positive, while the diagonal ones, $p_{i \to i}$, can be made positive if $\tau$ is chosen sufficiently small.
More precisely, the condition reads
\be
\label{eq:cond}
\tau \leq \frac{1}{\max_i \abs{H^+_{ii}-E^+_{Li}}}.
\ee
The sum-over-states condition, Eq.~\eqref{eq:sumup}, follows from the fact that $\ket{\Psi^+}$ is an eigenvector of $T^+$, Eq.~\eqref{relT+}:
\be
\sum_j p_{i \to j} = \frac{1}{\Psi^+_{i}} \mel{i}{T^+}{\Psi^+} = 1.
\ee
We have then verified that $p_{i \to j}$ is indeed a stochastic matrix.
At first sight, the condition defining the maximum value of $\tau$ allowed, Eq.~\eqref{eq:cond}, may appear rather tight, since for very large matrices it may impose an extremely small value of the time step.
However, in practice, only a (tiny) fraction of the linear space is sampled during the simulation, and the maximum value of $\abs{H^+_{ii} - E^+_{Li}}$ for the sampled states turns out to be not too large, so reasonable values of $\tau$ can be used without violating the positivity of the transition probability matrix.
Note that we can even escape from this condition by slightly generalizing the transition probability matrix as follows:
\be
p_{i \to j} = \frac{ \frac{\Psi^+_{j}}{\Psi^+_{i}} \abs{\mel{i}{T^+}{j}} }{ \sum_k \frac{\Psi^+_{k}}{\Psi^+_{i}} \abs{\mel{i}{T^+}{k}} }
= \frac{ \Psi^+_{j} \abs{\mel{i}{T^+}{j}} }{\sum_k \Psi^+_{k} \abs{\mel{i}{T^+}{k}}}.
\ee
This new transition probability matrix with positive entries reduces to Eq.~\eqref{eq:pij} when $T^+_{ij}$ is positive.
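The construction above is easily checked numerically. Here is a minimal sketch (the matrix $H$ and the guiding vector are illustrative choices):
\begin{verbatim}
# Minimal sketch: build H^+, the local energies E_L^+, T^+ and the
# stochastic matrix p_{i->j}, then check Eq. (sumup). Illustrative data.
import numpy as np

H = np.array([[ 1.0,  0.4, -0.3],
              [ 0.4,  2.0,  0.5],
              [-0.3,  0.5,  3.0]])
psi = np.array([1.0, 0.7, 0.4])             # guiding vector, components > 0

Hp = -np.abs(H)                             # off-diagonal: -|H_ij|
np.fill_diagonal(Hp, np.diag(H))            # diagonal left unchanged
EL = (Hp @ psi) / psi                       # local energy vector E_L^+

tau = 0.9 / np.max(np.abs(np.diag(Hp) - EL))   # respects Eq. (cond)
Tp = np.eye(3) - tau * (Hp - np.diag(EL))   # T^+
p = (psi[None, :] / psi[:, None]) * Tp      # p_{i->j}, Eq. (pij)

assert np.all(p >= 0)                       # positive entries
print(p.sum(axis=1))                        # each row sums to 1
\end{verbatim}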
Now, using Eqs.~\eqref{eq:defT}, \eqref{eq:defTij}, and \eqref{eq:pij}, the residual weight $w_{ij}$ reads
\be
w_{ij}=\frac{T_{ij}}{T^+_{ij}}.
\ee
Using these notations, the Green's matrix components can be rewritten as
\be
{\bar G}^{(N)}_{i i_0}=\sum_{i_1,\ldots,i_{N-1}} \qty[ \prod_{k=0}^{N-1} p_{i_{k} \to i_{k+1}} ] \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}},
\ee
where $i$ is identified with $i_N$.
The product $\prod_{k=0}^{N-1} p_{i_{k} \to i_{k+1}}$ is the probability, denoted ${\rm Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1})$, for the path starting at $\ket{i_0}$ and ending at $\ket{i_N}$ to occur.
Using the fact that $p_{i \to j} \ge 0$ and Eq.~\eqref{eq:sumup}, we verify that ${\rm Prob}_{i_0 \to i_N}$ is positive and obeys
\be
\sum_{i_1,\ldots,i_{N-1}} {\rm Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1})=1,
\ee
as it should be.
The probabilistic average associated with this path probability, denoted here as $\Big\langle \cdots \Big\rangle$, is then defined as
\be
\Big \langle F \Big \rangle = \sum_{i_1,\ldots,i_{N-1}} F(i_0,\ldots,i_N) \, {\rm Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1}),
\label{average}
\ee
where $F$ is an arbitrary function.
@@ -384,12 +386,12 @@ Finally, the path integral expressed as a probabilistic average reads
To calculate the probabilistic average, Eq.~\eqref{average}, an artificial (mathematical) ``particle'' called walker (or psi-particle) is introduced.
During the Monte Carlo simulation, the walker moves in configuration space by drawing new states with probability $p_{i_k \to i_{k+1}}$, thus realizing the path with probability ${\rm Prob}_{i_0 \to i_N}$.
The energy, Eq.~\eqref{eq:E0}, is given as
\be
E_0 = \lim_{N \to \infty } \frac{ \Big \langle \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} {(H\PsiT)}_{i_N} \Big \rangle}{ \Big \langle \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} {\PsiT}_{i_N} \Big \rangle}.
\ee
Note that, instead of using a single walker, it is possible to introduce a population of independent walkers and to calculate the averages over this population.
In addition, thanks to the ergodic property of the stochastic matrix (see Ref.~\onlinecite{Caffarel_1988})
@@ -408,10 +410,16 @@ found, for example, in Refs.~\onlinecite{Foulkes_2001,Kolorenc_2011}.
%\ee
%In the numerical applications to follow, we shall use the walker representation.
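As an illustration of this walker representation, here is a minimal, self-contained single-walker sketch (toy $3\times3$ Hamiltonian and guiding vector; all off-diagonal elements of $H$ are taken negative so that the residual weights stay positive):
\begin{verbatim}
# Minimal sketch: single-walker DMC estimate of E_0 from the weighted
# mixed estimator above. H, psi and tau are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
H = np.array([[ 1.0, -0.4, -0.3],
              [-0.4,  2.0, -0.5],
              [-0.3, -0.5,  3.0]])          # here H^+ = H
psi = np.array([1.0, 0.7, 0.4])             # guiding = trial vector

EL = (H @ psi) / psi                        # local energy vector
tau = 0.2                                   # small enough: weights > 0
T  = np.eye(3) - tau * H                    # reference energy E = 0
Tp = np.eye(3) - tau * (H - np.diag(EL))    # T^+
p  = (psi[None, :] / psi[:, None]) * Tp     # stochastic matrix
w  = T / Tp                                 # residual weights

N, n_paths, num, den = 25, 20000, 0.0, 0.0
for _ in range(n_paths):
    i, W = 0, 1.0
    for _ in range(N):                      # one path of length N
        j = rng.choice(3, p=p[i] / p[i].sum())
        W *= w[i, j]
        i = j
    num += W * (H @ psi)[i]
    den += W * psi[i]

print(num / den, np.linalg.eigvalsh(H)[0])  # estimate vs exact E_0
\end{verbatim}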
%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{DMC with domains}
\label{sec:DMC_domains}
%%%%%%%%%%%%%%%%%%%%%%%%%%%
%=======================================%
\subsection{Domains consisting of a single state}
\label{sec:single_domains}
%=======================================%
During the simulation, walkers move from state to state with the possibility of being trapped a certain number of times on the same state before exiting to a different state.
This fact can be exploited in order to integrate out some part of the dynamics, thus leading to a reduction of the statistical fluctuations.
This idea was proposed some time ago\cite{assaraf_99,Assaraf_1999B,caffarel_00} and applied to the SU(N) one-dimensional Hubbard model.
@@ -419,29 +427,29 @@
Let us consider a given state $\ket{i}$.
The probability that the walker remains exactly $n$ times on $\ket{i}$ ($n$ from 1 to $\infty$) and then exits to a different state $\ket{j}$ is
\be
{\cal P}(i \to j, n) = \qty[p_{i \to i}]^{n-1} p_{i \to j}, \qquad j \ne i.
\ee
Using the relation $\sum_{n=1}^{\infty} p_{i \to i}^{n-1}=\frac{1}{1-p_{i \to i}}$ and the normalization of the $p_{i \to j}$, Eq.~\eqref{eq:sumup}, we verify that the probability is normalized to one:
\be
\sum_{j \ne i} \sum_{n=1}^{\infty} {\cal P}(i \to j,n) = 1.
\ee
The probability of being trapped during $n$ steps is obtained by summing over all possible exit states:
\be
P_i(n)=\sum_{j \ne i} {\cal P}(i \to j,n) = \qty[p_{i \to i}]^{n-1}(1-p_{i \to i}).
\ee
This probability defines a geometric law with an average number $\bar{n}_i$ of trapping events given by
\be
\bar{n}_i= \sum_{n=1}^{\infty} n P_i(n) = \frac{1}{1 -p_{i \to i}}.
\ee
Introducing the continuous time $t_i=n_i\tau$, the average trapping time is given by
\be
\bar{t}_i= \frac{1}{H^+_{ii}-E^+_{Li}}.
\ee
Taking the limit $\tau \to 0$, this probability takes the usual exponential form
\be
P_{i}(t) = \frac{1}{\bar{t}_i} e^{-\frac{t}{\bar{t}_i}}.
\ee
@@ -450,8 +458,12 @@ The time-averaged contribution of the on-state weight can be easily calculated to be
\bar{w}_i= \sum_{n=1}^{\infty} w^n_{ii} P_i(n)= \frac{T_{ii}}{T^+_{ii}} \frac{1-T^+_{ii}}{1-T_{ii}}.
\ee
Details of the implementation of the effective dynamics can be found in Refs.~\onlinecite{assaraf_99} and \onlinecite{caffarel_00}.
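These trapping statistics are easily checked by direct sampling. A minimal sketch (the scalar values, with $p_{ii}=T^+_{ii}$, are illustrative):
\begin{verbatim}
# Minimal sketch: sample the trapping number n from the geometric law
# P_i(n) and check n_bar = 1/(1 - p_ii) and the averaged weight w_bar.
import numpy as np

rng = np.random.default_rng(2)
p_ii, T_ii, Tp_ii = 0.8, 0.7, 0.8           # note p_ii = T^+_ii

n = rng.geometric(1.0 - p_ii, size=200000)  # P_i(n) = p_ii^(n-1)(1-p_ii)
print(n.mean(), 1.0 / (1.0 - p_ii))         # both ~ 5

w_bar = np.mean((T_ii / Tp_ii) ** n)        # sampled <w_ii^n>
exact = (T_ii / Tp_ii) * (1 - Tp_ii) / (1 - T_ii)
print(w_bar, exact)                         # both ~ 0.5833
\end{verbatim}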
%=======================================%
\subsection{General domains}
\label{sec:general_domains}
%=======================================%
Let us now extend the results of the preceding section to a general domain.
For that, let us associate to each state $\ket{i}$ a set of states, called the domain of $\ket{i}$ and denoted ${\cal D}_i$, consisting of the state $\ket{i}$ plus a certain number of states. No particular constraints on the type of domains
@@ -461,11 +473,11 @@ non-zero probability to reach any other state in a finite number of steps). In practice,
Let us write an arbitrary path of length $N$ as
\be
\ket{i_0} \to \ket{i_1} \to \cdots \to \ket{i_N},
\ee
where the successive states are drawn using the transition probability matrix, $p_{i \to j}$. This series can be rewritten as
\be
(\ket{I_0},n_0) \to (\ket{I_1},n_1) \to \cdots \to (\ket{I_p},n_p),
\label{eff_series}
\ee
where $\ket{I_0}=\ket{i_0}$ is the initial state,
@@ -484,10 +496,10 @@ In what follows, we shall systematically write the integers representing the exit states
Let us define the probability of remaining $n$ times within the domain of $\ket{I_0}$ and then exiting at $\ket{I} \notin {\cal D}_{I_0}$.
It is given by
\be
\label{eq1C}
{\cal P}(I_0 \to I,n)= \sum_{\ket{i_1} \in {\cal D}_{I_0}} \cdots \sum_{\ket{i_{n-1}} \in {\cal D}_{I_0}} p_{I_0 \to i_1} \cdots p_{i_{n-2} \to i_{n-1}} p_{i_{n-1} \to I}.
\ee
To proceed, we need to introduce the projector associated with each domain,
@@ -502,7 +514,7 @@ T^+_I= P_I T^+ P_I.
$T^+_I$ is the operator governing the dynamics of the walkers moving within ${\cal D}_{I}$.
Using Eqs.~\eqref{eq1C} and \eqref{eq:pij}, the probability can be rewritten as
\be
\label{eq3C}
{\cal P}(I_0 \to I,n)= \frac{1}{\Psi^+_{I_0}} \langle I_0 | {T^+_{I_0}}^{n-1} F^+_{I_0}|I\rangle \Psi^+_{I}.
\ee
@@ -538,10 +550,16 @@ t_{I}={\bar n}_{I} \tau= \frac{1}{\Psi^+_{I}} \langle I | P_{I} \frac{1}{H^+ - E^+_L \mathds{1}} P_{I} | \Psi^+ \rangle
\ee
In practice, the various quantities restricted to the domain are computed by diagonalizing the matrix $(H^+-E_L^+)$ in ${\cal D}_{I}$.
Note that this is possible only if the dimension of the domains is not too large (say, less than a few thousand).
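A minimal sketch of such a domain-restricted computation (the Hamiltonian, guiding vector, and domain are illustrative; a linear solve replaces the explicit diagonalization):
\begin{verbatim}
# Minimal sketch: average trapping time of a domain D_I from the
# restriction of (H^+ - E_L^+) to the domain, via a linear solve.
import numpy as np

H = np.array([[ 1.0, -0.4, -0.3],
              [-0.4,  2.0, -0.5],
              [-0.3, -0.5,  3.0]])          # plays the role of H^+
psi = np.array([1.0, 0.7, 0.4])             # guiding vector
EL = (H @ psi) / psi                        # local energy vector
A = H - np.diag(EL)                         # H^+ - E_L^+

D = [0, 1]                                  # domain of state |0>
A_D = A[np.ix_(D, D)]                       # restriction to the domain

# t_I = <I| P_I (H^+ - E_L^+)^{-1} P_I |Psi^+> / Psi^+_I
t0 = np.linalg.solve(A_D, psi[D])[0] / psi[0]
print(t0)                                   # average trapping time
\end{verbatim}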
%=======================================%
\subsection{Expressing the Green's matrix using domains}
\label{sec:Green}
%=======================================%
%--------------------------------------------%
\subsubsection{Time-dependent Green's matrix}
\label{sec:time}
%--------------------------------------------%
In this section, we generalize the path-integral expression of the Green's matrix, Eqs.~\eqref{eq:G} and \eqref{cn_stoch}, to the case where domains are used.
For that, we introduce the Green's matrix associated with each domain
\be
@@ -614,20 +632,24 @@
$$
\delta(\sum_k n_k=N+1)
$$
\be
\prod_{k=0}^{N-1}{\cal P}(I_k \to I_{k+1},n_k-1) \, F(I_0,n_0;\ldots;I_N,n_N)
\ee
In practice, a schematic DMC algorithm to compute the average is as follows (a minimal implementation sketch is given below).\\
i) Choose some initial vector $|I_0\rangle$.\\
ii) Generate a stochastic path by running over $k$ (starting at $k=0$) as follows.\\
$\;\;\;\bullet$ Draw $n_k$ using the probability $P_{I_k}(n)$, Eq.~\eqref{PiN}.\\
$\;\;\;\bullet$ Draw the exit state, $|I_{k+1}\rangle$, using the conditional probability $$\frac{{\cal P}(I_k \to I_{k+1},n_k)}{P_{I_k}(n_k)}.$$\\
iii) Terminate the path when $\sum_k n_k=N$ is greater than some target value $N_{\rm max}$ and compute $F(I_0,n_0;\ldots;I_N,n_N)$.\\
iv) Go to step ii) until some maximum number of paths is reached.\\
\\
At the end of the simulation, an estimate of the average for a few values of $N$ greater than but close to $N_{\rm max}$ is obtained.
At large $N_{\rm max}$, where the convergence of the average as a function of $p$ is reached, such values can be averaged.
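Here is a minimal sketch of steps ii)--iv), specialized to single-state domains, for which $P_I(n)$ reduces to the geometric law of Subsec.~\ref{sec:single_domains} and the exit state is drawn with probability $p_{I \to J}/(1-p_{I \to I})$ (the stochastic matrix is an illustrative placeholder):
\begin{verbatim}
# Minimal sketch of the path generation, for single-state domains.
import numpy as np

def generate_path(p, I0, N_max, rng):
    """Return the list of (state I_k, trapping time n_k)."""
    path, I, n_tot = [], I0, 0
    while n_tot < N_max:
        n = rng.geometric(1.0 - p[I, I])    # draw n_k from P_I(n)
        probs = p[I].copy()
        probs[I] = 0.0                      # exit states only
        J = rng.choice(len(probs), p=probs / probs.sum())
        path.append((I, n))
        n_tot += n
        I = J
    return path

rng = np.random.default_rng(3)
p = np.array([[0.6, 0.3, 0.1],              # illustrative stochastic matrix
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
print(generate_path(p, I0=0, N_max=20, rng=rng))
\end{verbatim}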
%--------------------------------------------%
\subsubsection{Integrating out the trapping times: the domain Green's function Monte Carlo approach}
\label{sec:energy}
%--------------------------------------------%
Now, let us show that it is possible to go further by integrating out the trapping times, $n_k$, of the preceding expressions, thus defining a new effective stochastic dynamics involving only the exit states.
Physically, it means that we compute exactly the time evolution of all stochastic paths trapped within each domain.
We shall present two different ways to derive the new dynamics and the renormalized probabilistic averages.
@@ -640,7 +662,7 @@ G^E_{ij}= \tau \sum_{N=0}^{\infty} \langle i|T^N|j\rangle.
$$
By summing over $N$, we obtain
\be
G^E_{ij}= \mel{i}{\frac{1}{H-E}}{j}.
\ee
This quantity, which no longer depends on the time step, is referred to as the energy-dependent Green's matrix.
Note that in the continuum this quantity is essentially the Laplace transform of the time-dependent Green's function; here, we then use the same denomination.
The remarkable property
@@ -708,14 +730,14 @@ which is identical to Eq.~\eqref{eqfond} when $G$ is expanded iteratively.
\\
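Before proceeding, here is a quick numerical check of the identity $G^E = \tau \sum_{N} T^N = (H-E\mathds{1})^{-1}$ on an illustrative $2\times2$ matrix (with $E$ below the spectrum of $H$ so that the series converges):
\begin{verbatim}
# Minimal sketch: tau * sum_N T^N equals (H - E)^{-1}. Illustrative data.
import numpy as np

H = np.array([[1.0, -0.4], [-0.4, 2.0]])
E, tau = 0.5, 0.1
T = np.eye(2) - tau * (H - E * np.eye(2))

S, P = np.zeros((2, 2)), np.eye(2)
for _ in range(5000):                       # geometric series sum_N T^N
    S += P
    P = P @ T

print(tau * S)
print(np.linalg.inv(H - E * np.eye(2)))     # identical
\end{verbatim}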
Let us use as effective transition probability
\be
P(I \to J) = \frac{1}{\Psi^+_{I}} \langle I| P_I \frac{1}{H^+-E^+_L} P_I (-H^+) (1-P_I)|J\rangle \Psi^+_{J}
\ee
and the weight
\be
W^E_{IJ} =
\frac{\langle I|\frac{1}{H-E} P_I (-H)(1-P_I) |J\rangle }{\langle I|\frac{1}{H^+-E^+_L} P_I (-H^+)(1-P_I) |J\rangle}.
\ee
Using Eqs.~\eqref{eq1C}, \eqref{eq3C}, and \eqref{relation}, we verify that $P(I \to J) \ge 0$ and $\sum_J P(I \to J)=1$.
Finally, the probabilistic expression reads
$$
\langle I_0| \frac{1}{H-E}|I_N\rangle
@@ -727,7 +749,7 @@ $$
\ee
{\it Energy estimator.} To calculate the energy, we introduce the following quantity
\be
{\cal E}(E) = \frac{ \langle I_0|\frac{1}{H-E}|H\PsiT\rangle} {\langle I_0|\frac{1}{H-E}|\PsiT\rangle}
\label{calE}
\ee
and search for the solution $E=E_0$ of ${\cal E}(E)=E$.
@@ -741,7 +763,7 @@ Using the spectral decomposition of $H$, we have
\ee
with
\be
c_i = \langle I_0| \Phi_i\rangle \langle \Phi_i| \PsiT\rangle.
\ee
It is easy to check that in the vicinity of $E=E_0$, ${\cal E}(E)$ is a linear function of $E-E_0$.
So, in practice, we compute a few values of ${\cal E}(E^{k})$ and fit the data using some function of $E$ close to linearity
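A minimal sketch of this procedure, with ${\cal E}(E)$ evaluated here exactly on a toy matrix (in the QMC context the values ${\cal E}(E^{k})$ would instead be noisy Monte Carlo estimates):
\begin{verbatim}
# Minimal sketch: locate E_0 by computing E(E) of Eq. (calE) for a few
# trial energies, fitting a straight line and solving E(E) = E.
import numpy as np

H = np.array([[ 1.0, -0.4, -0.3],
              [-0.4,  2.0, -0.5],
              [-0.3, -0.5,  3.0]])
I0 = np.array([1.0, 0.0, 0.0])              # state |I_0>
psiT = np.array([1.0, 0.6, 0.3])            # trial vector

def calE(E):
    G = np.linalg.inv(H - E * np.eye(3))    # (H - E)^{-1}
    return (I0 @ G @ (H @ psiT)) / (I0 @ G @ psiT)

Ek = np.array([0.5, 0.6, 0.7])              # trial energies below E_0
a, b = np.polyfit(Ek, [calE(E) for E in Ek], 1)
print(b / (1.0 - a))                        # solution of a E + b = E
print(np.linalg.eigvalsh(H)[0])             # exact E_0
\end{verbatim}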
@@ -781,8 +803,11 @@ more and more $p$-components with an important increase of statistical fluctuations
a trade-off has to be found between the possible bias in the extrapolation and the amount of simulation time required.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Numerical application to the Hubbard model}
\label{sec:numerical}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Let us consider the one-dimensional Hubbard Hamiltonian for a chain of $N$ sites,
\be
\hat{H}= -t \sum_{\langle i j\rangle \sigma} \hat{a}^+_{i\sigma} \hat{a}_{j\sigma}
@@ -907,11 +932,11 @@ Following Eqs.~\eqref{final_E} and \eqref{calE}, the practical formula used for computing
where $p_{ex}+1$ is the number of pairs, ($H_p$, $S_p$), computed analytically.
For $p_{ex} < p \le p_{max}$, the Monte Carlo estimates are written as
\be
H^{QMC}_p= \Bigg \langle \Big( \prod_{k=0}^{p-1} W^E_{I_k I_{k+1}} \Big) \langle I_p| P_{I_p} \frac{1}{H-E} P_{I_p}|H\PsiT\rangle \Bigg \rangle
\ee
and
\be
S^{QMC}_p= \Bigg \langle \Big(\prod_{k=0}^{p-1} W^E_{I_k I_{k+1}} \Big) \langle I_p| P_{I_p} \frac{1}{H-E} P_{I_p}|\PsiT\rangle \Bigg \rangle.
\ee
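For cross-checking such QMC estimates on small systems, the Hamiltonian can be built and diagonalized exactly. A minimal sketch (open boundary conditions and half filling are assumed here for illustration; the boundary conditions used in the text may differ):
\begin{verbatim}
# Minimal sketch: 1D open-chain Hubbard Hamiltonian in the occupation-
# number basis; spin-up/down configurations are stored as bitmasks.
import itertools
import numpy as np

def hubbard(n_sites, n_up, n_dn, t=1.0, U=12.0):
    def occs(n):
        return [sum(1 << i for i in c)
                for c in itertools.combinations(range(n_sites), n)]
    basis = [(u, d) for u in occs(n_up) for d in occs(n_dn)]
    index = {s: k for k, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for k, (u, d) in enumerate(basis):
        H[k, k] = U * bin(u & d).count("1")       # on-site repulsion
        for i in range(n_sites - 1):              # nearest-neighbor bonds
            b = (1 << i) | (1 << (i + 1))
            for conf, other, spin in ((u, d, 0), (d, u, 1)):
                if bin(conf & b).count("1") == 1: # one end occupied
                    new = conf ^ b                # hop i <-> i+1
                    s = (new, other) if spin == 0 else (other, new)
                    H[index[s], k] -= t           # adjacent sites: no sign
    return H

H = hubbard(4, 2, 2)                              # 4 sites, 2 up, 2 down
print(H.shape, np.linalg.eigvalsh(H)[0])          # 36 x 36, exact E_0
\end{verbatim}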
Let us begin with a small chain of 4 sites with $U=12$. From now on, we shall take $t=1$.
@@ -932,7 +957,7 @@ Green's matrix at $E=E_0$ where the expansion does not converge at all. Note the
parity effect specific to this system.
In practice, it is not too much of a problem since a smoothly convergent behavior is nevertheless observed for the even- and odd-parity curves.
The ratio ${\cal E}_{QMC}(E,p_{ex}=1,p_{max})$ as a function of $E$ is presented in Fig.~\ref{fig2}.
Here, $p_{max}$ is taken sufficiently large so that the convergence at large $p$ is reached.
The values of $E$ are $-0.780$, $-0.790$, $-0.785$, $-0.780$, and $-0.775$.
For smaller $E$, the curve is extrapolated using the two-component expression.
The estimate of the energy obtained from ${\cal E}(E)=E$ is $-0.76807(5)$, in full agreement with the exact value of $-0.768068\ldots$.
\begin{figure}[h!]
@@ -1120,8 +1145,11 @@ $N$ & Size Hilbert space & Domain & Domain size & $\alpha,\beta$ & $\bar{t}_{I_0}$
\end{ruledtabular}
\end{table*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Summary and perspectives}
\label{sec:conclu}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In this work, it has been shown how to integrate out exactly, within a DMC framework, the contribution of all stochastic trajectories trapped in some given domains of the Hilbert space, and the corresponding general equations have been derived.
In this way, a new effective stochastic dynamics connecting only the domains, and not the individual states, is defined.
@@ -1145,17 +1173,26 @@ Here, we have mainly focused on the theoretical aspects of the approach. To reach
more elaborate implementation of the method in order to keep under control the cost of the simulation.
Doing this was out of the scope of the present work and will be presented in a forthcoming work.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\acknowledgements{
P.F.L., A.S., and M.C. acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No.~863481).
This work was supported by the European Centre of Excellence in Exascale Computing TREX --- Targeting Real Chemical Accuracy at the Exascale.
This project has received funding from the European Union's Horizon 2020 --- Research and Innovation program --- under grant agreement no.~952165.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\appendix
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Particular case of the $2\times2$ matrix}
\label{app:A}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
For the simplest case of a two-state system, the fundamental equation \eqref{eqfond} reads
$$
{\cal I}=
@@ -1178,13 +1215,14 @@ the equation has been slightly generalized to the case of a general vector for t
\ee
where $\ket{1}$ and $\ket{2}$ denote the two states.
Let us choose a single-state domain for both states, namely ${\cal D}_1=\{\ket{1}\}$ and ${\cal D}_2=\{\ket{2}\}$.
Note that the single exit state for each state is the other state.
Thus, there are only two possible deterministic ``alternating'' paths, namely either $\ket{1} \to \ket{2} \to \ket{1} \to \cdots$ or $\ket{2} \to \ket{1} \to \ket{2} \to \cdots$
We introduce the following quantities
\begin{align}
A_1 & = \langle 1| P_1 \frac{1}{H-E} P_1 (-H)(1-P_1)|2\rangle,
\\
A_2 & = \langle 2| P_2 \frac{1}{H-E} P_2 (-H) (1-P_2)|1\rangle,
\label{defA}
\end{align}
and
\be
C_1= \langle 1| P_1 \frac{1}{H-E} P_1 |\Psi\rangle \;\;\;