Although this work presents the method for finite linear spaces, it can be generalized without fundamental difficulties to continuous configuration spaces.
Diffusion Monte Carlo (DMC) is a class of stochastic methods for evaluating the ground-state properties of quantum systems.
They have been extensively used in virtually all domains of physics and chemistry where the many-body quantum problem plays a central role (condensed-matter physics,\cite{Foulkes_2001,Kolorenc_2011} quantum liquids,\cite{Holzmann_2006} nuclear physics,\cite{Carlson_2015,Carlson_2007} theoretical chemistry,\cite{Austin_2012} etc.).
DMC can be used either for systems defined in a continuous configuration space (typically, a set of particles moving in space) for which the Hamiltonian operator is defined in a Hilbert space of infinite dimension or systems defined in a discrete configuration space where the Hamiltonian reduces to a matrix.
Here, we shall consider only the discrete case, that is, the general problem of calculating the lowest eigenvalue and/or eigenstate of a (very large) matrix.
In essence, DMC is based on \textit{stochastic} power methods, a family of old and widely employed numerical approaches able to extract the largest or smallest eigenvalues of a matrix (see, \eg, Ref.~\onlinecite{Golub_2012}).
These approaches are particularly simple as they merely consist in applying a given matrix (or some simple function of it) as many times as required on some arbitrary vector belonging to the linear space.
Thus, the basic step of the corresponding algorithm essentially reduces to successive matrix-vector multiplications.
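As a minimal illustration of this basic step, the power iteration with an operator of the form $\Id-\tau(H-E\Id)$ (introduced formally below) can be sketched in a few lines; the toy matrix and parameters here are arbitrary choices, not taken from the applications discussed later:

```python
import numpy as np

def power_method_ground_state(H, tau=0.01, n_iter=5000, E_ref=0.0):
    """Repeatedly apply T = Id - tau*(H - E_ref*Id) to an arbitrary
    starting vector; for tau small enough this converges to the
    eigenvector of H with the lowest eigenvalue."""
    M = H.shape[0]
    T = np.eye(M) - tau * (H - E_ref * np.eye(M))
    psi = np.ones(M)                 # arbitrary start, <Phi_0|Psi_0> != 0
    for _ in range(n_iter):
        psi = T @ psi
        psi /= np.linalg.norm(psi)   # normalize to avoid over/underflow
    return psi @ H @ psi, psi        # Rayleigh quotient and eigenvector

# Toy symmetric "Hamiltonian" (eigenvalues -1, 0.5, and 2)
H = np.array([[ 0.0, -1.0,  0.0],
              [-1.0,  0.5, -1.0],
              [ 0.0, -1.0,  1.0]])
E0, phi0 = power_method_ground_state(H)   # E0 is close to -1
```

The successive matrix-vector products are all the algorithm needs; the normalization step only prevents numerical over/underflow and does not affect the converged direction.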
In practice, power methods are employed in more sophisticated implementations, such as, \eg, the Lanczos algorithm (based on Krylov subspaces) \cite{Golub_2012} or Davidson's method, in which a diagonal preconditioning is performed. \cite{Davidson_1975}
When the size of the matrix is too large, matrix-vector multiplications become unfeasible and probabilistic techniques to sample only the most important contributions of the matrix-vector product are required.
This is the basic idea of DMC.
There exist several variants of DMC known under various names: pure DMC, \cite{Caffarel_1988} DMC with branching, \cite{Reynolds_1982} reptation Monte Carlo, \cite{Baroni_1999} stochastic reconfiguration Monte Carlo, \cite{Sorella_1998,Assaraf_2000} etc.
All the ideas presented in this work can be adapted without much difficulty to the other variants, so the denomination DMC must ultimately be understood here as a generic name for this broad class of methods.
Without entering into the mathematical details (which are presented below), the main ingredient allowing DMC to perform matrix-vector multiplications probabilistically is the stochastic matrix (or transition-probability matrix) that generates stepwise a series of states over which statistical averages are evaluated.
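Schematically, a matrix-vector product can be estimated stochastically by letting walkers hop according to a transition-probability matrix while carrying the leftover row normalization as a weight. The following is only a toy sketch with a nonnegative matrix of arbitrary numbers, not the full importance-sampled scheme derived in the next section:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy matrix T with nonnegative entries (for simplicity of the sketch)
T = np.array([[0.5, 0.3, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.5]])
row_norm = T.sum(axis=1)            # weight left over after normalization
p = T / row_norm[:, None]           # stochastic matrix: rows sum to 1

def sample_row_of_TN(i0, N, n_walkers=50000):
    """Monte Carlo estimate of the row (T^N)_{i0,:}: each walker hops
    with the stochastic matrix p and accumulates the product of the
    row norms of the visited states."""
    M = T.shape[0]
    acc = np.zeros(M)
    for _ in range(n_walkers):
        i, w = i0, 1.0
        for _ in range(N):
            w *= row_norm[i]        # weight of the current state's row
            i = rng.choice(M, p=p[i])
        acc[i] += w
    return acc / n_walkers

est = sample_row_of_TN(0, N=4)
exact = np.linalg.matrix_power(T, 4)[0]   # est ~ exact within noise
```

Only the states actually visited by the walkers contribute, which is what makes the scheme viable when the matrix is far too large to store.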
The critical aspect of any Monte Carlo scheme is the amount of computational effort required to reach a given statistical error.
Two important avenues for decreasing the error are the use of variance-reduction techniques (for example, by introducing improved estimators \cite{Assaraf_1999A}) and the improvement of the quality of the sampling (minimization of the correlation time between states).
Another possibility, at the heart of the present work, is to integrate out exactly some parts of the dynamics, thus reducing the number of degrees of freedom and, hence, the amount of statistical fluctuations.
In previous works,\cite{Assaraf_1999B,Caffarel_2000} it has been shown that the probability for a walker to stay a certain amount of time in the same state obeys a Poisson law and that the on-state dynamics can be integrated out to generate an effective dynamics connecting only different states with some renormalized estimators for the properties.
Here, we extend this idea to the general case where a walker remains a certain amount of time within a finite domain no longer restricted to a single state.
It is shown how to define an effective stochastic dynamics describing walkers moving from one domain to another.
The equations of the effective dynamics are derived and a numerical application for a model (one-dimensional) problem is presented.
In particular, the numerical application shows that the statistical convergence of the energy can be greatly enhanced when domains associated with large average trapping times are considered.
It should be noted that the use of domains in quantum Monte Carlo is not new.
In their pioneering work, \cite{Kalos_1974} Kalos and collaborators introduced the so-called domain Green's function Monte Carlo approach in continuous space that they applied to a system of bosons with hard-sphere interaction.
The domain used was the Cartesian product of small spheres around each particle, the Hamiltonian being approximated by the kinetic part only within the domain.
Some time later, Kalos proposed to extend these ideas to more general domains, such as rectangular and/or cylindrical ones. \cite{Kalos_2000} In both works, the size of the domains vanishes in the limit of a vanishing time step.
Here, the domains are of arbitrary size, thus greatly increasing the efficiency of the approach.
Finally, note that some general equations for arbitrary domains in continuous space have also been proposed in Ref.~\onlinecite{Assaraf_1999B}.
The paper is organized as follows.
Section \ref{sec:DMC} presents the basic equations and notations of DMC.
First, the path integral representation of the Green's function is given in Subsec.~\ref{sec:path}.
The probabilistic framework allowing the Monte Carlo calculation of the Green's function is presented in Subsec.~\ref{sec:proba}.
Section \ref{sec:DMC_domains} is devoted to the use of domains in DMC.
where $\Id$ is the identity operator, $\tau$ a small positive parameter playing the role of a time step, $E$ some arbitrary reference energy, and $H$ the Hamiltonian operator. For any initial vector $\ket{\Psi_0}$ such that $\braket{\Phi_0}{\Psi_0}\ne0$ and for $\tau$ sufficiently small, we have
where $\ket{\Phi_0}$ is the ground-state wave function, \ie, $H \ket{\Phi_0}= E_0\ket{\Phi_0}$.
The equality in Eq.~\eqref{eq:limTN} holds up to a global phase factor playing no role in physical quantum averages.
At large but finite $N$, the vector $T^N \ket{\Psi_0}$ differs from $\ket{\Phi_0}$ only by an exponentially small correction, making it straightforward to extrapolate the finite-$N$ results to $N \to\infty$.
The denomination ``time-dependent Green's matrix'' is used here since $G$ may be viewed as a short-time approximation of the imaginary-time evolution operator $e^{-N\tau H}$, which is usually referred to as the imaginary-time Green's function.
In quantum physics, Eq.~\eqref{eq:cn} is referred to as the path-integral representation of the Green's matrix (or function).
The series of states $\ket{i_0}, \ldots,\ket{i_N}$ is interpreted as a ``path'' in the Hilbert space starting at vector $\ket{i_0}$ and ending at vector $\ket{i_N}$, where the index $k$ of the intermediate states $\ket{i_k}$ plays the role of a time index.
Each path is associated with a weight $\prod_{k=0}^{N-1} T_{i_{k} i_{k+1}}$ and the path integral expression of $G$ can be recast in the more suggestive form as follows:
In the limit $N \to\infty$, the $i_N$th component of the ground-state wave function (obtained as $\lim_{N \to\infty} G^{(N)}_{i_0 i_N}$) is the weighted sum over all possible paths arriving at vector $\ket{i_N}$.
When the size of the linear space is small, the explicit calculation of the full sums involving $M^N$ terms (where $M$ is the size of the Hilbert space) can be performed.
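For such a small space, the path sum can be checked directly against repeated matrix multiplication. The sketch below enumerates all paths for a toy $2\times2$ matrix and accumulates the weights $\prod_{k=0}^{N-1} T_{i_k i_{k+1}}$:

```python
import itertools
import numpy as np

T = np.array([[0.7, 0.2],     # toy matrix on a 2-state space
              [0.3, 0.6]])
M, N = T.shape[0], 5

# Brute-force sum over all paths i_0 -> i_1 -> ... -> i_N
G = np.zeros((M, M))
for path in itertools.product(range(M), repeat=N + 1):
    w = 1.0
    for k in range(N):
        w *= T[path[k], path[k + 1]]   # weight of one path
    G[path[0], path[-1]] += w

# G now equals the matrix power T^N
```

The exponential growth of the number of paths with $N$ is precisely why the explicit sum is only feasible for very small spaces.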
To derive a probabilistic expression for the Green's matrix, we introduce a guiding wave function, $\ket{\PsiG}$, having strictly positive components, \ie, $\PsiG_i > 0$, in order to perform a similarity transformation to the operators $G^{(N)}$ and $T$, as follows:
Note that, thanks to the properties of similarity transformations, the path integral expression relating $G^{(N)}$ and $T$ [see Eq.~\eqref{eq:G}] remains unchanged for $\bar{G}^{(N)}$ and $\bar{T}$.
By construction, the operator $H^+-\EL^+\Id$ in the definition of the operator $T^+$ [see Eq.~\eqref{eq:T+}] has been chosen to admit $\ket{\PsiG}$ as a ground-state wave function with zero eigenvalue, \ie, $\qty(H^+-\EL^+\Id)\ket{\PsiG}=0$, leading to the relation
As readily seen in Eq.~\eqref{eq:pij}, the off-diagonal terms of the stochastic matrix are positive, while the diagonal terms can be made positive if $\tau$ is chosen sufficiently small via the condition
At first sight, the condition defining the maximum value of $\tau$ [see Eq.~\eqref{eq:cond}] may appear rather tight since, for very large matrices, it may impose an extremely small value of the time step.
However, in practice, during the simulation only a (tiny) fraction of the linear space is sampled, and the maximum absolute value of $H^+_{ii}-(\EL^+)_{i}$ for the sampled states turns out to be not too large.
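This construction can be checked numerically. In the sketch below (toy Hamiltonian and guiding vector, both arbitrary), the local energy is defined componentwise by $(\EL^+)_i=(H\PsiG)_i/\PsiG_i$ so that $(H-\EL^+\Id)\ket{\PsiG}=0$ holds by construction, and the similarity-transformed rows are verified to be proper probabilities:

```python
import numpy as np

tau = 0.05
H = np.array([[ 0.0, -1.0,  0.0],     # toy Hamiltonian with nonpositive
              [-1.0,  0.5, -1.0],     # off-diagonal elements
              [ 0.0, -1.0,  1.0]])
psi_g = np.array([1.0, 0.8, 0.5])     # guiding vector, strictly positive

# Local energy: (E_L)_i = (H psi_g)_i / (psi_g)_i, so that
# (H - diag(E_L)) psi_g = 0 by construction
E_L = (H @ psi_g) / psi_g

# T = Id - tau*(H - diag(E_L)); the similarity transform by psi_g gives
# p[i, j] = psi_g[j] * T[i, j] / psi_g[i], a stochastic matrix
T = np.eye(3) - tau * (H - np.diag(E_L))
p = psi_g[None, :] * T / psi_g[:, None]
```

With the chosen $\tau$, all diagonal entries stay positive; increasing $\tau$ beyond the threshold set by $\max_i\,[H_{ii}-(\EL)_i]$ would produce negative diagonal elements and break the probabilistic interpretation.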
This new transition-probability matrix with positive entries reduces to Eq.~\eqref{eq:pij} when $T^+_{ij}$ is positive, since $\sum_j \PsiG_{j} T^+_{ij}=1$.
Note that, instead of using a single walker, it is common to introduce a population of independent walkers and to calculate the averages over this population.
In addition, thanks to the ergodic property of the stochastic matrix (see, for example, Ref.~\onlinecite{Caffarel_1988}), a unique path of infinite length from which sub-paths of length $N$ can be extracted may also be used.
We shall not dwell here on these practical details, which are discussed, for example, in Refs.~\onlinecite{Foulkes_2001,Kolorenc_2011}.
During the simulation, walkers move from state to state with the possibility of being trapped a certain number of times in the same state before
exiting to a different one. This fact can be exploited to integrate out some part of the dynamics, thus reducing the statistical
fluctuations. This idea was proposed some time ago and applied to the SU($N$) one-dimensional Hubbard model.\cite{Assaraf_1999A,Assaraf_1999B,Caffarel_2000}
Considering a given state $\ket{i}$, the probability that a walker remains exactly $n$ times in $\ket{i}$ (with $1\le n < \infty$) and then exits to a different state $j$ (with $j \neq i$) is
To do so, we associate with each state $\ket{i}$ a set of states, called the domain of $\ket{i}$ and denoted $\cD_i$, consisting of the state $\ket{i}$ plus a certain number of other states.
The only important condition is that the set of all domains ensures the ergodicity property of the effective stochastic dynamics, that is, starting from any state, there is a non-zero probability to reach any other state in a finite number of steps.
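For the special case of a single-state domain, the trapping-time statistics can be checked with a short simulation. In the discrete dynamics used here, the number of consecutive stays follows a geometric law, $P(n)=p_{i\to i}^{\,n-1}\,(1-p_{i\to i})$; the value of $p_{i\to i}$ below is an arbitrary toy choice:

```python
import numpy as np

rng = np.random.default_rng(42)

p_stay = 0.75        # toy diagonal element p_{i->i} of the stochastic matrix
n_samples = 100000

# Draw trapping times: the walker stays while a uniform draw < p_stay
times = np.empty(n_samples, dtype=int)
for s in range(n_samples):
    n = 1
    while rng.random() < p_stay:
        n += 1
    times[s] = n

# Geometric law: P(n) = p_stay**(n-1) * (1 - p_stay), mean 1/(1 - p_stay)
mean_exact = 1.0 / (1.0 - p_stay)    # = 4 for p_stay = 0.75
```

Integrating out these stays analytically, instead of sampling them one by one as above, is precisely what removes their contribution to the statistical noise.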
where $\ket{I_0}=\ket{i_0}$ is the initial state, $n_0$ is the number of times the walker remains in the domain of $\ket{i_0}$ (with $1\le n_0\le N+1$), $\ket{I_1}$ is the first exit state that does not belong to $\cD_{i_0}$, $n_1$ is the number of times the walker remains in $\cD_{i_1}$ (with $1\le n_1\le N+1-n_0$), $\ket{I_2}$ is the second exit state, and so on.
Here, the integer $p$ (with $0\le p \le N$) indicates the number of exit events occurring along the path.
The two extreme values, $p=0$ and $p=N$, correspond to the cases where the walker remains in the initial domain during the entire path, and where the walker exits a domain at each step, respectively.
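The compression of an ordinary path into the variables $(\ket{I_k},n_k)$ can be sketched as follows, with toy integer states and hand-picked domains (`domain_of` maps each state to the set of states of its domain):

```python
def decompose_path(path, domain_of):
    """Compress a path (list of states) into pairs (I_k, n_k): the state
    I_k entering a new domain and the number of steps n_k spent inside
    the domain of I_k before the next exit."""
    pairs, k = [], 0
    while k < len(path):
        entry, n = path[k], 1
        while k + n < len(path) and path[k + n] in domain_of[entry]:
            n += 1
        pairs.append((entry, n))
        k += n
    return pairs

# Toy example with four states and overlapping domains
domain_of = {0: {0, 1}, 1: {0, 1}, 2: {2}, 3: {2, 3}}
pairs = decompose_path([0, 1, 0, 2, 2, 3], domain_of)
# pairs == [(0, 3), (2, 2), (3, 1)]; the n_k sum to the path length
```

Note that the representation depends on the entry state of each domain, so overlapping domains pose no difficulty: the walker's history determines unambiguously which domain is currently active.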
In this section we generalize the path-integral expression of the Green's matrix, Eqs.~\eqref{eq:G} and \eqref{eq:cn_stoch}, to the case where domains are used.
Starting from Eq.~\eqref{eq:cn} and using the representation of the paths in terms of exit states and trapping times [see Eq.~\eqref{eq:eff_series}], we get
where $\cC_p$ is the set of paths with $p$ exit states, $\ket{I_k}$, and trapping times $n_k$ with the constraints that $\ket{I_k}\notin\cD_{I_{k-1}}$, $1\le n_k \le N+1$, and $\sum_{k=0}^p n_k= N+1$.
This expression is the path-integral representation of the Green's matrix using only the variables $(\ket{I_k},n_k)$ of the effective dynamics defined over the set of domains.
The standard formula derived above [see Eq.~\eqref{eq:G}] may be considered as the particular case where the domain associated with each state reduces to the state itself.
To express the fundamental equation of $G$ under the form of a probabilistic average, we rely on the importance-sampled version of Eq.~\eqref{eq:G}, \ie,
The aim of this section is to show that it is possible to go further by integrating out the trapping times, $n_k$, of the preceding expressions, thus defining a new effective stochastic dynamics involving now only the exit states.
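The key algebraic fact behind such an integration is that an unconstrained sum over a trapping time is a geometric series of the domain-restricted matrix, which resums to a matrix inverse, $\sum_{n\ge0} T_\cD^n=(\Id-T_\cD)^{-1}$, whenever the spectral radius of $T_\cD$ is below one. A toy numerical check:

```python
import numpy as np

# Toy restriction T_D of the transition matrix to a two-state domain
# (eigenvalues 0.8 and 0.5, so the geometric series converges)
T_D = np.array([[0.6, 0.2],
                [0.1, 0.7]])

# Closed form: sum_{n>=0} T_D^n = (Id - T_D)^{-1}
G_D = np.linalg.inv(np.eye(2) - T_D)

# Partial sums of the series approach the same matrix
S, P = np.zeros((2, 2)), np.eye(2)
for _ in range(200):
    S += P
    P = P @ T_D
```

This is the kind of resummation that replaces the explicit sampling of the $n_k$'s by a single matrix inversion per domain.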
The first one, called the pedestrian way, consists in starting from the preceding time expression for $G$ and performing the explicit integration over the $n_k$'s.
The remarkable property is that, thanks to the summation over $N$ up to infinity, the constrained multiple sums appearing in Eq.~\eqref{eq:Gt} can be factorized in terms of a product of unconstrained single sums, as follows
Therefore, in practice, we compute the value of $\cE(E)$ for several values of $E$ and fit these data using a linear, quadratic, or more complicated function of $E$ in order to obtain, via extrapolation, an estimate of $E_0$.
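The extrapolation step might be sketched as follows. We assume here, as one natural prescription (not necessarily the exact one used in the calculations), that $E_0$ is estimated as the fixed point $\cE(E)=E$ of the fitted function; the data points are synthetic:

```python
import numpy as np

# Synthetic data standing in for measured values of calE(E)
E_vals = np.array([-0.80, -0.795, -0.79, -0.785, -0.78])
calE_vals = 0.05 * (E_vals + 0.768) - 0.768   # fabricated linear model

# Linear fit calE(E) = a*E + b, then solve the fixed point calE(E0) = E0
a, b = np.polyfit(E_vals, calE_vals, 1)
E0_estimate = b / (1.0 - a)        # recovers -0.768 for this toy model
```

A quadratic or two-component fit proceeds identically, with the fixed-point equation solved numerically instead of in closed form.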
where $\expval{ i j }$ denotes the summation over two neighboring sites, $\hat{a}^+_{i\sigma}$ ($\hat{a}_{i\sigma}$) is the fermionic creation (annihilation) operator of a spin-$\sigma$ electron (with $\sigma$ = $\uparrow$ or $\downarrow$) on site $i$, $\hat{n}_{i\sigma}=\hat{a}^+_{i\sigma}\hat{a}_{i\sigma}$ the number operator, $t$ the hopping amplitude, and $U$ the on-site Coulomb repulsion.
It is then possible to equate the guiding and trial vectors, \ie, $\ket{c^+}=\ket{c_\text{T}}$.
As trial wave function, we shall employ a generalization of the Gutzwiller wave function \cite{Gutzwiller_1963}
\be
\braket{ n }{ c_\text{T}}= e^{-\alpha n_D(n)-\beta n_A(n)},
\ee
where $n_D(n)$ is the number of doubly occupied sites for the configuration $\ket{n}$ and $n_A(n)$ the number of nearest-neighbor antiparallel pairs defined as
\be
n_A(n)= \sum_{\expval{ i j }} n_{i\uparrow} n _{j\downarrow}\delta_{n_{i\uparrow},1}\delta_{n_{j\downarrow},1}.
\ee
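The trial amplitude can be evaluated directly from the occupation numbers. The sketch below assumes an open chain and counts both orientations of each nearest-neighbor bond; both conventions are our choices for the illustration:

```python
import numpy as np

def trial_amplitude(up, down, alpha, beta):
    """<n|c_T> = exp(-alpha*n_D(n) - beta*n_A(n)) for a 1D configuration;
    `up` and `down` are 0/1 occupation arrays over the sites."""
    up, down = np.asarray(up), np.asarray(down)
    n_D = int(np.sum(up * down))              # doubly occupied sites
    # nearest-neighbor antiparallel pairs (open chain, both orientations)
    n_A = int(np.sum(up[:-1] * down[1:]) + np.sum(down[:-1] * up[1:]))
    return np.exp(-alpha * n_D - beta * n_A)

# Neel state on 4 sites: no double occupation, 3 antiparallel bonds
amp = trial_amplitude([1, 0, 1, 0], [0, 1, 0, 1], alpha=1.292, beta=0.552)
# amp == exp(-3 * 0.552)
```

For $\alpha,\beta>0$, the amplitude penalizes double occupations and nearest-neighbor antiparallel pairs, which is the qualitative content of the Gutzwiller-type ansatz.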
The parameters $\alpha$ and $\beta$ are optimized by minimizing the variational energy of the trial wave function, \ie,
As discussed above, the efficiency of the method depends on the choice of states forming each domain.
As a general guiding principle, it is advantageous to build domains associated with a large average trapping time in order to integrate out the most important part of the Green's matrix.
Here, as a first illustration of the method, we shall consider the large-$U$ regime of the Hubbard model where the construction of such domains is rather natural.
Indeed, at large $U$ and half-filling, the Hubbard model approaches the Heisenberg limit where only the $2^N$ states with no double occupancy, $n_D(n)=0$, have a significant weight in the wave function.
For simplicity, and to reduce the number of diagonalizations to perform, we shall consider only one non-trivial domain, called here the main domain and denoted $\cD$.
This domain will be chosen to be common to all states belonging to it, that is,
which means that $\cD$ contains the set of states having up to $n_D^\text{max}$ double occupations and, for each state with a number of double occupations equal to $m$, a number of nearest-neighbor antiparallel pairs between $n_A^\text{min}(m)$ and $n_A^\text{max}(m)$.
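A membership test for such a main domain can be sketched as follows (the window values are hypothetical, chosen only for the illustration):

```python
def in_main_domain(n_D, n_A, n_D_max, windows):
    """True if a configuration with n_D double occupations and n_A
    nearest-neighbor antiparallel pairs belongs to the main domain;
    `windows[m]` is the pair (n_A_min(m), n_A_max(m))."""
    if n_D > n_D_max:
        return False
    lo, hi = windows[n_D]
    return lo <= n_A <= hi

# Hypothetical windows for n_D = 0 and n_D = 1
windows = {0: (2, 3), 1: (1, 3)}
# e.g., a Neel-like configuration (n_D = 0, n_A = 3) is inside
```

Such a test is all that the effective dynamics needs at run time: a walker's configuration is classified by its two counters $n_D$ and $n_A$ alone.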
The size of the linear space is ${\binom{4}{2}}^2=36$ and the ground-state energy obtained by exact diagonalization is $E_0=-0.768068\ldots$.
The two variational parameters of the trial vector have been optimized and fixed at the values $\alpha=1.292$ and $\beta=0.552$, with a variational energy of $E_\text{T}=-0.495361\ldots$.
In what follows $\ket{I_0}$ will be systematically chosen as one of the two N\'eel states, \eg, $\ket{I_0}=\ket{\uparrow,\downarrow, \uparrow,\ldots}$.
In contrast, when $E$ approaches the exact energy, a slower convergence is observed, as expected from the divergence of the matrix elements of the Green's matrix at $E=E_0$ where the expansion does not converge at all.
Note the oscillations of the curves as a function of $p$ due to a parity effect specific to this system.
In practice, this is not a serious problem since smoothly convergent behavior is nevertheless observed separately for the even- and odd-parity curves.
The first line of the table gives the results obtained using a minimal single-state domain for all states, and the last line those for the maximal domain containing the full linear space.
The size of the various domains is given in the second column, the average trapping time for the state $\ket{I_0}$ in the third column, and an estimate of the speed of convergence of the $p$-expansion for the energy in the fourth column.
To quantify the rate of convergence, we report the quantity $p_\text{conv}$, defined as the smallest value of $p$ for which the energy is converged to six decimal places.
The smaller $p_\text{conv}$, the better the convergence.
For example, the domain $\cD(0,3)\cup\cD(1,0)$ has an average trapping time for the N\'eel state of $6.2$, while the domain $\cD(0,3)\cup\cD(1,1)$, which has almost the same number of states (28 states), has an average trapping time about 6 times longer.
The exact value, $\cE(E=-1)=-0.75272390\ldots$, can be found at the last row of Table \ref{tab1} for the case of a domain corresponding to the full space.
Using this domain leads to a reduction in the statistical error of about three orders of magnitude, nicely illustrating the critical importance of the domains employed.
As a general rule, it is always good to avoid the Monte Carlo calculation of a quantity which is computable analytically.
Here, we apply this idea to the case of the energy, Eq.~\eqref{eq:calE}, where the first $p$ components can be evaluated exactly up to some maximal value of $p$, denoted $p_\text{ex}$.
Table \ref{tab2} reports the statistical fluctuations of the energy for the simulation of Table \ref{tab1}, both for the case of a single-state main domain and for the domain having the largest average trapping time, namely $\cD(0,1)\cup\cD(1,1)$ (see Table \ref{tab1}).
Results show that it is indeed interesting to compute exactly as many components as possible.
For the single-state domain, a factor of 2 reduction in the statistical error is obtained when passing from no analytical computation, $p_\text{ex}=0$, to the case where eight components of $H_p$ and $S_p$ are computed exactly.
For the best domain, the impact is much greater, with a reduction of about three orders of magnitude in the statistical error.
Table \ref{tab3} reports the energies converged as a function of $p$, with their statistical error on the last digit, for $E=-0.8$, $-0.795$, $-0.79$, $-0.785$, and $-0.78$.
The values are displayed in Fig.~\ref{fig2}.
As seen in the figure, the behavior of $\cE$ as a function of $E$ is very close to linear.
The extrapolated values obtained from the five values of the energy with the three fitting functions are reported.
Using the linear fitting function leads to an energy of $-0.7680282(5)$, to be compared with the exact value of $-0.768068\ldots$.
A small bias of about $4\times10^{-5}$ is observed.
This bias vanishes within the statistical error with the quadratic fit.
The same holds when using the two-component function.
Our final value agrees with the exact value to about six decimal places.
Table \ref{tab4} shows the evolution of the average trapping times and extrapolated energies as a function of $U$ when using $\cD(0,1)\cup\cD(1,0)$ as the main domain.
We also report the variational and exact energies together with the values of the optimized parameters of the trial wave function.
As $U$ increases, the configurations with zero or one double occupation become more and more predominant, and the average trapping time increases.
For very large values of $U$ (say, $U \ge12$) the increase of $\bar{t}_{I_0}$ becomes particularly steep.
No careful construction of domains maximizing the average trapping time has been performed; we have merely chosen domains of reasonable size (not more than 2682 states) by keeping the number of double occupations small (only $n_D=0,1$) and the number of nearest-neighbor antiparallel pairs not too small.
As seen, as the number of sites increases, the average trapping time for the domains chosen decreases.
This point is of course undesirable since the efficiency of the approach may gradually deteriorate when considering large systems.
A more elaborate way of constructing domains is clearly called for.
The exact DMC energies extrapolated using the two-component function are also reported.
Similarly to what has been done for $N=4$, the extrapolation is performed using about five values of the reference energy.
The extrapolated DMC energies are in full agreement with the exact value within error bars.
However, an increase of the statistical error is observed when the size increases.
To get smaller error bars, we need better trial wave functions, better domains, and also longer simulation times.
Of course, it will also be particularly interesting to take advantage of the fully parallelizable character of the algorithm to get much lower error bars.
In this work, it has been shown how to integrate out exactly --- within a DMC framework --- the contribution of all stochastic trajectories trapped in some given domains of the Hilbert space.
The corresponding equations have been derived in a general context.
In such a way, a new effective stochastic dynamics connecting only domains (and not the individual states) is defined.
A key property of this effective dynamics is that it does not depend on the ``shape'' of the domains used for each state.
Therefore, rather general domains (with or without overlap) can be considered.
To obtain the effective transition probability (which gives the probability of going from one domain to another) and the corresponding renormalized estimators, the domain Green's functions must be computed analytically; in practice, this means inverting matrices whose size equals the number of states in each sampled domain.
This is the main computationally intensive step of the present approach.
The efficiency of the method is directly related to the magnitude of the average time spent by the stochastic trajectories in each domain.
Being able to define domains with large average trapping times is the key aspect of the method since it may lead to some important reduction of the statistical error, as illustrated in our numerical applications.
Therefore, a trade-off has to be found between maximizing the average trapping time and minimizing the cost of computing the domain Green's functions.
In practice, there is no general rule to construct such domains.
For each system at hand, one needs to determine, on physical grounds, which spatial regions are preferentially sampled by the stochastic trajectories and to build domains of minimal size enclosing such regions.
In the first application presented here on the one-dimensional Hubbard model, we exploit the physics of the large-$U$ regime that is known to approach the Heisenberg limit where double occupations have small weights.
This simple example has been chosen to illustrate the various aspects of the approach.
Our goal is, of course, to tackle much larger systems, like those treated by state-of-the-art methods such as selected CI, \cite{Huron_1973,Harrison_1991,Giner_2013,Holmes_2016,Schriber_2016,Tubman_2020} FCIQMC, \cite{Booth_2009,Cleland_2010} AFQMC, \cite{Zhang_2003} or DMRG. \cite{White_1999,Chan_2011}
Here, we have mainly focused on the theoretical aspects of the approach.
In order to consider larger systems, an elaborate implementation of the present method is necessary to keep the cost of the simulation under control.
This is outside the scope of the present study and will be presented in a forthcoming work.
P.F.L., A.S., and M.C. acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No.~863481).
For the simplest case of a system containing only two states, $\ket{1}$ and $\ket{2}$, the fundamental equation given in Eq.~\eqref{eq:eqfond} simplifies to
To treat simultaneously the two possibilities for the final state, \ie, $\ket{I_N}=\ket{1}$ or $\ket{2}$, Eq.~\eqref{eq:eqfond} has been slightly generalized to the case of a general vector for the final state written as
Let us choose a single-state domain for both states, namely $\cD_1=\qty{\ket{1}}$ and $\cD_2=\qty{\ket{2}}$.
Note that, due to the simplicity of the present two-state model, there are only two possible deterministic ``alternating'' paths, namely, $\ket{1}\to\ket{2}\to\ket{1},\ldots$ and $\ket{2}\to\ket{1}\to\ket{2},\ldots$.
For the sake of convenience, we introduce the following quantities