it was shown that the probability for a walker to stay a certain amount of time in the same state obeys a Poisson law and that the on-state dynamics can be integrated out exactly, leading to an effective dynamics connecting only different states.
Although this work presents the method for finite linear spaces, it can be generalized without fundamental difficulties to continuous configuration spaces.
Diffusion Monte Carlo (DMC) is a class of stochastic methods for evaluating the ground-state properties of quantum systems.
They have been extensively used in virtually all domains of physics and chemistry where the many-body quantum problem plays a central role (condensed-matter physics,\cite{Foulkes_2001,Kolorenc_2011} quantum liquids,\cite{Holzmann_2006} nuclear physics,\cite{Carlson_2015,Carlson_2007} theoretical chemistry,\cite{Austin_2012} etc).
DMC can be used either for systems defined in a continuous configuration space (typically, a set of particles moving in space) for which the Hamiltonian operator is defined in a Hilbert space of infinite dimension or systems defined in a discrete configuration space where the Hamiltonian reduces to a matrix.
Here, we shall consider only the discrete case, that is, the general problem of calculating the lowest eigenvalue and/or eigenstate of a (very large) matrix.
In essence, DMC is based on \textit{stochastic} power methods, a family of well established numerical approaches able to extract the largest or smallest eigenvalues of a matrix (see, \eg, Ref.~\onlinecite{Golub_2012}).
These approaches are particularly simple as they merely consist in applying a given matrix (or some simple function of it) as many times as required on some arbitrary vector belonging to the linear space.
Thus, the basic step of the corresponding algorithm essentially reduces to successive matrix-vector multiplications.
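As a minimal illustration of this principle, the following Python sketch applies $T=\Id-\tau(H-E\Id)$ repeatedly to an arbitrary starting vector; the small tridiagonal "Hamiltonian", time step, and iteration count are hypothetical choices for the example, not quantities from this work:

```python
import numpy as np

# Toy 4x4 symmetric "Hamiltonian" (hypothetical values).
H = np.array([[ 0., -1.,  0.,  0.],
              [-1.,  1., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  3.]])

tau, E_ref = 0.1, 0.0                  # time step and reference energy
T = np.eye(4) - tau * (H - E_ref * np.eye(4))

psi = np.ones(4)                       # arbitrary starting vector
for _ in range(2000):                  # successive matrix-vector products
    psi = T @ psi
    psi /= np.linalg.norm(psi)         # normalize to avoid overflow

E0 = psi @ H @ psi                     # Rayleigh quotient -> lowest eigenvalue
assert abs(E0 - np.linalg.eigvalsh(H).min()) < 1e-8
```

For $\tau$ small enough, the dominant eigenvalue of $T$ corresponds to the lowest eigenvalue of $H$, so the iteration converges to the ground state.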
In practice, power methods are employed under more sophisticated implementations, such as, \eg, the Lanczos algorithm (based on Krylov subspaces) \cite{Golub_2012} or Davidson's method, where diagonal preconditioning is performed. \cite{Davidson_1975}
When the size of the matrix is too large, matrix-vector multiplications become unfeasible and probabilistic techniques to sample only the most important contributions of the matrix-vector product are required.
This is the basic idea of DMC.
There exist several variants of DMC known under various names: pure DMC, \cite{Caffarel_1988} DMC with branching, \cite{Reynolds_1982} reptation Monte Carlo, \cite{Baroni_1999} stochastic reconfiguration Monte Carlo, \cite{Sorella_1998,Assaraf_2000} etc.
However, all the ideas presented in this work can be adapted without too much difficulty to the other variants, so the denomination DMC must ultimately be understood here as a generic name for this broad class of methods.
Without entering into the mathematical details (which are presented below), the main ingredient of DMC in order to perform the matrix-vector multiplications probabilistically is to introduce a stochastic matrix (or transition probability matrix) that generates stepwise a series of states over which statistical averages are evaluated.
Two important avenues to decrease the error are the use of variance reduction techniques (for example, by introducing improved estimators \cite{Assaraf_1999A}) or to improve the quality of the sampling (minimization of the correlation time between states).
Another possibility, at the heart of the present work, is to integrate out exactly some parts of the dynamics, thus reducing the number of degrees of freedom and, hence, the amount of statistical fluctuations.
In previous works,\cite{Assaraf_1999B,Caffarel_2000} it has been shown that the probability for a walker to stay a certain amount of time in the same state obeys a Poisson law and that the on-state dynamics can be integrated out to generate an effective dynamics connecting only different states with some renormalized estimators for the properties.
Here, we extend this idea to the general case where a walker remains a certain amount of time in a finite domain no longer restricted to a single state.
In particular, we show that the statistical convergence of the energy can be greatly enhanced when domains associated with large average trapping times are considered.
It should be noted that the use of domains in quantum Monte Carlo is not new. Domains have been introduced within the context of the
Green's function Monte Carlo (GFMC) method pioneered by Kalos\cite{Kalos_1962,Kalos_1970}
and later developed and applied by Kalos and others.\cite{Kalos_1974,Ceperley_1979,Ceperley_1983,Moskowitz_1986}
In GFMC, an approximate Green's function that can be sampled is needed to propagate stochastically the wavefunction.
In the so-called domain GFMC version introduced in Refs.~\onlinecite{Kalos_1970} and \onlinecite{Kalos_1974}, the sampling is realized by using the restriction of the Green's function to a small domain consisting of the Cartesian product of small spheres around each particle, the potential being considered as constant within the domain.
Fundamentally, the method presented in this work is closely related to the domain GFMC approach, although the way we present the formalism in terms of walkers trapped within domains and derive the equations may appear different.
However, a key difference here is that we show how to use domains of arbitrary size, a point which can greatly enhance the efficiency of the simulations when domains are suitably chosen, as illustrated in our numerical applications.
Finally, from a general perspective, it is interesting to mention that the method proposed here illustrates how valuable and efficient the combination of stochastic and deterministic techniques can be.
Let us mention, for example, the semi-stochastic approach of Petruzielo {\it et al.}, \cite{Petruzielo_2012} two different hybrid algorithms for evaluating the second-order perturbation energy in selected configuration interaction methods, \cite{Garniron_2017b,Sharma_2017} the approach of Willow \textit{et al.} for computing stochastically second-order many-body perturbation energies, \cite{Willow_2012} or the zero variance Monte Carlo scheme for evaluating two-electron integrals in quantum chemistry. \cite{Caffarel_2019}
where $\Id$ is the identity operator, $\tau$ a small positive parameter playing the role of a time step, $E$ some arbitrary reference energy, and $H$ the Hamiltonian operator. For any initial vector $\ket{\Psi_0}$, provided that $\braket{\Phi_0}{\Psi_0}\ne0$ and for $\tau$ sufficiently small, we have
where $\ket{\Phi_0}$ is the ground-state wave function, \ie, $H \ket{\Phi_0}= E_0\ket{\Phi_0}$.
The equality in Eq.~\eqref{eq:limTN} holds up to a global phase factor playing no role in physical quantum averages.
At large but finite $N$, the vector $T^N \ket{\Psi_0}$ differs from $\ket{\Phi_0}$ only by an exponentially small correction, making it straightforward to extrapolate the finite-$N$ results to $N \to\infty$.
For example, in the important case of the energy, one can project out the vector $T^N \ket{\Psi_0}$ on some approximate vector, $\ket{\PsiT}$, as follows
The denomination ``time-dependent Green's matrix'' is used here since $G$ may be viewed as a short-time approximation of the (time-imaginary) evolution operator $e^{-N\tau H}$, which is usually referred to as the imaginary-time dependent Green's function.
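On a small matrix, the projected energy estimate introduced above can be checked deterministically; the $3\times3$ "Hamiltonian" and the trial and starting vectors below are hypothetical choices for the example:

```python
import numpy as np

H = np.array([[ 0., -1.,  0.],
              [-1.,  1., -1.],
              [ 0., -1.,  2.]])
tau = 0.1
T = np.eye(3) - tau * H          # reference energy E = 0 for simplicity

psi0 = np.ones(3)                # arbitrary starting vector
psiT = np.ones(3)                # simple trial vector

def E_proj(N):
    """Projected energy <psiT|H T^N|psi0> / <psiT|T^N|psi0>."""
    v = np.linalg.matrix_power(T, N) @ psi0
    return (psiT @ H @ v) / (psiT @ v)

E0 = np.linalg.eigvalsh(H).min()                   # exact: 1 - sqrt(3)
errs = [abs(E_proj(N) - E0) for N in (10, 50, 200)]
assert errs[0] > errs[1] > errs[2]   # exponentially shrinking correction
assert errs[2] < 1e-10
```

The error decays geometrically with $N$, consistent with the exponentially small correction mentioned above.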
Introducing the set of $N-1$ intermediate states, $\{\ket{i_k}\}_{1\le k \le N-1}$, between each $T$ in $T^N$, $G^{(N)}$ can be written in the following expanded form
In quantum physics, Eq.~\eqref{eq:cn} is referred to as the path-integral representation of the Green's matrix (or function).
The series of states $\ket{i_0}, \ldots,\ket{i_N}$ is interpreted as a ``path'' in the Hilbert space starting at vector $\ket{i_0}$ and ending at vector $\ket{i_N}$, where $k$ plays the role of a time index.
Each path is associated with a weight $\prod_{k=0}^{N-1} T_{i_{k} i_{k+1}}$ and the path integral expression of $G$ can be recast in the more suggestive form as follows:
In the limit $N \to\infty$, the $i_N$th component of the ground-state wave function (obtained as $\lim_{N \to\infty} G^{(N)}_{i_0 i_N}$) is the weighted sum over all possible paths arriving at vector $\ket{i_N}$.
When the size of the linear space is small, the explicit calculation of the full sums involving $M^N$ terms (where $M$ is the size of the Hilbert space) can be performed.
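For instance, the equivalence between the explicit path sum and the matrix power can be verified directly on a toy matrix (all values below are hypothetical):

```python
import numpy as np
from itertools import product

# Small matrix so that all M**N paths can be enumerated explicitly.
T = np.array([[0.9, 0.2],
              [0.2, 0.7]])
M, N = 2, 6
i0, iN = 0, 1

# Sum of path weights prod_k T[i_k, i_{k+1}] over all intermediate states.
path_sum = 0.0
for path in product(range(M), repeat=N - 1):
    states = (i0, *path, iN)
    w = 1.0
    for a, b in zip(states[:-1], states[1:]):
        w *= T[a, b]
    path_sum += w

assert np.isclose(path_sum, np.linalg.matrix_power(T, N)[i0, iN])
```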
Path integral representation of the exact coefficient $c_i=\braket{i}{\Phi_0}$ of the ground-state wave function $\ket{\Phi_0}$ obtained as an infinite sum of paths starting from $\ket{i_0}$ and ending at $\ket{i}$ [see Eq.~\eqref{eq:G}].
To derive a probabilistic expression for the Green's matrix, we introduce a guiding wave function $\ket{\PsiG}$ having strictly positive components, \ie, $\PsiG_i > 0$, in order to perform a similarity transformation of the operators $G^{(N)}$ and $T$,
Note that, thanks to the properties of similarity transformations, the path integral expression relating $G^{(N)}$ and $T$ [see Eq.~\eqref{eq:G}] remains unchanged for $\bar{G}^{(N)}$ and $\bar{T}$.
By construction, the operator $H^+-\EL^+\Id$ in the definition of $T^+$ [see Eq.~\eqref{eq:T+}] has been chosen to admit $\ket{\PsiG}$ as a ground-state wave function with zero eigenvalue, \ie, $\qty(H^+- E_L^+\Id)\ket{\PsiG}=0$, leading to the relation
As readily seen in Eq.~\eqref{eq:pij}, the off-diagonal terms of the stochastic matrix are positive, while the diagonal terms can be made positive if $\tau$ is chosen sufficiently small via the condition
At first sight, the condition defining the maximum value of $\tau$ [see Eq.~\eqref{eq:cond}] may appear rather tight since, for very large matrices, it may impose an extremely small value of the time step.
However, in practice, during the simulation only a (tiny) fraction of the linear space is sampled, and the maximum absolute value of $H^+_{ii}-(\EL^+)_{i}$ for the sampled states turns out to be not too large.
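A minimal sketch of this construction follows; the toy $H^+$ (with non-positive off-diagonal elements so that all entries of the stochastic matrix are non-negative), the guiding vector, and the safety factor $0.9$ in the time step are illustrative assumptions:

```python
import numpy as np

Hp = np.array([[ 1.0, -0.5,  0.0],
               [-0.5,  0.0, -0.3],
               [ 0.0, -0.3,  2.0]])
psiG = np.array([1.0, 2.0, 0.5])          # guiding vector, strictly positive

# Local energy (E_L^+)_i = sum_j H+_ij psiG_j / psiG_i, so that
# (H+ - E_L^+ Id)|psiG> = 0 by construction.
EL = (Hp @ psiG) / psiG

# Time step small enough for positive diagonal entries.
tau = 0.9 / np.max(np.abs(np.diag(Hp) - EL))

Tp = np.eye(3) - tau * (Hp - np.diag(EL))
p = (psiG[None, :] * Tp) / psiG[:, None]   # p_{i->j} = psiG_j T+_ij / psiG_i

assert np.all(p >= 0)                      # a genuine stochastic matrix:
assert np.allclose(p.sum(axis=1), 1.0)     # non-negative rows summing to 1
```

Because $\ket{\PsiG}$ is an exact zero-eigenvalue state of $H^+-\EL^+\Id$, the rows of $p$ sum to one automatically.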
This new transition probability matrix with positive entries reduces to Eq.~\eqref{eq:pij} when $T^+_{ij}$ is positive as $\sum_j \PsiG_{j} T^+_{ij}=1$.
Now, we need to make the connection between the transition probability matrix, $p_{i \to j}$, defined from the approximate Hamiltonian $H^{+}$ via $T^+$ and the operator $T$ associated with the exact Hamiltonian $H$.
This is done thanks to Eq.~\eqref{eq:defTij} that connects $p_{i \to j}$ and $T_{ij}$ through the weight
Note that, instead of using a single walker, it is common to introduce a population of independent walkers and to calculate the averages over this population.
In addition, thanks to the ergodicity property of the stochastic matrix (see, for example, Ref.~\onlinecite{Caffarel_1988}), a unique path of infinite length from which sub-paths of length $N$ can be extracted may also be used.
During the simulation, walkers move from state to state with the possibility of being trapped a certain number of times on the same state before
exiting to a different state. This fact can be exploited in order to integrate out some part of the dynamics, thus leading to a reduction of the statistical
fluctuations. This idea was proposed some time ago and applied to the SU($N$) one-dimensional Hubbard model.\cite{Assaraf_1999A,Assaraf_1999B,Caffarel_2000}
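For a single state, the number of consecutive steps spent on that state follows a geometric law in discrete time (the discrete analog of the Poisson law mentioned above); a quick Monte Carlo check on a toy two-state chain, with a hypothetical transition matrix:

```python
import numpy as np

rng = np.random.default_rng(42)
p = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def trapping_time(i):
    """Number of consecutive steps spent on state i before exiting."""
    n = 1
    while rng.random() < p[i, i]:
        n += 1
    return n

samples = [trapping_time(0) for _ in range(200_000)]
mean = np.mean(samples)
# Geometric law P(n) = p_ii^(n-1) (1 - p_ii) with E[n] = 1/(1 - p_ii) = 5.
assert abs(mean - 5.0) < 0.05
```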
Considering a given state $\ket{i}$, the probability that a walker remains exactly $n$ times in $\ket{i}$ (with $ n \ge1$) and then exits to a different state $j$ (with $j \neq i$) is
To do so, we associate to each state $\ket{i}$ a set of states, called the domain of $\ket{i}$ and denoted $\cD_i$, consisting of the state $\ket{i}$ plus a certain number of other states.
The only important condition is that the set of all domains ensures the ergodicity property of the effective stochastic dynamics, that is, starting from any state, there is a non-zero probability to reach any other state in a finite number of steps.
where $\ket{I_0}=\ket{i_0}$ is the initial state, $n_0$ is the number of times the walker remains in the domain of $\ket{i_0}$ (with $1\le n_0\le N+1$), $\ket{I_1}$ is the first exit state that does not belong to $\cD_{i_0}$, $n_1$ is the number of times the walker remains in $\cD_{i_1}$ (with $1\le n_1\le N+1-n_0$), $\ket{I_2}$ is the second exit state, and so on.
Here, the integer $p$ (with $0\le p \le N$) indicates the number of exit events occurring along the path.
The two extreme values, $p=0$ and $p=N$, correspond to the cases where the walker remains in the initial domain during the entire path, and where the walker exits a domain at each step, respectively.
In what follows, we shall systematically label exit states with upper-case letters $\ket{I_k}$, while lower-case letters denote elementary states $\ket{i_k}$.
Making this distinction is important since the effective stochastic dynamics used in practical Monte Carlo calculations only involve exit states $\ket{I_k}$, the contribution from the elementary states $\ket{i_k}$ being exactly integrated out.
Figure \ref{fig:domains} exemplifies how a path can be decomposed as proposed in Eq.~\eqref{eq:eff_series}.
To make things as clear as possible, let us describe how the path drawn in Fig.~\ref{fig:domains} evolves in time.
The walker starts at $\ket{i_0}$ in the domain $\cD_{i_0}$.
Then, it performs two steps within the domain, from $\ket{i_0}$ to $\ket{i_2}$, and leaves it at $\ket{i_3}$, which is thus the first exit state, \ie, $\ket{I_1}=\ket{i_3}$.
The trapping time in $\cD_{i_0}$ is $n_0=3$ since three states in $\cD_{i_0}$ have been visited (namely, $\ket{i_0}$, $\ket{i_1}$, and $\ket{i_2}$).
During the next steps, the domains $\cD_{I_1}$, $\cD_{I_2}$, and $\cD_{I_3}$ are successively visited with $n_1=2$, $n_2=3$, and $n_3=1$, respectively.
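The decomposition just described can be sketched as a small helper; the domains and the path below are toy choices for the example:

```python
def decompose(path, domain):
    """Split a path of elementary states into exit states |I_k> and
    trapping times n_k, given the domain of each state."""
    exits, times = [path[0]], []
    n, current = 0, domain[path[0]]
    for s in path:
        if s in current:
            n += 1
        else:
            times.append(n)
            exits.append(s)
            current = domain[s]
            n = 1
    times.append(n)
    return exits, times

# Toy example: states 0..5 with D_0 = {0,1,2}, D_3 = {3,4}, D_5 = {5}.
domain = {0: {0, 1, 2}, 1: {0, 1, 2}, 2: {0, 1, 2},
          3: {3, 4}, 4: {3, 4}, 5: {5}}
exits, times = decompose([0, 1, 2, 3, 4, 5], domain)
assert exits == [0, 3, 5] and times == [3, 2, 1]
```

The first trapping time is $n_0=3$, as in the path of the figure discussed above.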
states $\ket{i_k}$ along the path are represented by small black circles and the exit states, $\ket{I_k}$, by larger black squares.
By convention, the initial state is denoted using a capital letter, \ie, $\ket{i_0}=\ket{I_0}$, since it is the first state of the effective dynamics involving only exit states.
See text for additional comments on the time evolution of the path.}
Generalizing the single-state case treated previously, let us define the probability of remaining $n$ times in the domain of $\ket{I_0}$ and of exiting at $\ket{I}\notin\cD_{I_0}$
where the operator $F^+_I = P_I T^+(1-P_I)$, corresponding to the last move connecting the inside and outside regions of the domain, has the following matrix elements:
Starting from Eq.~\eqref{eq:cn} and using the representation of the paths in terms of exit states and trapping times [see Eq.~\eqref{eq:eff_series}], we get
where $\cC_p$ is the set of paths with $p$ exit states, $\ket{I_k}$, and trapping times $n_k$ with the constraints that $\ket{I_k}\notin\cD_{I_{k-1}}$ (with $1\le n_k \le N+1$ and $\sum_{k=0}^p n_k= N+1$).
This expression is the path-integral representation of the Green's matrix using only the variables $(\ket{I_k},n_k)$ of the effective dynamics defined over the set of domains.
The standard formula for $G^{(N)}_{i_0 i_N}$ derived above [see Eq.~\eqref{eq:G}] may be considered as the particular case where the walker exits the current state $\ket{i_k}$ at each step (no domains are introduced), leading to a number of exit events $p$ equal to $N$.
In this case, we have $\ket{I_k}=\ket{i_k}$, $n_k=1$ (for $0\le k \le N$), and we are left only with the $p$th component of the sum, that is, $G^{(N)}_{i_0 i_N}=\prod_{k=0}^{N-1}\mel{ I_k }{ F_{I_k}}{ I_{k+1}}$, with $F=T$, thus recovering Eq.~\eqref{eq:G}.
Note that the first contribution $G^{(N),\cD}_{i_0 i_N}$ corresponds to the case $p=0$ and collects all contributions to the Green's matrix coming from paths remaining indefinitely in the domain of $\ket{i_0}$ (no exit event).
This contribution is isolated from the sum in Eq.~\eqref{eq:Gt} since, as a domain Green's matrix, it is calculated exactly and is not subject to a stochastic treatment.
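The no-exit ($p=0$) contribution can be checked numerically: summing the weights of all paths confined to a toy domain reproduces a power of the domain-restricted matrix (all values below are hypothetical):

```python
import numpy as np
from itertools import product

T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.1, 0.6]])
D = [0, 1]                              # toy domain: states 0 and 1
TD = T[np.ix_(D, D)]                    # restriction of T to the domain

N = 4
brute = 0.0
for path in product(D, repeat=N - 1):   # intermediate states confined to D
    states = (0, *path, 1)
    w = 1.0
    for a, b in zip(states[:-1], states[1:]):
        w *= T[a, b]
    brute += w

assert np.isclose(brute, np.linalg.matrix_power(TD, N)[0, 1])
```

This is why the domain contribution can be evaluated exactly: it only requires manipulating the matrix restricted to the domain.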
Note also that the last state $\ket{i_N}$ is never an exit state because of the very definition of our path representation.
In order to compute $G^{(N)}_{i_0 i_N}$ by resorting to Monte Carlo techniques, let us reformulate Eq.~\eqref{eq:Gt} using the transition probability $\cP_{I \to J}(n)$ introduced in Eq.~\eqref{eq:eq3C}.
where, for clarity, $\sum_{{(I,n)}_{p,N}}$ is used as a short-hand notation for $\sum_{\ket{I_1}\notin\cD_{I_0}, \ldots , \ket{I_p}\notin\cD_{I_{p-1}}}\sum_{n_0\ge1}\cdots\sum_{n_p \ge1}$ with the constraint $\sum_{k=0}^p n_k=N+1$.
where $\expval{\cdots}_p$ is the probabilistic average defined over the set of paths with $p$ exit events of probability $\prod_{k=0}^{p-1}\cP_{I_k \to I_{k+1}}(n_k)$, and
Stochastic paths starting at $\ket{I_0}$ are generated using the probability $\cP_{I_k \to I_{k+1}}(n_k)$ and are stopped when $\sum_k n_k$ is greater than some target value $N$.
Averages of the weight products $\prod_{k=0}^{p-1} W_{I_k I_{k+1}}$ times the end-point contributions ${(\cH/\cS)}_{n_p,I_p}$ are computed for each $p$.
The generation of the paths can be performed using a two-step process.
First, the integer $n_k$ is drawn using the probability $P_{I_k}(n)$ [see Eq.~\eqref{eq:PiN}] and, then, the exit state $\ket{I_{k+1}}$ is drawn using the conditional probability $\cP_{I_k \to I_{k+1}}(n_k)/P_{I_k}(n_k)$.
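This two-step process can be sketched as follows for the special case of single-state domains, where the trapping-time law is geometric; the stochastic matrix is a toy example:

```python
import numpy as np

rng = np.random.default_rng(7)
p = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

def sample_move(i):
    n = rng.geometric(1.0 - p[i, i])      # step 1: draw the trapping time
    cond = p[i].copy()
    cond[i] = 0.0
    cond /= cond.sum()                    # step 2: conditional exit state
    return n, rng.choice(len(p), p=cond)

moves = [sample_move(0) for _ in range(100_000)]
ns = np.array([m[0] for m in moves])
js = np.array([m[1] for m in moves])
assert abs(ns.mean() - 2.0) < 0.05        # E[n] = 1/(1 - 0.5) = 2
assert abs((js == 1).mean() - 0.6) < 0.01  # p(exit to 1) = 0.3/0.5
```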
The aim of this section is to show that it is possible to go further by integrating out the trapping times, $n_k$, of the preceding expressions, thus defining a new effective stochastic dynamics involving now only the exit states.
The first one, called the pedestrian way, consists in starting from the preceding time expression for $G$ and performing the explicit integration over the $n_k$'s.
The remarkable property is that, thanks to the summation over $N$ up to infinity, the constrained multiple sums appearing in Eq.~\eqref{eq:Gt} can be factorized in terms of a product of unconstrained sums, as follows
Using the fact that $G^E_{ij}=\tau\sum_{N=0}^{\infty} G^{(N)}_{ij}$, where $G^{(N)}_{ij}$ is given by Eq.~\eqref{eq:Gt}, and summing over the variables $n_k$, we get
where $c_i =\braket{I_0}{\Phi_i}\braket{\Phi_i}{\PsiT}$ and $\ket{\Phi_i}$ are the eigenstates of $H$, \ie, $H \ket{\Phi_i}= E_i \ket{\Phi_i}$.
The important observation is that for all eigenstates we have $\cE(E_i)= E_i$. Thus, to get the ground-state energy we propose to search for the solution
of the non-linear equation $\cE(E)= E$ in the vicinity of $E_0$.
Avoiding the stochastic calculation of quantities, such as $H_0$ or $S_0$, that can be evaluated analytically is computationally very appealing as the statistical error associated with these quantities is completely removed.
We thus suggest to extend the exact calculation of $H_p$ and $S_p$ to higher $p$ values, up to the point where the exponential increase of the number of intermediate states involved in the explicit sums makes the calculation unfeasible.
Let us emphasize that, since the magnitude of $H_p$ and $S_p$ decreases as a function of $p$, most of the statistical error is removed by computing the dominant terms analytically.
This will be illustrated in the numerical application presented below.
Therefore, in practice, we compute the value of $\cE(E)$ for several values of $E$, and fit these data using a linear, quadratic or a more complicated function of $E$ in order to obtain, via extrapolation, an estimate of $E_0$.
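The fitting procedure can be illustrated on a small matrix where $\cE(E)$ is computed deterministically from $(H-E\Id)^{-1}$ in place of its Monte Carlo estimate; the matrix, vectors, and fitting interval are hypothetical:

```python
import numpy as np

H = np.array([[ 0., -1.,  0.],
              [-1.,  1., -1.],
              [ 0., -1.,  2.]])
E0 = np.linalg.eigvalsh(H).min()          # exact ground state, 1 - sqrt(3)
psi0 = psiT = np.ones(3)

def calE(E):
    """Toy stand-in for calE(E): projected energy of (H - E*Id)^(-1)|psi0>."""
    v = np.linalg.solve(H - E * np.eye(3), psi0)
    return (psiT @ H @ v) / (psiT @ v)

# Evaluate calE at several E below E0, fit f(E) = calE(E) - E linearly,
# and extrapolate to the root of calE(E) = E.
Es = E0 - np.array([0.2, 0.15, 0.1])
fs = np.array([calE(E) - E for E in Es])
a, b = np.polyfit(Es, fs, 1)
E_est = -b / a                            # root of the linear fit

assert abs(E_est - E0) < 5e-3
```

A quadratic or higher-order fit further improves the extrapolation when $\cE(E)$ is noticeably curved.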
where $\expval{ i j }$ denotes the summation over two neighboring sites, $\hat{a}^+_{i\sigma}$ ($\hat{a}_{i\sigma}$) is the fermionic creation (annihilation) operator of a spin-$\sigma$ electron (with $\sigma$ = $\uparrow$ or $\downarrow$) on site $i$, $\hat{n}_{i\sigma}=\hat{a}^+_{i\sigma}\hat{a}_{i\sigma}$ is the number operator, $t$ the hopping amplitude, and $U$ the on-site Coulomb repulsion.
It is then possible to equate the guiding and trial vectors, \ie, $\ket{c^+}=\ket{c_\text{T}}$.
As trial wave function, we shall employ a generalization of the Gutzwiller wave function \cite{Gutzwiller_1963}
\be
\braket{ n }{ c_\text{T}}= e^{-\alpha n_D(n)-\beta n_A(n)},
\ee
where $n_D(n)$ is the number of doubly occupied sites for the configuration $\ket{n}$ and $n_A(n)$ the number of nearest-neighbor antiparallel pairs defined as
\be
n_A(n)= \sum_{\expval{ i j }} n_{i\uparrow} n _{j\downarrow}\delta_{n_{i\uparrow},1}\delta_{n_{j\downarrow},1}.
\ee
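A minimal sketch of the evaluation of this trial wave function on a 1D chain follows; the open boundary conditions and the counting of both orientations of antiparallel nearest-neighbor pairs are assumptions of the example:

```python
import numpy as np

def weight(n_up, n_dn, alpha, beta):
    """Gutzwiller-type weight <n|c_T> = exp(-alpha*n_D - beta*n_A)."""
    n_up, n_dn = np.asarray(n_up), np.asarray(n_dn)
    n_D = int(np.sum(n_up * n_dn))                 # doubly occupied sites
    # nearest-neighbor antiparallel pairs (both orientations, open chain)
    n_A = int(np.sum(n_up[:-1] * n_dn[1:]) + np.sum(n_dn[:-1] * n_up[1:]))
    return np.exp(-alpha * n_D - beta * n_A)

# Neel state on 4 sites (up, down, up, down): n_D = 0 and n_A = 3.
w = weight([1, 0, 1, 0], [0, 1, 0, 1], alpha=1.292, beta=0.552)
assert np.isclose(w, np.exp(-0.552 * 3))
```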
The parameters $\alpha$ and $\beta$ are optimized by minimizing the variational energy of the trial wave function, \ie,
As discussed above, the efficiency of the method depends on the choice of states forming each domain.
As a general guiding principle, it is advantageous to build domains associated with a large average trapping time in order to integrate out the most important part of the Green's matrix.
Here, as a first illustration of the method, we shall consider the large-$U$ regime of the Hubbard model where the construction of such domains is rather natural.
Indeed, at large $U$ and half-filling, the Hubbard model approaches the Heisenberg limit where only the $2^N$ states with no double occupancy, $n_D(n)=0$, have a significant weight in the wave function.
For simplicity, and to reduce the number of diagonalizations to perform, we shall consider only one non-trivial domain, called here the main domain and denoted as $\cD$.
This domain is chosen to be common to all states belonging to it, that is,
$\cD$ is defined as containing the set of states having up to $n_D^\text{max}$ double occupations and, for each state with a number of double occupations equal to $m$, a number of nearest-neighbor antiparallel pairs between $n_A^\text{min}(m)$ and $n_A^\text{max}(m)$.
Here, $n_A^\text{max}(m)$ is fixed and set at its maximum value for a given $m$, \ie, $n_A^\text{max}(m)=\max(N-1-2m,0)$.
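A membership test for such a main domain can be sketched as follows; the function names and counting conventions are illustrative, not those of our implementation:

```python
def in_domain(n_D, n_A, nD_max, nA_min, N_sites):
    """A configuration belongs to the main domain if its number of double
    occupations m = n_D is at most nD_max and its number of antiparallel
    nearest-neighbor pairs lies in [nA_min(m), nA_max(m)]."""
    if n_D > nD_max:
        return False
    nA_max = max(N_sites - 1 - 2 * n_D, 0)   # maximal value for given m
    return nA_min(n_D) <= n_A <= nA_max

# Example on a 4-site chain with nD_max = 1 and nA_min(m) = 0:
assert in_domain(0, 3, 1, lambda m: 0, 4)        # Neel state
assert not in_domain(2, 0, 1, lambda m: 0, 4)    # too many double occupations
```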
The two variational parameters of the trial vector have been optimized and fixed at the values of $\alpha=1.292$ and $\beta=0.552$ with a variational energy $E_\text{T}=-0.495361\ldots$.
In what follows, $\ket{I_0}$ is systematically chosen as one of the two N\'eel states, \eg, $\ket{I_0}=\ket{\uparrow,\downarrow, \uparrow,\ldots}$.
In contrast, when $E$ approaches the exact energy, a slower convergence is observed, as expected from the divergence of the matrix elements of the Green's matrix at $E=E_0$ where the expansion does not converge at all.
The first line of the table gives the results when one uses a single-state domain for all states, and the last one for the maximal domain containing the full linear space.
The size of the various domains is given in the second column, the average trapping time for the state $\ket{I_0}$ in the third column, and an estimate of the speed of convergence of the $p$-expansion for the energy in the fourth column.
To quantify the rate of convergence, we report the quantity, $p_\text{conv}$, defined as the smallest value of $p$ for which the energy is converged to six decimal places.
The smaller $p_\text{conv}$, the faster the convergence.
For example, the domain $\cD(0,3)\cup\cD(1,0)$ has a trapping time for the N\'eel state of $6.2$, while the domain $\cD(0,3)\cup\cD(1,1)$, having almost the same number of states (28 states), has an average trapping time about 6 times longer.
The exact value, $\cE(E=-1)=-0.75272390\ldots$, can be found in the last row of Table \ref{tab1} for the case of a domain corresponding to the full space.
Using this domain leads to a reduction in the statistical error as large as about three orders of magnitude, which nicely illustrates the critical importance of the domains employed.
As explained above, it is very advantageous to calculate exactly as many $(H_p,S_p)$ as possible in order to avoid the statistical error on the largest components.
Table \ref{tab2} shows the results both for the case of a single-state domain and for the domain having the largest average trapping time, namely $\cD(0,1)\cup\cD(1,1)$ (see Table \ref{tab1}).
Results show that it is indeed worth computing exactly as many components as possible.
For the single-state domain, the statistical error is reduced by a factor two when passing from no analytical computation, $p_\text{ex}=0$, to the case where eight components for $H_p$ and $S_p$ are computed exactly.
Our final value is in full agreement with the exact value to about six decimal places.
Table \ref{tab4} shows the evolution of the average trapping times and extrapolated energies as a function of $U$ when using $\cD(0,1)\cup\cD(1,0)$ as the main domain.
We also report the variational and exact energies together with the values of the optimized parameters of the trial wave function.
As $U$ increases the configurations with zero or one double occupation become more and more predominant and the average trapping time increases.
For very large values of $U$ (say, $U \ge12$) the increase of $\bar{t}_{I_0}$ becomes particularly steep.
No careful construction of domains maximizing the average trapping time has been performed; we have merely chosen domains of reasonable size (no more than 2682 states) by allowing only a small number of double occupations ($n_D=0,1$) and a not-too-small number of nearest-neighbor antiparallel pairs.
As seen, as the number of sites increases, the average trapping time for the chosen domains decreases.
This point is, of course, undesirable since the efficiency of the approach may gradually deteriorate when considering large systems.
A more elaborate way of constructing domains is clearly desirable in this case.
Of course, it will also be particularly interesting to take advantage of the fully parallelizable character of the algorithm to get much lower error bars.
In this work, it has been shown how to integrate out exactly --- within a DMC framework --- the contribution of all stochastic trajectories trapped in some given domains of the Hilbert space.
The corresponding equations have been derived in a general context.
In such a way, a new effective stochastic dynamics connecting only domains (and not the individual states) is defined.
A key property of this effective dynamics is that it does not depend on the ``shape'' of the domains used for each state.
Therefore, rather general domains (with or without overlap) can be considered.
To obtain the effective transition probability (which provides the probability of going from one domain to another)
and the corresponding renormalized estimators, the Green's functions restricted to each domain must be computed analytically, that is, in practice, matrices of the size of the number of states for the sampled domains have to be inverted.
Being able to define domains with large average trapping times is the key aspect of the method since it may lead to some important reduction of the statistical error, as illustrated in our numerical applications.
Therefore, a trade-off has to be found between maximizing the average trapping time and minimizing the cost of computing the domain Green's functions.
For each system at hand, one needs to determine, on physical grounds, which regions of the configuration space are preferentially sampled by the stochastic trajectories and to build domains of minimal size enclosing such regions.
In the first application presented here on the one-dimensional Hubbard model, we exploit the physics of the large-$U$ regime that is known to approach the Heisenberg limit where double occupations have small weights.
Our goal is, of course, to tackle much larger systems, like those treated by state-of-the-art methods, such as selected CI,\cite{Huron_1973,Harrison_1991,Giner_2013,Holmes_2016,Schriber_2016,Tubman_2020} FCIQMC,\cite{Booth_2009,Cleland_2010} AFQMC,\cite{Zhang_2003} or DMRG.\cite{White_1999,Chan_2011}
Here, we have mainly focused on the theoretical aspects of the approach.
In order to consider larger systems, an elaborate implementation of the present method is necessary to keep the cost of the simulation under control.
P.F.L., A.S., and M.C. acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No.~863481).
For the simplest case of a system containing only two states, $\ket{1}$ and $\ket{2}$, the fundamental equation given in Eq.~\eqref{eq:eqfond} simplifies to
To treat simultaneously the two possibilities for the final state, \ie, $\ket{i_N}=\ket{1}$ or $\ket{2}$, Eq.~\eqref{eq:eqfond} has been slightly generalized to the case of a general vector for the final state written as
Let us choose a single-state domain for both states, namely $\cD_1=\qty{\ket{1}}$ and $\cD_2=\qty{\ket{2}}$.
Note that, due to the simplicity of the present two-state model, there are only two possible deterministic ``alternating'' paths, namely, $\ket{1}\to\ket{2}\to\ket{1},\ldots$ and $\ket{2}\to\ket{1}\to\ket{2},\ldots$.
For the sake of convenience, we introduce the following quantities
Alternatively, the quantity $\mel{1}{\qty(H-E \Id)^{-1}}{\Psi}$ in the left-hand side of Eq.~\eqref{eq:final} can be directly calculated using the inverse of the matrix defined in Eq.~\eqref{eq:2x2_matrix}, yielding
As readily seen, the first component of the vector $\qty(H-E \Id)^{-1}\ket{\Psi}$ is identical to the one given in Eq.~\eqref{eq:final}, thus confirming independently the validity of this equation.
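This consistency check can also be reproduced numerically: truncating the series $\tau \sum_N T^N$ applied to $\ket{\Psi}$ recovers $\qty(H-E \Id)^{-1}\ket{\Psi}$; the $2\times2$ matrix and parameters below are hypothetical:

```python
import numpy as np

H = np.array([[ 0.0, -0.4],
              [-0.4,  1.0]])
E, tau = -0.5, 0.1                      # E below the lowest eigenvalue
T = np.eye(2) - tau * (H - E * np.eye(2))

psi = np.array([1.0, 1.0])
acc, v = np.zeros(2), psi.copy()
for _ in range(5000):                   # truncated geometric series
    acc += tau * v
    v = T @ v

exact = np.linalg.solve(H - E * np.eye(2), psi)
assert np.allclose(acc, exact, atol=1e-10)
```

Since the spectral radius of $T$ is below one for this choice of $E$ and $\tau$, the series converges, and its sum is exactly $\qty(H-E\Id)^{-1}$.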