saving work

This commit is contained in:
Pierre-Francois Loos 2022-09-14 22:22:15 +02:00
parent 6895265172
commit c13c798bda

g.tex

@ -41,18 +41,17 @@
\newcommand{\mr}{\multirow}
% operators
\newcommand{\bH}{\boldsymbol{H}}
\newcommand{\bV}{\boldsymbol{V}}
\newcommand{\bh}{\boldsymbol{h}}
\newcommand{\bQ}{\boldsymbol{Q}}
\newcommand{\br}{\boldsymbol{r}}
\newcommand{\bp}{\boldsymbol{p}}
\newcommand{\cP}{\mathcal{P}}
\newcommand{\cS}{\mathcal{S}}
\newcommand{\cT}{\mathcal{T}}
\newcommand{\cC}{\mathcal{C}}
\newcommand{\cD}{\mathcal{D}}
\newcommand{\PT}{\mathcal{PT}}
\newcommand{\EPT}{E_{\PT}}
\newcommand{\laPT}{\lambda_{\PT}}
@ -61,7 +60,9 @@
\newcommand{\laEP}{\lambda_\text{EP}}
\newcommand{\PsiT}{\Psi_\text{T}}
\newcommand{\PsiG}{\Psi^{+}}
\newcommand{\EL}{E_\text{L}}
\newcommand{\Id}{\mathds{1}}
\newcommand{\Ne}{N} % Number of electrons
\newcommand{\Nn}{M} % Number of nuclei
@ -138,7 +139,7 @@
\noindent
The sampling of the configuration space in diffusion Monte Carlo (DMC) is done using walkers moving randomly.
In a previous work on the Hubbard model [\href{https://doi.org/10.1103/PhysRevB.60.2299}{Assaraf et al.~Phys.~Rev.~B \textbf{60}, 2299 (1999)}],
it was shown that the probability for a walker to stay a certain amount of time in the same \titou{state} obeys a Poisson law and that the \titou{on-state} dynamics can be integrated out exactly, leading to an effective dynamics connecting only different states.
Here, we extend this idea to the general case of a walker trapped within domains of arbitrary shape and size.
The equations of the resulting effective stochastic dynamics are derived.
@ -212,9 +213,9 @@ Atomic units are used throughout.
As previously mentioned, DMC is a stochastic implementation of the power method defined by the following operator:
\be
T = \Id - \tau (H - E \Id),
\ee
where $\Id$ is the identity operator, $\tau$ a small positive parameter playing the role of a time-step, $E$ some arbitrary reference energy, and $H$ the Hamiltonian operator. Starting from some initial vector, $\ket{\Psi_0}$, we have
\be
\lim_{N \to \infty} T^N \ket{\Psi_0} = \ket{\Phi_0},
\ee
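The power-method iteration above is easy to check numerically. The following is a minimal sketch (not from the paper): a small random symmetric matrix stands in for $H$, and the matrix, seed, time step, and variable names are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H = (A + A.T) / 2                    # toy symmetric "Hamiltonian"
E = 0.0                              # arbitrary reference energy
tau = 0.01                           # small positive time step
T = np.eye(4) - tau * (H - E * np.eye(4))

v = np.ones(4)                       # initial vector |Psi_0>
for _ in range(20000):
    v = T @ v
    v /= np.linalg.norm(v)           # renormalize to avoid over/underflow

E0 = v @ H @ v                       # Rayleigh quotient of the converged vector
```

For $\tau$ small enough, the dominant eigenvector of $T$ is the ground state of $H$, so `E0` converges to the lowest eigenvalue of `H`.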
@ -266,19 +267,19 @@ This is the
\label{sec:proba}
%=======================================%
In order to derive a probabilistic expression for the Green's matrix we introduce a so-called guiding vector, $\ket{\PsiG}$, having strictly positive components, \ie, $\PsiG_i > 0$, and apply a similarity transformation to the operators $G^{(N)}$ and $T$
\begin{align}
\label{eq:defT}
\bar{T}_{ij} & = \frac{\PsiG_j}{\PsiG_i} T_{ij},
\\
\bar{G}^{(N)}_{ij} & = \frac{\PsiG_j}{\PsiG_i} G^{(N)}_{ij}.
\end{align}
Note that under the similarity transformation the path integral expression, Eq.~\eqref{eq:G}, relating $G^{(N)}$ and $T$ remains unchanged for the similarity-transformed operators, $\bar{G}^{(N)}$ and $\bar{T}$.
Next, the matrix elements of $\bar{T}$ are expressed as those of a stochastic matrix multiplied by some residual weight, namely
\be
\label{eq:defTij}
\bar{T}_{ij} = p_{i \to j} w_{ij}.
\ee
Here, we recall that a stochastic matrix is defined as a matrix with positive entries and obeying
\be
@ -289,7 +290,7 @@ To build the transition probability density the following operator is introduced
%As known, there is a natural way of associating a stochastic matrix to a matrix having a positive ground-state vector (here, a positive vector is defined as
%a vector with all components positive).
\be
T^+ = \Id - \tau \qty( H^+ - \EL^+ \Id ),
\ee
where
$H^+$ is the matrix obtained from $H$ by imposing the off-diagonal elements to be negative
@ -301,45 +302,45 @@ $H^+$ is the matrix obtained from $H$ by imposing the off-diagonal elements to b
-\abs{H_{ij}}, & \text{if $i\neq j$}.
\end{cases}
\ee
Here, $\EL^+ \Id$ is the diagonal matrix whose diagonal elements are defined as
\be
(\EL^+)_{i} = \frac{\sum_j H^+_{ij}\PsiG_j}{\PsiG_i}.
\ee
The vector $\EL^+$ is known as the local energy vector associated with $\ket{\PsiG}$.
Actually, the operator $H^+ - \EL^+ \Id$ in the definition of $T^+$ has been chosen to admit, by construction, $\ket{\PsiG}$ as its ground state with zero eigenvalue
\be
\label{eq:defTplus}
\qty(H^+ - \EL^+ \Id) \ket{\PsiG} = 0,
\ee
leading to the relation
\be
\label{eq:relT+}
T^+ \ket{\PsiG} = \ket{\PsiG}.
\ee
The stochastic matrix is now defined as
\be
\label{eq:pij}
p_{i \to j} = \frac{\PsiG_j}{\PsiG_i} T^+_{ij}.
\ee
The diagonal matrix elements of the stochastic matrix read
\be
p_{i \to i} = 1 - \tau \qty[ H^+_{ii} - (\EL^+)_{i} ],
\ee
while, for $i \ne j$,
\be
p_{i \to j} = \tau \frac{\PsiG_{j}}{\PsiG_{i}} \abs{H_{ij}} \ge 0.
\ee
As seen, the off-diagonal terms, $p_{i \to j}$, are positive, while the diagonal ones, $p_{i \to i}$, can be made positive if $\tau$ is chosen sufficiently small.
More precisely, the condition reads
\be
\label{eq:cond}
\tau \leq \frac{1}{\max_i \abs{H^+_{ii} - (\EL^+)_{i}}}.
\ee
The sum-over-states condition, Eq.~\eqref{eq:sumup}, follows from the fact that $\ket{\PsiG}$ is an eigenvector of $T^+$, Eq.~\eqref{eq:relT+},
\be
\sum_j p_{i \to j} = \frac{1}{\PsiG_{i}} \mel{i}{T^+}{\PsiG} = 1.
\ee
We have then verified that $p_{i \to j}$ is indeed a stochastic matrix.
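As a concrete check, the construction above can be reproduced numerically: build $H^+$ from a toy symmetric matrix, pick any strictly positive guiding vector, and verify that the resulting transition matrix has nonnegative entries with rows summing to one. The matrix, seed, and names below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
H = (A + A.T) / 2
# H+: keep the diagonal, force the off-diagonal elements to -|H_ij|
Hp = np.where(np.eye(5, dtype=bool), H, -np.abs(H))
psi = np.abs(rng.standard_normal(5)) + 0.1       # strictly positive guiding vector
EL = (Hp @ psi) / psi                            # local-energy vector (E_L^+)_i
tau = 1.0 / np.max(np.abs(np.diag(Hp) - EL))     # largest tau allowed by the condition
Tp = np.eye(5) - tau * (Hp - np.diag(EL))        # T+ = 1 - tau (H+ - E_L^+ 1)
p = (psi[None, :] / psi[:, None]) * Tp           # p_{i->j} = (psi_j / psi_i) T+_{ij}
```

By construction `Tp @ psi == psi`, which is exactly the sum-over-states condition for the rows of `p`.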
@ -350,37 +351,41 @@ can be used without violating the positivity of the transition probability matri
Note that we can even escape from this condition by slightly generalizing the transition probability matrix as follows
\be
p_{i \to j}
= \frac{ \frac{\PsiG_{j}}{\PsiG_{i}} \abs{\mel{i}{T^+}{j}} }
{ \sum_j \frac{\PsiG_{j}}{\PsiG_{i}} \abs{\mel{i}{T^+}{j}} }
= \frac{ \PsiG_{j} \abs{\mel{i}{T^+}{j}} }
{ \sum_j \PsiG_{j} \abs{\mel{i}{T^+}{j}} }.
\ee
This new transition probability matrix with positive entries reduces to Eq.~\eqref{eq:pij} when $T^+_{ij}$ is positive.
Now, using Eqs.~\eqref{eq:defT}, \eqref{eq:defTij} and \eqref{eq:pij}, the residual weight reads
\be
w_{ij} = \frac{T_{ij}}{T^+_{ij}}.
\ee
Using these notations the Green's matrix components can be rewritten as
\be
\bar{G}^{(N)}_{\titou{i i_0}} =
\sum_{i_1,\ldots,i_{N-1}} \qty( \prod_{k=0}^{N-1} p_{i_{k} \to i_{k+1}} ) \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}},
\ee
\titou{where $i$ is identified to $i_N$.}
The product $\prod_{k=0}^{N-1} p_{i_{k} \to i_{k+1}}$ is the probability, denoted $\text{Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1})$,
for the path starting at $\ket{i_0}$ and ending at $\ket{i_N}$ to occur.
Using the fact that $p_{i \to j} \ge 0$ and Eq.~\eqref{eq:sumup} we verify that $\text{Prob}_{i_0 \to i_N}$ is positive and obeys
\be
\sum_{i_1,\ldots,i_{N-1}} \text{Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1}) = 1,
\ee
as it should be.
The probabilistic average associated with this probability for the path, denoted here as $\expval{\cdots}$, is then defined as
\be
\expval{F} = \sum_{i_1,\ldots,i_{N-1}} F(i_0,\ldots,i_N) \, \text{Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1}),
\label{average}
\ee
where $F$ is an arbitrary function.
Finally, the path integral expressed as a probabilistic average reads
\be
\bar{G}^{(N)}_{ii_0} = \expval{ \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} }.
\label{cn_stoch}
\ee
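For a tiny model, the path sum above can be evaluated by brute-force enumeration and compared against the matrix power $\bar{T}^N$, which the probability-times-weight decomposition must reproduce since $p_{i \to j} w_{ij} = \bar{T}_{ij}$ term by term. This sketch uses an arbitrary 3-state Hamiltonian (one positive off-diagonal element, so the weights are nontrivial); all numbers are illustrative.

```python
import itertools
import numpy as np

H = np.array([[ 1.0, -0.4,  0.2],
              [-0.4,  0.5, -0.3],
              [ 0.2, -0.3,  2.0]])       # toy H; H_02 > 0 gives weights of both signs
tau, E, N = 0.05, 0.0, 3                 # time step, reference energy, path length
T = np.eye(3) - tau * (H - E * np.eye(3))
Hp = np.where(np.eye(3, dtype=bool), H, -np.abs(H))
psi = np.array([1.0, 1.5, 0.7])          # arbitrary positive guiding vector
EL = (Hp @ psi) / psi
Tp = np.eye(3) - tau * (Hp - np.diag(EL))
p = (psi[None, :] / psi[:, None]) * Tp   # stochastic matrix p_{i->j}
w = T / Tp                               # residual weights w_{ij}
Tbar = (psi[None, :] / psi[:, None]) * T # similarity-transformed T

i0 = 0
Gbar = np.zeros(3)                       # Gbar^{(N)}_{i,i0} for each endpoint i
for path in itertools.product(range(3), repeat=N):
    amp, prev = 1.0, i0
    for k in path:                       # accumulate probability x weight along the path
        amp *= p[prev, k] * w[prev, k]
        prev = k
    Gbar[path[-1]] += amp
```

The enumeration reproduces `np.linalg.matrix_power(Tbar, N)[i0]`; in an actual simulation the same sum is of course estimated stochastically rather than enumerated.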
To calculate the probabilistic average, Eq.~\eqref{average},
@ -388,16 +393,15 @@ an artificial (mathematical) ``particle'' called walker (or psi-particle) is int
During the Monte Carlo simulation the walker moves in configuration space by drawing new states with
probability $p_{i_k \to i_{k+1}}$, thus realizing the path of probability
$\text{Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1})$.
The energy, Eq.~\eqref{eq:E0}, is given as
\be
E_0 = \lim_{N \to \infty}
\frac{ \expval{ \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} (H\PsiT)_{i_N} } }
{ \expval{ \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} (\PsiT)_{i_N} } }.
\ee
Note that, instead of using a single walker, it is possible to introduce a population of independent walkers and to calculate the averages over this population.
In addition, thanks to the ergodic property of the stochastic matrix (see, for example, Ref.~\onlinecite{Caffarel_1988}), a unique path of infinite length from which sub-paths of length $N$ can be extracted may also be used.
We shall not insist here on these practical details, which can be found, for example, in Refs.~\onlinecite{Foulkes_2001,Kolorenc_2011}.
%{\it Spawner representation} In this representation, we no longer consider moving particles but occupied or non-occupied states $|i\rangle$.
%To each state is associated the (positive or negative) quantity $c_i$.
@ -427,37 +431,38 @@ fluctuations. This idea was proposed some time ago\cite{assaraf_99,Assaraf_1999B
Let us consider a given state $\ket{i}$. The probability that the walker remains exactly $n$ times on $\ket{i}$ ($n$ from
1 to $\infty$) and then exits to a different state $j$ is
\be
\cP_{i \to j}(n) = \qty(p_{i \to i})^{n-1} p_{i \to j} \qq{$j \ne i$.}
\ee
Using the relation
\be
\sum_{n=1}^{\infty} p_{i \to i}^{n-1} = \frac{1}{1-p_{i \to i}}
\ee
and the normalization of the $p_{i \to j}$, Eq.~\eqref{eq:sumup}, we verify that the probability is normalized to one
\be
\sum_{j \ne i} \sum_{n=1}^{\infty} \cP_{i \to j}(n) = 1.
\ee
The probability of being trapped during $n$ steps is obtained by summing over all possible exit states
\be
P_i(n) = \sum_{j \ne i} \cP_{i \to j}(n) = \qty(p_{i \to i})^{n-1} \qty( 1 - p_{i \to i} ).
\ee
This probability defines a Poisson law with an average number $\bar{n}_i$ of trapping events given by
\be
\bar{n}_i = \sum_{n=1}^{\infty} n P_i(n) = \frac{1}{1 - p_{i \to i}}.
\ee
Introducing the continuous time $t_i = n_i \tau$, the average trapping time is given by
\be
\bar{t}_i = \frac{1}{H^+_{ii} - (\EL^+)_{i}}.
\ee
Taking the limit $\tau \to 0$, the Poisson probability takes the usual form
\be
P_{i}(t) = \frac{1}{\bar{t}_i} \exp(-\frac{t}{\bar{t}_i}).
\ee
The time-averaged contribution of the \titou{on-state} weight can be easily calculated to be
\be
\bar{w}_i = \sum_{n=1}^{\infty} w^n_{ii} P_i(n) = \frac{T_{ii}}{T^+_{ii}} \, \frac{1-T^+_{ii}}{1-T_{ii}}.
\ee
Details of the implementation of the effective dynamics can be found in Refs.~\onlinecite{assaraf_99} and \onlinecite{caffarel_00}.
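The discrete trapping law above is easy to verify numerically: the distribution $P_i(n) = p_{i\to i}^{\,n-1}(1-p_{i\to i})$ sums to one and has mean $1/(1-p_{i\to i})$; its continuous-time limit is the exponential law quoted in the text. A minimal sketch, with a purely illustrative staying probability:

```python
import numpy as np

p_stay = 0.9                        # illustrative diagonal probability p_{i->i}
n = np.arange(1, 2000)              # trapping lengths n = 1, 2, ...
P = p_stay ** (n - 1) * (1 - p_stay)
norm = P.sum()                      # should be ~1 (normalization)
nbar = (n * P).sum()                # mean number of trapping events
```

Here `nbar` reproduces $\bar{n}_i = 1/(1 - 0.9) = 10$; the truncation at $n = 2000$ is harmless since the tail decays geometrically.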
%=======================================%
\subsection{General domains}
@ -473,49 +478,41 @@ non-zero-probability to reach any other state in a finite number of steps). In p
Let us write an arbitrary path of length $N$ as
\be
\ket{i_0} \to \ket{i_1} \to \cdots \to \ket{i_N},
\ee
where the successive states are drawn using the transition probability matrix $p_{i \to j}$. This series can be rewritten as
\be
\label{eq:eff_series}
(\ket*{I_0},n_0) \to (\ket*{I_1},n_1) \to \cdots \to (\ket*{I_p},n_p),
\ee
where $\ket{I_0}=\ket{i_0}$ is the initial state, $n_0$ the number of times the walker remains within the domain of $\ket{i_0}$ ($n_0=1$ to $N+1$), $\ket{I_1}$ is the first exit state, which does not belong to $\cD_{i_0}$, $n_1$ is the number of times the walker remains within $\cD_{i_1}$ ($n_1=1$ to $N+1-n_0$), $\ket{I_2}$ the second exit state, and so on.
Here, the integer $p$ goes from 0 to $N$ and indicates the number of exit events occurring along the path. The two extreme cases, $p=0$ and $p=N$, correspond to the walker remaining forever within the initial domain and to the walker leaving the current domain at each step, respectively.
In what follows, we shall systematically write the integers representing the exit states in capital letters.
%Generalizing what has been done for domains consisting of only one single state, the general idea here is to integrate out exactly the stochastic dynamics over the
%set of all paths having the same representation, Eq.~\eqref{eq:eff_series}. As a consequence, an effective Monte Carlo dynamics including only exit states
%averages for renormalized quantities will be defined.\\
Let us define the probability of being $n$ times within the domain of $\ket{I_0}$ and, then, exiting at $\ket{I} \notin \cD_{I_0}$.
It is given by
\be
\label{eq:eq1C}
\cP_{I_0 \to I}(n) = \sum_{\ket{i_1} \in \cD_{I_0}} \cdots \sum_{\ket{i_{n-1}} \in \cD_{I_0}}
p_{I_0 \to i_1} \ldots p_{i_{n-2} \to i_{n-1}} p_{i_{n-1} \to I}.
\ee
To proceed we need to introduce the projector associated with each domain
\be
P_I = \sum_{\ket{k} \in \cD_I} \dyad{k}{k},
\label{pi}
\ee
and to define the restriction of the operator $T^+$ to the domain
\be
T^+_I = P_I T^+ P_I.
\ee
$T^+_I$ is the operator governing the dynamics of the walkers moving within $\cD_{I}$.
Using Eqs.~\eqref{eq:eq1C} and \eqref{eq:pij}, the probability can be rewritten as
\be
\cP_{I_0 \to I}(n) = \frac{1}{\PsiG_{I_0}} \mel{I_0}{\qty(T^+_{I_0})^{n-1} F^+_{I_0}}{I} \PsiG_{I},
\label{eq3C}
\ee
where the operator $F$, corresponding to the last move connecting the inside and outside regions of the
@ -531,7 +528,7 @@ Physically, $F$ may be seen as a flux operator through the boundary of ${\cal D}
Now, the probability of being trapped $n$ times within $\cD_{I}$ is given by
\be
P_{I}(n) = \frac{1}{\PsiG_{I}} \mel{I}{\qty(T^+_{I})^{n-1} F^+_{I}}{\PsiG}.
\label{PiN}
\ee
Using the fact that
@ -541,12 +538,12 @@ Using the fact that
\ee
we have
\be
\sum_{n=1}^{\infty} P_{I}(n) = \frac{1}{\PsiG_{I}} \sum_{n=1}^{\infty} \qty[ \mel{I}{\qty(T^+_{I})^{n-1}}{\PsiG} - \mel{I}{\qty(T^+_{I})^{n}}{\PsiG} ] = 1,
\ee
and the average trapping time
\be
t_{I} = \bar{n}_{I} \tau = \frac{1}{\PsiG_{I}} \mel{I}{P_{I} \frac{1}{H^+ - \EL^+ \Id} P_{I}}{\PsiG}.
\ee
In practice, the various quantities restricted to the domain are computed by diagonalizing the matrix $(H^+ - \EL^+ \Id)$ in $\cD_{I}$. Note that
this is possible only if the dimension of the domain is not too large (say, less than a few thousand).
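The consistency between the series definition of the trapping time and the domain resolvent formula can be checked on a toy model. Below, a hypothetical 4-state system with a two-state domain: the truncated sum $\tau \sum_n n P_I(n)$ is compared with the matrix element of $(H^+ - E_L^+)^{-1}$ restricted to the domain. The matrix, guiding vector, and domain are all illustrative choices.

```python
import numpy as np

H = np.array([[ 1.0, -0.3,  0.2, -0.1],
              [-0.3,  0.8, -0.4,  0.3],
              [ 0.2, -0.4,  1.5, -0.2],
              [-0.1,  0.3, -0.2,  2.0]])   # toy 4-state Hamiltonian
Hp = np.where(np.eye(4, dtype=bool), H, -np.abs(H))
psi = np.array([1.0, 1.2, 0.8, 0.9])       # arbitrary positive guiding vector
EL = (Hp @ psi) / psi
tau = 0.5 / np.max(np.abs(np.diag(Hp) - EL))
Tp = np.eye(4) - tau * (Hp - np.diag(EL))

D = [0, 1]                          # domain D_I containing the state I = 0
P = np.zeros((4, 4)); P[D, D] = 1.0 # projector on the domain
TpI = P @ Tp @ P                    # dynamics restricted to the domain
FpI = P @ Tp @ (np.eye(4) - P)      # flux operator: last move leaves the domain
I = 0

# series form: t_I = tau * sum_n n P_I(n), with P_I(n) = <I|TpI^(n-1) FpI|psi>/psi_I
v, t_series = FpI @ psi, 0.0
for n in range(1, 10000):
    t_series += tau * n * v[I] / psi[I]
    v = TpI @ v

# resolvent form: t_I = <I|P (H+ - EL+)^(-1) P|psi>/psi_I, solved inside the domain
AD = (Hp - np.diag(EL))[np.ix_(D, D)]
t_direct = np.linalg.solve(AD, psi[D])[0] / psi[I]
```

The two estimates agree because the walk restricted to the domain is strictly substochastic, so the geometric series for the resolvent converges.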
@ -607,7 +604,7 @@ $$
\sum_{n_0 \ge 1} \cdots \sum_{n_p \ge 1}
\ee
\be
\delta(\sum_k n_k = N+1) \qty[ \prod_{k=0}^{p-1} \frac{\PsiG_{I_{k+1}}}{\PsiG_{I_k}} \mel{I_k}{T^{n_k-1}_{I_k} F_{I_k}}{I_{k+1}} ]
{\bar G}^{(n_p-1),\cD}_{I_p I_N}.
\label{Gbart}
\ee
@ -730,7 +727,7 @@ which is identical to Eq.(\ref{eqfond}) when $G$ is expanded iteratively.\\
\\
Let us use as effective transition probability density
\be
P(I \to J) = \frac{1}{\PsiG(I)} \mel{I}{P_I \frac{1}{H^+ - \EL^+} P_I (-H^+) (\Id - P_I)}{J} \PsiG(J)
\ee
and the weight
\be