saving work

This commit is contained in:
Pierre-Francois Loos 2022-09-14 22:22:15 +02:00
parent 6895265172
commit c13c798bda

g.tex

@@ -41,18 +41,17 @@
\newcommand{\mr}{\multirow}
% operators
\newcommand{\bH}{\mathbf{H}}
\newcommand{\bV}{\mathbf{V}}
\newcommand{\bh}{\mathbf{h}}
\newcommand{\bQ}{\mathbf{Q}}
\newcommand{\bSig}{\mathbf{\Sigma}}
\newcommand{\br}{\mathbf{r}}
\newcommand{\bp}{\mathbf{p}}
\newcommand{\bH}{\boldsymbol{H}}
\newcommand{\bV}{\boldsymbol{V}}
\newcommand{\bh}{\boldsymbol{h}}
\newcommand{\bQ}{\boldsymbol{Q}}
\newcommand{\br}{\boldsymbol{r}}
\newcommand{\bp}{\boldsymbol{p}}
\newcommand{\cP}{\mathcal{P}}
\newcommand{\cS}{\mathcal{S}}
\newcommand{\cT}{\mathcal{T}}
\newcommand{\cC}{\mathcal{C}}
\newcommand{\PT}{\mathcal{PT}}
\newcommand{\cD}{\mathcal{D}}
\newcommand{\EPT}{E_{\PT}}
\newcommand{\laPT}{\lambda_{\PT}}
@@ -61,7 +60,9 @@
\newcommand{\laEP}{\lambda_\text{EP}}
\newcommand{\PsiT}{\Psi_\text{T}}
\newcommand{\PsiG}{\Psi^{+}}
\newcommand{\EL}{E_\text{L}}
\newcommand{\Id}{\mathds{1}}
\newcommand{\Ne}{N} % Number of electrons
\newcommand{\Nn}{M} % Number of nuclei
@@ -138,7 +139,7 @@
\noindent
The sampling of the configuration space in diffusion Monte Carlo (DMC) is performed using randomly moving walkers.
In a previous work on the Hubbard model [\href{https://doi.org/10.1103/PhysRevB.60.2299}{Assaraf et al. Phys. Rev. B \textbf{60}, 2299 (1999)}],
In a previous work on the Hubbard model [\href{https://doi.org/10.1103/PhysRevB.60.2299}{Assaraf et al.~Phys.~Rev.~B \textbf{60}, 2299 (1999)}],
it was shown that the probability for a walker to stay a certain amount of time in the same \titou{state} obeys a Poisson law and that the \titou{on-state} dynamics can be integrated out exactly, leading to an effective dynamics connecting only different states.
Here, we extend this idea to the general case of a walker trapped within domains of arbitrary shape and size.
The equations of the resulting effective stochastic dynamics are derived.
@@ -212,9 +213,9 @@ Atomic units are used throughout.
As previously mentioned, DMC is a stochastic implementation of the power method defined by the following operator:
\be
T = \mathds{1} -\tau (H-E\mathds{1}),
T = \Id -\tau (H-E\Id),
\ee
where $\mathds{1}$ is the identity operator, $\tau$ a small positive parameter playing the role of a time-step, $E$ some arbitrary reference energy, and $H$ the Hamiltonian operator. Starting from some initial vector, $\ket{\Psi_0}$, we have
where $\Id$ is the identity operator, $\tau$ a small positive parameter playing the role of a time-step, $E$ some arbitrary reference energy, and $H$ the Hamiltonian operator. Starting from some initial vector, $\ket{\Psi_0}$, we have
\be
\lim_{N \to \infty} T^N \ket{\Psi_0} = \ket{\Phi_0},
\ee
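As a quick numerical illustration of the power method just described, repeated application of $T$ filters out the excited states and leaves $\ket{\Phi_0}$. The sketch below uses a hypothetical $3\times 3$ symmetric matrix standing in for $H$ (the numbers are illustrative only, not taken from this work) and compares the iterated vector with the ground state obtained by direct diagonalization.

```python
import numpy as np

# Hypothetical 3x3 symmetric matrix standing in for the Hamiltonian H
# (illustrative numbers only, not taken from the paper)
H = np.array([[ 0.0, -0.5, -0.2],
              [-0.5,  1.0, -0.3],
              [-0.2, -0.3,  2.0]])

tau, E = 0.1, 0.0                      # small time step and reference energy
T = np.eye(3) - tau * (H - E * np.eye(3))

psi = np.ones(3)                       # some initial vector |Psi_0>
for _ in range(2000):                  # T^N |Psi_0> -> |Phi_0> up to normalization
    psi = T @ psi
    psi /= np.linalg.norm(psi)

# Ground state from direct diagonalization, for comparison
eigvals, eigvecs = np.linalg.eigh(H)
phi0 = eigvecs[:, 0]

overlap = abs(psi @ phi0)              # ~1 once the power method has converged
```

For this choice of $\tau$, all eigenvalues of $T$ are positive and the largest one corresponds to the lowest eigenvalue of $H$, so the iteration converges to the ground state.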
@@ -266,19 +267,19 @@ This is the
\label{sec:proba}
%=======================================%
In order to derive a probabilistic expression for the Green's matrix we introduce a so-called guiding vector, $\ket{\Psi^+}$, having strictly positive components, \ie, $\Psi^+_i > 0$, and apply a similarity transformation to the operators $G^{(N)}$ and $T$
In order to derive a probabilistic expression for the Green's matrix, we introduce a so-called guiding vector, $\ket{\PsiG}$, having strictly positive components, \ie, $\PsiG_i > 0$, and apply a similarity transformation to the operators $G^{(N)}$ and $T$
\begin{align}
\label{eq:defT}
\bar{T}_{ij} & = \frac{\Psi^+_j}{\Psi^+_i} T_{ij}
\bar{T}_{ij} & = \frac{\PsiG_j}{\PsiG_i} T_{ij}
\\
\bar{G}^{(N)}_{ij}& = \frac{\Psi^+_j}{\Psi^+_i} G^{(N)}_{ij}
\bar{G}^{(N)}_{ij}& = \frac{\PsiG_j}{\PsiG_i} G^{(N)}_{ij}
\end{align}
Note that under the similarity transformation the path integral expression, Eq.~\eqref{eq:G}, relating $G^{(N)}$ and $T$ remains unchanged for the similarity-transformed operators, $\bar{G}^{(N)}$ and $\bar{T}$.
Next, the matrix elements of $\bar{T}$ are expressed as those of a stochastic matrix multiplied by some residual weight, namely
\be
\label{eq:defTij}
\bar{T}_{ij}= p_{i \to j} w_{ij}
\bar{T}_{ij}= p_{i \to j} w_{ij}.
\ee
Here, we recall that a stochastic matrix is defined as a matrix with positive entries obeying
\be
@@ -289,7 +290,7 @@ To build the transition probability density the following operator is introduced
%As known, there is a natural way of associating a stochastic matrix to a matrix having a positive ground-state vector (here, a positive vector is defined here as
%a vector with all components positive).
\be
T^+=\mathds{1} - \tau [ H^+-E_L^+\mathds{1}]
T^+= \Id - \tau \qty( H^+ - \EL^+ \Id ),
\ee
where
$H^+$ is the matrix obtained from $H$ by constraining the off-diagonal elements to be negative
@@ -301,45 +302,45 @@ $H^+$ is the matrix obtained from $H$ by imposing the off-diagonal elements to b
-\abs{H_{ij}}, & \text{if $i\neq j$}.
\end{cases}
\ee
Here, $E_L^+ \mathds{1}$ is the diagonal matrix whose diagonal elements are defined as
Here, $\EL^+ \Id$ is the diagonal matrix whose diagonal elements are defined as
\be
E^+_{Li}= \frac{\sum_j H^+_{ij}\Psi^+_j}{\Psi^+_i}.
(\EL^+)_{i}= \frac{\sum_j H^+_{ij}\PsiG_j}{\PsiG_i}.
\ee
The vector $\ket{E^+_L}$ is known as the local energy vector associated with $\ket{\Psi^+}$.
The vector $\EL^+$ is known as the local energy vector associated with $\PsiG$.
Actually, the operator $H^+-E^+_L \mathds{1}$ in the definition of the operator $T^+$ has been chosen to admit by construction $|\Psi^+ \rangle$ as ground-state with zero eigenvalue
Actually, the operator $H^+ - \EL^+ \Id$ in the definition of the operator $T^+$ has been chosen so that, by construction, it admits $\ket{\PsiG}$ as its ground state with zero eigenvalue
\be
\label{eq:defTplus}
[H^+ - E_L^+ \mathds{1}]|\Psi^+\rangle=0,
\qty(H^+ - \EL^+ \Id) \ket{\PsiG} = 0,
\ee
leading to the relation
\be
T^+ |\Psi^+\rangle=|\Psi^+\rangle.
\label{relT+}
\label{eq:relT+}
T^+ \ket{\PsiG} = \ket{\PsiG}.
\ee
The stochastic matrix is now defined as
\be
\label{eq:pij}
p_{i \to j} = \frac{\Psi^+_j}{\Psi^+_i} T^+_{ij}.
p_{i \to j} = \frac{\PsiG_j}{\PsiG_i} T^+_{ij}.
\ee
The diagonal elements of the stochastic matrix read
\be
p_{i \to i} = 1 -\tau (H^+_{ii}- E^+_{Li})
p_{i \to i} = 1 - \tau \qty[ H^+_{ii}- (\EL^+)_{i} ]
\ee
while, for $i \ne j$,
\be
p_{i \to j} = \tau \frac{\Psi^+_{j}}{\Psi^+_{i}} |H_{ij}| \ge 0
p_{i \to j} = \tau \frac{\PsiG_{j}}{\PsiG_{i}} \abs{H_{ij}} \ge 0
\ee
As seen, the off-diagonal terms $p_{i \to j}$ are positive, while the diagonal ones, $p_{i \to i}$, can be made positive if $\tau$ is chosen sufficiently small.
More precisely, the condition reads
\be
\label{eq:cond}
\tau \leq \frac{1}{\max_i\abs{H^+_{ii}-E^+_{Li}}}
\tau \leq \frac{1}{\max_i\abs{H^+_{ii}-(\EL^+)_{i}}}
\ee
The sum-over-states condition, Eq.~\eqref{eq:sumup}, follows from the fact that $|\Psi^+\rangle$ is eigenvector of $T^+$, Eq.(\ref{relT+})
The sum-over-states condition, Eq.~\eqref{eq:sumup}, follows from the fact that $\ket{\PsiG}$ is an eigenvector of $T^+$, Eq.~\eqref{eq:relT+}
\be
\sum_j p_{i \to j}= \frac{1}{\Psi^+_{i}} \langle i |T^+| \Psi^ +\rangle =1.
\sum_j p_{i \to j}= \frac{1}{\PsiG_{i}} \mel{i}{T^+}{\PsiG} = 1.
\ee
We have then verified that $p_{i \to j}$ is indeed a stochastic matrix.
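The chain of definitions above ($H^+$, the local energy vector, $T^+$, and finally $p_{i \to j}$) can be checked directly on a small example. The sketch below uses a hypothetical $3\times 3$ Hamiltonian and guiding vector (illustrative values only) and verifies that the resulting matrix has non-negative entries and rows summing to one.

```python
import numpy as np

# Hypothetical 3x3 Hamiltonian and strictly positive guiding vector
# (illustrative values only)
H = np.array([[ 0.0,  0.5, -0.2],
              [ 0.5,  1.0, -0.3],
              [-0.2, -0.3,  2.0]])
psi_plus = np.array([1.0, 0.8, 0.5])      # Psi^+_i > 0

# H^+: same diagonal as H, off-diagonal elements forced to -|H_ij|
Hp = np.where(np.eye(3, dtype=bool), H, -np.abs(H))

# Local energy vector (E_L^+)_i = sum_j H^+_ij Psi^+_j / Psi^+_i
EL = (Hp @ psi_plus) / psi_plus

tau = 0.05                                # small enough to keep p_{i->i} >= 0
Tp = np.eye(3) - tau * (Hp - np.diag(EL)) # T^+ = Id - tau (H^+ - E_L^+ Id)

# Stochastic matrix p_{i->j} = (Psi^+_j / Psi^+_i) T^+_ij
p = Tp * psi_plus[None, :] / psi_plus[:, None]

row_sums = p.sum(axis=1)                  # = 1 since T^+ |Psi^+> = |Psi^+>
```

The row sums equal one by construction, since $T^+\ket{\PsiG} = \ket{\PsiG}$ holds exactly for any $H^+$ and $\PsiG$ built this way.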
@@ -350,37 +351,41 @@ can be used without violating the positivity of the transition probability matri
Note that this condition can even be avoided by slightly generalizing the transition probability
matrix as follows
\be
p_{i \to j} = \frac{ \frac{\Psi^+_{j}}{\Psi^+_{i}} |\langle i | T^+ | j\rangle| } { \sum_j \frac{\Psi^+_{j}}{\Psi^+_{i}} |\langle i | T^+ | j\rangle|}
= \frac{ \Psi^+_{j} |\langle i | T^+ | j\rangle| }{\sum_j \Psi^+_{j} |\langle i | T^+ | j\rangle|}
p_{i \to j}
= \frac{ \frac{\PsiG_{j}}{\PsiG_{i}} \abs{\mel{i}{T^+}{j}} }
{ \sum_j \frac{\PsiG_{j}}{\PsiG_{i}} \abs{\mel{i}{T^+}{j}} }
= \frac{ \PsiG_{j} \abs{\mel{i}{T^+}{j}} }
{ \sum_j \PsiG_{j} \abs{\mel{i}{T^+}{j}} }
\ee
This new transition probability matrix with positive entries reduces to Eq.~\eqref{eq:pij} when $T^+_{ij}$ is positive.
Now, using Eqs.~\eqref{eq:defT}, \eqref{eq:defTij} and \eqref{eq:pij}, the residual weight reads
\be
w_{ij}=\frac{T_{ij}}{T^+_{ij}}.
w_{ij}=\frac{T_{ij}}{T^+_{ij}}.
\ee
Using these notations, the Green's matrix components can be rewritten as
\be
{\bar G}^{(N)}_{i i_0}=\sum_{i_1,\ldots,i_{N-1}} \qty[ \prod_{k=0}^{N-1} p_{i_{k} \to i_{k+1}} ] \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}}
\bar{G}^{(N)}_{\titou{i i_0}} =
\sum_{i_1,\ldots,i_{N-1}} \qty( \prod_{k=0}^{N-1} p_{i_{k} \to i_{k+1}} ) \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}}
\ee
where $i$ is identified to $i_N$.
\titou{where $i$ is identified with $i_N$.}
The product $\prod_{k=0}^{N-1} p_{i_{k} \to i_{k+1}}$ is the probability, denoted ${\rm Prob}_{i_0 \to i_N}(i_1,...,i_{N-1})$,
for the path starting at $|i_0\rangle$ and ending at $|i_N\rangle$ to occur.
Using the fact that $p_{i \to j} \ge 0$ and Eq.~\eqref{eq:sumup} we verify that ${\rm Prob}_{i_0 \to i_N}$ is positive and obeys
The product $\prod_{k=0}^{N-1} p_{i_{k} \to i_{k+1}}$ is the probability, denoted $\text{Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1})$,
for the path starting at $\ket{i_0}$ and ending at $\ket{i_N}$ to occur.
Using the fact that $p_{i \to j} \ge 0$ and Eq.~\eqref{eq:sumup}, we verify that $\text{Prob}_{i_0 \to i_N}$ is positive and obeys
\be
\sum_{i_1,..., i_{N-1}} {\rm Prob}_{i_0 \to i_N}(i_1,...,i_{N-1})=1
\sum_{i_1,\ldots,i_{N-1}} \text{Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1}) = 1
\ee
as it should be.
The probabilistic average associated with this probability for the path, denoted here as, $ \Big \langle ... \Big \rangle$ is then defined as
The probabilistic average associated with this path probability, denoted here as $\expval{\cdots}$, is then defined as
\be
\Big \langle F \Big \rangle = \sum_{i_1,..., i_{N-1}} F(i_0,...,i_N) {\rm Prob}_{i_0 \to i_N}(i_1,...,i_{N-1}),
\expval{F} = \sum_{i_1,\ldots,i_{N-1}} F(i_0,\ldots,i_N) \text{Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1}),
\label{average}
\ee
where $F$ is an arbitrary function.
Finally, the path-integral expressed as a probabilistic average writes
Finally, the path-integral expressed as a probabilistic average reads
\be
{\bar G}^{(N)}_{ii_0}= \Big \langle \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} \Big \rangle
\bar{G}^{(N)}_{ii_0}= \expval{ \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} }
\label{cn_stoch}
\ee
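The probabilistic average just derived can be illustrated with a toy simulation: sample paths from $p$, accumulate the product of weights, and compare with the exact matrix power. The sketch below uses a hypothetical two-state model with the trivial guiding vector $\PsiG = (1,1)$, so that $\bar{T} = T$ and $p = T^+$; all numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2-state model (illustrative numbers only).
# With Psi^+ = (1,1): p = T^+ and w_ij = T_ij / T^+_ij.
T  = np.array([[0.90, 0.15],
               [0.05, 0.85]])            # bare operator T
Tp = np.array([[0.80, 0.20],
               [0.30, 0.70]])            # T^+ chosen already stochastic here
p  = Tp
w  = T / Tp                              # residual weights

N, n_walkers, i0 = 5, 100_000, 0
states  = np.full(n_walkers, i0)
weights = np.ones(n_walkers)
for _ in range(N):
    r   = rng.random(n_walkers)
    nxt = (r > p[states, 0]).astype(int) # draw the next state from p_{i->.}
    weights *= w[states, nxt]            # accumulate the product of weights
    states = nxt

# <prod_k w> binned by final state estimates (T^N)_{i_0 i_N}
G_est   = np.array([weights[states == i].sum() for i in range(2)]) / n_walkers
G_exact = np.linalg.matrix_power(T, N)[i0]
```

Binning the accumulated weight by final state recovers the Green's matrix elements within the Monte Carlo statistical error.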
To calculate the probabilistic average, Eq.~\eqref{average},
@@ -388,16 +393,15 @@ an artificial (mathematical) ``particle'' called walker (or psi-particle) is int
During the Monte Carlo simulation, the walker moves in configuration space by drawing new states with
probability $p_{i_k \to i_{k+1}}$, thus realizing a path with probability
$\text{Prob}_{i_0 \to i_N}$.
The energy, Eq.(\ref{E0}) is given as
The energy, Eq.~\eqref{eq:E0}, is given by
\be
E_0 = \lim_{N \to \infty } \frac{ \Big \langle \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} {(H\PsiT)}_{i_N} \Big \rangle}
{ \Big \langle \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} {\PsiT}_{i_N} \Big \rangle}
E_0 = \lim_{N \to \infty }
\frac{ \expval{ \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} {(H\PsiT)}_{i_N}} }
{ \expval{ \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} {\PsiT}_{i_N} }}
\ee
Note that, instead of using a single walker, it is possible to introduce a population of independent walkers and to calculate the averages over the population.
In addition, thanks to the ergodic property of the stochastic matrix (see, Refs \onlinecite{Caffarel_1988})
a unique path of infinite length from which
sub-paths of length $N$ can be extracted may also be used. We shall not here insist on these practical details that can be
found, for example, in refs \onlinecite{Foulkes_2001,Kolorenc_2011}.
In addition, thanks to the ergodic property of the stochastic matrix (see, for example, Ref.~\onlinecite{Caffarel_1988}), a unique path of infinite length, from which sub-paths of length $N$ can be extracted, may also be used.
We shall not dwell here on these practical details, which can be found, for example, in Refs.~\onlinecite{Foulkes_2001,Kolorenc_2011}.
%{\it Spawner representation} In this representation, we no longer consider moving particles but occupied or non-occupied states $|i\rangle$.
%To each state is associated the (positive or negative) quantity $c_i$.
@@ -427,37 +431,38 @@ fluctuations. This idea was proposed some time ago\cite{assaraf_99,Assaraf_1999B
Let us consider a given state $\ket{i}$. The probability that the walker remains exactly $n$ times in $\ket{i}$ ($n$ from
1 to $\infty$) and then exits to a different state $\ket{j}$ is
\be
{\cal P}(i \to j, n) = [p(i \to i)]^{n-1} p(i \to j) \;\;\;\; j \ne i.
\cP_{i \to j}(n) = \qty(p_{i \to i})^{n-1} p_{i \to j} \qq{$j \ne i$.}
\ee
Using the relation $\sum_{n=1}^{\infty} p^{n-1}(i \to i)=\frac{1}{1-p(i \to i)}$ and the normalization
of the $p(i \to j)$, Eq.(\ref{sumup}), we verify that
the probability is normalized to one
Using the relation
\be
\sum_{j \ne i} \sum_{n=1}^{\infty} {\cal P}(i \to j,n) = 1.
\sum_{n=1}^{\infty} p_{i \to i}^{n-1}=\frac{1}{1-p_{i \to i}}
\ee
and the normalization of the $p_{i \to j}$, Eq.~\eqref{eq:sumup}, we verify that the probability is normalized to one
\be
\sum_{j \ne i} \sum_{n=1}^{\infty} \cP_{i \to j}(n) = 1.
\ee
The probability of being trapped during $n$ steps is obtained by summing over all possible exit states
\be
P_i(n)=\sum_{j \ne i} {\cal P}(i \to j,n) = [p(i \to i)]^{n-1}(1-p(i \to i)).
P_i(n)=\sum_{j \ne i} \cP_{i \to j}(n) = \qty(p_{i \to i})^{n-1} \qty( 1 - p_{i \to i} ).
\ee
This probability defines a Poisson law
with an average number $\bar{n}_i$ of trapping events given by
This probability defines a Poisson law with an average number $\bar{n}_i$ of trapping events given by
\be
\bar{n}_i= \sum_{n=1}^{\infty} n P_i(n) = \frac{1}{1 -p(i \to i)}.
\bar{n}_i= \sum_{n=1}^{\infty} n P_i(n) = \frac{1}{1 -p_{i \to i}}.
\ee
Introducing the continuous time $t_i=n_i\tau$, the average trapping time is given by
\be
\bar{t_i}= \frac{1}{H^+_{ii}-E^+_{Li}}.
\bar{t}_i = \frac{1}{H^+_{ii}-(\EL^+)_{i}}.
\ee
Taking the limit $\tau \to 0$ the Poisson probability takes the usual form
Taking the limit $\tau \to 0$, the Poisson probability takes the usual form
\be
P_{i}(t) = \frac{1}{\bar{t}_i} e^{-\frac{t}{\bar{t}_i}}
P_{i}(t) = \frac{1}{\bar{t}_i} \exp(-\frac{t}{\bar{t}_i})
\ee
The time-averaged contribution of the on-state weight can be easily calculated to be
The time-averaged contribution of the \titou{on-state} weight can be easily calculated to be
\be
\bar{w}_i= \sum_{n=1}^{\infty} w^n_{ii} P_i(n)= \frac{T_{ii}}{T^+_{ii}} \frac{1-T^+_{ii}}{1-T_{ii}}
\bar{w}_i= \sum_{n=1}^{\infty} w^n_{ii} P_i(n)= \frac{T_{ii}}{T^+_{ii}} \frac{1-T^+_{ii}}{1-T_{ii}}
\ee
Details of the implementation of the effective dynamics can be in found in Refs. (\onlinecite{assaraf_99},\onlinecite{caffarel_00}).
Details of the implementation of the effective dynamics can be found in Refs.~\onlinecite{assaraf_99} and \onlinecite{caffarel_00}.
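The closed forms above (normalization, $\bar{n}_i$, and $\bar{w}_i$) are straightforward to verify by truncating the infinite sums numerically. The sketch below uses hypothetical scalar values for $T_{ii}$ and $T^+_{ii}$ (with $\PsiG = 1$, so that $p_{i \to i} = T^+_{ii}$); the numbers are illustrative only.

```python
import numpy as np

# Hypothetical single-state values (with Psi^+ = 1, p_{i->i} = T^+_ii)
T_ii, Tp_ii = 0.92, 0.95
p_ii = Tp_ii

n   = np.arange(1, 4000)                  # truncation of the infinite sums
P_n = p_ii**(n - 1) * (1 - p_ii)          # trapping law P_i(n)

norm  = P_n.sum()                         # sums to 1
n_bar = (n * P_n).sum()                   # average trapping number 1/(1 - p_ii)
w_bar = ((T_ii / Tp_ii)**n * P_n).sum()   # time-averaged on-state weight

# Closed form quoted in the text
w_bar_closed = (T_ii / Tp_ii) * (1 - Tp_ii) / (1 - T_ii)
```

Since $p_{i \to i}^{n}$ decays geometrically, the truncation at a few thousand terms is exact to machine precision here.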
%=======================================%
\subsection{General domains}
@@ -473,49 +478,41 @@ non-zero-probability to reach any other state in a finite number of steps). In p
Let us write an arbitrary path of length $N$ as
\be
|i_0 \rangle \to |i_1 \rangle \to ... \to |i_N \rangle
\ket{i_0} \to \ket{i_1} \to \cdots \to \ket{i_N}
\ee
where the successive states are drawn using the transition probability matrix, $p(i \to j)$. This series can be rewritten
where the successive states are drawn using the transition probability matrix $p_{i \to j}$. This series can be rewritten as
\be
(|I_0\rangle,n_0) \to (|I_1 \rangle,n_1) \to... \to (|I_p\rangle,n_p)
\label{eff_series}
\label{eq:eff_series}
(\ket*{I_0},n_0) \to (\ket*{I_1},n_1) \to \cdots \to (\ket*{I_p},n_p)
\ee
where $|I_0\rangle=|i_0\rangle$ is the initial state,
$n_0$ the number of times the walker remains within the domain of $|i_0\rangle$ ($n_0=1$ to $N+1$), $|I_1\rangle$ is the first exit state,
that is not belonging to
${\cal D}_{i_0}$, $n_1$ is the number of times the walker remains within ${\cal D}_{i_1}$ ($n_1=1$ to $N+1-n_0$), $|I_2\rangle$ the second exit state, and so on.
Here, the integer $p$ goes from 0 to $N$ and indicates the number of exit events occurring along the path. The two extreme cases, $p=0$ and $p=N$,
correspond to the cases where the walker remains for ever within the initial domain, and to the case where the walker leaves the current domain at each step,
respectively.
where $\ket{I_0}=\ket{i_0}$ is the initial state, $n_0$ the number of times the walker remains within the domain of $\ket{i_0}$ ($n_0=1$ to $N+1$), $\ket{I_1}$ the first exit state, \ie, the first state not belonging to $\cD_{i_0}$, $n_1$ the number of times the walker remains within $\cD_{I_1}$ ($n_1=1$ to $N+1-n_0$), $\ket{I_2}$ the second exit state, and so on.
Here, the integer $p$ ranges from 0 to $N$ and counts the number of exit events occurring along the path. The two extreme cases, $p=0$ and $p=N$, correspond to the walker remaining forever within the initial domain and to the walker leaving the current domain at each step, respectively.
In what follows, we shall systematically write the integers representing the exit states in capital letters.
%Generalizing what has been done for domains consisting of only one single state, the general idea here is to integrate out exactly the stochastic dynamics over the
%set of all paths having the same representation, Eq.(\ref{eff_series}). As a consequence, an effective Monte Carlo dynamics including only exit states
%averages for renormalized quantities will be defined.\\
Let us define the probability of being $n$ times within the domain of $|I_0\rangle$ and, then, to exit at $|I\rangle \notin {\cal D}_{I_0}$.
Let us define the probability of being $n$ times within the domain of $\ket{I_0}$ and, then, to exit at $\ket{I} \notin \cD_{I_0}$.
It is given by
$$
{\cal P}(I_0 \to I,n)= \sum_{|i_1\rangle \in {\cal D}_{I_0}} ... \sum_{|i_{n-1}\rangle \in {\cal D}_{I_0}}
$$
\be
p(I_0 \to i_1) ... p(i_{n-2} \to i_{n-1}) p(i_{n-1} \to I)
\label{eq1C}
\label{eq:eq1C}
\cP_{I_0 \to I}(n) = \sum_{\ket{i_1} \in \cD_{I_0}} \cdots \sum_{\ket{i_{n-1}} \in \cD_{I_0}}
p_{I_0 \to i_1} \cdots p_{i_{n-2} \to i_{n-1}} p_{i_{n-1} \to I}
\ee
To proceed we need to introduce the projector associated with each domain
\be
P_I= \sum_{|k\rangle \in {\cal D}_I} |k\rangle \langle k|
P_I= \sum_{\ket{k} \in \cD_I} \dyad{k}{k}
\label{pi}
\ee
and to define the restriction of the operator $T^+$ to the domain
\be
T^+_I= P_I T^+ P_I.
T^+_I= P_I T^+ P_I.
\ee
$T^+_I$ is the operator governing the dynamics of the walkers moving within ${\cal D}_{I}$.
Using Eqs.~\eqref{eq:eq1C} and \eqref{eq:pij}, the probability can be rewritten as
\be
{\cal P}(I_0 \to I,n)=
\frac{1}{\Psi^+_{I_0}} \langle I_0 | {T^+_{I_0}}^{n-1} F^+_{I_0}|I\rangle \Psi^+_{I}
\cP_{I_0 \to I}(n) = \frac{1}{\PsiG_{I_0}} \mel{I_0}{\qty(T^+_{I_0})^{n-1} F^+_{I_0}}{I} \PsiG_{I}
\label{eq3C}
\ee
where the operator $F$, corresponding to the last move connecting the inside and outside regions of the
@@ -531,7 +528,7 @@ Physically, $F$ may be seen as a flux operator through the boundary of ${\cal D}
Now, the probability of being trapped $n$ times within ${\cal D}_{I}$ is given by
\be
P_{I}(n)=
\frac{1}{\Psi^+_{I}} \langle I | {T^+_{I}}^{n-1} F^+_{I}|\Psi^+ \rangle.
\frac{1}{\PsiG_{I}} \mel{I}{\qty(T^+_{I})^{n-1} F^+_{I}}{\PsiG}.
\label{PiN}
\ee
Using the fact that
@@ -541,12 +538,12 @@ Using the fact that
\ee
we have
\be
\sum_{n=0}^{\infty} P_{I}(n) = \frac{1}{\Psi^+_{I}} \sum_{n=1}^{\infty} \Big( \langle I | {T^+_{I}}^{n-1} |\Psi^+\rangle
- \langle I | {T^+_{I}}^{n} |\Psi^+\rangle \Big) = 1
\sum_{n=1}^{\infty} P_{I}(n) = \frac{1}{\PsiG_{I}} \sum_{n=1}^{\infty} \qty[ \mel{I}{\qty(T^+_{I})^{n-1}}{\PsiG} - \mel{I}{\qty(T^+_{I})^{n}}{\PsiG} ] = 1
\ee
and the average trapping time
\be
t_{I}={\bar n}_{I} \tau= \frac{1}{\Psi^+_{I}} \langle I | P_{I} \frac{1}{H^+ -E_L^+} P_{I} | \Psi^+\rangle
\bar{t}_{I} = \bar{n}_{I} \tau = \frac{1}{\PsiG_{I}} \mel{I}{P_{I} \frac{1}{H^+ - \EL^+ \Id} P_{I}}{\PsiG}
\ee
In practice, the various quantities restricted to the domain are computed by diagonalizing the matrix $(H^+ - \EL^+ \Id)$ in $\cD_{I}$. Note that
this is possible only if the dimension of the domain is not too large (say, less than a few thousand).
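In the same spirit, the domain formulas can be checked by brute force on a small model: build $T^+$, restrict it to a two-state domain, accumulate $P_I(n)$ term by term, and compare the resulting average trapping time with the matrix-inverse expression. Everything below (the $4\times 4$ Hamiltonian, the guiding vector, the choice of domain, and the explicit form taken for the flux operator) is a hypothetical illustration, not taken from this work.

```python
import numpy as np

# Hypothetical 4-state model with domain D = {0, 1} (illustrative values only)
H = np.array([[ 0.0, -0.4, -0.1, -0.2],
              [-0.4,  1.0, -0.3, -0.1],
              [-0.1, -0.3,  2.0, -0.5],
              [-0.2, -0.1, -0.5,  3.0]])
psi_plus = np.array([1.0, 0.7, 0.4, 0.3])

Hp = np.where(np.eye(4, dtype=bool), H, -np.abs(H))
EL = (Hp @ psi_plus) / psi_plus            # local energy vector
tau = 0.02
Tp  = np.eye(4) - tau * (Hp - np.diag(EL))

P   = np.diag([1.0, 1.0, 0.0, 0.0])        # projector on the domain
TpI = P @ Tp @ P                           # restriction T^+_I = P_I T^+ P_I
F   = P @ Tp @ (np.eye(4) - P)             # assumed flux form: P_I T^+ (Id - P_I)

A = TpI[:2, :2]                            # domain block of T^+_I
v = (F @ psi_plus)[:2]                     # F|Psi^+> lives inside the domain
P_n = []
for _ in range(5000):                      # P_I(n) = <I|(T^+_I)^{n-1} F|Psi^+>/Psi^+_I
    P_n.append(v[0] / psi_plus[0])         # take I = 0
    v = A @ v
P_n = np.array(P_n)

norm  = P_n.sum()                                  # normalized to 1
t_dom = tau * (np.arange(1, 5001) * P_n).sum()     # average trapping time

# Closed form with the inverse of (H^+ - E_L^+) restricted to the domain
M = (Hp - np.diag(EL))[:2, :2]
t_closed = (np.linalg.inv(M) @ psi_plus[:2])[0] / psi_plus[0]
```

The term-by-term sum and the restricted matrix inverse agree up to the (tiny) truncation error, mirroring the telescoping argument used for the normalization above.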
@@ -607,7 +604,7 @@ $$
\sum_{n_0 \ge 1} \cdots \sum_{n_p \ge 1}
\ee
\be
\delta(\sum_k n_k=N+1) \Big[ \prod_{k=0}^{p-1} [\frac{\Psi^+_{I_{k+1}}}{\Psi^+_{I_k}} \langle I_k| T^{n_k-1}_{I_k} F_{I_k} |I_{k+1} \rangle \Big]
\delta(\sum_k n_k=N+1) \qty[ \prod_{k=0}^{p-1} \frac{\PsiG_{I_{k+1}}}{\PsiG_{I_k}} \langle I_k| T^{n_k-1}_{I_k} F_{I_k} |I_{k+1} \rangle ]
\bar{G}^{(n_p-1),\cD}_{I_p I_N}.
\label{Gbart}
\ee
@@ -730,7 +727,7 @@ which is identical to Eq.(\ref{eqfond}) when $G$ is expanded iteratively.\\
\\
Let us use as effective transition probability density
\be
P(I \to J) = \frac{1} {\Psi^+(I)} \langle I| P_I \frac{1}{H^+-E^+_L} P_I (-H^+) (1-P_I)|J\rangle \Psi^+(J)
P(I \to J) = \frac{1}{\PsiG_{I}} \mel{I}{P_I \frac{1}{H^+ - \EL^+ \Id} P_I (-H^+) (\Id - P_I)}{J} \PsiG_{J}
\ee
and the weight
\be