From 6c45a387af516af0577b0a730d46905617d2e369 Mon Sep 17 00:00:00 2001
From: Pierre-Francois Loos
Date: Thu, 15 Sep 2022 15:52:03 +0200
Subject: [PATCH] saving work

---
 g.tex | 77 +++++++++++++++++++++++++++++++----------------------------
 1 file changed, 40 insertions(+), 37 deletions(-)

diff --git a/g.tex b/g.tex
index 4ab7180..ace438b 100644
--- a/g.tex
+++ b/g.tex
@@ -243,7 +243,7 @@ $e^{-N\tau H}$ which is usually referred to as the imaginary-time dependent Gree
 \titou{Introducing the set of $N-1$ intermediate states, $\{ \ket{i_k} \}_{1 \le k \le N-1}$, in the $N$th product of $T$,} $G^{(N)}$ can be written in the following expanded form
 \be
 \label{eq:cn}
- G^{(N)}_{i_0 i_N} = \sum_{i_1} \sum_{i_2} ... \sum_{i_{N-1}} \prod_{k=0}^{N-1} T_{i_{k} i_{k+1}},
+ G^{(N)}_{i_0 i_N} = \sum_{i_1} \sum_{i_2} \cdots \sum_{i_{N-1}} \prod_{k=0}^{N-1} T_{i_{k} i_{k+1}},
 \ee
 where $T_{ij} =\mel{i}{T}{j}$.
 Here, each index $i_k$ runs over all basis vectors.
@@ -460,25 +460,27 @@ Details of the implementation of this effective dynamics can be in found in Refs
 \label{sec:general_domains}
 %=======================================%
-Let us now extend the results of the preceding section to a general domain. For that,
-let us associate to each state $\ket{i}$ a set of states, called the domain of $\ket{i}$ and
-denoted $\cD_i$, consisting of the state $\ket{i}$ plus a certain number of states. No particular constraints on the type of domains
-are imposed, for example domains associated with different states can be identical, or may have or not common states. The only important condition is
-that the set of all domains ensures the ergodicity property of the effective stochastic dynamics (that is, starting from any state there is a
-non-zero-probability to reach any other state in a finite number of steps). In practice, it is not difficult to impose such a condition.
+Let us now extend the results of Sec.~\ref{sec:single_domains} to a general domain.
+To do so, let us associate to each state $\ket{i}$ a set of states, called the domain of $\ket{i}$ and denoted $\cD_i$, consisting of the state $\ket{i}$ plus a certain number of other states.
+No particular constraints on the type of domains are imposed.
+For example, domains associated with different states can be identical, and they may or may not have common states.
+The only important condition is that the set of all domains ensures the ergodicity of the effective stochastic dynamics (that is, starting from any state, there is a non-zero probability to reach any other state in a finite number of steps).
+In practice, it is not difficult to impose such a condition.
 Let us write an arbitrary path of length $N$ as
 \be
 \ket{i_0} \to \ket{i_1} \to \cdots \to \ket{i_N}
 \ee
-where the successive states are drawn using the transition probability matrix, $p_{i \to j}$. This series can be rewritten
+where the successive states are drawn using the transition probability matrix, $p_{i \to j}$.
+This sequence can be rewritten as
 \be
 \label{eq:eff_series}
 (\ket*{I_0},n_0) \to (\ket*{I_1},n_1) \to \cdots \to (\ket*{I_p},n_p)
 \ee
-where $\ket{I_0}=\ket{i_0}$ is the initial state, $n_0$ the number of times the walker remains within the domain of $\ket{i_0}$ ($n_0=1$ to $N+1$), $\ket{I_1}$ is the first exit state, that is not belonging to $\cD_{i_0}$, $n_1$ is the number of times the walker remains within $\cD_{i_1}$ ($n_1=1$ to $N+1-n_0$), $\ket{I_2}$ the second exit state, and so on.
-Here, the integer $p$ goes from 0 to $N$ and indicates the number of exit events occurring along the path. The two extreme cases, $p=0$ and $p=N$, correspond to the cases where the walker remains for ever within the initial domain, and to the case where the walker leaves the current domain at each step, respectively.
-In what follows, we shall systematically write the integers representing the exit states in capital letter.
+where $\ket{I_0}=\ket{i_0}$ is the initial state, $n_0$ is the number of times the walker remains within the domain of $\ket{i_0}$ (with $1 \le n_0 \le N+1$), $\ket{I_1}$ is the first exit state that does not belong to $\cD_{i_0}$, $n_1$ is the number of times the walker remains in $\cD_{i_1}$ (with $1 \le n_1 \le N+1-n_0$), $\ket{I_2}$ is the second exit state, and so on.
+Here, the integer $p$ goes from 0 to $N$ and indicates the number of exit events occurring along the path.
+The two extreme cases, $p=0$ and $p=N$, correspond to the case where the walker remains in the initial domain during the entire path and to the case where the walker exits a domain at each step, respectively.
+\titou{In what follows, we shall systematically write the integers representing the exit states in capital letters.}
 %Generalizing what has been done for domains consisting of only one single state, the general idea here is to integrate out exactly the stochastic dynamics over the
 %set of all paths having the same representation, Eq.(\ref{eff_series}). As a consequence, an effective Monte Carlo dynamics including only exit states
@@ -488,56 +490,57 @@ Let us define the probability of being $n$ times within the domain of $\ket{I_0}
 It is given by
 \be
 \label{eq:eq1C}
- \cP_{I_0 \to I}(n) = \sum_{|i_1\rangle \in {\cal D}_{I_0}} ... \sum_{|i_{n-1}\rangle \in {\cal D}_{I_0}}
-p_{I_0 \to i_1} \ldots p_{i_{n-2} \to i_{n-1}} p_{i_{n-1} \to I}
+ \cP_{I_0 \to I}(n)
+ = \sum_{\ket{i_1} \in \cD_{I_0}} \cdots \sum_{\ket{i_{n-1}} \in \cD_{I_0}}
+ p_{I_0 \to i_1} \ldots p_{i_{n-2} \to i_{n-1}} p_{i_{n-1} \to I}.
 \ee
 To proceed we need to introduce the projector associated with each domain
 \be
- P_I= \sum_{\ket{k} \in \cD_I} \dyad{k}{k}
-\label{pi}
+\label{eq:pi}
+ P_I = \sum_{\ket{k} \in \cD_I} \dyad{k}{k}
 \ee
 and to define the restriction of the operator $T^+$ to the domain
 \be
 T^+_I= P_I T^+ P_I.
 \ee
-$T^+_I$ is the operator governing the dynamics of the walkers moving within ${\cal D}_{I}$.
-Using Eqs.(\ref{eq1C}) and (\ref{pij}), the probability can be rewritten as
+$T^+_I$ is the operator governing the dynamics of the walkers moving within $\cD_{I}$.
+Using Eqs.~\eqref{eq:pij} and \eqref{eq:eq1C}, the probability can be rewritten as
 \be
- \cP+{I_0 \to I}(n) = \frac{1}{\PsiG_{I_0}} \mel{I_0}{\qty(T^+_{I_0})^{n-1} F^+_{I_0}}{I} \PsiG_{I}
-\label{eq3C}
+\label{eq:eq3C}
+ \cP_{I_0 \to I}(n) = \frac{1}{\PsiG_{I_0}} \mel{I_0}{\qty(T^+_{I_0})^{n-1} F^+_{I_0}}{I} \PsiG_{I},
 \ee
 where the operator $F$, corresponding to the last move connecting the inside and outside regions of the domain, is given by
 \be
-F^+_I = P_I T^+ (1-P_I),
-\label{Fi}
+\label{eq:Fi}
+ F^+_I = P_I T^+ (1-P_I),
 \ee
-that is, $(F^+_I)_{ij}= T^+_{ij}$ when $(|i\rangle \in {\cal D}_{I}, |j\rangle \notin {\cal D}_{I})$, and zero
+that is, $(F^+_I)_{ij}= T^+_{ij}$ when $(\ket{i} \in \cD_{I}, \ket{j} \notin \cD_{I})$, and zero
 otherwise. Physically, $F$ may be seen as a flux operator through the boundary of ${\cal D}_{I}$.
 Now, the probability of being trapped $n$ times within ${\cal D}_{I}$ is given by
 \be
-P_{I}(n)=
-\frac{1}{\PsiG_{I}} \langle I | {T^+_{I}}^{n-1} F^+_{I}|\PsiG \rangle.
-\label{PiN}
+\label{eq:PiN}
+ P_{I}(n) = \frac{1}{\PsiG_{I}} \mel{ I }{ {T^+_{I}}^{n-1} F^+_{I} }{ \PsiG }.
 \ee
 Using the fact that
 \be
-{T^+_I}^{n-1} F^+_I= {T^+_I}^{n-1} T^+ - {T^+_I}^n
-\label{relation}
+\label{eq:relation}
+ {T^+_I}^{n-1} F^+_I = {T^+_I}^{n-1} T^+ - {T^+_I}^n,
 \ee
 we have
 \be
-\sum_{n=0}^{\infty} P_{I}(n) = \frac{1}{\PsiG_{I}} \sum_{n=1}^{\infty} \Big( \langle I | {T^+_{I}}^{n-1} |\PsiG\rangle
-- \langle I | {T^+_{I}}^{n} |\PsiG\rangle \Big) = 1
+ \sum_{n=1}^{\infty} P_{I}(n)
+ = \frac{1}{\PsiG_{I}} \sum_{n=1}^{\infty} \qty( \mel{ I }{ {T^+_{I}}^{n-1} }{ \PsiG }
+ - \mel{ I }{ {T^+_{I}}^{n} }{ \PsiG } ) = 1
 \ee
 and the average trapping time
 \be
-t_{I}={\bar n}_{I} \tau= \frac{1}{\PsiG_{I}} \langle I | P_{I} \frac{1}{H^+ -E_L^+} P_{I} | \PsiG\rangle
+ t_{I} = {\bar n}_{I} \tau = \frac{1}{\PsiG_{I}} \mel{ I }{ P_{I} \frac{1}{H^+ - \EL^+} P_{I} }{ \PsiG }.
 \ee
-In practice, the various quantities restricted to the domain are computed by diagonalizing the matrix $(H^+-E_L^+)$ in ${\cal D}_{I}$. Note that
-it is possible only if the dimension of the domains is not too large (say, less than a few thousands).
+In practice, the various quantities restricted to the domain are computed by diagonalizing the matrix $(H^+-\EL^+)$ in $\cD_{I}$.
+Note that this is possible only if the dimension of the domains is not too large (say, less than a few thousand).
 %=======================================%
 \subsection{Expressing the Green's matrix using domains}
@@ -548,12 +551,12 @@ it is possible only if the dimension of the domains is not too large (say, less
 \subsubsection{Time-dependent Green's matrix}
 \label{sec:time}
 %--------------------------------------------%
-In this section we generalize the path-integral expression of the Green's matrix, Eqs.(\ref{G}) and (\ref{cn_stoch}), to the case where domains are used.
+In this section, we generalize the path-integral expression of the Green's matrix, Eqs.~\eqref{eq:G} and \eqref{eq:cn_stoch}, to the case where domains are used.
 For that we introduce the Green's matrix associated with each domain
 \be
-G^{(N),{\cal D}}_{IJ}= \langle J| T_I^N| I\rangle.
+ G^{(N),\cD}_{IJ} = \mel{ J }{ T_I^N }{ I }.
 \ee
-Starting from Eq.(\ref{cn})
+Starting from Eq.~\eqref{eq:cn}
 \be
 G^{(N)}_{i_0 i_N}= \sum_{i_1,...,i_{N-1}} \prod_{k=0}^{N-1} \langle i_k| T |i_{k+1} \rangle.
 \ee
@@ -575,12 +578,12 @@ $$
 \sum_{n_0 \ge 1} ... \sum_{n_p \ge 1}
 \ee
 \be
+\label{eq:Gt}
 \delta(\sum_{k=0}^p n_k=N+1) \Big[ \prod_{k=0}^{p-1} \langle I_k|T^{n_k-1}_{I_k} F_{I_k} |I_{k+1} \rangle \Big] G^{(n_p-1),{\cal D}}_{I_p I_N}
-\label{Gt}
 \ee
 This expression is the path-integral representation of the Green's matrix using only the variables $(|I_k\rangle,n_k)$ of the effective dynamics defined over the set
-of domains. The standard formula derived above, Eq.(\ref{G}) may be considered as the particular case where the domain associated with each state is empty,
+of domains. The standard formula derived above, Eq.~\eqref{eq:G}, may be considered as the particular case where the domain associated with each state is empty.
 In that case, $p=N$ and $n_k=1$ for $k=0$ to $N$ and we are left only with the $p$-th component of the sum, that is, $G^{(N)}_{I_0 I_N} = \prod_{k=0}^{N-1} \langle I_k|F_{I_k}|I_{k+1} \rangle $ where $F=T$.
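
The practical recipe added in this patch (building $T^+_I = P_I T^+ P_I$ and $F^+_I = P_I T^+ (1-P_I)$, evaluating the trapping probabilities $P_I(n)$, and obtaining the average trapping time $t_I$ from $(H^+-\EL^+)$ restricted to $\cD_I$) can be sketched numerically. The snippet below is only an illustrative NumPy sketch, not the authors' code: the names Tp ($T^+$), Hp ($H^+$), psi_g (the vector $\PsiG$), E_L ($\EL^+$), domain (the list of basis indices spanning $\cD_I$), and I are assumptions introduced here, and dense matrices are assumed for simplicity.

import numpy as np

# Illustrative sketch (assumed names, not the paper's code): compute the
# trapping probabilities P_I(n) and the average trapping time t_I for the
# domain D_I of state |I>, following the formulas quoted in the patch.
def domain_quantities(Tp, Hp, psi_g, E_L, domain, I, n_max=50):
    dim = Tp.shape[0]
    D = np.asarray(domain)                  # basis indices of D_I (must contain I)
    inside = np.zeros(dim, dtype=bool)
    inside[D] = True
    outside = np.where(~inside)[0]

    # T^+_I = P_I T^+ P_I : dynamics restricted to the domain
    Tp_I = np.zeros_like(Tp)
    Tp_I[np.ix_(D, D)] = Tp[np.ix_(D, D)]

    # F^+_I = P_I T^+ (1 - P_I) : last move, from inside D_I to outside
    Fp_I = np.zeros_like(Tp)
    Fp_I[np.ix_(D, outside)] = Tp[np.ix_(D, outside)]

    # P_I(n) = <I| (T^+_I)^{n-1} F^+_I |PsiG> / PsiG_I, for n = 1, ..., n_max
    probs = np.empty(n_max)
    v = Fp_I @ psi_g
    for n in range(1, n_max + 1):
        probs[n - 1] = v[I] / psi_g[I]
        v = Tp_I @ v

    # t_I = <I| P_I (H^+ - E_L^+)^{-1} P_I |PsiG> / PsiG_I, evaluated by solving
    # the linear system in the domain subspace only (the "diagonalize
    # (H^+ - E_L^+) in D_I" prescription of the text).
    A = (Hp - E_L * np.eye(dim))[np.ix_(D, D)]
    x = np.linalg.solve(A, psi_g[D])
    t_I = x[np.where(D == I)[0][0]] / psi_g[I]

    return probs, t_I

For domains containing up to a few thousand states, the restricted solve (or a one-off diagonalization of the same matrix, reusable for all walkers trapped in that domain) is inexpensive, which is the point of the size restriction noted in the text.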