We are now in a position to define the stochastic matrix as
\be
\label{eq:pij}
p_{i \to j}
= \frac{\PsiG_j}{\PsiG_i} T^+_{ij}
=
\begin{cases}
1 - \tau \qty[ H^+_{ii}- (\EL^+)_{i} ], & \qif* i=j,
\\
\tau \frac{\PsiG_{j}}{\PsiG_{i}} \abs{H_{ij}} \ge 0, & \qif* i\neq j.
\end{cases}
\ee
As readily seen in Eq.~\eqref{eq:pij}, the off-diagonal terms of the stochastic matrix are positive, while the diagonal ones can be made positive if $\tau$ is chosen sufficiently small via the condition
\be
\tau \qty[ H^+_{ii} - (\EL^+)_{i} ] \le 1 \quad \text{for all } i.
\ee

We shall not insist here on these practical details, which are discussed elsewhere.
During the simulation, walkers move from state to state with the possibility of being trapped a certain number of times on the same state before
exiting to a different state. This fact can be exploited in order to integrate out some part of the dynamics, thus leading to a reduction of the statistical
fluctuations. This idea was proposed some time ago and applied to the SU($N$) one-dimensional Hubbard model.\cite{Assaraf_1999A,Assaraf_1999B,Caffarel_2000}
Considering a given state $\ket{i}$, the probability that a walker remains exactly $n$ times in $\ket{i}$ (with $1 \le n < \infty$) and then exits to a different state $j$ (with $j \neq i$) is
\be
\cP_{i \to j}(n) = \qty(p_{i \to i})^{n-1} p_{i \to j},
\ee
and this defines a Poisson law with an average number of trapping events
\be
\bar{n}_i = \frac{1}{\tau \qty[ H^+_{ii} - (\EL^+)_{i} ]}.
\ee
Introducing the continuous time $t_i = n \tau$, the average trapping time is thus given by
\be
\bar{t}_i = \frac{1}{H^+_{ii}-(\EL^+)_{i}},
\ee
and, in the limit $\tau \to 0$, the Poisson probability takes the usual form
\be
P_{i}(t) = \frac{1}{\bar{t}_i} \exp(-\frac{t}{\bar{t}_i}).
\ee
The time-averaged contribution of the on-state weight can then be easily calculated to be
\be
\bar{w}_i= \sum_{n=1}^{\infty} w^n_{ii} P_i(n)= \frac{T_{ii}}{T^+_{ii}} \frac{1-T^+_{ii}}{1-T_{ii}}.
\ee
Details of the implementation of this effective dynamics can be found in the works cited above.
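As a simple illustration, the on-state quantities above can be evaluated in a few lines of Python; in the following sketch, the names \texttt{H}, \texttt{Hp}, and \texttt{eL\_plus} (standing for $H$, $H^+$, and the local energies $(\EL^+)_i$) are purely illustrative:
\begin{verbatim}
import numpy as np

# Illustrative sketch (not the actual implementation): on-state trapping
# quantities for a single state |i>, given dense arrays H, Hp and the
# local energies eL_plus[i] = (E_L^+)_i.
def on_state_quantities(H, Hp, eL_plus, tau, i):
    T_ii  = 1.0 - tau * (H[i, i]  - eL_plus[i])   # diagonal of T
    Tp_ii = 1.0 - tau * (Hp[i, i] - eL_plus[i])   # diagonal of T^+
    t_bar = 1.0 / (Hp[i, i] - eL_plus[i])         # average trapping time
    w_bar = (T_ii / Tp_ii) * (1.0 - Tp_ii) / (1.0 - T_ii)  # on-state weight
    return t_bar, w_bar

# P_i(n) = (T^+_ii)^(n-1) (1 - T^+_ii) is a geometric law, so the number
# of trapping events can be sampled directly:
def sample_trapping_number(Tp_ii, rng):
    return rng.geometric(1.0 - Tp_ii)
\end{verbatim}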
%=======================================%
Let us now extend the results of Sec.~\ref{sec:single_domains} to a general domain.
To do so, we associate to each state $\ket{i}$ a set of states, called the domain of $\ket{i}$ denoted $\cD_i$, consisting of the state $\ket{i}$ plus a certain number of states.
No particular constraints on the type of domains are imposed.
For example, domains associated with different states can be identical, and they may or may not have common states.
The only important condition is that the set of all domains ensures the ergodicity property of the effective stochastic dynamics, that is, starting from any state, there is a non-zero probability to reach any other state in a finite number of steps.
In practice, it is not difficult to impose such a condition.
Let us write an arbitrary path of length $N$ as
\be
\ket{i_0} \to \ket{i_1} \to \cdots \to \ket{i_N},
\ee
where the successive states are drawn using the transition probability matrix, $p_{i \to j}$.
This series can be recast as
\be
\label{eq:eff_series}
(\ket*{I_0},n_0) \to (\ket*{I_1},n_1) \to \cdots \to (\ket*{I_p},n_p),
\ee
where $\ket{I_0}=\ket{i_0}$ is the initial state, $n_0$ is the number of times the walker remains in the domain of $\ket{i_0}$ (with $1 \le n_0 \le N+1$), $\ket{I_1}$ is the first exit state that does not belong to $\cD_{i_0}$, $n_1$ is the number of times the walker remains in $\cD_{i_1}$ (with $1 \le n_1 \le N+1-n_0$), $\ket{I_2}$ is the second exit state, and so on.
Here, the integer $p$ (with $0 \le p \le N$) indicates the number of exit events occurring along the path.
The two extreme values, $p=0$ and $p=N$, correspond to the cases where the walker remains in the initial domain during the entire path, and where the walker exits a domain at each step, respectively.
\titou{In what follows, we shall systematically write the integers representing the exit states in capital letters.}
%Generalizing what has been done for domains consisting of only one single state, the general idea here is to integrate out exactly the stochastic dynamics over the
%set of all paths having the same representation, Eq.(\ref{eff_series}). As a consequence, an effective Monte Carlo dynamics including only exit states
%averages for renormalized quantities will be defined.\\
Let us define the probability of remaining $n$ times in the domain of $\ket{I_0}$ and to exit at $\ket{I} \notin \cD_{I_0}$ as
\be
\label{eq:eq1C}
\cP_{I_0 \to I}(n)
= \sum_{\ket{i_1} \in \cD_{I_0}} \cdots \sum_{\ket{i_{n-1}} \in \cD_{I_0}}
p_{I_0 \to i_1} \ldots p_{i_{n-2} \to i_{n-1}} p_{i_{n-1} \to I}.
\ee
\titou{To proceed}, we must introduce the projector associated with each domain
\be
\label{eq:pi}
P_I = \sum_{\ket{i} \in \cD_I} \dyad{i}{i}
\ee
and the projection of the operator $T^+$ onto the domain, which governs the dynamics of the walkers moving in $\cD_{I}$, \ie,
\be
T^+_I= P_I T^+ P_I.
\ee
Using Eqs.~\eqref{eq:pij} and \eqref{eq:eq1C}, the probability can be rewritten as
\be
\label{eq:eq3C}
\cP_{I_0 \to I}(n) = \frac{1}{\PsiG_{I_0}} \mel{I_0}{\qty(T^+_{I_0})^{n-1} F^+_{I_0}}{I} \PsiG_{I},
\ee
where the operator
\be
\label{eq:Fi}
F^+_I = P_I T^+ (1-P_I),
\ee
corresponding to the last move connecting the inside and outside regions of the domain, has the following matrix elements:
\be
(F^+_I)_{ij} =
\begin{cases}
T^+_{ij}, & \qif* \ket{i} \in \cD_{I} \titou{\land} \ket{j} \notin \cD_{I},
\\
0, & \text{otherwise}.
\end{cases}
\ee
Physically, $F$ may be seen as a flux operator through the boundary of $\cD_{I}$.
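In matrix form, these projected operators are straightforward to construct; a minimal NumPy sketch (where the names \texttt{Tp}, standing for $T^+$, and \texttt{domain}, the list of states in $\cD_I$, are illustrative) reads:
\begin{verbatim}
import numpy as np

# Illustrative sketch: build T^+_I = P_I T^+ P_I and F^+_I = P_I T^+ (1-P_I)
# from a dense matrix Tp and a list of state indices defining the domain.
def domain_operators(Tp, domain):
    n = Tp.shape[0]
    mask = np.zeros(n, dtype=bool)
    mask[list(domain)] = True
    P = np.diag(mask.astype(float))      # projector on D_I
    Tp_I = P @ Tp @ P                    # dynamics restricted to D_I
    Fp_I = P @ Tp @ (np.eye(n) - P)      # flux through the boundary of D_I
    return Tp_I, Fp_I
\end{verbatim}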
\titou{Now}, the probability of being trapped $n$ times within $\cD_{I}$ is given by
\be
\label{eq:PiN}
P_{I}(n) = \frac{1}{\PsiG_{I}} \mel{ I }{ \qty(T^+_{I})^{n-1} F^+_{I} }{ \PsiG }.
\ee
Using the fact that
\be
\label{eq:relation}
\qty(T^+_{I})^{n-1} F^+_I = \qty(T^+_{I})^{n-1} T^+ - \qty(T^+_I)^n,
\ee
we have
\be
\sum_{n=1}^{\infty} P_{I}(n)
= \frac{1}{\PsiG_{I}} \sum_{n=1}^{\infty} \qty[ \mel{ I }{ \qty(T^+_{I})^{n-1} }{ \PsiG }
- \mel{ I }{ \qty(T^+_{I})^{n} }{ \PsiG } ] = 1,
\ee
and the average trapping time is
\be
t_{I} = \bar{n}_{I} \tau = \frac{1}{\PsiG_{I}} \mel{ I }{ P_{I} \frac{1}{H^+ - \EL^+ \Id} P_{I} }{ \PsiG }.
\ee
In practice, the various quantities restricted to the domain are computed by diagonalizing the matrix $(H^+-\EL^+ \Id)$ in $\cD_{I}$.
Note that this is possible only if the dimension of the domains is not too large (say, less than a few thousand).
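For illustration, a minimal NumPy sketch of this step (again with illustrative names \texttt{Hp}, \texttt{eL\_plus}, and \texttt{psi\_g}) computes $t_I$ by solving the corresponding linear system in the domain rather than by a full diagonalization:
\begin{verbatim}
import numpy as np

# Illustrative sketch: average trapping time t_I for a domain D_I, given
# dense arrays Hp (H^+), eL_plus ((E_L^+)_i) and psi_g (Psi^G), with
# `domain` a list of state indices containing I.
def average_trapping_time(Hp, eL_plus, psi_g, domain, I):
    d = np.asarray(domain)
    A = Hp[np.ix_(d, d)] - np.diag(eL_plus[d])  # (H^+ - E_L^+ Id) in D_I
    x = np.linalg.solve(A, psi_g[d])            # A x = P_I |Psi^G>
    k = list(domain).index(I)                   # position of |I> in D_I
    return x[k] / psi_g[I]
\end{verbatim}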
%=======================================%
For that purpose, we introduce the Green's matrix associated with each domain
\be
G^{(N),\cD}_{IJ}= \mel{ J }{ T_I^N }{ I }.
\ee
%Starting from Eq.~\eqref{eq:cn}
%\be
%G^{(N)}_{i_0 i_N}= \sum_{i_1,...,i_{N-1}} \prod_{k=0}^{N-1} \langle i_k| T |i_{k+1} \rangle.
%\ee
Starting from Eq.~\eqref{eq:cn} and using the representation of the paths in terms of exit states and trapping times, we get
\be
G^{(N)}_{I_0 I_N} = \sum_{p=0}^N
\sum_{\cC_p} \sum_{(i_1,...,i_{N-1}) \in \cC_p}
\prod_{k=0}^{N-1} \mel{ i_k }{ T }{ i_{k+1} },
\ee
where $\cC_p$ is the set of paths with $p$ exit states, $\ket{I_k}$, and trapping times $n_k$ with the constraints that $\ket{I_k} \notin \cD_{I_{k-1}}$ (with $1 \le n_k \le N+1$ and $\sum_{k=0}^p n_k= N+1$).
We then have
\begin{multline}
\label{eq:Gt}
G^{(N)}_{I_0 I_N}= G^{(N),\cD}_{I_0 I_N} +
\sum_{p=1}^{N}
\sum_{\ket{I_1} \notin \cD_{I_0}, \ldots , \ket{I_p} \notin \cD_{I_{p-1}} }
\sum_{n_0 \ge 1} \cdots \sum_{n_p \ge 1}
\delta_{\sum_{k=0}^p n_k,N+1}
\\
\times
\qty[ \prod_{k=0}^{p-1} \mel{ I_k }{ \qty(T_{I_k})^{n_k-1} F_{I_k} }{ I_{k+1} } ]
G^{(n_p-1),\cD}_{I_p I_N}.
\end{multline}
This expression is the path-integral representation of the Green's matrix using only the variables $(\ket{I_k},n_k)$ of the effective dynamics defined over the set of domains.
The standard formula derived above, Eq.~\eqref{eq:G}, may be considered as the particular case where the domain associated with each state is empty.
In that case, $p=N$ and $n_k=1$ for $k=0,\ldots,N$, and we are left only with the $p$-th component of the sum, that is, $G^{(N)}_{I_0 I_N}
= \prod_{k=0}^{N-1} \mel{ I_k }{ F_{I_k} }{ I_{k+1} } $ where $F=T$.
To express the fundamental equation for $G$ in the form of a probabilistic average, we write the importance-sampled version of the equation as
\be
\label{eq:Gbart}
\bar{G}^{(N)}_{I_0 I_N}=\bar{G}^{(N),\cD}_{I_0 I_N} +
\sum_{p=1}^{N}
\sum_{\ket{I_1} \notin \cD_{I_0}, \ldots , \ket{I_p} \notin \cD_{I_{p-1}}}
\sum_{n_0 \ge 1} \cdots \sum_{n_p \ge 1}
\delta_{\sum_{k=0}^p n_k,N+1}
\qty[ \prod_{k=0}^{p-1} \frac{\PsiG_{I_{k+1}}}{\PsiG_{I_k}} \mel{ I_k }{ \qty(T_{I_k})^{n_k-1} F_{I_k} }{ I_{k+1} } ]
\bar{G}^{(n_p-1),\cD}_{I_p I_N}.
\ee
Introducing the weight
\be
W_{I_k I_{k+1}} = \frac{\mel{ I_k }{ \qty(T_{I_k})^{n_k-1} F_{I_k} }{ I_{k+1} }}{\mel{ I_k }{ \qty(T^{+}_{I_k})^{n_k-1} F^+_{I_k} }{ I_{k+1} }}
\ee
and using the effective transition probability defined in Eq.~\eqref{eq:eq3C}, we get
\be
\label{eq:Gbart2}
\bar{G}^{(N)}_{I_0 I_N}=\bar{G}^{(N),\cD}_{I_0 I_N}+ \sum_{p=1}^{N}
\expval{ \qty( \prod_{k=0}^{p-1} W_{I_k I_{k+1}} ) \bar{G}^{(n_p-1),\cD}_{I_p I_N} }
\ee
where the average is defined as
\be
\expval{F}
= \sum_{\ket{I_1} \notin \cD_{I_0}, \ldots , \ket{I_p} \notin \cD_{I_{p-1}}}
\sum_{n_0 \ge 1} \cdots \sum_{n_p \ge 1}
\delta_{\sum_k n_k,N+1}
\prod_{k=0}^{p-1}\cP_{I_k \to I_{k+1}}(n_k-1) \, F(I_0,n_0;\ldots;I_p,n_p).
\ee
In practice, a schematic DMC algorithm to compute the average is as follows.\\
i) Choose some initial vector $\ket{I_0}$\\
ii) Generate a stochastic path by running over $k$ (starting at $k=0$) as follows.\\
$\;\;\;\bullet$ Draw $n_k$ using the probability $P_{I_k}(n)$ [see Eq.~\eqref{eq:PiN}]\\
$\;\;\;\bullet$ Draw the exit state, $\ket{I_{k+1}}$, using the conditional probability $$\frac{\cP_{I_k \to I_{k+1}}(n_k)}{P_{I_k}(n_k)}$$\\
iii) Terminate the path when $N = \sum_k n_k$ exceeds some target value $N_\text{max}$ and compute $F(I_0,n_0;\ldots;I_p,n_p)$\\
iv) Go to step ii) until some maximum number of paths is reached.\\
\\
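A minimal Python transcription of steps i)--iv) (assuming user-supplied routines \texttt{sample\_n} and \texttt{sample\_exit} implementing the two drawing steps above; all names are illustrative) could read:
\begin{verbatim}
# Illustrative sketch of the path generation, steps i)-iv).
def generate_path(I0, N_max, sample_n, sample_exit, rng):
    path, n_tot, I = [], 0, I0
    while n_tot <= N_max:           # iii) stop once sum_k n_k > N_max
        n = sample_n(I, rng)        # draw n_k from P_{I_k}(n)
        J = sample_exit(I, n, rng)  # draw exit state from cP/P
        path.append((I, n))
        n_tot += n
        I = J
    return path, n_tot

# iv) generate many independent paths, evaluate F(I_0,n_0;...;I_p,n_p)
# on each of them, and average the results at the end of the run.
\end{verbatim}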
At the end of the simulation, an estimate of the average is obtained for a few values of $N$ greater than but close to $N_\text{max}$.
For $N_\text{max}$ large enough that the average has converged as a function of $p$, such values can be averaged.
%--------------------------------------------%
\subsubsection{Integrating out the trapping times: The Domain Green's Function Monte Carlo approach}
The second, more direct and elegant, is based on the Dyson equation.\\
\\
{\it $\bullet$ The pedestrian way}. Let us define the quantity\\
$$
G^E_{ij}= \tau \sum_{N=0}^{\infty} \mel{ i }{ T^N }{ j }.
$$
By summing over $N$ we obtain
\be
G^E_{ij}= \mel{i}{\frac{1}{H-E \Id}}{j}.
\ee
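This identity is simply a geometric series; assuming, consistently with Eq.~\eqref{eq:pij}, that the stochastic matrix takes the operator form $T = \Id - \tau \qty(H - E \Id)$, one indeed finds
\be
\tau \sum_{N=0}^{\infty} T^N = \tau \qty(\Id - T)^{-1} = \frac{1}{H - E \Id},
\ee
provided that the spectral radius of $T$ is smaller than one.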
This quantity, which no longer depends on the time step, is referred to as the energy-dependent Green's matrix.
Note that, in the continuum, this quantity is essentially the Laplace transform of the time-dependent Green's function, so we use the same denomination here.
The remarkable property is that, thanks to the summation over $N$ up to infinity, the constrained multiple sums appearing in Eq.~\eqref{eq:Gt} can be factorized as a product of unconstrained single sums as follows:
\be
\sum_{N=1}^\infty \sum_{p=1}^N \sum_{n_0 \ge 1} \cdots \sum_{n_p \ge 1} \delta_{n_0+\cdots+n_p,N+1}
= \sum_{p=1}^{\infty} \sum_{n_0=1}^{\infty} \cdots \sum_{n_p=1}^{\infty}.
\ee
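Each unconstrained sum over $n_k$ is then itself a geometric series that can be performed in closed form within the corresponding domain; with $T = \Id - \tau \qty(H - E \Id)$ as above, one has
\be
\sum_{n=1}^{\infty} \qty(T_I)^{n-1} = \qty[ \tau P_I \qty(H - E \Id) P_I ]^{-1},
\ee
where the inverse is understood within $\cD_I$.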
It is then a trivial matter to integrate out exactly the $n_k$ variables, leading to
$$
\mel{ I_0 }{ \frac{1}{H-E \Id} }{ I_N } = \mel{ I_0 }{ P_{I_0} \frac{1}{H-E \Id} P_{I_0} }{ I_N }
and the weight
\be
W^E_{IJ} =
\frac{\mel{ I }{ \frac{1}{H-E \Id} P_I (-H)(1-P_I) }{ J }}{\mel{ I }{ \frac{1}{H^+-\EL^+ \Id} P_I (-H^+)(1-P_I) }{ J }}
\ee
Using Eqs.~\eqref{eq:eq1C}, \eqref{eq:eq3C} and \eqref{eq:relation}, we verify that $P_{I \to J} \ge 0$ and $\sum_J P_{I \to J}=1$.
Finally, the probabilistic expression reads
$$
\mel{ I_0 }{ \frac{1}{H-E \Id} }{ I_N }