saving work

This commit is contained in:
Pierre-Francois Loos 2022-10-03 15:58:53 +02:00
parent 9db2edeb88
commit b6da6bfd37

g.tex

@@ -54,6 +54,7 @@
\newcommand{\cD}{\mathcal{D}}
\newcommand{\cE}{\mathcal{E}}
\newcommand{\cI}{\mathcal{I}}
+\newcommand{\cH}{\mathcal{H}}
\newcommand{\EPT}{E_{\PT}}
\newcommand{\laPT}{\lambda_{\PT}}
@@ -534,9 +535,7 @@ See text for additional comments on the time evolution of the path.}
\label{fig:domains}
\end{figure}
-Generalizing the single-state case treated previously, let us define the probability of remaining $n$ times in the domain of $\ket{I_0}$ and to exit at $\ket{I} \notin \cD_{I_0}$.
-This probability is given by
+Generalizing the single-state case treated previously, let us define the probability of remaining $n$ times in the domain of $\ket{I_0}$ and to exit at $\ket{I} \notin \cD_{I_0}$
\be
\label{eq:eq1C}
\cP_{I_0 \to I}(n)
@@ -567,7 +566,7 @@ where the operator $F^+_I = P_I T^+ (1-P_I)$, corresponding to the last move co
Physically, $F$ may be seen as a flux operator through the boundary of $\cD_{I}$.
Knowing the probability of remaining $n$ times in the domain and, then, to exit to some state, it is possible to obtain
-the probability of being trapped $n$ times in $\cD_{I}$, just by summing over all possible exit states. Thus, we get
+the probability of being trapped $n$ times in $\cD_{I}$, just by summing over all possible exit states:
\be
\label{eq:PiN}
P_{I}(n) = \frac{1}{\PsiG_{I}} \mel{ I }{ \qty(T^+_{I})^{n-1} F^+_{I} }{ \PsiG }.
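The normalization of this trapping-time distribution can be checked numerically on a toy model. The sketch below is illustrative only: it assumes a row-stochastic $T^+$ and a uniform guiding vector $\PsiG \equiv 1$ (so the $\PsiG$ prefactors drop out); the matrix `K`, the domain, and all names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 6
# Toy row-stochastic transition matrix K (stands in for T^+ with PsiG = 1).
K = rng.random((n_states, n_states))
K /= K.sum(axis=1, keepdims=True)

domain = [0, 1, 2]                       # states belonging to the domain D_{I_0}
P = np.zeros((n_states, n_states))
P[domain, domain] = 1.0                  # projector onto the domain

T_D = P @ K @ P                          # moves that stay inside the domain
F = P @ K @ (np.eye(n_states) - P)       # flux operator: the last move leaves the domain

# P_{I_0 -> I}(n) = <I_0| T_D^{n-1} F |I>: stay n-1 steps in the domain, exit at step n.
i0 = 0
total = 0.0
M = np.eye(n_states)
for n in range(1, 200):
    total += (M @ F)[i0].sum()           # sum over all exit states |I> outside the domain
    M = M @ T_D
print(total)                             # -> 1.0 (up to machine precision)
```

Summing over all trapping times and exit states exhausts the probability, which is the statement that $\cP_{I_0 \to I}(n)$ defines a proper exit-time/exit-state distribution.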
@@ -602,7 +601,7 @@ Note that it is possible only if the dimension of the domains is not too large (
In this section we generalize the path-integral expression of the Green's matrix, Eq.~\eqref{eq:G}, to the case where domains are used.
To do so, we introduce the Green's matrix associated with each domain as follows:
\be
-G^{(N),\cD}_{ij}= \mel{ i }{ T_i^N }{ j }.
+G^{(N),\cD}_{ij}= \mel{ i }{ \titou{T_i^N} }{ j }.
\ee
%Starting from Eq.~\eqref{eq:cn}
%\be
@@ -628,22 +627,17 @@ It follows that
\qty[ \prod_{k=0}^{p-1} \mel{ I_k }{ \qty(T_{I_k})^{n_k-1} F_{I_k} }{ I_{k+1} } ]
G^{(n_p-1),\cD}_{I_p i_N},
\end{multline}
-where $\delta_{i,j}$ is a Kronecker delta. Note that the first contribution, $G^{(N),\cD}_{i_0 i_N}$, corresponds to the case
-$p=0$ and collects all contributions to the Green's matrix coming from paths remaining for ever within the domain of $\ket{i_0}$ (no exit event).
-This contribution is here isolated from the $p$-sum since, as a domain Green's matrix, it will be calculated exactly and will not be suject to a stochastic
-treatment.
-Note also
-that the last state $\ket{i_N}$ is never an exit state because of the very definition of our representation of the paths (if not, it would be
-associated to the next contribution coresponding to $p+1$ exit events).
+where $\delta_{i,j}$ is a Kronecker delta.
This expression is the path-integral representation of the Green's matrix using only the variables $(\ket{I_k},n_k)$ of the effective dynamics defined over the set of domains.
-The standard formula for $G^{(N)}_{i_0 i_N}$ derived above, Eq.~\eqref{eq:G}, may be considered as the particular case where the
-walker exits of the current state $\ket{i_k}$ at each step (no domains are introduced), leading to a number of
-exit events $p$ equal to $N$. In this case, we have $\ket{I_k}=\ket{i_k}$, $n_k=1$ (for $0 \le k \le N$), and we are left only with the $p$th component of the sum, that is, $G^{(N)}_{i_0 i_N}=\prod_{k=0}^{N-1} \mel{ I_k }{ F_{I_k} }{ I_{k+1} }$, with $F=T$, thus recovering Eq.~\eqref{eq:G}.
+The standard formula for $G^{(N)}_{i_0 i_N}$ derived above [see Eq.~\eqref{eq:G}] may be considered as the particular case where the walker exits the current state $\ket{i_k}$ at each step (no domains are introduced), leading to a number of exit events $p$ equal to $N$.
+In this case, we have $\ket{I_k}=\ket{i_k}$, $n_k=1$ (for $0 \le k \le N$), and we are left only with the $p$th component of the sum, that is, $G^{(N)}_{i_0 i_N}=\prod_{k=0}^{N-1} \mel{ I_k }{ F_{I_k} }{ I_{k+1} }$, with $F=T$, thus recovering Eq.~\eqref{eq:G}.
+Note that the first contribution $G^{(N),\cD}_{i_0 i_N}$ corresponds to the case $p=0$ and collects all contributions to the Green's matrix coming from paths remaining indefinitely in the domain of $\ket{i_0}$ (no exit event).
+This contribution is isolated from the sum in Eq.~\eqref{eq:Gt} since, as a domain Green's matrix, it is calculated exactly and is not subject to a stochastic treatment.
+Note also that the last state $\ket{i_N}$ is never an exit state because of the very definition of our path representation.
+% (if not, it would be associated to the next contribution corresponding to $p+1$ exit events).
-In order to compute $G^{(N)}_{i_0 i_N}$ by resorting to Monte Carlo techniques, let us reformulate Eq.~\eqref{eq:Gt}
-using the transition probability $\cP_{I \to J}(n)$ introduced above,
-Eq.~\eqref{eq:eq3C}. We first rewrite Eq.~\eqref{eq:Gt} under the form
+In order to compute $G^{(N)}_{i_0 i_N}$ by resorting to Monte Carlo techniques, let us reformulate Eq.~\eqref{eq:Gt} using the transition probability $\cP_{I \to J}(n)$ introduced in Eq.~\eqref{eq:eq3C}.
+We first rewrite Eq.~\eqref{eq:Gt} under the form
\begin{multline}
{G}^{(N)}_{i_0 i_N}={G}^{(N),\cD}_{i_0 i_N} + {\PsiG_{i_0}}
\sum_{p=1}^{N}
@@ -668,40 +662,35 @@ and using the effective transition probability, Eq.~\eqref{eq:eq3C}, we get
\qty( \prod_{k=0}^{p-1} W_{I_k I_{k+1}} ) \qty( \prod_{k=0}^{p-1}\cP_{I_k \to I_{k+1}}(n_k) )
\frac{1}{\PsiG_{I_p}} {G}^{(n_p-1), \cD}_{I_p i_N} },
\end{multline}
-where, for clarity, $\sum_{{(I,n)}_{p,N}}$ is used as a short-hand notation for the sum,
-$ \sum_{\ket{I_1} \notin \cD_{I_0}, \ldots , \ket{I_p} \notin \cD_{I_{p-1}}}
-\sum_{n_0 \ge 1} \cdots \sum_{n_p \ge 1}$ with the constraint $\sum_{k=0}^p n_k=N+1$.\\
+\titou{where, for clarity, $\sum_{{(I,n)}_{p,N}}$ is used as a short-hand notation for the sum, $\sum_{\ket{I_1} \notin \cD_{I_0}, \ldots , \ket{I_p} \notin \cD_{I_{p-1}}} \sum_{n_0 \ge 1} \cdots \sum_{n_p \ge 1}$ with the constraint $\sum_{k=0}^p n_k=N+1$.}
-Under this form ${G}^{(N)}_{i_0 i_N}$ is now amenable to Monte Carlo calculations
-by generating paths using the transition probability matrix, $\cP_{I \to J}(n)$. For example, in the case of the energy, we start from
+Under this form, ${G}^{(N)}_{i_0 i_N}$ is now amenable to Monte Carlo calculations
+by generating paths using the transition probability matrix $\cP_{I \to J}(n)$.
+For example, in the case of the energy, we start from
\be
E_0 = \lim_{N \to \infty }
\frac{ \sum_{i_N} {G}^{(N)}_{i_0 i_N} {(H\PsiT)}_{i_N} }
-{ \sum_{i_N} {G}^{(N)}_{i_0 i_N} {\PsiT}_{i_N} }
+{ \sum_{i_N} {G}^{(N)}_{i_0 i_N} {\PsiT}_{i_N} },
\ee
which can be rewritten probabilistically as
\be
E_0 = \lim_{N \to \infty }
\frac{ {G}^{(N),\cD}_{i_0 i_N} + {\PsiG_{i_0}} \sum_{p=1}^{N} \expval{ \qty( \prod_{k=0}^{p-1} W_{I_k I_{k+1}} ) {\cal H}_{n_p,I_p} }_p}
-{ {G}^{(N),\cD}_{i_0 i_N} + {\PsiG_{i_0}} \sum_{p=1}^{N} \expval{ \qty( \prod_{k=0}^{p-1} W_{I_k I_{k+1}} ) {\cal S}_{n_p,I_p} }_p}
+{ {G}^{(N),\cD}_{i_0 i_N} + {\PsiG_{i_0}} \sum_{p=1}^{N} \expval{ \qty( \prod_{k=0}^{p-1} W_{I_k I_{k+1}} ) {\cal S}_{n_p,I_p} }_p},
\ee
-where $\expval{...}_p$ is the probabilistic average defined over the set of paths $p$ exit events of probability
-$\prod_{k=0}^{p-1} \cP_{I_k \to I_{k+1}}(n_k) $
-and $({\cal H}_{n_p,I_p},{\cal S}_{n_p,I_p})$ two quantities taking into account the contribution of the trial wave function at the end of the path and defined as follows
-\be
-{\cal H}_{n_p,I_p}= \frac{1}{\PsiG_{I_p}} \sum_{i_N} {G}^{(n_p-1),\cD}_{I_p i_N} (H \Psi_T)_{i_N}
-\ee
-and
-\be
-{\cal S}_{n_p,I_p}= \frac{1}{\PsiG_{I_p}} \sum_{i_N} {G}^{(n_p-1), \cD}_{I_p i_N} (\Psi_T)_{i_N}
-\ee
-In practice, the Monte Carlo algorithm is a simple generalization of that used in standard diffusion Monte Carlo calculations.
-Stochastic paths starting at $\ket{I_0}$ are generated using
-the probability $\cP_{I_k \to I_{k+1}}(n_k)$ and are stopped when $\sum_k n_k$ is greater than some target value $N$. Averages of the
-products of weights, $ \prod_{k=0}^{p-1} W_{I_k I_{k+1}} $ times the end-point contributions, ${{(\cal H}/{\cal S})}_{n_p,I_p} $ are computed for each $p$.
-The generation of the paths can be performed using a two-step process. First, the integer $n_k$ is drawn using the probability $P_{I_k}(n)$ [see Eq.~\eqref{eq:PiN}]
-and, then,
-the exit state, $\ket{I_{k+1}}$, is drawn using the conditional probability $\frac{\cP_{I_k \to I_{k+1}}(n_k)}{P_{I_k}(n_k)}$.
+\titou{where $\expval{\cdots}_p$ is the probabilistic average defined over the set of paths with $p$ exit events of probability $\prod_{k=0}^{p-1} \cP_{I_k \to I_{k+1}}(n_k)$} and
+\begin{align}
+\cH_{n_p,I_p} & = \frac{1}{\PsiG_{I_p}} \sum_{i_N} {G}^{(n_p-1),\cD}_{I_p i_N} (H \Psi_T)_{i_N},
+\\
+\cS_{n_p,I_p} & = \frac{1}{\PsiG_{I_p}} \sum_{i_N} {G}^{(n_p-1), \cD}_{I_p i_N} (\Psi_T)_{i_N},
+\end{align}
+are two quantities taking into account the contribution of the trial wave function at the end of the path.
+\titou{In practice, the Monte Carlo algorithm is a simple generalization of that used in standard diffusion Monte Carlo calculations.}
+Stochastic paths starting at $\ket{I_0}$ are generated using the probability $\cP_{I_k \to I_{k+1}}(n_k)$ and are stopped when $\sum_k n_k$ is greater than some target value $N$.
+Averages of the weight products $ \prod_{k=0}^{p-1} W_{I_k I_{k+1}} $ times the end-point contributions ${(\cH/\cS)}_{n_p,I_p}$ are computed for each $p$.
+The generation of the paths can be performed using a two-step process.
+First, the integer $n_k$ is drawn using the probability $P_{I_k}(n)$ [see Eq.~\eqref{eq:PiN}] and, then, the exit state $\ket{I_{k+1}}$ is drawn using the conditional probability $\cP_{I_k \to I_{k+1}}(n_k)/P_{I_k}(n_k)$.
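The two-step sampling can be sketched on the same kind of toy model (uniform guiding $\PsiG \equiv 1$, row-stochastic $T^+$; all matrices and names are invented for illustration): first draw the trapping time $n$ from its marginal, then draw the exit state from the conditional distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
n_states = 6
K = rng.random((n_states, n_states))
K /= K.sum(axis=1, keepdims=True)        # toy row-stochastic transition matrix
domain = [0, 1, 2]
P = np.zeros((n_states, n_states)); P[domain, domain] = 1.0
T_D = P @ K @ P                          # in-domain moves
F = P @ K @ (np.eye(n_states) - P)       # exit (flux) moves

def sample_trapping_and_exit(i, n_max=100):
    """Two-step sampling: draw the trapping time n from P_I(n),
    then the exit state J from the conditional P(I -> J | n)."""
    rows = []
    M = np.eye(n_states)
    for _ in range(n_max):
        rows.append((M @ F)[i])          # joint probabilities P(n, J) for n = 1, 2, ...
        M = M @ T_D
    joint = np.array(rows)
    p_n = joint.sum(axis=1)              # marginal over exit states: P_I(n)
    n_idx = rng.choice(n_max, p=p_n / p_n.sum())
    cond = joint[n_idx] / joint[n_idx].sum()   # conditional exit distribution
    j = rng.choice(n_states, p=cond)
    return n_idx + 1, j

n, j = sample_trapping_and_exit(0)
print(n, j)                              # a trapping time >= 1 and an exit state outside the domain
```

Since the flux operator only connects domain states to states outside the domain, the sampled exit state is guaranteed to lie outside $\cD_{I_0}$.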
%See fig.\ref{scheme1C}.
%\titou{i) Choose some initial vector $\ket{I_0}$\\
%ii) Generate a stochastic path by running over $k$ (starting at $k=0$) as follows.\\
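The $N \to \infty$ energy estimator above can also be checked deterministically on a toy model. This is only a sketch: the operator $T$ is not fully defined in this excerpt, so we assume the form $T = \Id - \tau (H - E_\text{ref} \Id)$, which is the one consistent with the geometric-series identity $\tau \sum_N T^N = (H - E)^{-1}$ used below; the Hamiltonian, trial vector, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.random((5, 5)); H = (H + H.T) / 2      # toy symmetric Hamiltonian
psi_T = np.abs(rng.random(5)) + 0.1            # positive trial vector

E_ref, tau = -5.0, 0.1                         # reference energy below the spectrum
T = np.eye(5) - tau * (H - E_ref * np.eye(5))  # assumed form of the projector T

v = np.zeros(5); v[0] = 1.0                    # start from |i_0> with i_0 = 0
for _ in range(20000):                         # v ~ <i_0| T^N (T is symmetric here)
    v = T @ v
    v /= np.linalg.norm(v)                     # normalizing leaves the ratio unchanged

E_est = (v @ (H @ psi_T)) / (v @ psi_T)        # converges to the ground-state energy
E_0 = np.linalg.eigvalsh(H).min()
print(E_est, E_0)
```

Repeated application of $T$ filters out all but the dominant eigenvector, so the ratio of mixed matrix elements tends to $E_0$, which is exactly the content of the limit formula.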
@@ -732,7 +721,7 @@ Let us define the energy-dependent Green's matrix
\be
G^E_{ij}= \tau \sum_{N=0}^{\infty} \mel{ i }{ T^N }{ j} = \mel{i}{ \qty( H-E \Id )^{-1} }{j}.
\ee
-The denomimation ``energy-dependent'' is chosen here since
+The denomination ``energy-dependent'' is chosen here since
this quantity is the discrete version of the Laplace transform of the time-dependent Green's function in a continuous space,
usually known under this name.\cite{note}
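The defining identity $G^E = \tau \sum_N T^N = (H - E\,\Id)^{-1}$ is a geometric series and can be verified numerically. A minimal sketch, assuming $T = \Id - \tau (H - E \Id)$ (the form implied by the identity itself) and an invented toy Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.random((4, 4)); H = (H + H.T) / 2    # toy symmetric Hamiltonian
E, tau = -2.0, 0.05                          # E below the spectrum, small time step
T = np.eye(4) - tau * (H - E * np.eye(4))    # assumed form of T: 1 - T = tau (H - E)

# Accumulate tau * sum_{N=0}^{M} T^N and compare with the resolvent (H - E)^{-1}.
G_E = np.zeros((4, 4))
M = np.eye(4)
for _ in range(3000):
    G_E += tau * M
    M = M @ T
resolvent = np.linalg.inv(H - E * np.eye(4))
print(np.max(np.abs(G_E - resolvent)))       # -> essentially zero
```

The series converges because, for $E$ below the spectrum and $\tau$ small enough, all eigenvalues of $T$ lie strictly between 0 and 1.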
The remarkable property is that, thanks to the summation over $N$ up to infinity, the constrained multiple sums appearing in Eq.~\eqref{eq:Gt} can be factorized in terms of a product of unconstrained single sums, as follows
@@ -742,7 +731,7 @@ The remarkable property is that, thanks to the summation over $N$ up to infinity
= \sum_{p=1}^{\infty} \sum_{n_0=1}^{\infty} \cdots \sum_{n_p=1}^{\infty} F(n_0,\ldots,n_p).
\end{multline}
where $F$ is some arbitrary function of the trapping times.
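The mechanism behind this factorization can be made explicit in one line: every tuple $(n_0,\ldots,n_p)$ of trapping times contributes to exactly one value of $N$, namely $N = \sum_k n_k - 1$, so summing over $N$ simply removes the constraint:

```latex
\sum_{N=p}^{\infty} \; \sum_{\substack{n_0,\ldots,n_p \ge 1 \\ \sum_{k=0}^p n_k = N+1}} F(n_0,\ldots,n_p)
= \sum_{n_0=1}^{\infty} \cdots \sum_{n_p=1}^{\infty} F(n_0,\ldots,n_p)
\underbrace{\sum_{N=p}^{\infty} \delta_{N+1,\sum_{k=0}^p n_k}}_{=\,1}
= \sum_{n_0=1}^{\infty} \cdots \sum_{n_p=1}^{\infty} F(n_0,\ldots,n_p).
```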
-Using the fact that $G^E_{ij}= \tau \sum_{N=0}^{\infty} G^{(N)}_{ij}$ where $G^{(N)}_{ij}$ is given by Eq.~\eqref{eq:Gt} and summing over the $n_k$-variables, we get
+Using the fact that $G^E_{ij}= \tau \sum_{N=0}^{\infty} G^{(N)}_{ij}$, where $G^{(N)}_{ij}$ is given by Eq.~\eqref{eq:Gt}, and summing over the variables $n_k$, we get
\begin{multline}
\label{eq:eqfond}
{G}^{E}_{i_0 i_N}
@@ -751,7 +740,7 @@ Using the fact that $G^E_{ij}= \tau \sum_{N=0}^{\infty} G^{(N)}_{ij}$ where $G^{
\qty[ \prod_{k=0}^{p-1} \mel{ I_k }{ {\qty[ P_k \qty( H-E \Id ) P_k ] }^{-1} (-H)(\Id-P_k) }{ I_{k+1} } ]
{G}^{E,\cD}_{I_p i_N}
\end{multline}
-where, ${G}^{E,\cD}$ is the energy-dependent domain's Green matrix defined as ${G}^{E,\cD}_{ij} = \tau \sum_{N=0}^{\infty} \mel{ i }{ T^N_i }{ j}$.
+where ${G}^{E,\cD}$ is the energy-dependent domain Green's matrix defined as ${G}^{E,\cD}_{ij} = \tau \sum_{N=0}^{\infty} \mel{ i }{ \titou{T^N_i} }{ j}$.
As a didactical example, Appendix \ref{app:A} reports the exact derivation of this formula in the case of a two-state system.
@@ -836,25 +825,24 @@ of the non-linear equation $\cE(E)= E$ in the vicinity of $E_0$.
In practical Monte Carlo calculations the DMC energy will be obtained by computing a finite number of components $H_p$ and $S_p$ defined as follows
\be
\cE^\text{DMC}(E,p_{max})= \frac{ H_0+ \sum_{p=1}^{p_{max}} H^\text{DMC}_p }
-{S_1 +\sum_{p=1}^{p_{max}} S^\text{DMC}_p }
+{S_{\titou{0}} +\sum_{p=1}^{p_{max}} S^\text{DMC}_p }.
\ee
For $ p\ge 1$, Eq~\eqref{eq:final_E} gives
\begin{align}
-H^\text{DMC}_p & = \PsiG_{i_0}\expval{ \qty(\prod_{k=0}^{p-1} W^E_{I_k I_{k+1}}) {\cal H}_{I_p} }
+H^\text{DMC}_p & = \PsiG_{i_0}\expval{ \qty(\prod_{k=0}^{p-1} W^E_{I_k I_{k+1}}) {\cal H}_{I_p} },
\\
-S^\text{DMC}_p & = \PsiG_{i_0} \expval{ \qty(\prod_{k=0}^{p-1} W^E_{I_k I_{k+1}}) {\cal S}_{I_p} }.
+S^\text{DMC}_p & = \PsiG_{i_0} \expval{ \qty(\prod_{k=0}^{p-1} W^E_{I_k I_{k+1}}) {\cal S}_{I_p} },
\end{align}
-where
-\be
-{\cal H}_{I_p}= \frac{1}{\PsiG_{I_p}} \sum_{i_N} {G}^{(E,\cD}_{I_p i_N} (H \Psi_T)_{i_N}
-\ee
-and
-\be
-{\cal S}_{I_p}= \frac{1}{\PsiG_{I_p}} \sum_{i_N} {G}^{E, \cD}_{I_p i_N} (\Psi_T)_{i_N}
-\ee
+where
+\begin{align}
+\cH_{I_p} & = \frac{1}{\PsiG_{I_p}} \sum_{i_N} {G}^{E,\cD}_{I_p i_N} (H \Psi_T)_{i_N},
+\\
+\cS_{I_p} & = \frac{1}{\PsiG_{I_p}} \sum_{i_N} {G}^{E, \cD}_{I_p i_N} (\Psi_T)_{i_N}.
+\end{align}
For $p=0$, the two components are exactly evaluated as
\begin{align}
-H_0 & = \mel{ I_0 }{ {\qty[ P_{I_0} \qty(H-E \Id) P_{I_0} ]}^{-1} }{ H\PsiT }
+H_0 & = \mel{ I_0 }{ {\qty[ P_{I_0} \qty(H-E \Id) P_{I_0} ]}^{-1} }{ H\PsiT },
\\
S_0 & = \mel{ I_0 }{ {\qty[ P_{I_0} \qty(H-E \Id) P_{I_0} ]}^{-1} }{ \PsiT }.
\end{align}
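The exact evaluation of the $p=0$ components amounts to inverting the Hamiltonian restricted to the domain block. A toy sketch (Hamiltonian, trial vector, domain, and function name all invented) that also checks the limiting case: when the domain spans the whole space, $H_0/S_0$ evaluated just below $E_0$ collapses onto the ground-state energy, the fixed point of $\cE(E)=E$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
H = rng.random((n, n)); H = (H + H.T) / 2    # toy symmetric Hamiltonian
psi_T = np.abs(rng.random(n)) + 0.1          # positive trial vector
E0_exact = np.linalg.eigvalsh(H).min()

def H0_S0(E, domain, i0=0):
    """Exact p = 0 components H_0 and S_0 from the domain-projected resolvent
    [P (H - E) P]^{-1}, inverted on the domain block only."""
    A_dom = (H - E * np.eye(n))[np.ix_(domain, domain)]
    R = np.zeros((n, n))
    R[np.ix_(domain, domain)] = np.linalg.inv(A_dom)
    return R[i0] @ (H @ psi_T), R[i0] @ psi_T

# Any domain containing |I_0> yields deterministic, exactly computable components:
H0, S0 = H0_S0(E=-2.0, domain=[0, 1, 2])

# Limiting check: with the full space as domain, H_0/S_0 near E_0 gives E_0 back.
H0_full, S0_full = H0_S0(E=E0_exact - 1e-8, domain=list(range(n)))
print(H0_full / S0_full, E0_exact)
```

Near $E_0$ the resolvent is dominated by the ground state, so both $H_0$ and $S_0$ are dominated by the same ground-state contribution and their ratio tends to $E_0$.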
@@ -1139,8 +1127,7 @@ $\cD(0,1)\cup\cD(1,0)\cup\cD$(2,0) & 36 & $\infty$&1&$-0.75272390$\\
\end{ruledtabular}
\end{table}
-As explained above, it is very advantageous to calculate exactly as many $(H_p,S_p)$ as possible in order to avoid the sttistical error on the heaviest
-components.
+As explained above, it is very advantageous to calculate exactly as many $(H_p,S_p)$ as possible in order to avoid the statistical error on the largest components.
Table \ref{tab2} shows the results both for the case of a single-state main domain and for the domain having the largest average trapping time, namely $\cD(0,1) \cup \cD(1,1)$ (see Table \ref{tab1}).
Table \ref{tab2} reports the statistical fluctuations of the energy for the simulation of Table \ref{tab1}.
Results show that it is indeed interesting to compute exactly as many components as possible.