OK with Sec II
parent 10c357343f
commit 90de8049bc
g.tex: 51
@@ -140,7 +140,7 @@
\noindent
The sampling of the configuration space in diffusion Monte Carlo (DMC) is done using walkers moving randomly.
In a previous work on the Hubbard model [\href{https://doi.org/10.1103/PhysRevB.60.2299}{Assaraf et al.~Phys.~Rev.~B \textbf{60}, 2299 (1999)}],
-it was shown that the probability for a walker to stay a certain amount of time in the same \titou{state} obeys a Poisson law and that the \titou{on-state} dynamics can be integrated out exactly, leading to an effective dynamics connecting only different states.
+it was shown that the probability for a walker to stay a certain amount of time in the same state obeys a Poisson law and that the on-state dynamics can be integrated out exactly, leading to an effective dynamics connecting only different states.
Here, we extend this idea to the general case of a walker trapped within domains of arbitrary shape and size.
The equations of the resulting effective stochastic dynamics are derived.
The larger the average (trapping) time spent by the walker within the domains, the greater the reduction in statistical fluctuations.
@@ -167,7 +167,7 @@ In practice, power methods are employed under more sophisticated implementations
When the size of the matrix is too large, matrix-vector multiplications become unfeasible and probabilistic techniques to sample only the most important contributions of the matrix-vector product are required.
This is the basic idea of DMC.
There exist several variants of DMC known under various names: pure DMC, \cite{Caffarel_1988} DMC with branching, \cite{Reynolds_1982} reptation Monte Carlo, \cite{Baroni_1999} stochastic reconfiguration Monte Carlo, \cite{Sorella_1998,Assaraf_2000} etc.
-Here, we shall place ourselves within the framework of pure DMC whose mathematical simplicity is particularly appealing when developing new ideas, although it is usually not the most efficient variant of DMC. \titou{Why?}
+Here, we shall place ourselves within the framework of pure DMC whose mathematical simplicity is particularly appealing when developing new ideas.
However, all the ideas presented in this work can be adapted without too much difficulty to the other variants, so the denomination DMC must ultimately be understood here as a generic name for this broad class of methods.

Without entering into the mathematical details (which are presented below), the main ingredient of DMC in order to perform the matrix-vector multiplications probabilistically is the stochastic matrix (or transition probability matrix) that generates stepwise a series of states over which statistical averages are evaluated.
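To make the sampling idea concrete, here is a minimal standalone sketch (not taken from the manuscript; the $5\times5$ matrix, the vector, and the column-sampling rule are arbitrary choices for illustration) showing how a matrix-vector product can be estimated by drawing column indices at random instead of summing over all of them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy matrix and vector; in DMC the matrix is far too large to be stored or
# applied exactly, which is what motivates the sampling below.
T = rng.uniform(0.0, 1.0, size=(5, 5))
psi = rng.uniform(0.1, 1.0, size=5)

exact = T @ psi                      # exact product, for comparison only

# Rewrite (T psi)_i = sum_j T_ij psi_j as an expectation over j drawn with
# probability psi_j / sum(psi); each sample then contributes T[:, j] * sum(psi).
prob = psi / psi.sum()
n_samples = 100_000
estimate = np.zeros(5)
for _ in range(n_samples):
    j = rng.choice(5, p=prob)
    estimate += T[:, j] * psi.sum()
estimate /= n_samples

print(exact)
print(estimate)   # agrees with `exact` up to the statistical error
```

Only a stochastic estimate of the product is obtained, but it never requires visiting all matrix elements at once, which is the point of the DMC strategy discussed below.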
@@ -175,7 +175,7 @@ The critical aspect of any Monte Carlo scheme is the amount of computational eff
Two important avenues to decrease the error are the use of variance reduction techniques (for example, by introducing improved estimators \cite{Assaraf_1999A}) or to improve the quality of the sampling (minimization of the correlation time between states).
Another possibility, at the heart of the present work, is to integrate out exactly some parts of the dynamics, thus reducing the number of degrees of freedom and, hence, the amount of statistical fluctuations.

-In previous works,\cite{Assaraf_1999B,Caffarel_2000} it has been shown that the probability for a walker to stay a certain amount of time in the same state obeys a Poisson law and that the \titou{on-state} dynamics can be integrated out to generate an effective dynamics connecting only different states with some renormalized estimators for the properties.
+In previous works,\cite{Assaraf_1999B,Caffarel_2000} it has been shown that the probability for a walker to stay a certain amount of time in the same state obeys a Poisson law and that the on-state dynamics can be integrated out to generate an effective dynamics connecting only different states with some renormalized estimators for the properties.
Numerical applications have shown that the statistical errors can be very significantly decreased.
Here, we extend this idea to the general case where a walker remains a certain amount of time within a finite domain no longer restricted to a single state.
It is shown how to define an effective stochastic dynamics describing walkers moving from one domain to another.
@@ -236,7 +236,7 @@ To proceed further we introduce the time-dependent Green's matrix $G^{(N)}$ defi
\be
G^{(N)}_{ij}=\mel{j}{T^N}{i}.
\ee
-\titou{where $\ket{i}$ and $\ket{j}$ are basis vectors.}
+where $\ket{i}$ and $\ket{j}$ are basis vectors.
The denomination ``time-dependent Green's matrix'' is used here since $G$ may be viewed as a short-time approximation of the (imaginary-time) evolution operator
$e^{-N\tau H}$, which is usually referred to as the imaginary-time dependent Green's function.

@@ -245,7 +245,7 @@ $e^{-N\tau H}$ which is usually referred to as the imaginary-time dependent Gree
\label{eq:cn}
G^{(N)}_{i_0 i_N} = \sum_{i_1} \sum_{i_2} ... \sum_{i_{N-1}} \prod_{k=0}^{N-1} T_{i_{k} i_{k+1}},
\ee
-\titou{where $T_{ij} =\mel{i}{T}{j}$}.
+where $T_{ij} =\mel{i}{T}{j}$.
Here, each index $i_k$ runs over all basis vectors.

In quantum physics, Eq.~\eqref{eq:cn} is referred to as the path-integral representation of the Green's matrix (function).
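As a quick sanity check of Eq.~\eqref{eq:cn} (a standalone sketch; the $3\times3$ matrix, the path length $N=4$, and the endpoints are arbitrary), the explicit sum over all $M^{N-1}$ paths can be compared with the corresponding element of the matrix power $T^N$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
M, N = 3, 4                       # size of the toy space and number of steps
T = rng.uniform(0.0, 1.0, (M, M))
i0, iN = 0, 2                     # fixed endpoints of the paths

# Explicit sum over all intermediate indices i_1, ..., i_{N-1}.
path_sum = 0.0
for path in itertools.product(range(M), repeat=N - 1):
    states = (i0, *path, iN)
    weight = 1.0
    for k in range(N):
        weight *= T[states[k], states[k + 1]]
    path_sum += weight

# Same quantity obtained from the N-th matrix power.
print(path_sum, np.linalg.matrix_power(T, N)[i0, iN])
```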
@@ -257,7 +257,7 @@ Each path is associated with a weight $\prod_{k=0}^{N-1} T_{i_{k} i_{k+1}}$ and
\ee

This expression allows a simple and vivid interpretation of the solution.
-In the limit $N \to \infty$, the $i$th component of the ground-state wave function (obtained as $\lim_{N \to \infty} G^{(N)}_{\titou{i_0 i_N}})$ is the weighted sum over all possible paths arriving at vector \titou{$\ket{i_N}$}.
+In the limit $N \to \infty$, the $i_N$th component of the ground-state wave function (obtained as $\lim_{N \to \infty} G^{(N)}_{i_0 i_N})$ is the weighted sum over all possible paths arriving at vector $\ket{i_N}$.
This result is independent of the initial vector $\ket{i_0}$, apart from some irrelevant global phase factor.
When the size of the linear space is small, the explicit calculation of the full sums involving $M^N$ terms (where $M$ is the size of the Hilbert space) can be performed.
In such a case, we are in the realm of what one would call the ``deterministic'' power methods, such as the Lanczos or Davidson approaches.
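This deterministic limit can be illustrated with a small standalone sketch (the symmetric matrix below is arbitrary and simply plays the role of $T$): all the columns of $T^N$, one per starting state $\ket{i_0}$, become proportional to the dominant eigenvector as $N$ grows, in line with the statement above.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 4
A = rng.uniform(0.0, 1.0, (M, M))
T = (A + A.T) / 2 + M * np.eye(M)   # symmetric, positive entries, well-separated dominant eigenvalue

N = 60
TN = np.linalg.matrix_power(T, N)

eigval, eigvec = np.linalg.eigh(T)
dominant = eigvec[:, -1]            # dominant eigenvector from exact diagonalization

# Each normalized column of T^N approaches the dominant eigenvector,
# independently of the starting state i0 (up to an overall sign).
for i0 in range(M):
    col = TN[:, i0] / np.linalg.norm(TN[:, i0])
    print(i0, abs(col @ dominant))  # all close to 1
```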
@@ -331,7 +331,7 @@ We are now in the position to define the stochastic matrix as
As readily seen in Eq.~\eqref{eq:pij}, the off-diagonal terms of the stochastic matrix are positive, while the diagonal ones can be made positive if $\tau$ is chosen sufficiently small via the condition
\be
\label{eq:cond}
-\tau^{-1} \geq \max_i\abs{H^+_{ii}-(\EL^+)_{i}}.
+\tau \leq \frac{1}{\max_i\abs{H^+_{ii}-(\EL^+)_{i}}}.
\ee
The sum-over-states condition [see Eq.~\eqref{eq:sumup}]
\be
@@ -341,7 +341,7 @@ follows from the fact that $\ket{\PsiG}$ is eigenvector of $T^+$ [see Eq.~\eqref
This ensures that $p_{i \to j}$ is indeed a stochastic matrix.

At first sight, the condition defining the maximum value of $\tau$ allowed, Eq.~\eqref{eq:cond}, may appear rather tight since, for very large matrices, it may impose an extremely small value of the time step.
-However, in practice, during the simulation only a (tiny) fraction of the linear space is sampled, and the maximum absolute value of $H^+_{ii}-(\EL^+)_{i}$ for the sampled states turns out to be \titou{not too large}.
+However, in practice, during the simulation only a (tiny) fraction of the linear space is sampled, and the maximum absolute value of $H^+_{ii}-(\EL^+)_{i}$ for the sampled states turns out to be not too large.
Hence, reasonable values of $\tau$ can be selected without violating the positivity of the transition probability matrix.
\titou{Note that one can eschew this condition via a simple generalization of the transition probability matrix:}
\be
@@ -351,18 +351,17 @@ Hence, reasonable values of $\tau$ can be selected without violating the positiv
= \frac{ \PsiG_{j} \abs*{T^+_{ij}} }
{ \sum_j \PsiG_{j} \abs*{T^+_{ij}} }.
\ee
-This new transition probability matrix with positive entries reduces to Eq.~\eqref{eq:pij} when $T^+_{ij}$ is positive.
+This new transition probability matrix with positive entries reduces to Eq.~\eqref{eq:pij} when $T^+_{ij}$ is positive as $\sum_j \PsiG_{j} T^+_{ij} = 1$.

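For concreteness, the sketch below assembles this transition probability matrix for a small toy problem. The definitions used in the code are assumptions made only for the example, chosen to be compatible with the formulas quoted above: $H^+$ is taken as a real symmetric matrix with nonpositive off-diagonal elements, $\PsiG$ as an arbitrary positive vector, $(\EL^+)_i = \sum_j H^+_{ij} (\PsiG)_j / (\PsiG)_i$, and $T^+ = 1 - \tau\,[H^+ - \mathrm{diag}(\EL^+)]$. With a time step obeying Eq.~\eqref{eq:cond}, all entries of $p_{i \to j}$ come out nonnegative and each row sums to one.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 6

# Toy H^+: real symmetric with nonpositive off-diagonal elements
# (an assumption made here so that no sign problem enters the example).
offdiag = -rng.uniform(0.0, 1.0, (M, M))
H = np.triu(offdiag, 1)
H = H + H.T + np.diag(rng.uniform(0.0, 2.0, M))

psi_g = rng.uniform(0.5, 1.5, M)       # positive guiding vector PsiG
e_loc = (H @ psi_g) / psi_g            # local energies (E_L^+)_i

# Largest time step allowed by Eq. (cond), then a value just below it.
tau_max = 1.0 / np.max(np.abs(np.diag(H) - e_loc))
tau = 0.9 * tau_max

# T^+ = 1 - tau (H^+ - E_L^+), with the local energy on the diagonal.
T_plus = np.eye(M) - tau * (H - np.diag(e_loc))

# Transition probability matrix in its generalized (absolute-value) form.
num = psi_g[None, :] * np.abs(T_plus)
p = num / num.sum(axis=1, keepdims=True)

print(np.all(p >= 0.0), np.allclose(p.sum(axis=1), 1.0))    # True True
# For this tau all entries of T^+ are nonnegative, so the absolute values are
# redundant and the normalization factor reduces to (PsiG)_i itself:
print(np.allclose(num.sum(axis=1), psi_g))                  # True
```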
-Now, using Eqs.~\eqref{eq:defT}, \eqref{eq:defTij} and \eqref{eq:pij}, the residual weight reads
+\titou{Now}, using Eqs.~\eqref{eq:defT}, \eqref{eq:defTij} and \eqref{eq:pij}, the residual weights read
\be
w_{ij}=\frac{T_{ij}}{T^+_{ij}}.
\ee
Using these notations the Green's matrix components can be rewritten as
\be
-\bar{G}^{(N)}_{\titou{i i_0}} =
-\sum_{i_1,\ldots,i_{N-1}} \qty( \prod_{k=0}^{N-1} p_{i_{k} \to i_{k+1}} ) \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}}
+\bar{G}^{(N)}_{i_0 i_N} =
+\sum_{i_1,\ldots,i_{N-1}} \qty( \prod_{k=0}^{N-1} p_{i_{k} \to i_{k+1}} ) \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}}.
\ee
-\titou{where $i$ is identified to $i_N$.}

The product $\prod_{k=0}^{N-1} p_{i_{k} \to i_{k+1}}$ is the probability, denoted $\text{Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1})$,
for the path starting at $\ket{i_0}$ and ending at $\ket{i_N}$ to occur.
@@ -371,7 +370,7 @@ Using Eq.~\eqref{eq:sumup} and the fact that $p_{i \to j} \ge 0$, one can easily
\sum_{i_1,\ldots,i_{N-1}} \text{Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1}) = 1,
\ee
as it should.
-For a given path $i_1,\ldots,i_{N-1}$, the probabilistic average associated with this probability, denoted here as $\expval{\cdots}$, is then defined as
+For a given path $i_1,\ldots,i_{N-1}$, the probabilistic average associated with this probability is then defined as
\be
\label{eq:average}
\expval{F} = \sum_{i_1,\ldots,i_{N-1}} F(i_0,\ldots,i_N) \text{Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1}),
@@ -379,22 +378,22 @@ For a given path $i_1,\ldots,i_{N-1}$, the probabilistic average associated with
where $F$ is an arbitrary function.
Finally, the path-integral expressed as a probabilistic average reads
\be
-\bar{G}^{(N)}_{\titou{ii_0}}= \expval{ \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}}}
-\label{cn_stoch}
+\label{eq:cn_stoch}
+\bar{G}^{(N)}_{i_0 i_N}= \expval{ \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}}}.
\ee
To calculate the probabilistic average, Eq.~\eqref{eq:average},
an artificial (mathematical) ``particle'' called walker (or psi-particle) is introduced.
-During the Monte Carlo simulation the walker moves in configuration space by drawing new states with
-probability $p_{i_k \to i_{k+1}}$, thus realizing the path of probability $\text{Prob}(i_0 \to i_n)$.
-The energy, Eq.~\eqref{eq:E0} is given as
+During the Monte Carlo simulation, the walker moves in configuration space by drawing new states with
+probability $p_{i_k \to i_{k+1}}$, thus realizing the path of probability $\text{Prob}(i_0 \to i_N)$.
+In this framework, the energy defined in Eq.~\eqref{eq:E0} is given by
\be
E_0 = \lim_{N \to \infty }
\frac{ \expval{ \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} {(H\PsiT)}_{i_N}} }
{ \expval{ \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} {\PsiT}_{i_N} }}.
\ee
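The whole chain, drawing a path with the transition probabilities, accumulating the residual weights, and forming the ratio above, can be put together in a short standalone sketch. Everything below is an illustrative assumption rather than the manuscript's implementation: the toy Hamiltonian is sign-free, the guiding and trial vectors are taken equal ($\PsiG = \PsiT$), $T$ is taken as $1 - \tau H$ and $T^+$ as $1 - \tau\,[H - \mathrm{diag}(\EL)]$, so that the ratio reduces to a weighted average of the local energy at the path endpoint.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, n_paths = 6, 200, 5_000
tau = 0.05   # small enough here for all entries of T^+ and p to be nonnegative

# Toy sign-free Hamiltonian: real symmetric, nonpositive off-diagonal elements.
offdiag = -rng.uniform(0.0, 1.0, (M, M))
H = np.triu(offdiag, 1)
H = H + H.T + np.diag(rng.uniform(0.0, 2.0, M))

psi_t = rng.uniform(0.5, 1.5, M)       # trial vector, also used as guiding vector
e_loc = (H @ psi_t) / psi_t            # local energies (H PsiT)_i / (PsiT)_i

T = np.eye(M) - tau * H                            # bare propagator, assumed T = 1 - tau H
T_plus = np.eye(M) - tau * (H - np.diag(e_loc))    # importance-sampled propagator
p = psi_t[None, :] * T_plus / psi_t[:, None]       # transition probabilities (rows sum to 1)

num = den = 0.0
for _ in range(n_paths):
    i = 0                                  # starting state i_0
    weight = 1.0
    for _ in range(N):
        j = rng.choice(M, p=p[i])          # draw the next state of the path
        weight *= T[i, j] / T_plus[i, j]   # accumulate the residual weight w_ij
        i = j
    num += weight * e_loc[i]               # contribution of (H PsiT)_{i_N}
    den += weight                          # contribution of (PsiT)_{i_N}

print("DMC estimate:", num / den)          # should approach E_0 as N and n_paths grow
print("exact E_0   :", np.linalg.eigvalsh(H)[0])
```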
-Note that, instead of using a single walker, it is possible to introduce a population of independent walkers and to calculate the averages over the population.
-In addition, thanks to the ergodic property of the stochastic matrix (see, for example, Ref.~\onlinecite{Caffarel_1988}) a unique path of infinite length from which sub-paths of length $N$ can be extracted may also be used.
-We shall not here insist on these practical details that can be found, for example, in refs \onlinecite{Foulkes_2001,Kolorenc_2011}.
+Note that, instead of using a single walker, it is possible to introduce a population of independent walkers and to calculate the averages over this population.
+In addition, thanks to the ergodic property of the stochastic matrix (see, for example, Ref.~\onlinecite{Caffarel_1988}), a unique path of infinite length from which sub-paths of length $N$ can be extracted may also be used.
+We shall not here insist on these practical details that are discussed, for example, in Refs.~\onlinecite{Foulkes_2001,Kolorenc_2011}.

%{\it Spawner representation} In this representation, we no longer consider moving particles but occupied or non-occupied states $|i\rangle$.
%To each state is associated the (positive or negative) quantity $c_i$.
@@ -408,12 +407,12 @@ We shall not here insist on these practical details that can be found, for examp
%In the numerical applications to follow, we shall use the walker representation.

%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\section{DMC with domains}
+\section{Diffusion Monte Carlo with domains}
\label{sec:DMC_domains}
%%%%%%%%%%%%%%%%%%%%%%%%%%%

%=======================================%
-\subsection{Domains consisting of a single state}
+\subsection{Single-state domains}
\label{sec:single_domains}
%=======================================%

@@ -428,9 +427,9 @@ Let us consider a given state $|i\rangle$. The probability that the walker remai
\ee
Using the relation
\be
-\sum_{n=1}^{\infty} p^{n-1}(i \to i)=\frac{1}{1-p(i \to i)}
+\sum_{n=1}^{\infty} (p_{i \to i})^{n-1}=\frac{1}{1-p_{i \to i}}
\ee
-and the normalization of the $p(i \to j)$, Eq.~\eqref{eq:sumup}, we verify that the probability is normalized to one
+and the normalization of the $p_{i \to j}$, Eq.~\eqref{eq:sumup}, we verify that the probability is normalized to one
\be
\sum_{j \ne i} \sum_{n=1}^{\infty} \cP_{i \to j}(n) = 1.
\ee
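As a small numerical check of this normalization (a standalone sketch; the value of $p_{i \to i}$ and the exit probabilities $p_{i \to j}$ are arbitrary, and $\cP_{i \to j}(n)$ is taken as $(p_{i \to i})^{n-1} p_{i \to j}$, i.e.~$n-1$ steps spent on the state followed by an exit towards $j$):

```python
import numpy as np

# Arbitrary illustrative values: probability of staying on state i, and exit
# probabilities towards the other states (they sum to 1 - p_ii).
p_ii = 0.8
p_exit = np.array([0.12, 0.05, 0.03])     # p_{i->j} for the states j != i

n_max = 10_000                            # truncation of the infinite sum over n
n = np.arange(1, n_max + 1)

# P_{i->j}(n) = (p_ii)^(n-1) * p_{i->j}: stay n-1 steps, then exit towards j.
P = p_ii ** (n[:, None] - 1) * p_exit[None, :]

print("sum_j sum_n P(n)  :", P.sum())                   # 1, as in the text
print("mean trapping time:", (n[:, None] * P).sum())    # mean of the geometric law
print("1 / (1 - p_ii)    :", 1.0 / (1.0 - p_ii))        # same value
```

The average trapping time, $1/(1-p_{i \to i})$, is the quantity referred to in the abstract: the longer the walker would have stayed on (and, in the general case, within) a domain, the larger the part of the dynamics that is integrated out exactly.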