starting working on intro

Pierre-Francois Loos 2022-09-14 14:22:15 +02:00
parent e76b7a1cb3
commit 226e1adc15

g.tex

@ -1,5 +1,5 @@
\documentclass[aps,prb,reprint,showkeys,superscriptaddress]{revtex4-1}
-\usepackage{subcaption}
+%\usepackage{subcaption}
\usepackage{bm,graphicx,tabularx,array,booktabs,dcolumn,xcolor,microtype,multirow,amscd,amsmath,amssymb,amsfonts,physics,siunitx}
\usepackage[version=4]{mhchem}
\usepackage[utf8]{inputenc}
@ -115,7 +115,7 @@
\newcommand{\LCT}{Laboratoire de Chimie Th\'eorique, Sorbonne-Universit\'e, Paris, France}
\begin{document}
-\title{Quantum Monte Carlo using Domains in Configuration Space}
+\title{Diffusion Monte Carlo using Domains in Configuration Space}
\author{Roland Assaraf}
\email{assaraf@lct.jussieu.fr}
\affiliation{\LCT}
@ -137,21 +137,18 @@
\begin{abstract}
\noindent
-The sampling of the configuration space in Diffusion Monte Carlo (DMC)
+The sampling of the configuration space in diffusion Monte Carlo (DMC)
is done using walkers moving randomly.
-In a previous work on the Hubbard model [Assaraf et al. Phys. Rev. B {\bf 60}, 2299 (1999)],
-it was shown that the probability for a walker to stay a certain amount of time on the same state obeys a Poisson law and that
-the on-state dynamics can be integrated out exactly, leading to an effective dynamics connecting only different states.
-Here, we extend this idea to the general case of a walker
-trapped within domains of arbitrary shape and size.
+In a previous work on the Hubbard model [\href{https://doi.org/10.1103/PhysRevB.60.2299}{Assaraf et al. Phys. Rev. B \textbf{60}, 2299 (1999)}],
+it was shown that the probability for a walker to stay a certain amount of time in the same \titou{state} obeys a Poisson law and that
+the on-\titou{state} dynamics can be integrated out exactly, leading to an effective dynamics connecting only different states.
+Here, we extend this idea to the general case of a walker trapped within domains of arbitrary shape and size.
The equations of the resulting effective stochastic dynamics are derived.
The larger the average (trapping) time spent by the walker within the domains, the greater the reduction in statistical fluctuations.
-A numerical application to the 1D-Hubbard model is presented.
+A numerical application to the Hubbard model is presented.
Although this work presents the method for finite linear spaces, it can be generalized without fundamental difficulties to continuous configuration spaces.
\end{abstract}
\keywords{}
\maketitle
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@ -159,31 +156,26 @@ Although this work presents the method for finite linear spaces, it can be gener
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Diffusion Monte Carlo (DMC) is a class of stochastic methods for evaluating
the ground-state properties of quantum systems. These methods have been extensively used
-in virtually all domains of physics and chemistry where the $N$-body quantum problem plays a central role (condensed-matter physics,\cite{Foulkes_2001,Kolorenc_2011}
+in virtually all domains of physics and chemistry where the many-body quantum problem plays a central role (condensed-matter physics,\cite{Foulkes_2001,Kolorenc_2011}
quantum liquids,\cite{Holzmann_2006}
-nuclear physics,\cite{Carlson_2015,Carlson_2007} theoretical chemistry,\cite{Austin_2012} etc.).
-DMC can be used either for systems defined in a continuous configuration space
-(typically, a set of particles
-moving in space) for which the Hamiltonian is an operator in a (infinite-dimensional) Hilbert space or systems defined in a discrete configuration space where
-the Hamiltonian reduces to a matrix. Here, we shall consider only the discrete case, that is, the general problem
-of calculating the lowest eigenvalue/eigenstate of a (very large) matrix.
+nuclear physics,\cite{Carlson_2015,Carlson_2007} theoretical chemistry,\cite{Austin_2012} etc.).
+DMC can be used either for systems defined in a continuous configuration space (typically, a set of particles moving in space), for which the Hamiltonian operator is defined in an (infinite-dimensional) Hilbert space, or for systems defined in a discrete configuration space, where the Hamiltonian reduces to a matrix.
+Here, we shall consider only the discrete case, that is, the general problem of calculating the lowest eigenvalue/eigenstate of a (very large) matrix.
The generalization to continuous configuration spaces presents no fundamental difficulty.
-In essence, DMC are {\it stochastic} power methods. The power method is an old and widely employed numerical approach to extract
-the eigenvalues of a matrix having the largest and smallest modulus (see, {\it e.g.} [\onlinecite{Golub_2012}]). This approach is particularly simple: It merely consists
-in applying the matrix (or some simple function of it) as many times as
-needed on some arbitrary vector of the linear space. Thus, the basic step of the algorithm essentially reduces to a matrix-vector multiplication.
-In practice, the power method is used under some more sophisticated implementations, such as, {\it e.g}.
-the Lancz\`os\cite{Golub_2012} or Davidson algorithms.\cite{Davidson_1975}
+In essence, DMC is based on \textit{stochastic} power methods, an old and widely employed numerical approach to extract
+the largest or smallest eigenvalues of a matrix (see, \eg, Ref.~\onlinecite{Golub_2012}).
+This approach is particularly simple as it merely consists in applying the matrix (or some simple function of it) as many times as
+required on some arbitrary vector belonging to the linear space.
+Thus, the basic step of the corresponding algorithm essentially reduces to a matrix-vector multiplication.
+In practice, the power method is employed under some more sophisticated implementations, such as, \eg,
+the Lanczos \cite{Golub_2012} or Davidson \cite{Davidson_1975} algorithms.
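As a minimal numerical illustration of the power-method idea described above (a generic sketch, not the paper's DMC algorithm): repeatedly applying the simple function $G = I - \tau H$ of the matrix $H$ to an arbitrary vector filters out every eigenstate except the one with the lowest eigenvalue, provided $\tau$ is small enough. The function name, the toy matrix, and the parameter values below are illustrative assumptions.

```python
import numpy as np

def lowest_eigenpair(H, tau=0.1, n_iter=2000):
    """Power iteration on G = I - tau*H.
    For tau small enough, the eigenvalue of G with the largest modulus
    corresponds to the lowest eigenvalue of H, so repeated application
    of G filters out all but the ground state."""
    v = np.ones(H.shape[0])              # arbitrary starting vector
    for _ in range(n_iter):
        v = v - tau * (H @ v)            # one matrix-vector product: G v
        v /= np.linalg.norm(v)           # normalize to avoid over/underflow
    E0 = v @ H @ v                       # Rayleigh quotient -> eigenvalue estimate
    return E0, v

# Small symmetric toy matrix (eigenvalues -1, 1/2, 2); not a Hubbard Hamiltonian
H = np.array([[ 0.0, -1.0,  0.0],
              [-1.0,  0.5, -1.0],
              [ 0.0, -1.0,  1.0]])
E0, v0 = lowest_eigenpair(H)             # E0 ≈ -1.0
```

Each iteration costs exactly one matrix-vector multiplication, which is the step that DMC replaces by a stochastic sample when the matrix is too large to store or apply exactly.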
When the size of the matrix is too large, the matrix-vector multiplication becomes unfeasible
-and probabilistic techniques to sample only the most important contributions of the matrix-vector product are required. This is the basic idea of
-DMC. There exist several variants of DMC known under various names
-(Pure DMC,\cite{Caffarel_1988} DMC with branching,\cite{Reynolds_1982} Reptation Monte Carlo,\cite{Baroni_1999} Stochastic Reconfiguration Monte Carlo,
-\cite{Sorella_1998,Assaraf_2000} etc.).
-Here, we shall place ourselves within the framework of Pure DMC whose mathematical simplicity is particularly appealing when developing new ideas,
-although it is usually not the most efficient variant of DMC.
-However, all the ideas presented in this work can be adapted without too much difficulty to the other variants,
-so the denomination DMC must ultimately be understood here as a generic name for the broad class of DMC methods.
+and probabilistic techniques to sample only the most important contributions of the matrix-vector product are required.
+This is the basic idea of DMC. There exist several variants of DMC known under various names:
+pure DMC, \cite{Caffarel_1988} DMC with branching, \cite{Reynolds_1982} reptation Monte Carlo, \cite{Baroni_1999} stochastic reconfiguration Monte Carlo, \cite{Sorella_1998,Assaraf_2000} etc.
+Here, we shall place ourselves within the framework of pure DMC whose mathematical simplicity is particularly appealing when developing new ideas, although it is usually not the most efficient variant of DMC. \titou{Why?}
+However, all the ideas presented in this work can be adapted without too much difficulty to the other variants, so the denomination DMC must ultimately be understood here as a generic name for the broad class of DMC methods.
Without entering into the mathematical details presented below, the main ingredient of DMC to perform the
matrix-vector multiplication probabilistically is the use of a stochastic matrix (or transition probability matrix)
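The role of the stochastic matrix can be sketched with a standard Monte Carlo device (an illustration of the general principle, not the specific scheme derived in the paper): decompose $A_{ij} = P_{ij} w_{ij}$ with $P$ a stochastic matrix, sample a path of states from $P$, and carry the product of weights $w$ along the path; the path average is an unbiased estimate of a component of $A^p v$. The function name and the choice of building $P$ from $|A|$ are assumptions of this sketch, which requires every row of $A$ to contain a nonzero entry.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_power_apply(A, v, i0, p, n_walkers=20000):
    """Monte Carlo estimate of the component (A^p v)[i0].
    Write A_ij = P_ij * w_ij with P a row-stochastic matrix built from |A|;
    sample a path i0 -> i1 -> ... -> ip from P and carry the product of
    the weights w along the path (unbiased estimator of (A^p v)[i0])."""
    absA = np.abs(A)
    P = absA / absA.sum(axis=1, keepdims=True)     # transition probabilities
    with np.errstate(divide="ignore", invalid="ignore"):
        W = np.where(P > 0, A / P, 0.0)            # weights w_ij = A_ij / P_ij
    n = A.shape[0]
    total = 0.0
    for _ in range(n_walkers):
        i, w = i0, 1.0
        for _ in range(p):
            j = rng.choice(n, p=P[i])              # stochastic move of the walker
            w *= W[i, j]
            i = j
        total += w * v[i]
    return total / n_walkers

# Tiny toy example: compare the stochastic estimate with the exact product
A = np.array([[0.5, 0.3],
              [0.2, 0.7]])
v = np.array([1.0, 2.0])
est   = stochastic_power_apply(A, v, i0=0, p=3)
exact = (np.linalg.matrix_power(A, 3) @ v)[0]
```

Only one column index is visited per application of the matrix, so the cost per step is independent of the matrix size; the price is a statistical error that decreases as the number of sampled paths grows.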
@ -962,7 +954,7 @@ the two-component expression. The estimate of the energy obtained from ${\cal E}
\begin{figure}[h!]
\begin{center}
-\includegraphics[width=10cm]{fig1.pdf}
+\includegraphics[width=\linewidth]{fig1}
\end{center}
\caption{1D-Hubbard model, $N=4$, $U=12$. $H_p$ as a function of $p$ for $E=-1.6,-1.2,-1.0,-0.9,-0.8$. $H_0$ is
computed analytically and $H_p$ ($p > 0$) by Monte Carlo. Error bars are smaller than the symbol size.}
@ -972,7 +964,7 @@ computed analytically and $H_p$ (p > 0) by Monte Carlo. Error bars are smaller t
\begin{figure}[h!]
\begin{center}
-\includegraphics[width=10cm]{fig2.pdf}
+\includegraphics[width=\linewidth]{fig2}
\end{center}
\caption{1D-Hubbard model, $N=4$ and $U=12$. ${\cal E}(E)$ as a function of $E$.
The horizontal and vertical lines are at ${\cal E}(E_0)=E_0$ and $E=E_0$, respectively.
@ -1008,8 +1000,8 @@ starting from the N\'eel state. $\bar{t}_{I_0}$
is the average trapping time for the N\'eel state.
$p_{\rm conv}$ is a measure of the convergence of ${\cal E}_{QMC}(p)$ as a function of $p$; see text.}
\label{tab1}
\begin{ruledtabular}
\begin{tabular}{lcccl}
\hline
Domain & Size & $\bar{t}_{I_0}$ & $p_{\rm conv}$ & $\;\;\;\;\;\;{\cal E}_{QMC}$ \\
\hline
Single & 1 & 0.026 & 88 &$\;\;\;\;$-0.75276(3)\\
@ -1028,8 +1020,8 @@ ${\cal D}(0,2)$ $\cup$ ${\cal D}(1,1)$ $\cup$ ${\cal D}$(2,0) &22 & 10.8 & 30 &$
${\cal D}(0,2)$ $\cup$ ${\cal D}(1,0)$ $\cup$ ${\cal D}$(2,0) &34 & 52.5 & 13&$\;\;\;\;$-0.7527236(2)\\
${\cal D}(0,1)$ $\cup$ ${\cal D}(1,1)$ $\cup$ ${\cal D}$(2,0) & 24 & 10.8 & 26&$\;\;\;\;$-0.75270(1)\\
${\cal D}(0,1)$ $\cup$ ${\cal D}(1,0)$ $\cup$ ${\cal D}$(2,0) & 36 & $\infty$&1&$\;\;\;\;$-0.75272390\\
\hline
\end{tabular}
\end{ruledtabular}
\end{table}
As a general rule, it is always good to avoid the Monte Carlo calculation of a quantity which is computable analytically. Here, we apply
@ -1071,14 +1063,13 @@ laptop. Of course, it will also be particularly interesting to take advantage of
All these aspects will be considered in a forthcoming work.
\begin{table}[h!]
\centering
\caption{$N=4$, $U=12$, and $E=-1$. Dependence of the statistical error of the energy on the number of $p$-components calculated
analytically. Same simulation as for Table \ref{tab1}. Results are presented when a single-state domain
is used for all states and when
${\cal D}(0,1) \cup {\cal D}(1,0)$ is chosen as the main domain.}
\label{tab2}
\begin{ruledtabular}
\begin{tabular}{lcc}
\hline
$p_{ex}$ & single-state & ${\cal D}(0,1) \cup {\cal D}(1,0)$ \\
\hline
$0$ & $4.3 \times 10^{-5}$ &$ 347 \times 10^{-8}$ \\
@ -1090,18 +1081,17 @@ $5$ & $2.5 \times10^{-5}$ &$ 6.0 \times 10^{-8}$\\
$6$ & $2.3 \times10^{-5}$ &$ 0.7 \times 10^{-8}$\\
$7$ & $2.2 \times 10^{-5}$ &$ 0.6 \times 10^{-8}$\\
$8$ & $2.2 \times10^{-5}$ &$ 0.05 \times 10^{-8}$\\
\hline
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[h!]
\centering
\caption{$N=4$, $U=12$, $\alpha=1.292$, $\beta=0.552$. Main domain = ${\cal D}(0,1) \cup {\cal D}(1,0)$. Simulation with 20 independent blocks and $10^6$ paths.
$p_{ex}=4$. The various fits are done with the five values of $E$.}
\label{tab3}
\begin{ruledtabular}
\begin{tabular}{lc}
\hline
$E$ & $E_{QMC}$ \\
\hline
-0.8 &-0.7654686(2)\\
@ -1113,16 +1103,15 @@ $E_0$ linear fit & -0.7680282(5)\\
$E_0$ quadratic fit & -0.7680684(5)\\
$E_0$ two-component fit & -0.7680676(5)\\
$E_0$ exact & -0.768068...\\
\hline
\end{tabular}
\end{ruledtabular}
\end{table}
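The linear and quadratic extrapolations of Table \ref{tab3} amount to fitting ${\cal E}_{QMC}(E)$ by a polynomial $f(E)$ and solving the fixed-point condition $f(E)=E$ (the crossing ${\cal E}(E_0)=E_0$ shown in Fig.~2). A minimal sketch of this fixed-point extrapolation follows; the input arrays are placeholder values chosen purely for illustration, not the paper's data, and the function name is hypothetical.

```python
import numpy as np

def fixed_point_from_fit(E, E_qmc, degree):
    """Fit E_qmc(E) by a polynomial f of the given degree (>= 1) and
    solve the fixed-point condition f(E) = E, returning the real root
    closest to the center of the data window."""
    c = np.polyfit(E, E_qmc, degree).copy()   # highest-degree coefficient first
    c[-2] -= 1.0                              # f(E) - E: shift the linear coefficient
    roots = np.atleast_1d(np.roots(c))
    real = roots[np.isreal(roots)].real       # keep real roots only
    return real[np.argmin(np.abs(real - E.mean()))]

# Placeholder input values for illustration only (NOT results from the paper)
E     = np.array([-0.80, -0.90, -1.00, -1.20, -1.60])
E_qmc = np.array([-0.7655, -0.7663, -0.7669, -0.7678, -0.7689])

E0_lin  = fixed_point_from_fit(E, E_qmc, 1)   # linear extrapolation
E0_quad = fixed_point_from_fit(E, E_qmc, 2)   # quadratic extrapolation
```

The two-component fit quoted in the tables plays the same role with a different (non-polynomial) model function; only the polynomial variants are sketched here.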
\begin{table}[h!]
\centering
\caption{$N=4$. Domain ${\cal D}(0,1) \cup {\cal D}(1,0)$.}
\label{tab4}
\begin{ruledtabular}
\begin{tabular}{ccccc}
\hline
$U$ & $\alpha,\beta$ & $E_{var}$ & $E_{ex}$ & $\bar{t}_{I_0}$ \\
\hline
8 & 0.908,\;0.520 & -0.770342... &-1.117172... & 33.5\\
@ -1132,16 +1121,15 @@ $U$ & $\alpha,\beta$ & $E_{var}$ & $E_{ex}$ & $\bar{t}_{I_0}$ \\
20 & 1.786,\;0.582 & -0.286044... &-0.468619... & 504.5 \\
50 & 2.690,\;0.609 & -0.110013... &-0.188984... & 8040.2 \\
200 & 4.070,\;0.624& -0.026940... &-0.047315... & 523836.0 \\
\hline
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table*}[h!]
\centering
\caption{$U=12$. The fits to extrapolate the QMC energies are done using the two-component function.}
\label{tab5}
\begin{ruledtabular}
\begin{tabular}{crcrccccc}
\hline
$N$ & Size Hilbert space & Domain & Domain size & $\alpha,\beta$ &$\bar{t}_{I_0}$ & $E_{var}$ & $E_{ex}$ & $E_{QMC}$\\
\hline
4 & 36 & ${\cal D}(0,1) \cup {\cal D}(1,0)$ & 30 & 1.292, \; 0.552 & 108.7 & -0.495361 & -0.768068 & -0.7680676(5)\\
@ -1149,8 +1137,8 @@ $N$ & Size Hilbert space & Domain & Domain size & $\alpha,\beta$ &$\bar{t}_{I_0}
8 & 4 900 & ${\cal D}(0,1) \cup {\cal D}(1,0)$ & 1 190 & 0.984,\; 0.788 & 42.8 & -0.750995 & -1.66395& -1.6637(2)\\
10 & 63 504 & ${\cal D}(0,5) \cup {\cal D}(1,4)$ & 2 682 & 0.856,\; 0.869& 31.0 & -0.855958 & -2.113089& -2.1120(7)\\
12 & 853 776 & ${\cal D}(0,8) \cup {\cal D}(1,7)$ & 1 674 & 0.739,\; 0.938 & 16.7 & -0.952127 & -2.562529& -2.560(6)\\
\hline
\end{tabular}
\end{ruledtabular}
\end{table*}
\section{Summary and perspectives}
@ -1179,11 +1167,12 @@ more elaborate implementation of the method in order to keep under control the c
Doing so was beyond the scope of the present work and will be presented in a forthcoming work.
\section*{Acknowledgement}
P.F.L., A.S., and M.C. acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No.~863481).
This work was supported by the European Centre of
Excellence in Exascale Computing TREX --- Targeting Real Chemical
Accuracy at the Exascale. This project has received funding from the
European Union's Horizon 2020 --- Research and Innovation program ---
-under grant agreement no. 952165.
+under grant agreement no.~952165.
\appendix
\section{Particular case of the $2\times2$ matrix}