DQMC/g.tex

1272 lines
67 KiB
TeX
Raw Normal View History

2022-09-14 11:30:16 +02:00
\documentclass[aps,prb,reprint,showkeys,superscriptaddress]{revtex4-1}
2022-09-14 14:22:15 +02:00
%\usepackage{subcaption}
2022-09-14 11:30:16 +02:00
\usepackage{bm,graphicx,tabularx,array,booktabs,dcolumn,xcolor,microtype,multirow,amscd,amsmath,amssymb,amsfonts,physics,siunitx}
\usepackage[version=4]{mhchem}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{txfonts,dsfont}
\usepackage{xspace}
\usepackage[normalem]{ulem}
\newcommand{\roland}[1]{\textcolor{cyan}{\bf #1}}
\newcommand{\manu}[1]{\textcolor{blue}{\bf #1}}
\newcommand{\vijay}[1]{\textcolor{green}{\bf #1}}
2022-09-14 17:31:00 +02:00
\newcommand{\titou}[1]{\textcolor{red}{#1}}
2022-09-14 11:30:16 +02:00
\newcommand{\toto}[1]{\textcolor{purple}{\bf #1}}
\newcommand{\mimi}[1]{\textcolor{orange}{\bf #1}}
\newcommand{\be}{\begin{equation}}
\newcommand{\ee}{\end{equation}}
\usepackage[
colorlinks=true,
citecolor=blue,
linkcolor=blue,
filecolor=blue,
urlcolor=blue,
breaklinks=true
]{hyperref}
\urlstyle{same}
\newcommand{\ctab}{\multicolumn{1}{c}{---}}
\newcommand{\latin}[1]{#1}
%\newcommand{\latin}[1]{\textit{#1}}
\newcommand{\ie}{\latin{i.e.}}
\newcommand{\eg}{\latin{e.g.}}
\newcommand{\etal}{\textit{et al.}}
\newcommand{\mc}{\multicolumn}
\newcommand{\fnm}{\footnotemark}
\newcommand{\fnt}{\footnotetext}
\newcommand{\mcc}[1]{\multicolumn{1}{c}{#1}}
\newcommand{\mr}{\multirow}
% operators
2022-09-14 22:22:15 +02:00
\newcommand{\bH}{\boldsymbol{H}}
\newcommand{\bV}{\boldsymbol{V}}
\newcommand{\bh}{\boldsymbol{h}}
\newcommand{\bQ}{\boldsymbol{Q}}
\newcommand{\br}{\boldsymbol{r}}
\newcommand{\bp}{\boldsymbol{p}}
2022-09-14 11:30:16 +02:00
\newcommand{\cP}{\mathcal{P}}
\newcommand{\cS}{\mathcal{S}}
\newcommand{\cT}{\mathcal{T}}
\newcommand{\cC}{\mathcal{C}}
2022-09-14 22:22:15 +02:00
\newcommand{\cD}{\mathcal{D}}
2022-09-14 11:30:16 +02:00
\newcommand{\EPT}{E_{\PT}}
\newcommand{\laPT}{\lambda_{\PT}}
\newcommand{\EEP}{E_\text{EP}}
\newcommand{\laEP}{\lambda_\text{EP}}
2022-09-14 17:31:00 +02:00
\newcommand{\PsiT}{\Psi_\text{T}}
2022-09-14 22:22:15 +02:00
\newcommand{\PsiG}{\Psi^{+}}
\newcommand{\EL}{E_\text{L}}
\newcommand{\Id}{\mathds{1}}
2022-09-14 11:30:16 +02:00
\newcommand{\Ne}{N} % Number of electrons
\newcommand{\Nn}{M} % Number of nuclei
\newcommand{\hI}{\Hat{I}}
\newcommand{\hH}{\Hat{H}}
\newcommand{\hS}{\Hat{S}}
\newcommand{\hT}{\Hat{T}}
\newcommand{\hW}{\Hat{W}}
\newcommand{\hV}{\Hat{V}}
\newcommand{\hc}[2]{\Hat{c}_{#1}^{#2}}
\newcommand{\hn}[1]{\Hat{n}_{#1}}
\newcommand{\n}[1]{n_{#1}}
\newcommand{\Dv}{\Delta v}
2022-09-14 17:31:00 +02:00
\newcommand{\ra}{\to}
2022-09-14 11:30:16 +02:00
% Center tabularx columns
\newcolumntype{Y}{>{\centering\arraybackslash}X}
% HF rotation angles
\newcommand{\ta}{\theta^{\,\alpha}}
\newcommand{\tb}{\theta^{\,\beta}}
\newcommand{\ts}{\theta^{\sigma}}
% Some constants
\renewcommand{\i}{\mathrm{i}} % Imaginary unit
\newcommand{\e}{\mathrm{e}} % Euler number
\newcommand{\rc}{r_{\text{c}}}
\newcommand{\lc}{\lambda_{\text{c}}}
\newcommand{\lp}{\lambda_{\text{p}}}
\newcommand{\lep}{\lambda_{\text{EP}}}
% Some energies
\newcommand{\Emp}{E_{\text{MP}}}
% Blackboard bold
\newcommand{\bbR}{\mathbb{R}}
\newcommand{\bbC}{\mathbb{C}}
% Making life easier
\newcommand{\Lup}{\mathcal{L}^{\uparrow}}
\newcommand{\Ldown}{\mathcal{L}^{\downarrow}}
\newcommand{\Lsi}{\mathcal{L}^{\sigma}}
\newcommand{\Rup}{\mathcal{R}^{\uparrow}}
\newcommand{\Rdown}{\mathcal{R}^{\downarrow}}
\newcommand{\Rsi}{\mathcal{R}^{\sigma}}
\newcommand{\vhf}{\Hat{v}_{\text{HF}}}
\newcommand{\whf}{\Psi_{\text{HF}}}
\newcommand{\LCPQ}{Laboratoire de Chimie et Physique Quantiques (UMR 5626), Universit\'e de Toulouse, CNRS, UPS, France}
\newcommand{\LCT}{Laboratoire de Chimie Th\'eorique, Sorbonne-Universit\'e, Paris, France}
\begin{document}
2022-09-14 14:22:15 +02:00
\title{Diffusion Monte Carlo using Domains in Configuration Space}
2022-09-14 11:30:16 +02:00
\author{Roland Assaraf}
\email{assaraf@lct.jussieu.fr}
\affiliation{\LCT}
\author{Emmanuel Giner}
\email{giner@lct.jussieu.fr}
\affiliation{\LCT}
\author{Vijay Gopal Chilkuri}
\email{vijay.gopal.c@gmail.com}
\affiliation{\LCT}
\author{Pierre-Fran\c{c}ois Loos}
\email{loos@irsamc.ups-tlse.fr}
\affiliation{\LCPQ}
\author{Anthony Scemama}
\email{scemama@irsamc.ups-tlse.fr}
\affiliation{\LCPQ}
\author{Michel Caffarel}
\email{caffarel@irsamc.ups-tlse.fr}
\affiliation{\LCPQ}
\begin{abstract}
\noindent
2022-09-14 15:57:39 +02:00
The sampling of the configuration space in diffusion Monte Carlo (DMC) is done using walkers moving randomly.
2022-09-14 22:22:15 +02:00
In a previous work on the Hubbard model [\href{https://doi.org/10.1103/PhysRevB.60.2299}{Assaraf et al.~Phys.~Rev.~B \textbf{60}, 2299 (1999)}],
2022-09-15 14:50:55 +02:00
it was shown that the probability for a walker to stay a certain amount of time in the same state obeys a Poisson law and that the on-state dynamics can be integrated out exactly, leading to an effective dynamics connecting only different states.
2022-09-14 14:22:15 +02:00
Here, we extend this idea to the general case of a walker trapped within domains of arbitrary shape and size.
2022-09-14 11:30:16 +02:00
The equations of the resulting effective stochastic dynamics are derived.
The larger the average (trapping) time spent by the walker within the domains, the greater the reduction in statistical fluctuations.
2022-09-14 14:22:15 +02:00
A numerical application to the Hubbard model is presented.
2022-09-14 11:30:16 +02:00
Although this work presents the method for finite linear spaces, it can be generalized without fundamental difficulties to continuous configuration spaces.
\end{abstract}
\maketitle
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2022-09-14 17:31:00 +02:00
2022-09-14 15:57:39 +02:00
Diffusion Monte Carlo (DMC) is a class of stochastic methods for evaluating the ground-state properties of quantum systems.
They have been extensively used in virtually all domains of physics and chemistry where the many-body quantum problem plays a central role (condensed-matter physics,\cite{Foulkes_2001,Kolorenc_2011} quantum liquids,\cite{Holzmann_2006} nuclear physics,\cite{Carlson_2015,Carlson_2007} theoretical chemistry,\cite{Austin_2012} etc).
DMC can be used either for systems defined in a continuous configuration space (typically, a set of particles moving in space) for which the Hamiltonian operator is defined in a Hilbert space of infinite dimension or systems defined in a discrete configuration space where the Hamiltonian reduces to a matrix.
Here, we shall consider only the discrete case, that is, the general problem of calculating the lowest eigenvalue and/or eigenstate of a (very large) matrix.
2022-09-14 11:30:16 +02:00
The generalization to continuous configuration spaces presents no fundamental difficulty.
2022-09-14 15:57:39 +02:00
In essence, DMC is based on \textit{stochastic} power methods, a family of old and widely employed numerical approaches able to extract the largest or smallest eigenvalues of a matrix (see, \eg, Ref.~\onlinecite{Golub_2012}).
These approaches are particularly simple as they merely consist in applying a given matrix (or some simple function of it) as many times as required on some arbitrary vector belonging to the linear space.
Thus, the basic step of the corresponding algorithm essentially reduces to successive matrix-vector multiplications.
In practice, power methods are employed under more sophisticated implementations, such as, \eg, the Lancz\`os algorithm (based on Krylov subspaces) \cite{Golub_2012} or Davidson's method where a diagonal preconditioning is performed. \cite{Davidson_1975}
When the size of the matrix is too large, matrix-vector multiplications become unfeasible and probabilistic techniques to sample only the most important contributions of the matrix-vector product are required.
This is the basic idea of DMC.
There exist several variants of DMC known under various names: pure DMC, \cite{Caffarel_1988} DMC with branching, \cite{Reynolds_1982} reptation Monte Carlo, \cite{Baroni_1999} stochastic reconfiguration Monte Carlo, \cite{Sorella_1998,Assaraf_2000} etc.
2022-09-15 14:50:55 +02:00
Here, we shall place ourselves within the framework of pure DMC whose mathematical simplicity is particularly appealing when developing new ideas.
2022-09-14 15:57:39 +02:00
However, all the ideas presented in this work can be adapted without too much difficulty to the other variants, so the denomination DMC must ultimately be understood here as a generic name for this broad class of methods.
Without entering into the mathematical details (which are presented below), the main ingredient of DMC in order to perform the matrix-vector multiplications probabilistically is the stochastic matrix (or transition probability matrix) that generates stepwise a series of states over which statistical averages are evaluated.
The critical aspect of any Monte Carlo scheme is the amount of computational effort required to reach a given statistical error.
2022-09-15 10:12:36 +02:00
Two important avenues to decrease the error are the use of variance reduction techniques (for example, by introducing improved estimators \cite{Assaraf_1999A}) or to improve the quality of the sampling (minimization of the correlation time between states).
2022-09-14 15:57:39 +02:00
Another possibility, at the heart of the present work, is to integrate out exactly some parts of the dynamics, thus reducing the number of degrees of freedom and, hence, the amount of statistical fluctuations.
2022-09-14 11:30:16 +02:00
2022-09-15 14:50:55 +02:00
In previous works,\cite{Assaraf_1999B,Caffarel_2000} it has been shown that the probability for a walker to stay a certain amount of time in the same state obeys a Poisson law and that the on-state dynamics can be integrated out to generate an effective dynamics connecting only different states with some renormalized estimators for the properties.
2022-09-14 11:30:16 +02:00
Numerical applications have shown that the statistical errors can be very significantly decreased.
2022-09-14 15:57:39 +02:00
Here, we extend this idea to the general case where a walker remains a certain amount of time within a finite domain no longer restricted to a single state.
It is shown how to define an effective stochastic dynamics describing walkers moving from one domain to another.
The equations of the effective dynamics are derived and a numerical application for a model (one-dimensional) problem is presented.
2022-09-15 10:12:36 +02:00
In particular, it shows that the statistical convergence of the energy can be greatly enhanced when domains associated with large average trapping times are considered.
2022-09-14 15:57:39 +02:00
It should be noted that the use of domains in quantum Monte Carlo is not new.
In their pioneering work, \cite{Kalos_1974} Kalos and collaborators introduced the so-called domain Green's function Monte Carlo approach in continuous space that they applied to a system of bosons with hard-sphere interaction.
The domain used was the Cartesian product of small spheres around each particle, the Hamiltonian being approximated by the kinetic part only within the domain.
Some time later, Kalos proposed to extend these ideas to more general domains such as rectangular and/or cylindrical domains. \cite{Kalos_2000} In both works, the size of the domains is infinitely small in the limit of a vanishing time-step.
Here, the domains are of arbitrary size, thus greatly increasing the efficiency of the approach.
Finally, note that some general equations for arbitrary domains in continuous space have also been proposed in Ref.~\onlinecite{Assaraf_1999B}.
The paper is organized as follows.
Section \ref{sec:DMC} presents the basic equations and notations of DMC.
First, the path integral representation of the Green's function is given in Subsec.~\ref{sec:path}.
The probabilistic framework allowing the Monte Carlo calculation of the Green's function is presented in Subsec.~\ref{sec:proba}.
Section \ref{sec:DMC_domains} is devoted to the use of domains in DMC.
2022-09-15 10:12:36 +02:00
First, we recall in Subsec.~\ref{sec:single_domains} the case of a domain consisting of a single state. \cite{Assaraf_1999B}
2022-09-14 15:57:39 +02:00
The general case is then treated in Subsec.~\ref{sec:general_domains}.
In Subsec.~\ref{sec:Green}, both the time- and energy-dependent Green's function using domains is derived.
Section \ref{sec:numerical} presents the application of the approach to the one-dimensional Hubbard model.
Finally, in Sec.\ref{sec:conclu}, some conclusions and perspectives are given.
2022-09-14 17:31:00 +02:00
Atomic units are used throughout.
2022-09-14 15:57:39 +02:00
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2022-09-14 11:30:16 +02:00
\section{Diffusion Monte Carlo}
2022-09-14 15:57:39 +02:00
\label{sec:DMC}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%=======================================%
2022-09-14 11:30:16 +02:00
\subsection{Path-integral representation}
2022-09-14 15:57:39 +02:00
\label{sec:path}
%=======================================%
2022-09-14 17:31:00 +02:00
2022-09-14 15:57:39 +02:00
As previously mentioned, DMC is a stochastic implementation of the power method defined by the following operator:
2022-09-14 11:30:16 +02:00
\be
2022-09-14 22:22:15 +02:00
T = \Id -\tau (H-E\Id),
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 10:12:36 +02:00
where $\Id$ is the identity operator, $\tau$ a small positive parameter playing the role of a time step, $E$ some arbitrary reference energy, and $H$ the Hamiltonian operator. For any initial vector $\ket{\Psi_0}$ provided that $\braket{\Phi_0}{\Psi_0} \ne 0$ and for $\tau$ sufficiently small, we have
2022-09-14 11:30:16 +02:00
\be
2022-09-15 10:12:36 +02:00
\label{eq:limTN}
2022-09-14 17:31:00 +02:00
\lim_{N \to \infty} T^N \ket{\Psi_0} = \ket{\Phi_0},
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 10:12:36 +02:00
where $\ket{\Phi_0}$ is the ground-state wave function, \ie, $H \ket{\Phi_0} = E_0 \ket{\Phi_0}$.
The equality in Eq.~\eqref{eq:limTN} holds up to a global phase factor playing no role in physical quantum averages.
At large but finite $N$, the vector $T^N \ket{\Psi_0}$ differs from $\ket{\Phi_0}$ only by an exponentially small correction, making it straightforward to extrapolate the finite-$N$ results to $N \to \infty$.
2022-09-14 11:30:16 +02:00
2022-09-14 17:31:00 +02:00
Likewise, ground-state properties may be obtained at large $N$.
For example, in the important case of the energy, one can rely on the following formula
2022-09-14 11:30:16 +02:00
\be
2022-09-14 17:31:00 +02:00
\label{eq:E0}
E_0 = \lim_{N \to \infty } \frac{\mel{\PsiT}{H T^N}{\Psi_0}}{\mel{\PsiT}{T^N}{\Psi_0}},
2022-09-14 11:30:16 +02:00
\ee
2022-09-14 17:31:00 +02:00
where $|\PsiT\rangle$ is some trial vector (some approximation of the true ground-state wave function) on which $T^N \ket{\Psi_0}$ is projected out.
2022-09-14 11:30:16 +02:00
To proceed further we introduce the time-dependent Green's matrix $G^{(N)}$ defined as
\be
2022-09-14 17:31:00 +02:00
G^{(N)}_{ij}=\mel{j}{T^N}{i}.
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 14:50:55 +02:00
where $\ket{i}$ and $\ket{j}$ are basis vectors.
2022-09-14 15:57:39 +02:00
The denomination ``time-dependent Green's matrix'' is used here since $G$ may be viewed as a short-time approximation of the (time-imaginary) evolution operator,
2022-09-14 17:31:00 +02:00
$e^{-N\tau H}$ which is usually referred to as the imaginary-time dependent Green's function.
2022-09-14 11:30:16 +02:00
2022-09-15 10:12:36 +02:00
\titou{Introducing the set of $N-1$ intermediate states, $\{ \ket{i_k} \}_{1 \le k \le N-1}$, in the $N$th product of $T$,} $G^{(N)}$ can be written in the following expanded form
2022-09-14 11:30:16 +02:00
\be
2022-09-14 17:31:00 +02:00
\label{eq:cn}
2022-09-15 15:52:03 +02:00
G^{(N)}_{i_0 i_N} = \sum_{i_1} \sum_{i_2} \cdots \sum_{i_{N-1}} \prod_{k=0}^{N-1} T_{i_{k} i_{k+1}},
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 14:50:55 +02:00
where $T_{ij} =\mel{i}{T}{j}$.
2022-09-14 17:31:00 +02:00
Here, each index $i_k$ runs over all basis vectors.
2022-09-14 11:30:16 +02:00
2022-09-14 17:31:00 +02:00
In quantum physics, Eq.~\eqref{eq:cn} is referred to as the path-integral representation of the Green's matrix (function).
The series of states $\ket{i_0}, \ldots,\ket{i_N}$ is interpreted as a ``path'' in the Hilbert space starting at vector $\ket{i_0}$ and ending at vector $\ket{i_N}$ where $k$ plays the role of a time index.
Each path is associated with a weight $\prod_{k=0}^{N-1} T_{i_{k} i_{k+1}}$ and the path integral expression of $G$ can be recast in the more suggestive form as follows:
2022-09-14 11:30:16 +02:00
\be
2022-09-14 17:31:00 +02:00
\label{eq:G}
2022-09-15 10:12:36 +02:00
G^{(N)}_{i_0 i_N}= \sum_{\text{all paths $\ket{i_1},\ldots,\ket{i_{N-1}}$}} \prod_{k=0}^{N-1} T_{i_{k} i_{k+1}}.
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 10:12:36 +02:00
2022-09-14 17:31:00 +02:00
This expression allows a simple and vivid interpretation of the solution.
2022-09-15 14:50:55 +02:00
In the limit $N \to \infty$, the $i_N$th component of the ground-state wave function (obtained as $\lim_{N \to \infty} G^{(N)}_{i_0 i_N})$ is the weighted sum over all possible paths arriving at vector $\ket{i_N}$.
2022-09-14 17:31:00 +02:00
This result is independent of the initial vector $\ket{i_0}$, apart from some irrelevant global phase factor.
When the size of the linear space is small the explicit calculation of the full sums involving $M^N$ terms (where $M$ is the size of the Hilbert space) can be performed.
2022-09-15 10:12:36 +02:00
In such a case, we are in the realm of what one would call the ``deterministic'' power methods, such as the Lancz\`os or Davidson approaches.
If not, probabilistic techniques for generating only the paths contributing significantly to the sums are to be preferred.
This is the central theme of the present work.
2022-09-14 11:30:16 +02:00
2022-09-14 17:31:00 +02:00
%=======================================%
2022-09-14 11:30:16 +02:00
\subsection{Probabilistic framework}
2022-09-14 15:57:39 +02:00
\label{sec:proba}
2022-09-14 17:31:00 +02:00
%=======================================%
2022-09-14 11:30:16 +02:00
2022-09-15 10:12:36 +02:00
In order to derive a probabilistic expression for the Green's matrix we introduce a so-called guiding vector, $\ket{\PsiG}$, having strictly positive components, \ie, $\PsiG_i > 0$, in order to apply a similarity transformation to the operators $G^{(N)}$ and $T$ as follows:
2022-09-14 17:31:00 +02:00
\begin{align}
\label{eq:defT}
2022-09-15 10:12:36 +02:00
\bar{T}_{ij} & = \frac{\PsiG_j}{\PsiG_i} T_{ij},
&
\bar{G}^{(N)}_{ij}& = \frac{\PsiG_j}{\PsiG_i} G^{(N)}_{ij}.
2022-09-14 17:31:00 +02:00
\end{align}
2022-09-15 10:12:36 +02:00
Note that, under this similarity transformation, the path integral expression relating $G^{(N)}$ and $T$ [see Eq.~\eqref{eq:G}] remains unchanged for $\bar{G}^{(N)}$ and $\bar{T}$.
2022-09-14 17:31:00 +02:00
2022-09-15 10:12:36 +02:00
Next, the matrix elements of $\bar{T}$ are expressed as those of a stochastic matrix multiplied by some residual weight, namely,
2022-09-14 11:30:16 +02:00
\be
2022-09-14 17:31:00 +02:00
\label{eq:defTij}
2022-09-14 22:22:15 +02:00
\bar{T}_{ij}= p_{i \to j} w_{ij}.
2022-09-14 11:30:16 +02:00
\ee
Here, we recall that a stochastic matrix is defined as a matrix with positive entries and obeying
\be
2022-09-14 17:31:00 +02:00
\label{eq:sumup}
\sum_j p_{i \to j}=1.
2022-09-14 11:30:16 +02:00
\ee
To build the transition probability density the following operator is introduced
%As known, there is a natural way of associating a stochastic matrix to a matrix having a positive ground-state vector (here, a positive vector is defined here as
%a vector with all components positive).
\be
2022-09-15 10:12:36 +02:00
\label{eq:T+}
2022-09-14 22:22:15 +02:00
T^+= \Id - \tau \qty( H^+ - \EL^+ \Id ),
2022-09-14 11:30:16 +02:00
\ee
where
$H^+$ is the matrix obtained from $H$ by imposing the off-diagonal elements to be negative
\be
2022-09-14 17:31:00 +02:00
H^+_{ij}=
\begin{cases}
2022-09-15 10:12:36 +02:00
\phantom{-}H_{ij}, & \text{if $i=j$},
2022-09-14 17:31:00 +02:00
\\
2022-09-15 10:12:36 +02:00
-\abs{H_{ij}}, & \text{if $i\neq j$}.
2022-09-14 17:31:00 +02:00
\end{cases}
2022-09-14 11:30:16 +02:00
\ee
2022-09-14 22:22:15 +02:00
Here, $\EL^+ \Id$ is the diagonal matrix whose diagonal elements are defined as
2022-09-14 11:30:16 +02:00
\be
2022-09-14 22:22:15 +02:00
(\EL^+)_{i}= \frac{\sum_j H^+_{ij}\PsiG_j}{\PsiG_i}.
2022-09-14 11:30:16 +02:00
\ee
2022-09-14 22:22:15 +02:00
The vector $\EL^+$ is known as the local energy vector associated with $\PsiG$.
2022-09-15 10:12:36 +02:00
By construction, the operator $H^+ - \EL^+ \Id$ in the definition of the operator $T^+$ [see Eq.~\eqref{eq:T+}] has been chosen to admit $\ket{\PsiG}$ as a ground-state wave function with zero eigenvalue, \ie, $\qty(H^+ - E_L^+ \Id) \ket{\PsiG} = 0$, leading to the relation
2022-09-14 11:30:16 +02:00
\be
2022-09-14 22:22:15 +02:00
\label{eq:relT+}
T^+ \ket{\PsiG} = \ket{\PsiG}.
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 10:12:36 +02:00
We are now in the position to define the stochastic matrix as
2022-09-14 11:30:16 +02:00
\be
2022-09-14 17:31:00 +02:00
\label{eq:pij}
2022-09-15 10:12:36 +02:00
p_{i \to j}
= \frac{\PsiG_j}{\PsiG_i} T^+_{ij}
=
\begin{cases}
2022-09-16 13:22:13 +02:00
1 - \tau \qty[ H^+_{ii}- (\EL^+)_{i} ], & \qif* i=j,
2022-09-15 10:12:36 +02:00
\\
2022-09-16 13:22:13 +02:00
\tau \frac{\PsiG_{j}}{\PsiG_{i}} \abs{H_{ij}} \ge 0, & \qif* i\neq j.
2022-09-15 10:12:36 +02:00
\end{cases}
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 10:12:36 +02:00
As readily seen in Eq.~\eqref{eq:pij}, the off-diagonal terms of the stochastic matrix are positive, while the diagonal ones can be made positive if $\tau$ is chosen sufficiently small via the condition
2022-09-14 11:30:16 +02:00
\be
2022-09-14 17:31:00 +02:00
\label{eq:cond}
2022-09-15 14:50:55 +02:00
\tau \leq \frac{1}{\max_i\abs{H^+_{ii}-(\EL^+)_{i}}}.
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 10:12:36 +02:00
The sum-over-states condition [see Eq.~\eqref{eq:sumup}]
2022-09-14 11:30:16 +02:00
\be
2022-09-15 10:12:36 +02:00
\sum_j p_{i \to j}= \frac{\mel{i}{T^+}{\PsiG}}{\PsiG_{i}} = 1.
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 10:12:36 +02:00
follows from the fact that $\ket{\PsiG}$ is eigenvector of $T^+$ [see Eq.~\eqref{eq:relT+}].
This ensures that $p_{i \to j}$ is indeed a stochastic matrix.
At first sight, the condition defining the maximum value of $\tau$ allowed, Eq.~\eqref{eq:cond}, may appear rather tight since, for very large matrices, it may impose an extremely small value of the time step.
2022-09-15 14:50:55 +02:00
However, in practice, during the simulation only a (tiny) fraction of the linear space is sampled, and the maximum absolute value of $H^+_{ii}-(\EL^+)_{i}$ for the sampled states turns out to be not too large.
2022-09-15 10:12:36 +02:00
Hence, reasonable values of $\tau$ can be selected without violating the positivity of the transition probability matrix.
\titou{Note that one can eschew this condition via a simple generalization of the transition probability matrix:}
2022-09-14 11:30:16 +02:00
\be
2022-09-14 22:22:15 +02:00
p_{i \to j}
= \frac{ \frac{\PsiG_{j}}{\PsiG_{i}} \abs{\mel{i}{T^+}{j}} }
{ \sum_j \frac{\PsiG_{j}}{\PsiG_{i}} \abs{\mel{i}{T^+}{j}} }
2022-09-15 10:12:36 +02:00
= \frac{ \PsiG_{j} \abs*{T^+_{ij}} }
{ \sum_j \PsiG_{j} \abs*{T^+_{ij}} }.
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 14:50:55 +02:00
This new transition probability matrix with positive entries reduces to Eq.~\eqref{eq:pij} when $T^+_{ij}$ is positive as $\sum_j \PsiG_{j} T^+_{ij} = 1$.
2022-09-14 11:30:16 +02:00
2022-09-15 14:50:55 +02:00
\titou{Now}, using Eqs.~\eqref{eq:defT}, \eqref{eq:defTij} and \eqref{eq:pij}, the residual weights read
2022-09-14 11:30:16 +02:00
\be
2022-09-14 22:22:15 +02:00
w_{ij}=\frac{T_{ij}}{T^+_{ij}}.
2022-09-14 11:30:16 +02:00
\ee
Using these notations the Green's matrix components can be rewritten as
\be
2022-09-15 14:50:55 +02:00
\bar{G}^{(N)}_{i_0 i_N} =
\sum_{i_1,\ldots,i_{N-1}} \qty( \prod_{k=0}^{N-1} p_{i_{k} \to i_{k+1}} ) \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}}.
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 10:12:36 +02:00
The product $\prod_{k=0}^{N-1} p_{i_{k} \to i_{k+1}}$ is the probability, denoted $\text{Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1})$,
2022-09-14 22:22:15 +02:00
for the path starting at $\ket{i_0}$ and ending at $\ket{i_N}$ to occur.
2022-09-15 10:12:36 +02:00
Using Eq.~\eqref{eq:sumup} and the fact that $p_{i \to j} \ge 0$, one can easily verify that $\text{Prob}_{i_0 \to i_N}$ is positive and obeys
2022-09-14 11:30:16 +02:00
\be
2022-09-15 10:12:36 +02:00
\sum_{i_1,\ldots,i_{N-1}} \text{Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1}) = 1,
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 10:12:36 +02:00
as it should.
2022-09-15 14:50:55 +02:00
For a given path $i_1,\ldots,i_{N-1}$, the probabilistic average associated with this probability is then defined as
2022-09-14 11:30:16 +02:00
\be
2022-09-15 10:12:36 +02:00
\label{eq:average}
2022-09-14 22:22:15 +02:00
\expval{F} = \sum_{i_1,\ldots,i_{N-1}} F(i_0,\ldots,i_N) \text{Prob}_{i_0 \to i_N}(i_1,\ldots,i_{N-1}),
2022-09-14 11:30:16 +02:00
\ee
where $F$ is an arbitrary function.
2022-09-14 22:22:15 +02:00
Finally, the path-integral expressed as a probabilistic average reads
2022-09-14 11:30:16 +02:00
\be
2022-09-15 14:50:55 +02:00
\label{eq:cn_stoch}
\bar{G}^{(N)}_{i_0 i_N}= \expval{ \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}}}.
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 10:12:36 +02:00
To calculate the probabilistic average, Eq.~\eqref{eq:average},
2022-09-14 11:30:16 +02:00
an artificial (mathematical) ``particle'' called walker (or psi-particle) is introduced.
2022-09-15 14:50:55 +02:00
During the Monte Carlo simulation, the walker moves in configuration space by drawing new states with
probability $p_{i_k \to i_{k+1}}$, thus realizing the path of probability $\text{Prob}(i_0 \to i_N)$.
In this framework, the energy defined in Eq.~\eqref{eq:E0} is given by
2022-09-14 11:30:16 +02:00
\be
2022-09-14 22:22:15 +02:00
E_0 = \lim_{N \to \infty }
\frac{ \expval{ \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} {(H\PsiT)}_{i_N}} }
2022-09-15 10:12:36 +02:00
{ \expval{ \prod_{k=0}^{N-1} w_{i_{k}i_{k+1}} {\PsiT}_{i_N} }}.
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 14:50:55 +02:00
Note that, instead of using a single walker, it is possible to introduce a population of independent walkers and to calculate the averages over this population.
In addition, thanks to the ergodic property of the stochastic matrix (see, for example, Ref.~\onlinecite{Caffarel_1988}), a unique path of infinite length from which sub-paths of length $N$ can be extracted may also be used.
We shall not here insist on these practical details that are discussed, for example, in Refs.~\onlinecite{Foulkes_2001,Kolorenc_2011}.
2022-09-14 11:30:16 +02:00
%{\it Spawner representation} In this representation, we no longer consider moving particles but occupied or non-occupied states $|i\rangle$.
%To each state is associated the (positive or negative) quantity $c_i$.
%At iteration $n$ each state $|i\rangle$ with weight $c^n_i$
%"spawnes" (or deposits) the weight $T_{ij} c_i^{(n)}$ on a number of its connected sites $j$
%(that is, $T_{ij} \ne 0$). At the end of iteration the new weight on site $|i_n\rangle$ is given as the sum
%of the contributions spawn on this site by all states of the previous iteration
%\be
%c_i^{(n+1)}=\sum_{i_n} T_{ij} c^{(n)}_{i_n}
%\ee
%In the numerical applications to follow, we shall use the walker representation.
2022-09-14 17:31:00 +02:00
%%%%%%%%%%%%%%%%%%%%%%%%%%%
2022-09-15 14:50:55 +02:00
\section{Diffusion Monte Carlo with domains}
2022-09-14 11:30:16 +02:00
\label{sec:DMC_domains}
2022-09-14 17:31:00 +02:00
%%%%%%%%%%%%%%%%%%%%%%%%%%%
%=======================================%
2022-09-15 14:50:55 +02:00
\subsection{Single-state domains}
2022-09-14 15:57:39 +02:00
\label{sec:single_domains}
2022-09-14 17:31:00 +02:00
%=======================================%
2022-09-14 11:30:16 +02:00
During the simulation, walkers move from state to state with the possibility of being trapped a certain number of times on the same state before
exiting to a different state. This fact can be exploited in order to integrate out some part of the dynamics, thus leading to a reduction of the statistical
2022-09-16 13:22:13 +02:00
fluctuations. This idea was proposed some time ago and applied to the SU($N$) one-dimensional Hubbard model.\cite{Assaraf_1999A,Assaraf_1999B,Caffarel_2000}
2022-09-14 11:30:16 +02:00
2022-09-15 15:19:41 +02:00
Considering a given state $\ket{i}$, the probability that a walker remains exactly $n$ times in $\ket{i}$ (with $1 \le n < \infty$) and then exits to a different state $j$ (with $j \neq i$) is
2022-09-14 11:30:16 +02:00
\be
2022-09-15 15:19:41 +02:00
\cP_{i \to j}(n) = \qty(p_{i \to i})^{n-1} p_{i \to j}.
2022-09-14 22:22:15 +02:00
\ee
Using the relation
\be
2022-09-15 14:50:55 +02:00
\sum_{n=1}^{\infty} (p_{i \to i})^{n-1}=\frac{1}{1-p_{i \to i}}
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 15:19:41 +02:00
and the normalization of the $p_{i \to j}$'s [see Eq.~\eqref{eq:sumup}], one can check that the probability is properly normalized, \ie,
2022-09-14 11:30:16 +02:00
\be
2022-09-14 22:22:15 +02:00
\sum_{j \ne i} \sum_{n=1}^{\infty} \cP_{i \to j}(n) = 1.
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 15:19:41 +02:00
Naturally, the probability of being trapped during $n$ steps is obtained by summing over all possible exit states
2022-09-14 11:30:16 +02:00
\be
2022-09-15 15:19:41 +02:00
P_i(n)=\sum_{j \ne i} \cP_{i \to j}(n) = \qty(p_{i \to i})^{n-1} \qty( 1 - p_{i \to i} ),
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 15:19:41 +02:00
and this defines a Poisson law with an average number of trapping events
2022-09-14 11:30:16 +02:00
\be
2022-09-14 22:22:15 +02:00
\bar{n}_i= \sum_{n=1}^{\infty} n P_i(n) = \frac{1}{1 -p_{i \to i}}.
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 15:19:41 +02:00
Introducing the continuous time $t_i = n \tau$, the average trapping time is thus given by
2022-09-14 11:30:16 +02:00
\be
2022-09-16 13:22:13 +02:00
\bar{t_i}= \frac{1}{H^+_{ii}-(\EL^+)_{i}},
2022-09-14 11:30:16 +02:00
\ee
2022-09-16 13:22:13 +02:00
and, in the limit $\tau \to 0$, the Poisson probability takes the usual form
2022-09-14 11:30:16 +02:00
\be
2022-09-15 15:19:41 +02:00
P_{i}(t) = \frac{1}{\bar{t}_i} \exp(-\frac{t}{\bar{t}_i}).
2022-09-14 11:30:16 +02:00
\ee
2022-09-16 13:22:13 +02:00
The time-averaged contribution of the on-state weight can then be easily calculated to be
2022-09-14 11:30:16 +02:00
\be
2022-09-14 22:22:15 +02:00
\bar{w}_i= \sum_{n=1}^{\infty} w^n_{ii} P_i(n)= \frac{T_{ii}}{T^+_{ii}} \frac{1-T^+_{ii}}{1-T_{ii}}
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 15:19:41 +02:00
Details of the implementation of this effective dynamics can be in found in Refs.~\onlinecite{Assaraf_1999B} and \onlinecite{Caffarel_2000}.
2022-09-14 17:31:00 +02:00
%=======================================%
2022-09-16 13:38:42 +02:00
\subsection{Multi-state domains}
2022-09-14 15:57:39 +02:00
\label{sec:general_domains}
2022-09-14 17:31:00 +02:00
%=======================================%
2022-09-15 15:52:03 +02:00
Let us now extend the results of Sec.~\ref{sec:single_domains} to a general domain.
2022-09-16 13:22:13 +02:00
To do so, we associate to each state $\ket{i}$ a set of states, called the domain of $\ket{i}$ denoted $\cD_i$, consisting of the state $\ket{i}$ plus a certain number of states.
2022-09-15 15:52:03 +02:00
No particular constraints on the type of domains are imposed.
For example, domains associated with different states can be identical, and they may or may not have common states.
2022-09-16 13:22:13 +02:00
The only important condition is that the set of all domains ensures the ergodicity property of the effective stochastic dynamics, that is, starting from any state, there is a non-zero probability to reach any other state in a finite number of steps.
2022-09-15 15:52:03 +02:00
In practice, it is not difficult to impose such a condition.
2022-09-14 11:30:16 +02:00
Let us write an arbitrary path of length $N$ as
\be
2022-09-16 13:22:13 +02:00
\ket{i_0} \to \ket{i_1} \to \cdots \to \ket{i_N},
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 15:52:03 +02:00
where the successive states are drawn using the transition probability matrix, $p_{i \to j}$.
2022-09-16 13:22:13 +02:00
This series can be recast
2022-09-14 11:30:16 +02:00
\be
2022-09-14 22:22:15 +02:00
\label{eq:eff_series}
2022-09-16 13:22:13 +02:00
(\ket*{I_0},n_0) \to (\ket*{I_1},n_1) \to \cdots \to (\ket*{I_p},n_p),
2022-09-14 11:30:16 +02:00
\ee
2022-09-16 13:22:13 +02:00
where $\ket{I_0}=\ket{i_0}$ is the initial state, $n_0$ is the number of times the walker remains in the domain of $\ket{i_0}$ (with $1 \le n_0 \le N+1$), $\ket{I_1}$ is the first exit state that does not belong to $\cD_{i_0}$, $n_1$ is the number of times the walker remains in $\cD_{i_1}$ (with $1 \le n_1 \le N+1-n_0$), $\ket{I_2}$ is the second exit state, and so on.
Here, the integer $p$ (with $0 \le p \le N$) indicates the number of exit events occurring along the path.
The two extreme values, $p=0$ and $p=N$, correspond to the cases where the walker remains in the initial domain during the entire path, and where the walker exits a domain at each step, respectively.
2022-09-15 15:52:03 +02:00
\titou{In what follows, we shall systematically write the integers representing the exit states in capital letter.}
2022-09-14 11:30:16 +02:00
%Generalizing what has been done for domains consisting of only one single state, the general idea here is to integrate out exactly the stochastic dynamics over the
%set of all paths having the same representation, Eq.(\ref{eff_series}). As a consequence, an effective Monte Carlo dynamics including only exit states
%averages for renormalized quantities will be defined.\\
2022-09-16 13:22:13 +02:00
Let us define the probability of remaining $n$ times in the domain of $\ket{I_0}$ and to exit at $\ket{I} \notin \cD_{I_0}$ as
2022-09-14 11:30:16 +02:00
\be
2022-09-14 22:22:15 +02:00
\label{eq:eq1C}
2022-09-15 15:52:03 +02:00
\cP_{I_0 \to I}(n)
= \sum_{\ket{i_1} \in \cD_{I_0}} \cdots \sum_{\ket{i_{n-1}} \in \cD_{I_0}}
2022-09-16 13:22:13 +02:00
p_{I_0 \to i_1} \ldots p_{i_{n-2} \to i_{n-1}} p_{i_{n-1} \to I}.
2022-09-14 11:30:16 +02:00
\ee
2022-09-16 13:22:13 +02:00
\titou{To proceed} we must introduce the projector associated with each domain
2022-09-14 11:30:16 +02:00
\be
2022-09-15 15:52:03 +02:00
\label{eq:pi}
2022-09-16 13:22:13 +02:00
P_I = \sum_{\ket{i} \in \cD_I} \dyad{i}{i}
2022-09-14 11:30:16 +02:00
\ee
2022-09-16 13:22:13 +02:00
and the projection of the operator $T^+$ to the domain that governs the dynamics of the walkers moving in $\cD_{I}$, \ie,
2022-09-14 11:30:16 +02:00
\be
2022-09-14 22:22:15 +02:00
T^+_I= P_I T^+ P_I.
2022-09-14 11:30:16 +02:00
\ee
2022-09-15 15:52:03 +02:00
Using Eqs.~\eqref{eq:pij} and \eqref{eq:eq1C}, the probability can be rewritten as
2022-09-14 11:30:16 +02:00
\be
2022-09-15 15:52:03 +02:00
\label{eq:eq3C}
2022-09-16 13:22:13 +02:00
\cP_{I_0 \to I}(n) = \frac{1}{\PsiG_{I_0}} \mel{I_0}{\qty(T^+_{I_0})^{n-1} F^+_{I_0}}{I} \PsiG_{I},
2022-09-14 11:30:16 +02:00
\ee
2022-09-16 13:22:13 +02:00
where the operator
2022-09-14 11:30:16 +02:00
\be
2022-09-15 15:52:03 +02:00
\label{eq:Fi}
F^+_I = P_I T^+ (1-P_I),
2022-09-14 11:30:16 +02:00
\ee
2022-09-16 13:22:13 +02:00
corresponding to the last move connecting the inside and outside regions of the domain, has the following matrix elements:
\be
(F^+_I)_{ij} =
\begin{cases}
T^+_{ij}, & \qif* \ket{i} \in \cD_{I} \titou{\land} \ket{j} \notin \cD_{I},
\\
0, & \text{otherwise}.
\end{cases}
\ee
2022-09-14 11:30:16 +02:00
Physically, $F$ may be seen as a flux operator through the boundary of ${\cal D}_{I}$.
2022-09-16 13:22:13 +02:00
\titou{Now}, the probability of being trapped $n$ times within ${\cal D}_{I}$ is given by
2022-09-14 11:30:16 +02:00
\be
2022-09-15 15:52:03 +02:00
\label{eq:PiN}
2022-09-16 13:22:13 +02:00
P_{I}(n) = \frac{1}{\PsiG_{I}} \mel{ I }{ \qty(T^+_{I})^{n-1} F^+_{I} }{ \PsiG }.
2022-09-14 11:30:16 +02:00
\ee
Using the fact that
\be
2022-09-15 15:52:03 +02:00
\label{eq:relation}
2022-09-16 13:22:13 +02:00
\qty(T^+_{I})^{n-1} F^+_I = \qty(T^+_{I})^{n-1} T^+ - \qty(T^+_I)^n,
2022-09-14 11:30:16 +02:00
\ee
we have
\be
2022-09-15 15:52:03 +02:00
\sum_{n=0}^{\infty} P_{I}(n)
2022-09-16 13:22:13 +02:00
= \frac{1}{\PsiG_{I}} \sum_{n=1}^{\infty} \qty[ \mel{ I }{ \qty(T^+_{I})^{n-1} }{ \PsiG }
- \mel{ I }{ \qty(T^+_{I})^{n} }{ \PsiG } ] = 1,
2022-09-14 11:30:16 +02:00
\ee
2022-09-16 13:22:13 +02:00
and the average trapping time is
2022-09-14 11:30:16 +02:00
\be
2022-09-16 13:22:13 +02:00
t_{I}={\bar n}_{I} \tau = \frac{1}{\PsiG_{I}} \mel{ I }{ P_{I} \frac{1}{H^+ - \EL^+ \Id} P_{I} }{ \PsiG }.
2022-09-14 11:30:16 +02:00
\ee
2022-09-16 13:22:13 +02:00
In practice, the various quantities restricted to the domain are computed by diagonalizing the matrix $(H^+-\EL^+ \Id)$ in $\cD_{I}$.
2022-09-15 15:52:03 +02:00
Note that it is possible only if the dimension of the domains is not too large (say, less than a few thousands).
2022-09-14 17:31:00 +02:00
%=======================================%
2022-09-14 11:30:16 +02:00
\subsection{Expressing the Green's matrix using domains}
2022-09-14 15:57:39 +02:00
\label{sec:Green}
2022-09-14 17:31:00 +02:00
%=======================================%
%--------------------------------------------%
2022-09-14 11:30:16 +02:00
\subsubsection{Time-dependent Green's matrix}
2022-09-14 17:31:00 +02:00
\label{sec:time}
%--------------------------------------------%
2022-09-15 15:52:03 +02:00
In this section we generalize the path-integral expression of the Green's matrix, Eqs.~\eqref{eq:G} and \eqref{eq:cn_stoch}, to the case where domains are used.
2022-09-14 11:30:16 +02:00
For that we introduce the Green's matrix associated with each domain
\be
2022-09-15 15:52:03 +02:00
G^{(N),\cD}_{IJ}= \mel{ J }{ T_I^N }{ I }.
2022-09-14 11:30:16 +02:00
\ee
2022-09-16 13:22:13 +02:00
%Starting from Eq.~\eqref{eq:cn}
%\be
%G^{(N)}_{i_0 i_N}= \sum_{i_1,...,i_{N-1}} \prod_{k=0}^{N-1} \langle i_k| T |i_{k+1} \rangle.
%\ee
Starting from Eq.~\eqref{eq:cn} and using the representation of the paths in terms of exit states and trapping times, we get
2022-09-14 11:30:16 +02:00
\be
2022-09-16 13:22:13 +02:00
G^{(N)}_{I_0 I_N} = \sum_{p=0}^N
\sum_{\cC_p} \sum_{(i_1,...,i_{N-1}) \in \cC_p}
\prod_{k=0}^{N-1} \mel{ i_k }{ T }{ i_{k+1} }
2022-09-14 11:30:16 +02:00
\ee
2022-09-16 13:22:13 +02:00
where $\cC_p$ is the set of paths with $p$ exit states, $\ket{I_k}$, and trapping times $n_k$ with the constraints that $\ket{I_k} \notin \cD_{k-1}$ (with $1 \le n_k \le N+1$ and $\sum_{k=0}^p n_k= N+1$).
2022-09-14 11:30:16 +02:00
We then have
2022-09-16 13:22:13 +02:00
\begin{multline}
2022-09-15 15:52:03 +02:00
\label{eq:Gt}
2022-09-16 13:22:13 +02:00
G^{(N)}_{I_0 I_N}= G^{(N),{\cal D}}_{I_0 I_N} +
\sum_{p=1}^{N}
\sum_{|I_1\rangle \notin {\cal D}_{I_0}, \hdots , |I_p\rangle \notin {\cal D}_{I_{p-1}} }
\sum_{n_0 \ge 1} \cdots \sum_{n_p \ge 1}
\delta_{\sum_{k=0}^p n_k,N+1}
\\
\times
\qty[ \prod_{k=0}^{p-1} \mel{ I_k }{ \qty(T_{I_k})^{n_k-1} F_{I_k} }{ I_{k+1} } ]
G^{(n_p-1),{\cal D}}_{I_p I_N}
\end{multline}
2022-09-16 13:38:42 +02:00
where $\delta_{i,j}$ is a Kronecker delta.
2022-09-16 13:22:13 +02:00
This expression is the path-integral representation of the Green's matrix using only the variables $(\ket{I_k},n_k)$ of the effective dynamics defined over the set of domains.
2022-09-16 13:38:42 +02:00
The standard formula derived above [see Eq.~\eqref{eq:G}] may be considered as the particular case where the domain associated with each state is empty,
In that case, $p=N$ and $n_k=1$ for $0 \le k \le N$ and we are left only with the $p$th component of the sum, that is, $G^{(N)}_{I_0 I_N}
2022-09-16 13:22:13 +02:00
= \prod_{k=0}^{N-1} \mel{ I_k }{ F_{I_k} }{ I_{k+1} } $ where $F=T$.
2022-09-14 11:30:16 +02:00
2022-09-14 15:57:39 +02:00
To express the fundamental equation for $G$ under the form of a probabilistic average, we write the importance-sampled version of the equation
2022-09-16 13:38:42 +02:00
\begin{multline}
2022-09-16 13:22:13 +02:00
\label{eq:Gbart}
2022-09-16 13:38:42 +02:00
\bar{G}^{(N)}_{I_0 I_N}=\bar{G}^{(N),\cD}_{I_0 I_N} +
2022-09-16 13:22:13 +02:00
\sum_{p=1}^{N}
2022-09-16 13:38:42 +02:00
\sum_{\ket{I_1} \notin \cD_{I_0}, \ldots , \ket{I_p} \notin \cD_{I_{p-1}}}
\sum_{n_0 \ge 1} \cdots \sum_{n_p \ge 1}
\\
\times
\delta_{\sum_k n_k,N+1} \qty{ \prod_{k=0}^{p-1} \qty[ \frac{\PsiG_{I_{k+1}}}{\PsiG_{I_k}} \mel{ I_k }{ \qty(T_{I_k})^{n_k-1} F_{I_k} }{ I_{k+1} } ] }
\bar{G}^{(n_p-1),\cD}_{I_p I_N}.
\end{multline}
2022-09-14 11:30:16 +02:00
Introducing the weight
\be
2022-09-16 13:22:13 +02:00
W_{I_k I_{k+1}} = \frac{\mel{ I_k }{ \qty(T_{I_k})^{n_k-1} F_{I_k} }{ I_{k+1} }}{\mel{ I_k }{ \qty(T^{+}_{I_k})^{n_k-1} F^+_{I_k} }{ I_{k+1} }}
2022-09-14 11:30:16 +02:00
\ee
2022-09-16 13:22:13 +02:00
and using the effective transition probability defined in Eq.~\eqref{eq:eq3C}, we get
2022-09-14 11:30:16 +02:00
\be
2022-09-16 13:22:13 +02:00
\label{eq:Gbart}
2022-09-16 13:38:42 +02:00
\bar{G}^{(N)}_{I_0 I_N} = \bar{G}^{(N),\cD}_{I_0 I_N} + \sum_{p=1}^{N}
\expval{
\qty( \prod_{k=0}^{p-1} W_{I_k I_{k+1}} )
\bar{G}^{(n_p-1), {\cal D}}_{I_p I_N}
}
2022-09-14 11:30:16 +02:00
\ee
where the average is defined as
2022-09-16 13:38:42 +02:00
\begin{multline}
2022-09-16 13:22:13 +02:00
\expval{F}
2022-09-16 13:38:42 +02:00
= \sum_{\ket{I_1} \notin \cD_{I_0}, \ldots , \ket{I_p} \notin \cD_{I_{p-1}}}
2022-09-16 13:22:13 +02:00
\sum_{n_0 \ge 1} \cdots \sum_{n_p \ge 1}
\delta_{\sum_k n_k,N+1}
2022-09-16 13:38:42 +02:00
\\
\times
\prod_{k=0}^{N-1}\cP_{I_k \to I_{k+1}}(n_k-1) F(I_0,n_0;\ldots.;I_N,n_N)
\end{multline}
2022-09-14 11:30:16 +02:00
In practice, a schematic DMC algorithm to compute the average is as follows.\\
2022-09-16 13:22:13 +02:00
i) Choose some initial vector $\ket{I_0}$\\
2022-09-14 11:30:16 +02:00
ii) Generate a stochastic path by running over $k$ (starting at $k=0$) as follows.\\
2022-09-16 13:22:13 +02:00
$\;\;\;\bullet$ Draw $n_k$ using the probability $P_{I_k}(n)$ [see Eq.~\eqref{eq:PiN}]\\
$\;\;\;\bullet$ Draw the exit state, $\ket{I_{k+1}}$, using the conditional probability $$\frac{\cP_{I_k \to I_{k+1}}(n_k)}{P_{I_k}(n_k)}$$\\
2022-09-14 11:30:16 +02:00
iii) Terminate the path when $\sum_k n_k=N$ is greater than some target value $N_{\rm max}$ and compute $F(I_0,n_0;...;I_N,n_N)$\\
iv) Go to step ii) until some maximum number of paths is reached.\\
\\
2022-09-16 13:22:13 +02:00
At the end of the simulation, an estimate of the average for a few values of $N$ greater but close to $N_\text{max}$ is obtained.
At large $N_\text{max}$ where the convergence of the average as a function of $p$ is reached, such values can be averaged.
2022-09-14 17:31:00 +02:00
%--------------------------------------------%
2022-09-14 11:30:16 +02:00
\subsubsection{Integrating out the trapping times : The Domain Green's Function Monte Carlo approach}
2022-09-14 17:31:00 +02:00
\label{sec:energy}
%--------------------------------------------%
2022-09-14 11:30:16 +02:00
Now, let us show that it is possible to go further by integrating out the trapping times, $n_k$, of the preceding expressions, thus defining a new effective
stochastic dynamics involving now only the exit states. Physically, it means that we are going to compute exactly within the time-evolution of all
stochastic paths trapped within each domain. We shall present two different ways to derive the new dynamics and renormalized probabilistic averages.
The first one, called the pedestrian way, consists in starting from the preceding time-expression for $G$ and make the explicit integration over the
$n_k$. The second, more direct and elegant, is based on the Dyson equation.\\
\\
{\it $\bullet$ The pedestrian way}. Let us define the quantity\\
$$
2022-09-16 13:22:13 +02:00
G^E_{ij}= \tau \sum_{N=0}^{\infty} \mel{ i }{ T^N }{ j}.
2022-09-14 11:30:16 +02:00
$$
By summing over $N$ we obtain
\be
2022-09-16 13:22:13 +02:00
G^E_{ij}= \mel{i}{\frac{1}{H-E \Id}}{j}.
2022-09-14 11:30:16 +02:00
\ee
This quantity, which no longer depends on the time-step, is referred to as the energy-dependent Green's matrix. Note that in the continuum this quantity is
essentially the Laplace transform of the time-dependent Green's function. Here, we then use the same denomination. The remarkable property
is that, thanks to the summation over $N$ up to the
infinity the constrained multiple sums appearing in Eq.(\ref{Gt}) can be factorized in terms of a product of unconstrained single sums as follows
2022-09-16 13:22:13 +02:00
\be
\sum_{N=1}^\infty \sum_{p=1}^N \sum_{n_0 \ge 1} ...\sum_{n_p \ge 1} \delta_{n_0+...+n_p,N+1}
= \sum_{p=1}^{\infty} \sum_{n_0=1}^{\infty} ...\sum_{n_p=1}^{\infty}.
\ee
2022-09-14 11:30:16 +02:00
It is then a trivial matter to integrate out exactly the $n_k$ variables, leading to
$$
\langle I_0|\frac{1}{H-E}|I_N\rangle = \langle I_0|P_0\frac{1}{H-E} P_0|I_N\rangle
+ \sum_{p=1}^{\infty}
\sum_{I_1 \notin {\cal D}_0, \hdots , I_p \notin {\cal D}_{p-1}}
$$
\be
\Big[ \prod_{k=0}^{p-1} \langle I_k| P_k \frac{1}{H-E} P_k (-H)(1-P_k)|I_{k+1} \rangle \Big]
\langle I_p| P_p \frac{1} {H-E} P_p|I_N\rangle
\label{eqfond}
\ee
\noindent
As an illustration, Appendix \ref{A} reports the exact derivation of this formula in the case of a two-state system.\\
\\
{\it $\bullet$ Dyson equation.} In fact, there is a more direct way to derive the same equation by resorting to the Dyson equation. Starting from the
well-known equality
\be
\frac{1}{H-E} = \frac{1}{H_0-E}
+ \frac{1}{H_0-E} (H_0-H)\frac{1}{H-E}
\ee
where $H_0$ is some arbitrary reference Hamiltonian, we have
the Dyson equation
$$
G^E_{ij}= G^E_{0,ij} + \sum_{k,l} G^{E}_{0,ik} (H_0-H)_{kl} G^E_{lj}
$$
Let us choose as $H_0$
$$
\langle i |H_0|j\rangle= \langle i|P_i H P_i|j\rangle \;\;\; {\rm for \; all \; i \;j}.
$$
The Dyson equation becomes
$$
\langle i| \frac{1}{H-E}|j\rangle
= \langle i| P_i \frac{1}{H-E} P_i|j\rangle
$$
\be
+ \sum_k \langle i| P_i \frac{1}{H-E} P_i(H_0-H)|k\rangle \langle k|\frac{1}{H-E}|j\rangle
\ee
Now, we have
$$
P_i \frac{1}{H-E} P_i(H_0-H) = P_i \frac{1}{H-E} P_i (P_i H P_i - H)
$$
$$
= P_i \frac{1}{H-E} P_i (-H) (1-P_i)
$$
and the Dyson equation may be written under the form
$$
\langle i| \frac{1}{H-E}|j\rangle
= \langle i| P_i \frac{1}{H-E} P_i|j\rangle
$$
$$
+ \sum_{k \notin {\cal D}_i} \langle i| P_i \frac{1}{H-E} P_i (-H)(1-P_i)|k\rangle \langle k|\frac{1}{H-E}|j\rangle
$$
which is identical to Eq.(\ref{eqfond}) when $G$ is expanded iteratively.\\
\\
Let us use as effective transition probability density
\be
2022-09-14 22:22:15 +02:00
P(I \to J) = \frac{1} {\PsiG(I)} \langle I| P_I \frac{1}{H^+-E^+_L} P_I (-H^+) (1-P_I)|J\rangle \PsiG(J)
2022-09-14 11:30:16 +02:00
\ee
and the weight
\be
W^E_{IJ} =
\frac{\langle I|\frac{1}{H-E} P_I (-H)(1-P_I) |J\rangle }{\langle I|\frac{1}{H^+-E^+_L} P_I (-H^+)(1-P_I) |J\rangle}
\ee
2022-09-16 13:22:13 +02:00
Using Eqs.~\eqref{eq:eq1C}, \eqref{eq:eq3C} and \eqref{eq:relation}, we verify that $P_{I \to J} \ge 0$ and $\sum_J P_{I \to J}=1$.
2022-09-14 11:30:16 +02:00
Finally, the probabilistic expression writes
$$
\langle I_0| \frac{1}{H-E}|I_N\rangle
= \langle I_0| P_{I_0} \frac{1}{H-E} P_{I_0}|I_N\rangle
$$
\be
+ \sum_{p=0}^{\infty} \Bigg \langle \Big( \prod_{k=0}^{p-1} W^E_{I_k I_{k+1}} \Big) \langle I_p| P_{I_p} \frac{1}{H-E} P_{I_p}|I_N\rangle \Bigg \rangle
\label{final_E}
\ee
{\it Energy estimator.} To calculate the energy we introduce the following quantity
\be
2022-09-14 17:31:00 +02:00
{\cal E}(E) = \frac{ \langle I_0|\frac{1}{H-E}|H\PsiT\rangle} {\langle I_0|\frac{1}{H-E}|\PsiT\rangle}.
2022-09-14 11:30:16 +02:00
\label{calE}
\ee
and search for the solution $E=E_0$ of
\be
{\cal E}(E)= E
\ee
Using the spectral decomposition of $H$ we have
\be
{\cal E}(E) = \frac{ \sum_i \frac{E_i c_i}{E_i-E}}{\sum_i \frac{c_i}{E_i-E}}
\label{calE}
\ee
with
\be
2022-09-14 17:31:00 +02:00
c_i = \langle I_0| \Phi_0\rangle \langle \Phi_0| \PsiT\rangle
2022-09-14 11:30:16 +02:00
\ee
It is easy to check that in the vicinity of $E=E_0$, ${\cal E}(E)$ is a linear function of $E-E_0$.
So, in practice we compute a few values of ${\cal E}(E^{k})$ and fit the data using some function of $E$ close to the linearity
to extrapolate the exact value of $E_0$. Let us describe the 3 functions used here for the fit.\\
i) Linear fit\\
We write
\be
{\cal E}(E)= a_0 + a_1 E
\ee
and search the best value of $(a_0,a_1)$ by fitting the data.
Then, ${\cal E}(E)=E$ leads to
$$
E_0= \frac{a_0}{1-a_1}
$$
ii) Quadratic fit\\
At the quadratic level we write
$$
{\cal E}(E)= a_0 + a_1 E + a_2 E^2
$$
leading to
$$
E_0 = \frac{1 - a_1 \pm \sqrt{ (a_1 -1)^2 - 4 a_0 a_2}}{2 a_2}
$$
iii) Two-component fit\\
We take the advantage that the exact expression of ${\cal E}(E)$ is known as an infinite series, Eq.(\ref{calE}).
If we limit ourselves to the first two-component, we write
\be
{\cal E}(E) = \frac{ \frac{E_0 c_0}{E_0-E} + \frac{E_1 c_1}{E_1-E}}{\frac{c_0}{E_0-E} + \frac{c_1}{E_1-E} }
\ee
Here, the variational parameters used for the fit are ($c_0,E_0,c_1,E_1)$.\\
In order to have a precise extrapolation of the energy, it is interesting to compute the ratio
${\cal E}(E)$ for values of $E$ as close as possible to the exact energy. However, in that case
the numerators and denominators computed diverge. This is reflected by the fact that we need to compute
more and more $p$-components with an important increase of statistical fluctuations. So, in practice
a tradoff has to be found between the possible bias in the extrapolation and the amount of simulation time
required.
2022-09-14 17:31:00 +02:00
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2022-09-14 11:30:16 +02:00
\section{Numerical application to the Hubbard model}
2022-09-14 15:57:39 +02:00
\label{sec:numerical}
2022-09-14 17:31:00 +02:00
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2022-09-14 11:30:16 +02:00
Let us consider the one-dimensional Hubbard Hamiltonian for a chain of $N$ sites
\be
\hat{H}= -t \sum_{\langle i j\rangle \sigma} \hat{a}^+_{i\sigma} \hat{a}_{j\sigma}
+ U \sum_i \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}
\ee
where $\langle i j\rangle$ denotes the summation over two neighboring sites,
$\hat{a}_{i\sigma} (\hat{a}_{i\sigma})$ is the fermionic creation (annihilation) operator of
an electron of spin $\sigma$ ($=\uparrow$ or $\downarrow$) at site $i$, $\hat{n}_{i\sigma} = \hat{a}^+_{i\sigma} \hat{a}_{i\sigma}$ the
number operator, $t$ the hopping amplitude and $U$ the on-site Coulomb repulsion.
We consider a chain with an even number of sites and open boundary conditions (OBC)
at half-filling, that is, $N_{\uparrow}=N_{\downarrow}=\frac{N}{2}$.
In the site-representation, a general vector of the Hilbert space will be written as
\be
|n\rangle = |n_{1 \uparrow},...,n_{N \uparrow},n_{1 \downarrow},...,n_{N \downarrow}\rangle
\ee
where $n_{i \sigma}=0,1$ is the number of electrons of spin $\sigma$ at site $i$.
For the 1D Hubbard model and OBC the components of the ground-state vector have the same sign (say, $c_i \ge 0$).
It is then possible to identify the guiding and trial vectors, that is, $|c^+\rangle=|c_T\rangle$.
As trial wave function we shall employ a generalization of the Gutzwiller wave function\cite{Gutzwiller_1963} written under the form
\be
\langle n|c_T\rangle= e^{-\alpha n_D(n)-\beta n_A(n)}
\ee
where $n_D(n)$ is the number of doubly occupied sites for the configuration $|n\rangle$ and $n_A(n)$
the number of nearest-neighbor antiparallel pairs defined as
\be
n_A(n)= \sum_{\langle i j\rangle} n_{i\uparrow} n _{j\downarrow} \delta(n_{i\uparrow}-1) \delta(n_{j\downarrow}-1)
\ee
where the kronecker function $\delta(k)$ is equal to 1 when $k=0$ and 0 otherwise.
The parameters $\alpha, \beta$ are optimized by minimizing the variational energy,
$E_v(\alpha,\beta)=\frac{\langle c_T|H|c_T\rangle} {\langle c_T|c_T\rangle}$.\\
{\it Domains}. The efficiency of the method depends on the choice of states forming each domain.
As a general guiding principle, it is advantageous to build domains associated with a large average trapping time in order to integrate out the most important
part of the Green's matrix. Here, as a first illustration of the method, we shall
consider the large-$U$ regime of the Hubbard model where the construction of such domains is rather natural.
At large $U$ and half-filling,
the Hubbard model approaches the Heisenberg limit where only the $2^N$ states with no double occupancy, $n_D(n)=0$, have a significant weight in the wave function.
The contribution of the other states vanishes as $U$ increases with a rate increasing sharply with $n_D(n)$. In addition, for a given
number of double occupations, configurations with large values of $n_A(n)$ are favored due to their high kinetic energy (electrons move
more easily). Therefore, we build domains associated with small $n_D$ and large $n_A$ in a hierarchical way as described below.
For simplicity and decreasing the number of diagonalizations to perform,
we shall consider only one non-trivial domain called here the main domain and denoted as ${\cal D}$.
This domain will be chosen common to all states belonging to it, that is
\be
{\cal D}_i= {\cal D} \;\; {\rm for } \; |i\rangle \in {\cal D}
\ee
For the other states we choose a single-state domain
\be
{\cal D}_i= \{ |i\rangle \} \;\; {\rm for} \; |i\rangle \notin {\cal D}
\ee
To define ${\cal D}$, let us introduce the following set of states
\be
{\cal S}_{ij} = \{ |n\rangle; n_D(n)=i {\rm \; and\;} n_A(n)=j \}.
\ee
${\cal D}$ is defined as the set of states having up to $n_D^{\rm max}$ double occupations and, for each state with a number
of double occupations equal to $m$, a number of nearest-neighbor antiparallel pairs between $n_A^{\rm min}(m)$ and $n_A^{\rm max}(m)$.
Here, $n_A^{\rm max}(m)$ will not be varied and taken to be the maximum possible for a given $m$,
$n_A^{\rm max}(m)= {\rm Max}(N-1-2m,0)$.
Using these definitions, the main domain is taken as the union of some elementary domains
\be
{\cal D} = \cup_{n_D=0}^{n_D^{\rm max}}{\cal D}(n_D,n_A^{\rm min}(n_D))
\ee
where the elementary domain is defined as
\be
{\cal D}(n_D,n_A^{\rm min}(n_D))=\cup_{ n_A^{\rm min}(n_D) \leq j \leq n_A^{\rm max}(n_D)} {\cal S}_{n_D j}
\ee
The two quantities defining the main domain are thus $n_D^{\rm max}$ and
$n_A^{\rm min}(m)$.
To give an illustrative example, let us consider the 4-site case. There are 6 possible elementary domains
$$
{\cal D}(0,3)= {\cal S}_{03}
$$
$$
{\cal D}(0,2)= {\cal S}_{03} \cup {\cal S}_{02}
$$
$$
{\cal D}(0,1)= {\cal S}_{03} \cup {\cal S}_{02} \cup {\cal S}_{01}
$$
$$
{\cal D}(1,1)= {\cal S}_{11}
$$
$$
{\cal D}(1,0)= {\cal S}_{11} \cup {\cal S}_{10}
$$
$$
{\cal D}(2,0)= {\cal S}_{20}
$$
where
$$
{\cal S}_{03} = \{\; |\uparrow,\downarrow,\uparrow,\downarrow \rangle, |\downarrow,\uparrow,\downarrow,\uparrow \rangle \}
\;\;({\rm the\; two\; N\acute eel\; states})
$$
$$
{\cal S}_{02} = \{\; |\uparrow, \downarrow, \downarrow, \uparrow \rangle, |\downarrow, \uparrow, \uparrow, \downarrow \rangle \}
$$
$$
{\cal S}_{01} = \{\; |\uparrow, \uparrow, \downarrow, \downarrow \rangle, |\downarrow, \downarrow, \uparrow, \uparrow \rangle \}
$$
$$
{\cal S}_{11} = \{\; |\uparrow \downarrow, \uparrow ,\downarrow, 0 \rangle, |\uparrow \downarrow, 0, \uparrow,\uparrow \rangle + ...
\}
$$
$$
{\cal S}_{10} = \{\; |\uparrow \downarrow, \uparrow, 0, \downarrow \rangle, |\uparrow \downarrow, 0, \uparrow, \downarrow \rangle + ... \}
$$
$$
{\cal S}_{20} = \{\; |\uparrow \downarrow, \uparrow \downarrow, 0 ,0 \rangle + ... \}
$$
For the three last cases, the dots indicate the remaining states obtained by permuting the position of the pairs.\\
\\
Let us now present our QMC calculations for the Hubbard model. In what follows
we shall restrict ourselves to the case of the Green's Function Monte Carlo approach where trapping times are integrated out exactly.
\\
Following Eqs.(\ref{final_E},\ref{calE}), the practical formula used for computing the QMC energy is written as
\be
{\cal E}_{QMC}(E,p_{ex},p_{max})= \frac{H_0 +\cdots+ H_{p_{ex}} + \sum_{p=p_{ex}+1}^{p_{max}} H^{QMC}_p }
{S_0 +\cdots+ S_{p_{ex}} + \sum_{p=p_{ex}+1}^{p_{max}} S^{QMC}_p }
\label{calE}
\ee
where $p_{ex}+1$ is the number of pairs $(H_p, S_p)$ computed analytically. For $p_{ex} < p \le p_{max}$,
the Monte Carlo estimates are written as
\be
H^{QMC}_p= \Bigg \langle \Big( \prod_{k=0}^{p-1} W^E_{I_k I_{k+1}} \Big) \langle I_p| P_{I_p} \frac{1}{H-E} P_{I_p}|H\PsiT\rangle \Bigg \rangle
\ee
and
\be
S^{QMC}_p= \Bigg \langle \Big(\prod_{k=0}^{p-1} W^E_{I_k I_{k+1}} \Big) \langle I_p| P_{I_p} \frac{1}{H-E} P_{I_p}|\PsiT\rangle \Bigg \rangle.
\ee
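The assembly of these estimates into Eq.~(\ref{calE}) is straightforward. Here is a minimal sketch (in Python; the arrays \texttt{w}, \texttt{h}, and \texttt{s} are hypothetical placeholders for the sampled weights $W^E_{I_k I_{k+1}}$ and the domain Green's-function brackets appearing above, assumed to have been computed beforehand):
\begin{verbatim}
# Sketch of Eq. (calE). For one sampled path (I_0, ..., I_pmax),
# 'w[k]' holds the weight W^E_{I_k I_{k+1}}, while 'h[p]' and
# 's[p]' hold the brackets involving |H Psi_T> and |Psi_T>.

def accumulate_path(w, h, s, p_ex, p_max, H_acc, S_acc):
    """Add one path's contribution to the Monte Carlo
    accumulators H_acc[p], S_acc[p] (p_ex < p <= p_max)."""
    weight = 1.0
    for p in range(p_max + 1):
        if p > 0:
            weight *= w[p - 1]   # running product of W^E factors
        if p > p_ex:             # only p > p_ex is sampled
            H_acc[p] += weight * h[p]
            S_acc[p] += weight * s[p]

def qmc_energy(H_exact, S_exact, H_acc, S_acc, n_paths):
    """Ratio estimator: exact components for p <= p_ex plus
    Monte Carlo averages for p_ex < p <= p_max."""
    num = sum(H_exact) + sum(H_acc) / n_paths
    den = sum(S_exact) + sum(S_acc) / n_paths
    return num / den
\end{verbatim}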
Let us begin with a small chain of 4 sites with $U=12$. From now on, we shall take $t=1$.
The size of the linear space is ${\binom{4}{2}}^2= 36$ and the ground-state energy obtained by exact diagonalization
is $E_0=-0.768068...$. The two variational parameters of the trial vector have been optimized and fixed at the values
$\alpha=1.292$ and $\beta=0.552$, with a variational energy
of $E_v=-0.495361...$. In what follows,
$|I_0\rangle$ will be systematically chosen as one of the two N\'eel states, \eg, $|I_0\rangle =|\uparrow,\downarrow, \uparrow,\downarrow\rangle$.
Figure \ref{fig1} shows the convergence of $H_p$ as a function of $p$ for different values of the reference energy $E$.
We consider the simplest case where a single-state domain is associated with each state.
Five different values of $E$ have been chosen, namely $E=-1.6,-1.2,-1,-0.9$, and
$E=-0.8$. Only $H_0$ is computed analytically ($p_{ex}=0$). At the scale of the figure, error bars are too small to be visible.
When $E$ is far from the exact value $E_0=-0.768...$, the convergence is very rapid and only a few terms of the $p$-expansion are necessary.
In contrast, when $E$ approaches the exact energy,
a slower convergence is observed, as expected from the divergence of the matrix elements of the
Green's matrix at $E=E_0$, where the expansion does not converge at all. Note the oscillations of the curves as a function of $p$, due to a
parity effect specific to this system. In practice, this is not much of a problem since
a smoothly convergent behavior is nevertheless observed for both the even- and odd-parity curves.
The ratio ${\cal E}_{QMC}(E,p_{ex}=1,p_{max})$ as a function of $E$ is presented in Fig.~\ref{fig2}. Here, $p_{max}$ is taken sufficiently large
so that convergence at large $p$ is reached. The values of $E$ are $-0.800$, $-0.795$, $-0.790$, $-0.785$, and $-0.780$. The curve is then extrapolated toward the fixed point using
the two-component expression. The estimate of the energy obtained from ${\cal E}(E)=E$ is $-0.76807(5)$, in full agreement with the exact value of $-0.768068...$.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{fig1}
\caption{1D-Hubbard model, $N=4$, $U=12$. $H_p$ as a function of $p$ for $E=-1.6,-1.2,-1.0,-0.9,-0.8$. $H_0$ is
computed analytically and $H_p$ ($p > 0$) by Monte Carlo. Error bars are smaller than the symbol size.}
\label{fig1}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{fig2}
\caption{1D-Hubbard model, $N=4$ and $U=12$. $\mathcal{E}(E)$ as a function of $E$.
The horizontal and vertical lines are at $\mathcal{E}(E_0)=E_0$ and $E=E_0$, respectively.
$E_0=-0.768068...$ is the exact energy. The dotted line is the two-component extrapolation.
Error bars are smaller than the symbol size.}
\label{fig2}
\end{figure}
Table \ref{tab1} illustrates the dependence of the Monte Carlo results on the choice of the domain. The reference energy is $E=-1$.
The first column indicates the various domains, each consisting of a union of elementary domains as explained above.
The first line of the table gives the results obtained when using a minimal single-state domain for all states, and the last
one those for the maximal domain containing the full linear space. The size of the various domains is given in the second column, the average trapping time
for the state $|I_0\rangle$ in the third column, and an estimate of the speed of convergence of the $p$-expansion for the energy in the fourth column.
To quantify the rate of convergence, we report the quantity $p_{conv}$, defined as
the smallest value of $p$ for which the energy is converged to six decimal places. The smaller $p_{conv}$, the better the convergence.
Although this is a rough estimate, it is sufficient here for our purpose.
As clearly seen, the speed of convergence is directly related to the magnitude of $\bar{t}_{I_0}$. The
longer the stochastic trajectories remain trapped within the domain, the better the convergence. Of course, when the domain is chosen to be the full space,
the average trapping time becomes infinite. Let us emphasize that the rate of convergence has no reason to be related to the size of the domain.
For example, the domain ${\cal D}(0,3) \cup {\cal D}(1,0)$ has a trapping time for the N\'eel state of 6.2, while
the domain ${\cal D}(0,2) \cup {\cal D}(1,0)$, which has almost the same number of states (28 states), has an average trapping time about six times longer. Finally, the last column gives the energy obtained for
$E=-1$. The energy is expected to be independent of the domain and to converge to a common value, which is indeed the case here.
The exact value, ${\cal E}(E=-1)=-0.75272390...$, can be found in the last row of the table, for the case of a domain corresponding to the full space.
In sharp contrast, the statistical error depends strongly on the type of domain used. As expected, the largest error of $3 \times 10^{-5}$ is obtained in the case of
a single-state domain for all states. The smallest statistical error is obtained for the ``best'' domain, that is, the one having the largest average
trapping time. Using this domain leads to a reduction of the statistical error by about three orders of magnitude, nicely illustrating the
critical importance of the domains employed.
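Although in our method $\bar{t}_{I_0}$ is obtained analytically from the domain Green's function, its meaning is easily illustrated by brute-force simulation. The following sketch is ours only; the row-stochastic matrix \texttt{T} is a generic discrete-time stand-in for the actual dynamics, and the step count stands in for the trapping time:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def average_trapping_time(T, domain, i0, n_traj=100_000):
    """Estimate the mean number of steps a walker started at
    state i0 spends inside 'domain' before exiting, for a
    row-stochastic transition matrix T."""
    inside = set(domain)
    n_states = len(T)
    total = 0
    for _ in range(n_traj):
        i = i0
        while i in inside:
            total += 1
            i = rng.choice(n_states, p=T[i])  # one move
    return total / n_traj
\end{verbatim}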
\begin{table}[h!]
\centering
\caption{$N$=4, $U$=12, $E$=-1, $\alpha=1.292$, $\beta=0.552$, $p_{ex}=4$. Simulation with 20 independent blocks and $10^5$ stochastic paths
starting from the N\'eel state. $\bar{t}_{I_0}$
is the average trapping time for the
N\'eel state. $p_{\rm conv}$ is a measure of the convergence of ${\cal E}_{QMC}(p)$ as a function of $p$, see text.}
\label{tab1}
\begin{ruledtabular}
\begin{tabular}{lcccl}
Domain & Size & $\bar{t}_{I_0}$ & $p_{\rm conv}$ & $\;\;\;\;\;\;{\cal E}_{QMC}$ \\
\hline
Single & 1 & 0.026 & 88 &$\;\;\;\;$-0.75276(3)\\
${\cal D}(0,3)$ & 2 & 2.1 & 110 &$\;\;\;\;$-0.75276(3)\\
${\cal D}(0,2)$ & 4 & 2.1 & 106 &$\;\;\;\;$-0.75275(2)\\
${\cal D}(0,1)$ & 6 & 2.1& 82 &$\;\;\;\;$-0.75274(3)\\
${\cal D}(0,3)$ $\cup$ ${\cal D}(1,1)$ &14 &4.0& 60 & $\;\;\;\;$-0.75270(2)\\
${\cal D}(0,3)$ $\cup$ ${\cal D}(1,0)$ &26 &6.2& 45 & $\;\;\;\;$-0.752730(7) \\
${\cal D}(0,2)$ $\cup$ ${\cal D}(1,1)$ &16 &10.1 & 36 &$\;\;\;\;$-0.75269(1)\\
${\cal D}(0,2)$ $\cup$ ${\cal D}(1,0)$ &28 &34.7 & 14&$\;\;\;\;$-0.7527240(6)\\
${\cal D}(0,1)$ $\cup$ ${\cal D}(1,1)$ &18 & 10.1 & 28 &$\;\;\;\;$-0.75269(1)\\
${\cal D}(0,1)$ $\cup$ ${\cal D}(1,0)$ &30 & 108.7 & 11&$\;\;\;\;$-0.75272400(5) \\
${\cal D}(0,3)$ $\cup$ ${\cal D}(1,1)$ $\cup$ ${\cal D}$(2,0) &20 & 4.1 & 47 &$\;\;\;\;$-0.75271(2)\\
${\cal D}(0,3)$ $\cup$ ${\cal D}(1,0)$ $\cup$ ${\cal D}$(2,0) &32 & 6.5 & 39 &$\;\;\;\;$-0.752725(3)\\
${\cal D}(0,2)$ $\cup$ ${\cal D}(1,1)$ $\cup$ ${\cal D}$(2,0) &22 & 10.8 & 30 &$\;\;\;\;$-0.75270(1)\\
${\cal D}(0,2)$ $\cup$ ${\cal D}(1,0)$ $\cup$ ${\cal D}$(2,0) &34 & 52.5 & 13&$\;\;\;\;$-0.7527236(2)\\
${\cal D}(0,1)$ $\cup$ ${\cal D}(1,1)$ $\cup$ ${\cal D}$(2,0) & 24 & 10.8 & 26&$\;\;\;\;$-0.75270(1)\\
${\cal D}(0,1)$ $\cup$ ${\cal D}(1,0)$ $\cup$ ${\cal D}$(2,0) & 36 & $\infty$&1&$\;\;\;\;$-0.75272390\\
\end{tabular}
\end{ruledtabular}
\end{table}
As a general rule, it is always preferable to avoid the Monte Carlo evaluation of any quantity that can be computed analytically. Here, we apply
this idea to the energy, Eq.~(\ref{calE}), whose first $p$-components can be evaluated exactly up to some maximal value of $p$, denoted $p_{ex}$.
Table \ref{tab2} reports the statistical fluctuations of the energy, for the simulation of Table \ref{tab1}, both for the case of a single-state main domain and for the domain having the largest average trapping time,
namely ${\cal D}(0,1) \cup {\cal D}(1,0)$ (see Table \ref{tab1}). The results show that it is indeed worthwhile to compute exactly as many components
as possible. For the single-state domain, a reduction of the statistical error by a factor of about two is obtained when passing from no analytical computation, $p_{ex}=0$, to the
case where the components of $H_p$ and $S_p$ up to $p_{ex}=8$ are computed exactly.
For the best domain, the impact is much more important, with a reduction of more than three orders of
magnitude in the statistical error. Table \ref{tab3} reports
the $p$-converged energies, with the statistical error on the last digit, for $E=
-0.8, -0.795, -0.79, -0.785$, and $-0.78$. These values are displayed in Fig.~\ref{fig2}. As seen in the figure, the behavior of ${\cal E}$ as a function of
$E$ is very close to linear. The values extrapolated from these five energies with the three fitting functions are also reported.
Using the linear fitting function
leads to an energy of -0.7680282(5), to be compared with the exact value of -0.768068... A small bias of about $4 \times 10^{-5}$ is observed. This bias vanishes within the
statistical error with the quadratic fit, and the same holds for the two-component function.
Our final value is in full agreement with the
exact value to about six decimal places.
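As an illustration of this extrapolation step, the following sketch fits the data of Table \ref{tab3} with the linear and quadratic forms and solves the fixed-point condition ${\cal E}(E)=E$ (the two-component form used in the text is not reproduced here):
\begin{verbatim}
import numpy as np

# (E, E_QMC) data of Table tab3
E  = np.array([-0.800, -0.795, -0.790, -0.785, -0.780])
EQ = np.array([-0.7654686, -0.7658622, -0.7662607,
               -0.7666642, -0.7670729])

for deg in (1, 2):
    c = np.polyfit(E, EQ, deg)              # fit of E_QMC(E)
    r = np.roots(np.polysub(c, [1.0, 0.0])) # solve fit(E) = E
    E0 = r[np.argmin(np.abs(r - E.mean()))].real
    print(f"degree {deg}: E0 = {E0:.7f}")
# degree 1 gives ~ -0.76803 and degree 2 ~ -0.76807,
# in line with the fits reported in Table tab3.
\end{verbatim}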
Table \ref{tab4} shows the evolution of the average trapping time as a function of $U$ when using ${\cal D}(0,1) \cup {\cal D}(1,0)$ as the main
domain. We also report the variational and exact energies
together with the values of the optimized parameters of the trial wave function. As $U$ increases, the configurations with zero or one double occupation
become more and more predominant and the average trapping time increases. For very large values of $U$ (say, $U \ge 12$), the increase of $\bar{t}_{I_0}$
becomes particularly steep.\\
Finally, in Table \ref{tab5} we report the results obtained for larger systems at $U=12$, with the size of the chain ranging from $N=4$ (36 states)
to $N=12$ ($\sim 10^6$ states). No careful construction of domains maximizing the average trapping time has been performed; we have merely chosen domains
of reasonable size (at most 2682 states) by taking a small number of double occupations (only $n_D=0,1$) and a not too small number of
nearest-neighbor antiparallel pairs.
As seen, the average trapping time for the chosen domains decreases as the number of sites increases. This is of course undesirable, since
the efficiency of the approach may gradually deteriorate for large systems. A more elaborate way of constructing domains is clearly called for.
The QMC energies extrapolated using the two-component function are also reported.
Similarly to what has been done for $N=4$, the extrapolation is performed using about five values of the reference energy. The extrapolated QMC
energies are in full agreement
with the exact values within error bars. However, the statistical error increases with the system size. To get lower error bars,
we need better trial wave functions, better domains, and also longer simulation times; let us mention that all the calculations presented here have been performed on a single
laptop. Of course, it will also be particularly interesting to take advantage of the fully parallelizable character of the algorithm to get much lower error bars.
All these aspects will be considered in a forthcoming work.
\begin{table}[h!]
\caption{$N=4$, $U=12$, and $E=-1$. Dependence of the statistical error of the energy on the number of $p$-components calculated
analytically. Same simulation as for Table \ref{tab1}. Results are presented when a single-state domain
is used for all states and when
${\cal D}(0,1) \cup {\cal D}(1,0)$ is chosen as the main domain.}
\label{tab2}
\begin{ruledtabular}
\begin{tabular}{lcc}
$p_{ex}$ & single-state & ${\cal D}(0,1) \cup {\cal D}(1,0)$ \\
\hline
$0$ & $4.3 \times 10^{-5}$ &$ 347 \times 10^{-8}$ \\
$1$ & $4.0 \times10^{-5}$ &$ 377 \times 10^{-8}$\\
$2$ & $3.7 \times 10^{-5}$ &$ 43 \times 10^{-8}$\\
$3$ & $3.3 \times 10^{-5}$ &$ 46 \times 10^{-8}$\\
$4$ & $2.6 \times 10^{-5}$ &$ 5.6 \times 10^{-8}$\\
$5$ & $2.5 \times10^{-5}$ &$ 6.0 \times 10^{-8}$\\
$6$ & $2.3 \times10^{-5}$ &$ 0.7 \times 10^{-8}$\\
$7$ & $2.2 \times 10^{-5}$ &$ 0.6 \times 10^{-8}$\\
$8$ & $2.2 \times10^{-5}$ &$ 0.05 \times 10^{-8}$\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[h!]
\caption{$N=4$, $U=12$, $\alpha=1.292$, $\beta=0.552$. Main domain = ${\cal D}(0,1) \cup {\cal D}(1,0)$. Simulation with 20 independent blocks and $10^6$ paths.
$p_{ex}=4$. The various fits are done with the five values of $E$.}
\label{tab3}
\begin{ruledtabular}
\begin{tabular}{lc}
$E$ & $E_{QMC}$ \\
\hline
-0.8 &-0.7654686(2)\\
-0.795&-0.7658622(2)\\
-0.79 &-0.7662607(3)\\
-0.785&-0.7666642(4)\\
-0.78 &-0.7670729(5)\\
$E_0$ linear fit & -0.7680282(5)\\
$E_0$ quadratic fit & -0.7680684(5)\\
$E_0$ two-component fit & -0.7680676(5)\\
$E_0$ exact & -0.768068...\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[h!]
\caption{$N$=4. Optimized variational parameters, variational and exact energies, and average trapping time of the N\'eel state as a function of $U$, for the main domain ${\cal D}(0,1) \cup {\cal D}(1,0)$.}
\label{tab4}
\begin{ruledtabular}
\begin{tabular}{ccccc}
$U$ & $\alpha,\beta$ & $E_{var}$ & $E_{ex}$ & $\bar{t}_{I_0}$ \\
\hline
8 & 0.908,\;0.520 & -0.770342... &-1.117172... & 33.5\\
10 & 1.116,\;0.539 & -0.604162... &-0.911497... & 63.3\\
12 & 1.292,\;0.552 & -0.495361... &-0.768068... & 108.7\\
14 & 1.438,\;0.563 & -0.419163... &-0.662871... & 171.7 \\
20 & 1.786,\;0.582 & -0.286044... &-0.468619... & 504.5 \\
50 & 2.690,\;0.609 & -0.110013... &-0.188984... & 8040.2 \\
200 & 4.070,\;0.624& -0.026940... &-0.047315... & 523836.0 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table*}[h!]
\caption{$U=12$. The fits to extrapolate the QMC energies are done using the two-component function.}
\label{tab5}
\begin{ruledtabular}
\begin{tabular}{crcrccccc}
$N$ & Size Hilbert space & Domain & Domain size & $\alpha,\beta$ &$\bar{t}_{I_0}$ & $E_{var}$ & $E_{ex}$ & $E_{QMC}$\\
\hline
4 & 36 & ${\cal D}(0,1) \cup {\cal D}(1,0)$ & 30 &1.292, \; 0.552& 108.7 & -0.495361 & -0.768068 & -0.7680676(5)\\
6 & 400 & ${\cal D}(0,1) \cup {\cal D}(1,0)$ &200 & 1.124,\; 0.689 &57.8 & -0.633297 & -1.215395& -1.215389(9)\\
8 & 4 900 & ${\cal D}(0,1) \cup {\cal D}(1,0)$ & 1 190 & 0.984,\; 0.788 & 42.8 & -0.750995 & -1.66395& -1.6637(2)\\
10 & 63 504 & ${\cal D}(0,5) \cup {\cal D}(1,4)$ & 2 682 & 0.856,\; 0.869& 31.0 & -0.855958 & -2.113089& -2.1120(7)\\
12 & 853 776 & ${\cal D}(0,8) \cup {\cal D}(1,7)$ & 1 674 & 0.739,\; 0.938 & 16.7 & -0.952127 & -2.562529& -2.560(6)\\
\end{tabular}
\end{ruledtabular}
\end{table*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Summary and perspectives}
\label{sec:conclu}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In this work, it has been shown how to integrate out exactly, within a DMC framework, the contribution of all
stochastic trajectories trapped in given domains of the Hilbert space, and the corresponding general equations have been derived.
In this way, a new effective stochastic dynamics connecting only the domains, and not the individual states, is defined.
A key property of the effective dynamics is that it does not depend on the shape of the domains used for each state, so rather general
domains, overlapping or not, can be used. To obtain the effective transition probability, which gives the probability of going from
one domain to another, and the renormalized estimators,
the Green's functions restricted to the domains have to be computed analytically; in practice, matrices whose size is the number
of states of the sampled domains need to be inverted. This is the main computationally intensive step of the method.
The efficiency of the method is directly related to
the magnitude of the average time spent by the stochastic trajectories within each domain. Being able to define domains with large average trapping times
is the key aspect of the method, since it may lead to an important reduction of the statistical error, as illustrated in our numerical application.
A tradeoff thus has to be found between the complexity of the domains maximizing the average trapping time and the cost of computing
the local Green's functions associated with such domains. In practice, there is no general rule to construct efficient domains.
For each system at hand, we need to determine on physical grounds which regions of the space are preferentially sampled by
the stochastic trajectories and to build domains of minimal size enclosing such regions. In the first application presented here, on the 1D-Hubbard model,
we exploit the physics of the large-$U$ regime, which is known to approach the Heisenberg limit where double occupations have a small weight.
This simple example has been chosen to illustrate the various aspects of the approach. However, our goal is of course to
tackle much larger systems, like those treated by state-of-the-art methods such as selected CI \cite{Huron_1973,Harrison_1991,Giner_2013,Holmes_2016,Schriber_2016,Tubman_2020}, FCIQMC \cite{Booth_2009,Cleland_2010}, AFQMC \cite{Zhang_2003}, or DMRG \cite{White_1999,Chan_2011}.
Here, we have mainly focused on the theoretical aspects of the approach. Reaching larger systems will require a
more elaborate implementation of the method in order to keep the cost of the simulation under control.
This was beyond the scope of the present study and will be presented in a forthcoming work.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\acknowledgements{
P.F.L., A.S., and M.C. acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No.~863481).
This work was supported by the European Centre of
Excellence in Exascale Computing TREX --- Targeting Real Chemical
Accuracy at the Exascale. This project has received funding from the
European Union's Horizon 2020 --- Research and Innovation program ---
under grant agreement no.~952165.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\appendix
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Particular case of the $2\times2$ matrix}
\label{app:A}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
For the simplest case of a two-state system, the fundamental equation (\ref{eqfond}) reads
\be
{\cal I}=
\langle I_0|\frac{1}{H-E}|\Psi\rangle = \langle I_0|P_0\frac{1}{H-E} P_0|\Psi\rangle
+ \sum_{p=1}^{\infty} {\cal I}_p
\ee
with
\be
{\cal I}_p=
\sum_{I_1 \notin {\cal D}_0, \hdots , I_p \notin {\cal D}_{p-1}}
\Big[ \prod_{k=0}^{p-1} \langle I_k| P_k \frac{1}{H-E} P_k (-H)(1-P_k)|I_{k+1} \rangle \Big]
\langle I_p| P_p \frac{1} {H-E} P_p|\Psi\rangle
\label{eqfond}
\ee
To treat simultaneously the two possible cases for the final state, $|I_p\rangle =|1\rangle$ or $|2\rangle$,
the equation has been slightly generalized to a general vector for the final state, written as
\be
|\Psi\rangle = \Psi_1 |1\rangle + \Psi_2 |2\rangle
\ee
where
$|1\rangle$ and $|2\rangle$ denote the two states. Let us choose a single-state domain for both states, namely ${\cal D}_1=\{|1\rangle \}$ and ${\cal D}_2=\{ |2\rangle \}$. Note that the single exit state of each state is the other state. Thus, there are only two possible deterministic ``alternating'' paths, namely either
$|1\rangle \to |2\rangle \to |1\rangle \to \cdots$ or $|2\rangle \to |1\rangle \to |2\rangle \to \cdots$
We introduce the following quantities
\begin{align}
A_1 & = \langle 1| P_1 \frac{1}{H-E} P_1 (-H)(1-P_1)|2\rangle
\\
A_2 & = \langle 2| P_2 \frac{1}{H-E} P_2 (-H) (1-P_2)|1\rangle
\label{defA}
\end{align}
and
\begin{align}
C_1 & = \langle 1| P_1 \frac{1}{H-E} P_1 |\Psi\rangle
\\
C_2 & = \langle 2| P_2 \frac{1}{H-E} P_2 |\Psi\rangle
\end{align}
Choosing, for example, $|I_0\rangle =|1\rangle$, we then have
\be
{\cal I}_1= A_1 C_2, \;\; {\cal I}_2 = A_1 A_2 C_1, \;\; {\cal I}_3 = A_1 A_2 A_1 C_2, \;\; \ldots
\ee
that is,
\be
{\cal I}_{2k+1} = \frac{C_2}{A_2} (A_1 A_2)^{k+1}
\ee
and
\be
{\cal I}_{2k} = C_1 (A_1 A_2)^{k}.
\ee
Then
\be
\sum_{p=1}^{\infty}
{\cal I}_p
= \frac{C_2}{A_2} \big[ \sum_{p=1}^{\infty} (A_1 A_2)^p \big] + {C_1} \big[ \sum_{p=1}^{\infty} (A_1 A_2)^p \big]
\ee
which gives
\be
\sum_{p=1}^{\infty}
{\cal I}_p
= A_1 \frac{C_2 + C_1 A_2}{1-A_1 A_2}
\ee
and thus
\be
\langle 1 | \frac{1} {H-E} |\Psi\rangle
=
C_1 + A_1 \frac{C_2 + C_1 A_2}{1-A_1 A_2}
\ee
For a $2 \times 2$ matrix it is easy to evaluate the $A_i$'s of Eq.~(\ref{defA}); we
have
$$
A_i = -\frac{H_{12}}{H_{ii}-E}, \;\;\; C_i= \frac{\Psi_i}{H_{ii}-E}, \;\;\; i=1,2.
$$
Then
$$
\langle 1 | \frac{1} {H-E} |\Psi\rangle
= \frac{1}{H_{11}-E} \Psi_1 -H_{12} \frac{ \big( \Psi_2 - \frac{ H_{12}\Psi_1}{H_{11}-E} \big)}
{ (H_{11}-E)(H_{22}-E) - H^2_{12}}
$$
$$
= \frac{ (H_{22}-E) \Psi_1}{\Delta} -\frac{H_{12} \Psi_2}{\Delta}
$$
where $\Delta=(H_{11}-E)(H_{22}-E)-H_{12}^2$ is the determinant of the matrix $H-E$. It is easy to check that the RHS is equal to the LHS, thus validating the fundamental equation
for this particular case.
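As a sanity check, the algebra above is easily verified numerically; the sketch below (ours, for an arbitrary symmetric matrix, reference energy, and vector) compares the summed series with a direct linear solve. Note that the closed form remains valid even when $|A_1 A_2| \ge 1$, where the series itself no longer converges.
\begin{verbatim}
import numpy as np

# Numerical check of Appendix A for a random symmetric 2x2
# matrix H, a reference energy E, and a vector Psi.
rng = np.random.default_rng(1)
H = rng.normal(size=(2, 2))
H = (H + H.T) / 2                  # make H symmetric
E = -0.3
psi = rng.normal(size=2)

A1 = -H[0, 1] / (H[0, 0] - E)      # Eq. (defA), single-state
A2 = -H[0, 1] / (H[1, 1] - E)      # domains for both states
C1 = psi[0] / (H[0, 0] - E)
C2 = psi[1] / (H[1, 1] - E)

# geometric series summed in closed form
series = C1 + A1 * (C2 + C1 * A2) / (1 - A1 * A2)
# direct evaluation of <1|(H-E)^{-1}|Psi>
direct = np.linalg.solve(H - E * np.eye(2), psi)[0]
print(series, direct)   # the two numbers coincide (round-off)
\end{verbatim}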
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliography{g}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}