\documentclass[aip,jcp,reprint,noshowkeys,superscriptaddress]{revtex4-2}
\usepackage{graphicx,dcolumn,bm,xcolor,microtype,multirow,amscd,amsmath,amssymb,amsfonts,physics,longtable,wrapfig,txfonts}
\usepackage[version=4]{mhchem}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[
colorlinks=true,
citecolor=blue,
breaklinks=true
]{hyperref}
\urlstyle{same}
\begin{document}
\title{Dynamical Kernels for Optical Excitations}
\author{Pierre-Fran\c{c}ois \surname{Loos}}
\email{loos@irsamc.ups-tlse.fr}
\affiliation{\LCPQ}
\begin{abstract}
We discuss the physical properties and accuracy of three distinct dynamical (\ie, frequency-dependent) kernels for the computation of optical excitations within linear response theory:
i) an \textit{a priori} built kernel inspired by the dressed time-dependent density-functional theory (TDDFT) kernel proposed by Maitra and coworkers [\href{https://doi.org/10.1063/1.1651060}{J.~Chem.~Phys.~120, 5932 (2004)}],
ii) the dynamical kernel stemming from the Bethe-Salpeter equation (BSE) formalism derived originally by Strinati [\href{https://doi.org/10.1007/BF02725962}{Riv.~Nuovo Cimento 11, 1--86 (1988)}], and
iii) the second-order BSE kernel derived by Yang and coworkers [\href{https://doi.org/10.1063/1.4824907}{J.~Chem.~Phys.~139, 154109 (2013)}].
In particular, using a simple two-level model, we analyze, for each kernel, the appearance of spurious excitations, as first evidenced by Romaniello and collaborators [\href{https://doi.org/10.1063/1.3065669}{J.~Chem.~Phys.~130, 044108 (2009)}], due to the approximate nature of the kernels.
%\\
%\bigskip
%\begin{center}
% \boxed{\includegraphics[width=0.5\linewidth]{TOC}}
%\end{center}
%\bigskip
\end{abstract}
\maketitle
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Linear response theory}
\label{sec:LR}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Linear response theory is a powerful approach that gives direct access to the optical excitation energies $\omega_S$ of a given electronic system (such as a molecule) and to their corresponding oscillator strengths $f_S$ [extracted from their eigenvectors $\T{(\bX_S \bY_S)}$] via the response of the system to a weak electromagnetic field. \cite{Oddershede_1977,Casida_1995,Petersilka_1996}
From a practical point of view, these quantities are obtained by solving non-linear, frequency-dependent Casida-like equations in the space of single excitations and de-excitations \cite{Casida_1995}
\begin{equation} \label{eq:LR}
\begin{pmatrix}
\bR^{\sigma}(\omega_S) & \bC^{\sigma}(\omega_S)
\\
-\bC^{\sigma}(-\omega_S)^* & -\bR^{\sigma}(-\omega_S)^*
\end{pmatrix}
\cdot
\begin{pmatrix}
\bX_S^{\sigma}
\\
\bY_S^{\sigma}
\end{pmatrix}
=
\omega_S
\begin{pmatrix}
\bX_S^{\sigma}
\\
\bY_S^{\sigma}
\end{pmatrix}
\end{equation}
where the explicit expressions of the resonant and coupling blocks, $\bR^{\sigma}(\omega)$ and $\bC^{\sigma}(\omega)$, depend on the spin manifold ($\sigma =$ $\updw$ for singlets and $\sigma =$ $\upup$ for triplets) and the level of approximation that one employs.
Neglecting the coupling block [\ie, $\bC^{\sigma}(\omega) = 0$] between the resonant and anti-resonant parts, $\bR^{\sigma}(\omega)$ and $-\bR^{\sigma}(-\omega)^*$, is known as the Tamm-Dancoff approximation (TDA).
In the absence of symmetry breaking, \cite{Dreuw_2005} the non-linear eigenvalue problem defined in Eq.~\eqref{eq:LR} has particle-hole symmetry, which means that it is invariant under the transformation $\omega \to -\omega$.
Therefore, without loss of generality, we will restrict our analysis to positive frequencies.
In the one-electron basis of (real) spatial orbitals $\lbrace \MO{p}(\br) \rbrace$, we will assume that the elements of the matrices defined in Eq.~\eqref{eq:LR} have the following generic forms: \cite{Dreuw_2005}
\begin{subequations}
\begin{gather}
R_{ia,jb}^{\sigma}(\omega) = (\e{a} - \e{i}) \delta_{ij} \delta_{ab} + f_{ia,jb}^{\Hxc,\sigma}(\omega)
\\
C_{ia,jb}^{\sigma}(\omega) = f_{ia,bj}^{\Hxc,\sigma}(\omega)
\end{gather}
\end{subequations}
where $\delta_{pq}$ is the Kronecker delta, $\e{p}$ is the one-electron (or quasiparticle) energy associated with $\MO{p}(\br)$, and
\begin{equation} \label{eq:kernel}
f_{ia,jb}^{\Hxc,\sigma}(\omega)
= \iint \MO{i}(\br) \MO{a}(\br) f^{\Hxc,\sigma}(\omega) \MO{j}(\br') \MO{b}(\br') d\br d\br'
\end{equation}
Here, $i$ and $j$ are occupied orbitals, $a$ and $b$ are unoccupied orbitals, and $p$ and $q$ indicate arbitrary orbitals.
In Eq.~\eqref{eq:kernel},
\begin{equation} \label{eq:kernel-Hxc}
f^{\Hxc,\sigma}(\omega) = f^{\Hx,\sigma} + f^{\co,\sigma}(\omega)
\end{equation}
is the (spin-resolved) Hartree-exchange-correlation (Hxc) dynamical kernel.
In the case of a spin-independent kernel, we will drop the superscript $\sigma$.
As readily seen from Eq.~\eqref{eq:kernel-Hxc}, only the correlation (c) part of the kernel is frequency dependent and, in a wave function context, the static Hartree-exchange (Hx) matrix elements read
\begin{equation}
f_{ia,jb}^{\Hx,\sigma} = 2\sigma \ERI{ia}{jb} - \ERI{ib}{ja}
\end{equation}
where $\sigma = 1 $ or $0$ for singlet and triplet excited states (respectively), and
\begin{equation}
\ERI{ia}{jb} = \iint \MO{i}(\br) \MO{a}(\br) \frac{1}{\abs{\br - \br'}} \MO{j}(\br') \MO{b}(\br') d\br d\br'
\end{equation}
are the usual two-electron integrals.
The launchpad of the present study is that, thanks to its non-linear nature stemming from its frequency dependence, a dynamical kernel potentially generates more than just single excitations.
Unless otherwise stated, atomic units are used and we assume real quantities throughout this manuscript.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The concept of dynamical quantities}
\label{sec:dyn}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
For a chemist, it may be difficult to grasp the concept of dynamical properties, the motivation behind their introduction, and their actual usefulness.
Here, we will try to give a pedagogical example showing the importance of dynamical quantities and their main purposes. \cite{Romaniello_2009b,Sangalli_2011,ReiningBook}
To do so, let us consider the usual chemical scenario where one wants to get the optical excitations of a given system.
In most cases, this can be done by solving a set of linear equations of the form
\begin{equation}
\label{eq:lin_sys}
\bA \cdot \bc = \omega_S \, \bc
\end{equation}
where $\omega_S$ is one of the optical excitation energies of interest and $\bc$ its corresponding transition vector.
If we assume that the matrix $\bA$ is diagonalizable and of size $N \times N$, the \textit{linear} set of equations \eqref{eq:lin_sys} yields $N$ excitation energies.
However, in practice, $N$ might be (very) large (\eg, equal to the total number of single and double excitations generated from a reference Slater determinant), and it might therefore be practically useful to recast this system as two smaller coupled systems, such that
\begin{equation}
\label{eq:lin_sys_split}
\begin{pmatrix}
\bA_1 & \T{\bb} \\
\bb & \bA_2 \\
\end{pmatrix}
\cdot
\begin{pmatrix}
\bc_1 \\
\bc_2 \\
\end{pmatrix}
= \omega
\begin{pmatrix}
\bc_1 \\
\bc_2 \\
\end{pmatrix}
\end{equation}
where the blocks $\bA_1$ and $\bA_2$, of sizes $N_1 \times N_1$ and $N_2 \times N_2$ (with $N_1 + N_2 = N$), can be associated with, for example, the single and double excitations of the system.
This decomposition technique is often called L\"owdin partitioning in the literature. \cite{Lowdin_1963}

Solving each row of the system \eqref{eq:lin_sys_split} separately and assuming that $\omega \bI - \bA_2$ is invertible, we get
\begin{subequations}
\begin{gather}
\label{eq:row1}
\bA_1 \cdot \bc_1 + \T{\bb} \cdot \bc_2 = \omega \, \bc_1
\\
\label{eq:row2}
\bc_2 = (\omega \, \bI - \bA_2)^{-1} \cdot \bb \cdot \bc_1
\end{gather}
\end{subequations}
Substituting Eq.~\eqref{eq:row2} into Eq.~\eqref{eq:row1} yields the following effective \textit{non-linear}, frequency-dependent operator
\begin{equation}
\label{eq:non_lin_sys}
\Tilde{\bA}_1(\omega) \cdot \bc_1 = \omega \, \bc_1
\end{equation}
with
\begin{equation}
\Tilde{\bA}_1(\omega) = \bA_1 + \T{\bb} \cdot (\omega \, \bI - \bA_2)^{-1} \cdot \bb
\end{equation}
which has, by construction, exactly the same solutions as the linear system \eqref{eq:lin_sys} but a smaller dimension.
For example, an operator $\Tilde{\bA}_1(\omega)$ built in the single-excitation basis can potentially provide excitation energies for double excitations thanks to its frequency-dependent nature, the information from the double excitations being ``folded'' into $\Tilde{\bA}_1(\omega)$ via Eq.~\eqref{eq:row2}. \cite{ReiningBook}
Note that this \textit{exact} decomposition does not alter the values of the excitation energies in any way.
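To make this folding procedure more concrete, the short Python sketch below (an illustrative addition, assuming only NumPy; the matrices are random and not tied to any physical system) checks numerically that $\det[\Tilde{\bA}_1(\omega) - \omega \bI]$ vanishes at every eigenvalue of $\bA$ (provided $\omega$ is not also an eigenvalue of $\bA_2$), even though $\Tilde{\bA}_1(\omega)$ lives in the smaller $N_1$-dimensional space.
\begin{verbatim}
import numpy as np

# Random symmetric N x N matrix, split into blocks as in Eq. (lin_sys_split)
rng = np.random.default_rng(1)
N, N1 = 6, 2
A = rng.normal(size=(N, N))
A = 0.5 * (A + A.T)
A1, A2, b = A[:N1, :N1], A[N1:, N1:], A[N1:, :N1]

def det_folded(w):
    # det[ A1t(w) - w I ] with A1t(w) = A1 + b^T (w I - A2)^{-1} b
    A1t = A1 + b.T @ np.linalg.solve(w * np.eye(N - N1) - A2, b)
    return np.linalg.det(A1t - w * np.eye(N1))

# The folded determinant vanishes at all N eigenvalues of A
for w in np.linalg.eigvalsh(A):
    print(f"w = {w:+.6f}   det = {det_folded(w):+.2e}")
\end{verbatim}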
How have we been able to reduce the dimension of the problem while keeping the same number of solutions?
To do so, we have transformed a linear operator $\bA$ into a non-linear operator $\Tilde{\bA}_1(\omega)$ by making it frequency dependent.
In other words, we have sacrificed the linearity of the system in order to obtain a new, non-linear system of equations of smaller dimension [see Eq.~\eqref{eq:non_lin_sys}].
This procedure, which converts degrees of freedom into a frequency or energy dependence, is very general and can be applied in various contexts. \cite{Sottile_2003,Garniron_2018,QP2}
Thanks to its non-linearity, Eq.~\eqref{eq:non_lin_sys} can produce more solutions than its actual dimension.
However, because there is no free lunch, this non-linear system is obviously harder to solve than its corresponding linear analog given by Eq.~\eqref{eq:lin_sys}.
Nonetheless, approximations can now be applied to Eq.~\eqref{eq:non_lin_sys} in order to solve it efficiently.
For example, assuming that $\bA_2$ is a diagonal matrix is common practice (see, for example, Ref.~\onlinecite{Garniron_2018} and references therein).
Another of these approximations is the so-called \textit{static} approximation, where one sets the frequency to a particular value.
For example, as commonly done within the Bethe-Salpeter equation (BSE) formalism of many-body perturbation theory (MBPT), \cite{Strinati_1988} $\Tilde{\bA}_1(\omega) = \Tilde{\bA}_1 \equiv \Tilde{\bA}_1(\omega = 0)$.
In such a way, the operator $\Tilde{\bA}_1$ is made linear again by removing its frequency-dependent nature.
A similar example in the context of time-dependent density-functional theory (TDDFT) \cite{Runge_1984} is provided by the ubiquitous adiabatic approximation, \cite{Tozer_2000} which neglects all memory effects by making the exchange-correlation (xc) kernel static (\ie, frequency independent). \cite{Maitra_2016}
These approximations come with a heavy price as the number of solutions provided by the system of equations \eqref{eq:non_lin_sys} has now been reduced from $N$ to $N_1$.
Coming back to our example, in the static (or adiabatic) approximation, the operator $\Tilde{\bA}_1$ built in the single-excitation basis cannot provide double excitations anymore, and the $N_1$ excitation energies are associated with single excitations.
All additional solutions associated with higher excitations have been forever lost.
In the next section, we illustrate these concepts and the various tricks that can be used to recover some of these dynamical effects starting from the static eigenproblem.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Dynamical kernels}
\label{sec:kernel}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Exact Hamiltonian}
\label{sec:exact}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Let us consider a two-level quantum system made of two orbitals in its singlet ground state (\ie, the lowest orbital is doubly occupied). \cite{Romaniello_2009b}
We will label these two orbitals, $\MO{v}$ and $\MO{c}$, as valence ($v$) and conduction ($c$) orbitals with respective one-electron Hartree-Fock (HF) energies $\e{v}$ and $\e{c}$.
In a more quantum chemical language, these correspond to the HOMO and LUMO orbitals (respectively).
The ground state $\ket{0}$ has a one-electron configuration $\ket{v\bar{v}}$, while the doubly-excited state $\ket{D}$ has a configuration $\ket{c\bar{c}}$.
There is then only one possible single excitation, which corresponds to the transition $v \to c$ with different possible spin configurations.
As usual, this can produce a singlet singly-excited state $\ket{S} = (\ket{v\bar{c}} + \ket{c\bar{v}})/\sqrt{2}$, and a triplet singly-excited state $\ket{T} = (\ket{v\bar{c}} - \ket{c\bar{v}})/\sqrt{2}$. \cite{SzaboBook}
For the singlet manifold, the exact Hamiltonian in the basis of the (spin-adapted) configuration state functions reads
\begin{equation} \label{eq:H-exact}
\bH^{\updw} =
\begin{pmatrix}
\mel{0}{\hH}{0} & \mel{0}{\hH}{S} & \mel{0}{\hH}{D} \\
\mel{S}{\hH}{0} & \mel{S}{\hH}{S} & \mel{S}{\hH}{D} \\
\mel{D}{\hH}{0} & \mel{D}{\hH}{S} & \mel{D}{\hH}{D} \\
\end{pmatrix}
\end{equation}
with
\begin{subequations}
\begin{align}
\mel{0}{\hH}{0} & = 2\e{v} - \ERI{vv}{vv} = \EHF
\\
\mel{S}{\hH - \EHF}{S} & = \Delta\e{} + 2\ERI{vc}{cv} - \ERI{vv}{cc}
\\
\begin{split}
\mel{D}{\hH - \EHF}{D}
& = 2\Delta\e{} + \ERI{vv}{vv} + \ERI{cc}{cc}
\\
& + 2\ERI{vc}{cv} - 4\ERI{vv}{cc}
\end{split}
\\
\mel{0}{\hH}{S} & = 0
\\
\mel{S}{\hH}{D} & = \sqrt{2}[\ERI{vc}{cc} - \ERI{cv}{vv}]
\\
\mel{0}{\hH}{D} & = \ERI{vc}{cv}
\end{align}
\end{subequations}
and $\Delta\e{} = \e{c} - \e{v}$.
The energy of the only triplet state is simply $\mel{T}{\hH}{T} = \EHF + \Delta\e{} - \ERI{vv}{cc}$.
For the sake of illustration, we will use the same numerical example throughout this study, and consider the singlet ground state of the \ce{He} atom in Pople's 6-31G basis set.
This system contains two orbitals and the numerical values of the various quantities defined above are
\begin{subequations}
\begin{align}
\e{v} & = -0.914\,127
&
\e{c} & = + 1.399\,859
\\
\ERI{vv}{vv} & = 1.026\,907
&
\ERI{cc}{cc} & = 0.766\,363
\\
\ERI{vv}{cc} & = 0.858\,133
&
\ERI{vc}{cv} & = 0.227\,670
\\
\ERI{vv}{vc} & = 0.316\,490
&
\ERI{vc}{cc} & = 0.255\,554
\end{align}
\end{subequations}
This yields the following exact singlet and triplet excitation energies
\begin{align} \label{eq:exact}
\omega_{1}^{\updw} & = 1.92145
&
\omega_{3}^{\updw} & = 3.47880
&
\omega_{1}^{\upup} & = 1.47085
\end{align}
where $\omega_{1}^{\updw}$ and $\omega_{3}^{\updw}$ are the singlet single and double excitations (respectively), and $\omega_{1}^{\upup}$ is the triplet single excitation.
We are going to use these as references for the remainder of this study.
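For the sake of reproducibility, the following minimal Python sketch (an illustrative addition, assuming only NumPy) builds the exact Hamiltonian \eqref{eq:H-exact} from the numerical values given above and recovers the excitation energies of Eq.~\eqref{eq:exact}.
\begin{verbatim}
import numpy as np

# He/6-31G two-level model: orbital energies and two-electron integrals
ev, ec = -0.914127, 1.399859
vvvv, cccc = 1.026907, 0.766363
vvcc, vccv = 0.858133, 0.227670
vvvc, vccc = 0.316490, 0.255554
de = ec - ev

# Exact singlet Hamiltonian of Eq. (H-exact), measured from E_HF
Hsd = np.sqrt(2.0) * (vccc - vvvc)
H = np.array([
    [0.0,  0.0,                 vccv],
    [0.0,  de + 2*vccv - vvcc,  Hsd],
    [vccv, Hsd,                 2*de + vvvv + cccc + 2*vccv - 4*vvcc],
])
E = np.linalg.eigvalsh(H)
print("singlet omega_1 =", E[1] - E[0])         # ~1.92145
print("singlet omega_3 =", E[2] - E[0])         # ~3.47880
print("triplet omega_1 =", (de - vvcc) - E[0])  # ~1.47085
\end{verbatim}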
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Maitra's dynamical kernel}
\label{sec:Maitra}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The kernel proposed by Maitra and coworkers \cite{Maitra_2004,Cave_2004} in the context of dressed TDDFT (D-TDDFT) corresponds to an \textit{ad hoc} many-body theory correction to TDDFT.
More specifically, D-TDDFT adds to the static kernel a frequency-dependent part obtained by reverse-engineering the exact Hamiltonian: a pair of single and double excitations, assumed to be strongly coupled, is isolated from the spectrum and added manually to the static kernel.
The very same idea was taken further by Huix-Rotllant, Casida, and coworkers, \cite{Huix-Rotllant_2011} and tested on a large set of molecules.
Here, we start instead from a HF reference.
The static problem then corresponds to the TDHF Hamiltonian, which reduces to CIS within the TDA.
For the two-level model, the reverse-engineering process of the exact Hamiltonian \eqref{eq:H-exact} yields
\begin{equation} \label{eq:f-Maitra}
f_\text{M}^{\co,\updw}(\omega) = \frac{\abs*{\mel{S}{\hH}{D}}^2}{\omega - (\mel{D}{\hH}{D} - \mel{0}{\hH}{0}) }
\end{equation}
while $f_\text{M}^{\co,\upup}(\omega) = 0$.
The expression \eqref{eq:f-Maitra} can be easily obtained by folding the double excitation onto the single excitation, as explained in Sec.~\ref{sec:dyn}.
It is clear that one must know \textit{a priori} the structure of the Hamiltonian to construct such a dynamical kernel, and this obviously hampers its applicability to realistic photochemical systems where it is sometimes hard to get a clear picture of the interplay between excited states. \cite{Boggio-Pasqua_2007}
For the two-level model, the non-linear problem defined in Eq.~\eqref{eq:LR} provides the following effective Hamiltonian
\begin{equation} \label{eq:H-M}
\bH_\text{D-TDHF}^{\sigma}(\omega) =
\begin{pmatrix}
R_\text{M}^{\sigma}(\omega) & C_\text{M}^{\sigma}(\omega)
\\
-C_\text{M}^{\sigma}(-\omega) & -R_\text{M}^{\sigma}(-\omega)
\end{pmatrix}
\end{equation}
with
\begin{subequations}
\begin{gather}
\label{eq:R_M}
R_\text{M}^{\sigma}(\omega) = \Delta\e{} + 2 \sigma \ERI{vc}{cv} - \ERI{vv}{cc} + f_\text{M}^{\co,\sigma}(\omega)
\\
\label{eq:C_M}
C_\text{M}^{\sigma}(\omega) = 2 \sigma \ERI{vc}{cv} - \ERI{vc}{cv} + f_\text{M}^{\co,\sigma}(\omega)
\end{gather}
\end{subequations}
yielding the excitation energies reported in Table \ref{tab:Maitra} when diagonalized.
The TDHF Hamiltonian is obtained from Eq.~\eqref{eq:H-M} by setting $f_\text{M}^{\co,\sigma}(\omega) = 0$ in Eqs.~\eqref{eq:R_M} and \eqref{eq:C_M}.
In Fig.~\ref{fig:Maitra}, we plot $\det[\bH(\omega) - \omega \bI]$ as a function of $\omega$ for both the singlet (black and gray) and triplet (orange) manifolds.
The roots of $\det[\bH(\omega) - \omega \bI]$ indicate the excitation energies.
Because there is nothing to dress for the triplet state, only the static TDHF excitation energy is reported.
%%% TABLE I %%%
\begin{table}
\caption{Singlet and triplet excitation energies (in hartree) at various levels of theory.
\label{tab:Maitra}
}
\begin{ruledtabular}
\begin{tabular}{|c|cccc|c|}
Singlets & CIS & TDHF & D-CIS & D-TDHF & Exact \\
\hline
$\omega_1^{\updw}$ & 1.91119 & 1.89758 & 1.90636 & 1.89314 & 1.92145 \\
$\omega_3^{\updw}$ & & & 3.44888 & 3.44865 & 3.47880 \\
\hline
Triplets & & & & & Exact \\
\hline
$\omega_1^{\upup}$ & 1.45585 & 1.43794 & 1.45585 & 1.43794 & 1.47085 \\
\end{tabular}
\end{ruledtabular}
\end{table}
%%% %%% %%% %%%
%%% FIGURE 1 %%%
\begin{figure}
\includegraphics[width=\linewidth]{Maitra}
\caption{
$\det[\bH(\omega) - \omega \bI]$ as a function of $\omega$ for both the singlet (gray and black) and triplet (orange) manifolds.
The static TDHF Hamiltonian (dashed) and dynamic D-TDHF Hamiltonian (solid) are considered.
\label{fig:Maitra}
}
\end{figure}
%%% %%% %%% %%%
Although not particularly accurate for the single excitations, Maitra's dynamical kernel gives access to the double excitation with good accuracy and provides exactly the right number of solutions (two singlets and one triplet).
Note that this correlation kernel is known to work best in the weak correlation regime (which is the case here) where the true excitations have a clear single and double excitation character, \cite{Loos_2019,Loos_2020d} but it is not intended to explore strongly correlated systems. \cite{Carrascal_2018}
Its accuracy for the single excitations could certainly be improved in a DFT context.
However, this is not the point of the present investigation.
Table \ref{tab:Maitra} also reports the slightly-improved (thanks to error compensation) CIS and D-CIS excitation energies.
In particular, single excitations are greatly improved without altering the accuracy of the double excitation.
Graphically, the curves obtained for CIS and D-CIS are extremely similar to the ones of TDHF and D-TDHF depicted in Fig.~\ref{fig:Maitra}.
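Within the TDA, dressing CIS with the kernel \eqref{eq:f-Maitra} amounts to solving a scalar non-linear equation, or equivalently to unfolding it back into the quadratic equation associated with the $2 \times 2$ $(S,D)$ block of the exact Hamiltonian. The minimal Python sketch below (an illustrative addition, assuming only NumPy) recovers the D-CIS values of Table~\ref{tab:Maitra}.
\begin{verbatim}
import numpy as np

ev, ec = -0.914127, 1.399859
vvvv, cccc = 1.026907, 0.766363
vvcc, vccv = 0.858133, 0.227670
vvvc, vccc = 0.316490, 0.255554
de = ec - ev

Hss = de + 2*vccv - vvcc                     # <S|H - E_HF|S>
Hdd = 2*de + vvvv + cccc + 2*vccv - 4*vvcc   # <D|H - E_HF|D>
Hsd = np.sqrt(2.0) * (vccc - vvvc)           # <S|H|D>

# w = Hss + Hsd^2/(w - Hdd)  <=>  (w - Hss)(w - Hdd) - Hsd^2 = 0
roots = np.sort(np.roots([1.0, -(Hss + Hdd), Hss*Hdd - Hsd**2]))
print("D-CIS roots:", roots)                 # ~1.90636 and ~3.44888
\end{verbatim}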
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Dynamical BSE kernel}
\label{sec:BSE}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
As mentioned in Sec.~\ref{sec:dyn}, most BSE calculations performed nowadays are done within the static approximation. \cite{ReiningBook,Loos_2020e}
However, following Strinati's footsteps, \cite{Strinati_1982,Strinati_1984,Strinati_1988} several groups have explored this formalism beyond the static approximation by retaining (or reviving) the dynamical nature of the dynamically-screened Coulomb potential $W$ \cite{Sottile_2003,Romaniello_2009b,Sangalli_2011} or via a perturbative approach. \cite{Rohlfing_2000,Ma_2009a,Ma_2009b,Baumeier_2012b}
Based on the very same two-level model that we employ here, Romaniello and coworkers \cite{Romaniello_2009b} clearly evidenced that one can genuinely access additional excitations by solving the non-linear, frequency-dependent BSE eigenvalue problem.
For this particular system, they showed that a BSE kernel based on the random-phase approximation (RPA) produces indeed double excitations but also unphysical excitations, \cite{Romaniello_2009b} attributed to the self-screening problem. \cite{Romaniello_2009a}
This issue was resolved in the subsequent work of Sangalli \textit{et al.} \cite{Sangalli_2011} via the design of a diagrammatic number-conserving approach based on the folding of the second-RPA Hamiltonian. \cite{Wambach_1988}
Thanks to a careful diagrammatic analysis of the dynamic kernel, they showed that their approach produces the correct number of optically active poles, and this was further illustrated by computing the polarizability of two unsaturated hydrocarbon chains (\ce{C8H2} and \ce{C4H6}).
Within the so-called $GW$ approximation of MBPT, \cite{Golze_2019} one can easily compute the quasiparticle energies associated with the valence and conduction orbitals.
Assuming that $W$ has been calculated at the RPA level and within the TDA, the expression of the $\GW$ quasiparticle energy is
\begin{equation}
\e{p}^{\GW} = \e{p} + Z_{p}^{\GW} \SigGW{p}(\e{p})
\end{equation}
where $p = v$ or $c$,
\begin{equation}
\label{eq:SigGW}
\SigGW{p}(\omega) = \frac{2 \ERI{pv}{vc}^2}{\omega - \e{v} + \Omega} + \frac{2 \ERI{pc}{cv}^2}{\omega - \e{c} - \Omega}
\end{equation}
is the correlation part of the self-energy $\Sig{}$, and
\begin{equation}
Z_{p}^{\GW} = \qty( 1 - \left. \pdv{\SigGW{p}(\omega)}{\omega} \right|_{\omega = \e{p}} )^{-1}
\end{equation}
is the renormalization factor.
In Eq.~\eqref{eq:SigGW}, $\Omega = \Delta\e{} + 2 \ERI{vc}{cv}$ is the sole (singlet) RPA excitation energy of the system. In the following, we also define $\Delta\eGW{} = \eGW{c} - \eGW{v}$.
Numerically, we get
\begin{align}
\Omega & = 2.769\,327
&
\eGW{v} & = -0.863\,700
&
\eGW{c} & = +1.373\,640
\end{align}
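These values are straightforward to reproduce; the minimal Python sketch below (an illustrative addition, requiring no external library) evaluates the self-energy \eqref{eq:SigGW}, its frequency derivative, and the resulting linearized quasiparticle energies.
\begin{verbatim}
ev, ec = -0.914127, 1.399859
vvvc, vccc, vccv = 0.316490, 0.255554, 0.227670

Omega = (ec - ev) + 2*vccv      # singlet RPA (TDA) excitation energy

def sig(pv, pc, w):             # correlation self-energy of Eq. (SigGW)
    return 2*pv**2/(w - ev + Omega) + 2*pc**2/(w - ec - Omega)

def dsig(pv, pc, w):            # its frequency derivative (Z factor)
    return -2*pv**2/(w - ev + Omega)**2 - 2*pc**2/(w - ec - Omega)**2

for label, eps, pv, pc in (("v", ev, vvvc, vccv), ("c", ec, vccv, vccc)):
    Z = 1.0/(1.0 - dsig(pv, pc, eps))
    print(f"eps_{label}^GW = {eps + Z*sig(pv, pc, eps):+.6f}")
# Expected: eps_v^GW ~ -0.863700 and eps_c^GW ~ +1.373640
\end{verbatim}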
One can now build the dynamical BSE (dBSE) Hamiltonian \cite{Strinati_1988,Romaniello_2009b}
\begin{equation} \label{eq:HBSE}
\bH_{\dBSE}^{\sigma}(\omega) =
\begin{pmatrix}
R_{\dBSE}^{\sigma}(\omega) & C_{\dBSE}^{\sigma}(\omega)
\\
-C_{\dBSE}^{\sigma}(-\omega) & -R_{\dBSE}^{\sigma}(-\omega)
\end{pmatrix}
\end{equation}
with
\begin{subequations}
\begin{gather}
R_{\dBSE}^{\sigma}(\omega) = \Delta\eGW{} + 2 \sigma \ERI{vc}{cv} - \ERI{vv}{cc} - W^{\co}_R(\omega)
\\
C_{\dBSE}^{\sigma}(\omega) = 2 \sigma \ERI{vc}{cv} - \ERI{vc}{cv} - W^{\co}_C(\omega)
\end{gather}
\end{subequations}
and
\begin{subequations}
\begin{gather}
W^{\co}_R(\omega) = \frac{4 \ERI{vv}{vc} \ERI{vc}{cc}}{\omega - \Omega - \Delta\eGW{}}
\\
W^{\co}_C(\omega) = \frac{4 \ERI{vc}{cv}^2}{\omega - \Omega}
\end{gather}
\end{subequations}
are the elements of the correlation part of the dynamically-screened Coulomb potential for the resonant and coupling blocks of the dBSE Hamiltonian.
Note that, in this case, the correlation kernel is spin blind.
Within the usual static approximation, the BSE Hamiltonian is simply
\begin{equation}
\bH_{\BSE}^{\sigma} =
\begin{pmatrix}
R_{\BSE}^{\sigma} & C_{\BSE}^{\sigma}
\\
-C_{\BSE}^{\sigma} & -R_{\BSE}^{\sigma}
\end{pmatrix}
\end{equation}
with
\begin{subequations}
\begin{gather}
R_{\BSE}^{\sigma} = \Delta\eGW{} + 2 \sigma \ERI{vc}{cv} - \ERI{vv}{cc} - W^{\co}_R(\omega = \Delta\eGW{})
\\
C_{\BSE}^{\sigma} = 2 \sigma \ERI{vc}{cv} - \ERI{vc}{cv} - W^{\co}_C(\omega = 0)
\end{gather}
\end{subequations}
It can be easily shown that solving the equation
\begin{equation}
\det[\bH_{\dBSE}^{\sigma}(\omega) - \omega \bI] = 0
\end{equation}
yields 3 solutions per spin manifold (see Fig.~\ref{fig:dBSE}).
Their numerical values are reported in Table \ref{tab:BSE} alongside other variants discussed below.
This shows that dBSE reproduces the singlet and triplet single excitations qualitatively well, but describes the double excitation quite badly, the latter being off by more than 1 hartree.
As mentioned in Ref.~\onlinecite{Romaniello_2009b}, spurious solutions appear due to the approximate nature of the dBSE kernel.
Indeed, diagonalizing the exact Hamiltonian \eqref{eq:H-exact} produces only two singlet solutions corresponding to the singly- and doubly-excited states, and one triplet state (see Sec.~\ref{sec:exact}).
Therefore, there is one spurious solution for the singlet manifold ($\omega_{2}^{\dBSE,\updw}$) and two spurious solutions for the triplet manifold ($\omega_{2}^{\dBSE,\upup}$ and $\omega_{3}^{\dBSE,\upup}$).
It is worth mentioning that, around $\omega = \omega_1^{\dBSE,\sigma}$, the slope of the curves depicted in Fig.~\ref{fig:dBSE} is small, while the two other solutions, $\omega_2^{\dBSE,\sigma}$ and $\omega_3^{\dBSE,\sigma}$, stem from poles and consequently the slope is very large around these frequency values.
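In practice, the roots displayed in Fig.~\ref{fig:dBSE} can be located by scanning $\det[\bH_{\dBSE}^{\sigma}(\omega) - \omega \bI]$ on a frequency grid and refining each sign change that does not bracket a pole of the kernel, as illustrated for the singlet manifold by the minimal Python sketch below (an illustrative addition, assuming NumPy and SciPy).
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

eGWv, eGWc, Omega = -0.863700, 1.373640, 2.769327
vvcc, vccv = 0.858133, 0.227670
vvvc, vccc = 0.316490, 0.255554
dGW, s = eGWc - eGWv, 1          # s = 1 (singlets) or 0 (triplets)

WR = lambda w: 4*vvvc*vccc/(w - Omega - dGW)  # W^c, resonant block
WC = lambda w: 4*vccv**2/(w - Omega)          # W^c, coupling block
R  = lambda w: dGW + 2*s*vccv - vvcc - WR(w)
C  = lambda w: 2*s*vccv - vccv - WC(w)

def det(w):
    H = np.array([[R(w), C(w)], [-C(-w), -R(-w)]])
    return np.linalg.det(H - w*np.eye(2))

# Scan, then refine each sign change that does not bracket a kernel pole
poles = (Omega, Omega + dGW)
grid = np.linspace(0.5, 6.0, 2001)
vals = [det(w) for w in grid]
for a, b, fa, fb in zip(grid, grid[1:], vals, vals[1:]):
    if fa*fb < 0 and not any(a < p < b for p in poles):
        print("root:", brentq(det, a, b))
# Expected (singlets): ~1.90527, ~2.78377, ~4.90134
\end{verbatim}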
%%% TABLE II %%%
\begin{table*}
\caption{BSE singlet and triplet excitation energies (in hartree) at various levels of theory.
\label{tab:BSE}
}
\begin{ruledtabular}
\begin{tabular}{|c|ccccccc|c|}
Singlets & BSE & pBSE & pBSE(dTDA) & dBSE & BSE(TDA) & pBSE(TDA) & dBSE(TDA) & Exact \\
\hline
$\omega_1^{\updw}$ & 1.92778 & 1.90022 & 1.91554 & 1.90527 & 1.95137 & 1.94004 & 1.94005 & 1.92145 \\
$\omega_2^{\updw}$ & & & & 2.78377 & & & & \\
$\omega_3^{\updw}$ & & & & 4.90134 & & & 4.90117 & 3.47880 \\
\hline
Triplets & BSE & pBSE & pBSE(dTDA) & dBSE & BSE(TDA) & pBSE(TDA) & dBSE(TDA) & Exact \\
\hline
$\omega_1^{\upup}$ & 1.48821 & 1.46860 & 1.46260 & 1.46636 & 1.49603 & 1.47070 & 1.47070 & 1.47085 \\
$\omega_2^{\upup}$ & & & & 2.76178 & & & & \\
$\omega_3^{\upup}$ & & & & 4.91545 & & & 4.91517 & \\
\end{tabular}
\end{ruledtabular}
\end{table*}
%%% %%% %%% %%%
%%% FIGURE 2 %%%
\begin{figure}
\includegraphics[width=\linewidth]{dBSE}
\caption{
$\det[\bH(\omega) - \omega \bI]$ as a function of $\omega$ for both the singlet (gray and black) and triplet (orange and red) manifolds.
The static BSE Hamiltonian (dashed) and dynamic dBSE Hamiltonian (solid) are considered.
\label{fig:dBSE}
}
\end{figure}
%%% %%% %%% %%%
Enforcing the TDA, which corresponds to neglecting the coupling term between the resonant and anti-resonant parts of the dBSE Hamiltonian \eqref{eq:HBSE}, removes some of these spurious excitations.
There is then only one spurious excitation in the triplet manifold ($\omega_{3}^{\dBSE,\upup}$), the two solutions of the singlet manifold now corresponding to the single and double excitations.
Figure \ref{fig:dBSE-TDA} shows the same curves as Fig.~\ref{fig:dBSE} but in the TDA.
%%% FIGURE 3 %%%
\begin{figure}
\includegraphics[width=\linewidth]{dBSE-TDA}
\caption{
$\det[\bH(\omega) - \omega \bI]$ as a function of $\omega$ for both the singlet (gray and black) and triplet (orange and red) manifolds within the TDA.
The static BSE Hamiltonian (dashed) and dynamic dBSE Hamiltonian (solid) are considered.
\label{fig:dBSE-TDA}
}
\end{figure}
%%% %%% %%% %%%
In the static approximation, only one solution per spin manifold is obtained by diagonalizing $\bH_{\BSE}^{\sigma}$ (see Fig.~\ref{fig:dBSE} and Table \ref{tab:BSE}).
Therefore, the static BSE Hamiltonian does not produce spurious excitations but misses the (singlet) double excitation. This comparison also shows that the physical single excitation stemming from the dBSE Hamiltonian is the lowest one of each spin manifold, \ie, $\omega_1^{\dBSE,\updw}$ and $\omega_1^{\dBSE,\upup}$.
Another way to access dynamical effects while staying in the static framework is to use perturbation theory, \cite{Rohlfing_2000,Ma_2009a,Ma_2009b,Baumeier_2012b} a scheme we label as perturbative BSE (pBSE).
To do so, one must decompose the dBSE Hamiltonian into a (zeroth-order) static part and a dynamical perturbation, such that
\begin{equation}
\bH_{\dBSE}^{\sigma}(\omega)
= \underbrace{\bH_{\BSE}^{\sigma}}_{\bH_{\pBSE}^{(0)}}
+ \underbrace{\qty[ \bH_{\dBSE}^{\sigma}(\omega) - \bH_{\BSE}^{\sigma} ]}_{\bH_{\pBSE}^{(1)}(\omega)}
\end{equation}
Thanks to (renormalized) first-order perturbation theory, one gets
\begin{equation}
\begin{split}
\omega_{1}^{\pBSE,\sigma}
& = \omega_{1}^{\BSE,\sigma}
\\
& + Z_{1}^{\pBSE}
\T{\begin{pmatrix}
X_1 \\ Y_1
\end{pmatrix}
}
\cdot \qty[ \bH_{\dBSE}^{\sigma}(\omega = \omega_{1}^{\BSE,\sigma}) - \bH_{\BSE}^{\sigma} ] \cdot
\begin{pmatrix}
X_1 \\ Y_1
\end{pmatrix}
\end{split}
\end{equation}
where
\begin{equation}
\bH_{\BSE}^{\sigma}
\cdot
\begin{pmatrix}
X_1 \\ Y_1
\end{pmatrix}
= \omega_{1}^{\BSE,\sigma}
\begin{pmatrix}
X_1 \\ Y_1
\end{pmatrix}
\end{equation}
and the renormalization factor is
\begin{equation}
Z_{1}^{\pBSE} = \qty{ 1 -
\T{
\begin{pmatrix}
X_1 \\ Y_1
\end{pmatrix}
}
\cdot \left. \pdv{\bH_{\dBSE}^{\sigma}(\omega)}{\omega} \right|_{\omega = \omega_{1}^{\BSE,\sigma}} \cdot
\begin{pmatrix}
X_1 \\ Y_1
\end{pmatrix}
}^{-1}
\end{equation}
This corresponds to a dynamical perturbative correction to the static excitations.
Obviously, the TDA can be applied to the dynamical correction as well, a scheme we label as dTDA in the following.
The perturbatively-corrected values are also reported in Table \ref{tab:BSE}, which shows that this scheme is very efficient at reproducing the dynamical value for the single excitations.
However, because the perturbative treatment is ultimately static, one cannot access double excitations with such a scheme.
Note that, although the pBSE(dTDA) value is further from the dBSE value than pBSE, it is quite close to the exact excitation energy.
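As an illustration, the perturbative correction defined above can be evaluated with the minimal Python sketch below (an addition for convenience; it assumes NumPy and the usual RPA-like normalization $X_1^2 - Y_1^2 = 1$ of the static eigenvector, which is left implicit in the equations above), and recovers the pBSE value of the lowest singlet excitation reported in Table~\ref{tab:BSE}.
\begin{verbatim}
import numpy as np

eGWv, eGWc, Omega = -0.863700, 1.373640, 2.769327
vvcc, vccv = 0.858133, 0.227670
vvvc, vccc = 0.316490, 0.255554
dGW, s = eGWc - eGWv, 1            # s = 1 (singlet) or 0 (triplet)

WR = lambda w: 4*vvvc*vccc/(w - Omega - dGW)
WC = lambda w: 4*vccv**2/(w - Omega)
R  = lambda w: dGW + 2*s*vccv - vvcc - WR(w)
C  = lambda w: 2*s*vccv - vccv - WC(w)
H_dBSE = lambda w: np.array([[R(w), C(w)], [-C(-w), -R(-w)]])

# Static BSE: W^c frozen at w = dGW (resonant) and w = 0 (coupling)
R0, C0 = R(dGW), C(0.0)
H0 = np.array([[R0, C0], [-C0, -R0]])

w, V = np.linalg.eig(H0)
i = int(np.argmin(np.where(w > 0, w, np.inf)))   # lowest positive solution
w1, v1 = w[i], V[:, i]
v1 = v1 / np.sqrt(abs(v1[0]**2 - v1[1]**2))      # enforce X1^2 - Y1^2 = 1

eta = 1.0e-4                                     # finite-difference step
dHdw = (H_dBSE(w1 + eta) - H_dBSE(w1 - eta)) / (2*eta)
Z1 = 1.0 / (1.0 - v1 @ dHdw @ v1)
print("BSE  omega_1 =", w1)                                       # ~1.92778
print("pBSE omega_1 =", w1 + Z1 * (v1 @ (H_dBSE(w1) - H0) @ v1))  # ~1.90022
\end{verbatim}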
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Second-order BSE kernel}
\label{sec:BSE2}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The third and final dynamical kernel that we consider here is the second-order BSE (BSE2) kernel as derived by Yang and collaborators in the TDA, \cite{Zhang_2013} and by Rebolini and Toulouse in a range-separated context \cite{Rebolini_2016,Rebolini_PhD} (see also Refs.~\onlinecite{Myohanen_2008,Sakkinen_2012}).
Note that a beyond-TDA BSE2 kernel was also derived in Ref.~\onlinecite{Rebolini_2016}, but was not tested.
In a nutshell, the BSE2 scheme applies second-order perturbation theory to optical excitations within the Green's function framework by taking the functional derivative of the second-order self-energy $\SigGF{}$ with respect to the one-body Green's function.
Because $\SigGF{}$ is a proper functional derivative, it was claimed in Ref.~\onlinecite{Zhang_2013} that BSE2 does not produce spurious excitations.
However, as we will show below, this is not always true.
Just as BSE requires $GW$ quasiparticle energies, BSE2 requires second-order Green's function (GF2) quasiparticle energies, \cite{SzaboBook} which are defined as follows:
\begin{equation}
\eGF{p} = \e{p} + Z_{p}^{\GF} \SigGF{p}(\e{p})
\end{equation}
where the second-order self-energy is
\begin{equation}
\label{eq:SigGF}
\SigGF{p}(\omega) = \frac{\ERI{pv}{vc}^2}{\omega - \e{v} + \Delta\e{}} + \frac{\ERI{pc}{cv}^2}{\omega - \e{c} - \Delta\e{}}
\end{equation}
and
\begin{equation}
Z_{p}^{\GF} = \qty( 1 - \left. \pdv{\SigGF{p}(\omega)}{\omega} \right|_{\omega = \e{p}} )^{-1}
\end{equation}
The expression of the GF2 self-energy \eqref{eq:SigGF} can be obtained from its $GW$ counterpart \eqref{eq:SigGW} via the substitution $\Omega \to \Delta\e{}$, removing at the same time the factor of 2 that originates from the spin summation in the RPA screening.
The static Hamiltonian of BSE2 is just the usual TDHF Hamiltonian in which the HF orbital energies are replaced by the GF2 quasiparticle energies, \ie,
\begin{equation}
\bH_{\BSE2}^{\sigma} =
\begin{pmatrix}
R_{\BSE2}^{\sigma} & C_{\BSE2}^{\sigma}
\\
-C_{\BSE2}^{\sigma} & -R_{\BSE2}^{\sigma}
\end{pmatrix}
\end{equation}
with
\begin{subequations}
\begin{gather}
R_{\BSE2}^{\sigma} = \Delta\eGF{} + 2 \sigma \ERI{vc}{vc} - \ERI{vv}{cc}
\\
C_{\BSE2}^{\sigma} = 2 \sigma \ERI{vc}{vc} - \ERI{vc}{cv}
\end{gather}
\end{subequations}
To avoid any confusion with the results of Sec.~\ref{sec:Maitra} and for notational consistency with Sec.~\ref{sec:BSE}, we have labeled this static Hamiltonian as BSE2.
The correlation part of the dynamical kernel for BSE2 is a bit cumbersome \cite{Zhang_2013,Rebolini_2016,Rebolini_PhD} but it simplifies greatly in the case of the present model to yield
\begin{equation}
\bH_{\dBSE2}^{\sigma}(\omega) = \bH_{\BSE2}^{\sigma} +
\begin{pmatrix}
f_{\dBSE2}^{\co,\sigma}(\omega) & f_{\dBSE2}^{\co,\sigma}
\\
-f_{\dBSE2}^{\co,\sigma} & -f_{\dBSE2}^{\co,\sigma}(-\omega)
\end{pmatrix}
\end{equation}
with
\begin{subequations}
\begin{gather}
f_{\dBSE2}^{\co,\updw}(\omega) = - \frac{4 \ERI{cv}{vv} \ERI{vc}{cc} - \ERI{vc}{cc}^2 - \ERI{cv}{vv}^2 }{\omega - 2 \Delta\eGF{}}
\\
f_{\dBSE2}^{\co,\updw} = - \frac{4 \ERI{vc}{cv}^2 - \ERI{cc}{cc} \ERI{vc}{cv} - \ERI{vv}{vv} \ERI{vc}{cv} }{2 \Delta\eGF{}}
\end{gather}
\end{subequations}
and
\begin{subequations}
\begin{gather}
f_{\dBSE2}^{\co,\upup}(\omega) = - \frac{ \ERI{vc}{cc}^2 + \ERI{cv}{vv}^2 }{\omega - 2 \Delta\eGF{}}
\\
f_{\dBSE2}^{\co,\upup} = - \frac{\ERI{cc}{cc} \ERI{vc}{cv} + \ERI{vv}{vv} \ERI{vc}{cv} }{2 \Delta\eGF{}}
\end{gather}
\end{subequations}
Note that, unlike the dBSE Hamiltonian [see Eq.~\eqref{eq:HBSE}], the BSE2 dynamical kernel is spin-aware with distinct expressions for singlets and triplets, and the coupling block $C_{\dBSE2}^{\sigma}$ is frequency independent.
This latter point has an important consequence: this lack of frequency dependence removes one of the spurious poles (see Fig.~\ref{fig:BSE2}).
The singlet manifold then contains the right number of excitations.
However, one spurious triplet excitation remains.
It is mentioned in Ref.~\onlinecite{Rebolini_2016} that the BSE2 kernel has some similarities with the second-order polarization-propagator approximation \cite{Oddershede_1977,Nielsen_1980} (SOPPA) and second RPA kernels. \cite{Huix-Rotllant_2011,Huix-Rotllant_PhD,Sangalli_2011}
Numerical results for the two-level model are reported in Table \ref{tab:BSE2} with the usual approximations and perturbative treatments.
In the case of BSE2, the perturbative partitioning is simply
\begin{equation}
\bH_{\dBSE2}^{\sigma}(\omega)
= \underbrace{\bH_{\BSE2}^{\sigma}}_{\bH_{\pBSE2}^{(0)}}
+ \underbrace{\qty[ \bH_{\dBSE2}^{\sigma}(\omega) - \bH_{\BSE2}^{\sigma} ]}_{\bH_{\pBSE2}^{(1)}}
\end{equation}
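The minimal Python sketch below (an illustrative addition, assuming only NumPy, and relying on the linearized quasiparticle equation as in the $GW$ case) evaluates the GF2 quasiparticle energies of Eq.~\eqref{eq:SigGF} and the corresponding static BSE2 excitation energies of Table~\ref{tab:BSE2}.
\begin{verbatim}
import numpy as np

ev, ec = -0.914127, 1.399859
vvcc, vccv = 0.858133, 0.227670
vvvc, vccc = 0.316490, 0.255554
de = ec - ev

def sig(pv, pc, w):    # GF2 self-energy of Eq. (SigGF)
    return pv**2/(w - ev + de) + pc**2/(w - ec - de)

def dsig(pv, pc, w):   # its frequency derivative (Z factor)
    return -pv**2/(w - ev + de)**2 - pc**2/(w - ec - de)**2

qp = {}
for p, eps, pv, pc in (("v", ev, vvvc, vccv), ("c", ec, vccv, vccc)):
    Z = 1.0/(1.0 - dsig(pv, pc, eps))
    qp[p] = eps + Z*sig(pv, pc, eps)
dGF = qp["c"] - qp["v"]

for s, label in ((1, "singlet"), (0, "triplet")):
    A = dGF + 2*s*vccv - vvcc        # R_BSE2
    B = 2*s*vccv - vccv              # C_BSE2
    print(f"{label}: BSE2(TDA) = {A:.5f}  BSE2 = {np.sqrt(A*A - B*B):.5f}")
# Expected: 1.86299/1.84903 (singlet) and 1.40765/1.38912 (triplet)
\end{verbatim}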
%%% TABLE III %%%
\begin{table*}
\caption{BSE2 singlet and triplet excitation energies (in hartree) at various levels of theory.
\label{tab:BSE2}
}
\begin{ruledtabular}
\begin{tabular}{|c|ccccccc|c|}
Singlets & BSE2 & pBSE2 & pBSE2(dTDA) & dBSE2 & BSE2(TDA) & pBSE2(TDA) & dBSE2(TDA) & Exact \\
\hline
$\omega_1$ & 1.84903 & 1.90940 & 1.90950 & 1.90362 & 1.86299 & 1.92356 & 1.92359 & 1.92145 \\
$\omega_2$ & & & & & & & & \\
$\omega_3$ & & & & 4.47124 & & & 4.47097 & 3.47880 \\
\hline
Triplets & BSE2 & pBSE2 & pBSE2(dTDA) & dBSE2 & BSE2(TDA) & pBSE2(TDA) & dBSE2(TDA) & Exact \\
\hline
$\omega_1$ & 1.38912 & 1.44285 & 1.44304 & 1.42564 & 1.40765 & 1.46154 & 1.46155 & 1.47085 \\
$\omega_2$ & & & & & & & & \\
$\omega_3$ & & & & 4.47797 & & & 4.47767 & \\
\end{tabular}
\end{ruledtabular}
\end{table*}
%%% %%% %%% %%%
As compared to dBSE, dBSE2 produces much larger corrections to the static excitation energies probably due to the poorer quality of its static reference (CIS or TDHF).
Overall, the accuracies of dBSE and dBSE2 for single excitations are comparable (see Tables \ref{tab:BSE} and \ref{tab:BSE2}), although their behaviors are quite different.
For the double excitation, dBSE2 yields a slightly better energy, yet still in quite poor agreement with the exact value.
%%% FIGURE 4 %%%
\begin{figure}
\includegraphics[width=\linewidth]{dBSE2}
\caption{
$\det[\bH(\omega) - \omega \bI]$ as a function of $\omega$ for both the singlet (gray and black) and triplet (orange and red) manifolds.
The static BSE2 Hamiltonian (dashed) and dynamic dBSE2 Hamiltonian (solid) are considered.
\label{fig:BSE2}
}
\end{figure}
%%% %%% %%% %%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The forgotten kernel: Sangalli's kernel}
\label{sec:Sangalli}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
This section is more exploratory than the previous ones, and its content should be considered preliminary.
In Ref.~\onlinecite{Sangalli_2011}, Sangalli \textit{et al.} proposed a dynamical kernel (based on the second RPA) which, as claimed by its authors, does not produce spurious excitations thanks to the design of a number-conserving approach that correctly describes particle indistinguishability and the Pauli exclusion principle.
We first write down this kernel explicitly, as it is presented in a rather compact many-body notation in the original article.
To the best of our understanding, the Hamiltonian associated with Sangalli's kernel reads
\begin{equation}
\bH_\text{S}^{\sigma}(\omega) =
\begin{pmatrix}
\bR_\text{S}^{\sigma}(\omega) & \bC_\text{S}^{\sigma}(\omega)
\\
-\bC_\text{S}^{\sigma}(-\omega) & -\bR_\text{S}^{\sigma}(-\omega)
\end{pmatrix}
\end{equation}
with
\begin{subequations}
\begin{gather}
R_{ia,jb}^{\sigma}(\omega) = \delta_{ij} \delta_{ab} (\eGW{a} - \eGW{i}) + f_{ia,jb}^{\sigma} (\omega)
\\
C_{ia,jb}^{\sigma}(\omega) = f_{ia,bj}^{\sigma} (\omega)
\end{gather}
\end{subequations}
and
\begin{subequations}
\begin{gather}
f_{ia,jb}^{\sigma} (\omega) = \sum_{m \neq n} \frac{ c_{ia,mn} c_{jb,mn} }{\omega - ( \omega_{m} + \omega_{n})}
\\
c_{ia,mn}^{\sigma} = \sum_{jb,kc} \qty{ \qty[ \ERI{ij}{kc} \delta_{ab} + \ERI{kc}{ab} \delta_{ij} ] \qty[ R_{m,jc} R_{n,kb}
+ R_{m,kb} R_{n,jc} ] }
\end{gather}
\end{subequations}
where $R_{m,ia}$ are the elements of the RPA eigenvectors.
For the two-level model, Sangalli's kernel reads
\begin{align}
R(\omega) & = \Delta\eGW{} + f_R (\omega)
\\
C(\omega) & = f_C (\omega)
\end{align}
with
\begin{gather}
f_R (\omega) = 2 \frac{ [\ERI{vv}{vc} + \ERI{vc}{cc}]^2 }{\omega - 2\omega_1}
\\
f_C (\omega) = 0
\end{gather}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Take-home messages}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
What have we learned here?
%%%%%%%%%%%%%%%%%%%%%%%%
\acknowledgements{
The author thanks the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No.~863481) for financial support.
He also thanks Xavier Blase and Juliette Authier for numerous insightful discussions on dynamical kernels.}
%%%%%%%%%%%%%%%%%%%%%%%%
% BIBLIOGRAPHY
\bibliography{dynker}
\end{document}