1st draft of Sec III

This commit is contained in:
Pierre-Francois Loos 2021-07-20 14:26:18 +02:00
parent 409cfbc1cd
commit 34b89845fd

where $\hH$ is the (non-relativistic) electronic Hamiltonian,
\begin{equation}
\label{eq:Psivar}
\Psivar^{(k)} = \sum_{I \in \cI_k} c_I^{(k)} \ket*{I}
\end{equation}
is the variational wave function, $\cI_k$ is the set of internal determinants $\ket*{I}$, and $\cA_k$ is the set of external determinants (or perturbers) $\ket*{\alpha}$ which do not belong to the variational space at the $k$th iteration but are linked to it via a nonzero matrix element, \ie, $\mel*{\Psivar^{(k)}}{\hH}{\alpha} \neq 0$.
The sets $\cI_k$ and $\cA_k$ define, at the $k$th iteration, the internal and external spaces, respectively.
In the selection step, the perturbers corresponding to the largest $\abs*{e_{\alpha}^{(k)}}$ values are then added to the variational space at iteration $k+1$.
In our implementation, the size of the variational space is roughly doubled at each iteration.
Hereafter, we label these iterations over $k$, which consist in enlarging the variational space, as \textit{macroiterations}.
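For illustration, the selection step amounts to ranking the perturbers by the magnitude of their contribution and keeping the largest ones until the variational space is roughly doubled. A minimal Python sketch (the function name, data layout, and contributions are ours, not those of the actual implementation):

```python
def select_perturbers(e_alpha, n_new):
    """Rank external determinants (perturbers) by the magnitude of their
    second-order contribution e_alpha and keep the n_new largest, so that
    the variational space is roughly doubled when n_new ~ N_det."""
    ranked = sorted(e_alpha, key=lambda a: abs(e_alpha[a]), reverse=True)
    return ranked[:n_new]

# Made-up contributions (hartree) for four perturbers.
e_alpha = {"a1": -1e-3, "a2": -5e-5, "a3": -2e-3, "a4": -1e-6}
selected = select_perturbers(e_alpha, n_new=2)
```

In the real algorithm the contributions $e_{\alpha}^{(k)}$ are of course computed on the fly rather than stored as a dictionary; the sketch only shows the ranking criterion.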
In practice, $\Evar^{(k)}$ is computed by diagonalizing the $\Ndet^{(k)} \times \Ndet^{(k)}$ CI matrix with elements $\mel{I}{\hH}{J}$ via Davidson's algorithm. \cite{Davidson_1975}
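For the interested reader, a minimal dense sketch of Davidson's algorithm for the lowest eigenpair is given below. It is purely illustrative: a production SCI code never builds the full CI matrix and relies on direct Hamiltonian-vector products, and all names here are ours.

```python
import numpy as np

def davidson_lowest(H, tol=1e-8, max_iter=50):
    """Minimal Davidson iteration for the lowest eigenpair of a real
    symmetric matrix H (dense here for illustration only)."""
    n = H.shape[0]
    h_diag = np.diag(H)
    # Start from the unit vector on the smallest diagonal element.
    V = np.zeros((n, 1))
    V[np.argmin(h_diag), 0] = 1.0
    for _ in range(max_iter):
        # Rayleigh-Ritz step in the current subspace.
        theta, s = np.linalg.eigh(V.T @ H @ V)
        lam, y = theta[0], s[:, 0]
        x = V @ y                 # Ritz vector
        r = H @ x - lam * x       # residual
        if np.linalg.norm(r) < tol:
            break
        # Diagonal (Jacobi) preconditioner, guarding small denominators.
        denom = h_diag - lam
        denom[np.abs(denom) < 1e-12] = 1e-12
        t = r / denom
        # Orthogonalize against the current subspace and append.
        t -= V @ (V.T @ t)
        norm = np.linalg.norm(t)
        if norm < 1e-12:
            break
        V = np.hstack([V, (t / norm)[:, None]])
    return lam, x

# Toy check on a diagonally dominant symmetric matrix (made-up numbers).
H_toy = np.diag(np.arange(1.0, 9.0)) + 0.01
lam, vec = davidson_lowest(H_toy)
```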
The magnitude of $\EPT^{(k)}$ provides, at iteration $k$, a qualitative idea of the ``distance'' to the FCI limit. \cite{Garniron_2018}
We then linearly extrapolate, using large variational wave functions, the CIPSI energy to $\EPT = 0$ (which effectively corresponds to the FCI limit).
Further details concerning the extrapolation procedure are provided below (see Sec.~\ref{sec:res}).
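Schematically, this extrapolation is a linear least-squares fit of $\Evar$ as a function of $\EPT$ over the largest wave functions, the intercept at $\EPT = 0$ providing the FCI estimate. A Python sketch with made-up numbers:

```python
import numpy as np

def extrapolate_fci(e_pt2, e_var):
    """Linear fit E_var = a*E_PT2 + b over the largest wave functions;
    the intercept b (reached at E_PT2 = 0) is the FCI estimate."""
    a, b = np.polyfit(e_pt2, e_var, 1)
    return b

# Made-up data: E_PT2 shrinks toward zero as the variational space grows.
e_pt2 = np.array([-0.020, -0.010, -0.005])
e_var = np.array([-75.980, -75.990, -75.995])
e_fci = extrapolate_fci(e_pt2, e_var)
```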
Orbital optimization techniques at the SCI level are theoretically straightforward, but practically challenging.
Here, we detail our orbital optimization procedure within the CIPSI algorithm and we assume that the variational wave function is normalized, \ie, $\braket*{\Psivar}{\Psivar} = 1$.
As stated in Sec.~\ref{sec:intro}, $\Evar$ depends on both the CI coefficients $\{ c_I \}_{1 \le I \le \Ndet}$ [see Eq.~\eqref{eq:Psivar}] and the orbital rotation parameters $\{\kappa_{pq}\}_{1 \le p,q \le \Norb}$.
Here, motivated by cost saving arguments, we have chosen to optimize the CI and orbital coefficients separately, by alternately diagonalizing the CI matrix after each selection step and rotating the orbitals until the variational energy for a given number of determinants is minimal.
To do so, we conveniently rewrite the variational energy as
\begin{equation}
\label{eq:Evar_c_k}
Their elements are explicitly given by the following expressions: \cite{Henderso}
\\
&= \sum_{\sigma} \mel{\Psivar}{\comm*{\cre{p\sigma} \ani{q\sigma} - \cre{q\sigma} \ani{p\sigma}}{\hH}}{\Psivar}
\\
&= \cP_{pq} \qty[ \sum_r \left( h_p^r \ \gamma_r^q - h_r^q \ \gamma_p^r \right) + \sum_{rst} \qty( v_{pt}^{rs} \Gamma_{rs}^{qt} - v_{rs}^{qt} \Gamma_{pt}^{rs} ) ],
\end{split}
\end{equation}
and
& \phantom{\cP_{pq} \cP_{rs} \Bigg\{} + \sum_{uv} (v_{pr}^{uv} \Gamma_{uv}^{qs} + v_{uv}^{qs} \Gamma_{ps}^{uv})
\\
& \phantom{\cP_{pq} \cP_{rs} \Bigg\{} - \sum_{tu} (v_{pu}^{st} \Gamma_{rt}^{qu}+v_{pu}^{tr} \Gamma_{tr}^{qu}+v_{rt}^{qu}\Gamma_{pu}^{st} + v_{tr}^{qu}\Gamma_{pu}^{ts})]
\Bigg\},
\end{split}
\end{equation}
where $\delta_{pq}$ is the Kronecker delta, $\cP_{pq} = 1 - (p \leftrightarrow q)$ is a permutation operator,
\begin{subequations}
\begin{gather}
\gamma_p^q = \sum_{\sigma} \mel{\Psivar}{\hat{a}_{p \sigma}^{\dagger} \hat{a}_{q \sigma}^{}}{\Psivar},
\\
\Gamma_{pq}^{rs} = \sum_{\sigma \sigma'} \mel{\Psivar}{\cre{p\sigma} \cre{r\sigma'} \ani{s\sigma'} \ani{q\sigma}}{\Psivar}
\end{gather}
\end{subequations}
are the elements of the one- and two-electron density matrices, and
\begin{subequations}
\begin{gather}
h_p^q = \int \MO{p}(\br) \, \hh(\br) \, \MO{q}(\br) d\br,
\\
\label{eq:two}
v_{pq}^{rs} = \iint \MO{p}(\br_1) \MO{q}(\br_2) \frac{1}{\abs*{\br_1 - \br_2}} \MO{r}(\br_1) \MO{s}(\br_2) d\br_1 d\br_2
\end{gather}
\end{subequations}
are the one- and two-electron integrals, respectively.
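In terms of these quantities, the variational energy can be assembled as $\Evar = \sum_{pq} h_p^q \gamma_p^q + \frac{1}{2} \sum_{pqrs} v_{pq}^{rs} \Gamma_{pq}^{rs}$, \ie, a direct tensor contraction. A minimal numpy sketch (the array index layout and the toy values are our assumptions):

```python
import numpy as np

def variational_energy(h, v, gamma, Gamma):
    """E = sum_pq h[p,q]*gamma[p,q] + 1/2 sum_pqrs v[p,q,r,s]*Gamma[p,q,r,s],
    with spin-summed density matrices and real orbitals assumed."""
    return (np.einsum("pq,pq->", h, gamma)
            + 0.5 * np.einsum("pqrs,pqrs->", v, Gamma))

# Two-orbital toy system: two electrons paired in orbital 0 (made-up values).
h = np.array([[-1.0, 0.0], [0.0, -0.5]])
gamma = np.array([[2.0, 0.0], [0.0, 0.0]])
v = np.zeros((2, 2, 2, 2))
Gamma = np.zeros((2, 2, 2, 2))
v[0, 0, 0, 0] = 0.5       # on-site repulsion integral
Gamma[0, 0, 0, 0] = 2.0   # opposite-spin pair in orbital 0
e_toy = variational_energy(h, v, gamma, Gamma)
```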
It is also worth pointing out that, after each orbital rotation, the one- and two-electron integrals must be updated to the new molecular orbital basis.
% \Evar = \sum_{pq} h_p^q \gamma_p^q + \frac{1}{2} \sum_{pqrs} v_{pq}^{rs} \Gamma_{pq}^{rs},
%\end{equation}
To enhance the convergence of the microiteration process, we employ a variant of the Newton-Raphson method known as ``trust region''. \cite{Nocedal_1999}
This popular variant defines a region in which the quadratic approximation \eqref{eq:EvarTaylor} is an adequate representation of the objective energy function \eqref{eq:Evar_c_k}; this region evolves during the optimization process so as to preserve this adequacy via a constraint on the step size that prevents overstepping, \ie, $\norm{\bk^{(\ell+1)}} \leq \Delta^{(\ell)}$, where $\Delta^{(\ell)}$ is the trust radius at the $\ell$th microiteration.
By introducing a Lagrange multiplier $\lambda$ to control the trust-region size, one obtains $\bk^{(\ell+1)} = - (\bH^{(\ell)} + \lambda \bI)^{-1} \cdot \bg^{(\ell)}$.
The addition of the level shift $\lambda \geq 0$ removes the negative eigenvalues and ensures the positive definiteness of the Hessian matrix while reducing the step size.
By choosing the right value of $\lambda$, the step is constrained within a hypersphere of radius $\Delta^{(\ell)}$ and evolves from the Newton direction at $\lambda = 0$ toward the steepest-descent direction as $\lambda$ grows.
The evolution of the trust radius during the optimization and the use of a condition to cancel any step that raises the energy ensure the convergence of the algorithm.
More details can be found in Ref.~\onlinecite{Nocedal_1999}.
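As an illustration of the level-shifted step, the sketch below solves $\bk = - (\bH + \lambda \bI)^{-1} \cdot \bg$ and adjusts $\lambda$ by simple bisection until $\norm{\bk} \leq \Delta$. This is a toy dense version under our own assumptions; actual trust-region implementations use more sophisticated updates of $\lambda$ and $\Delta$.

```python
import numpy as np

def trust_region_step(g, H, delta, tol=1e-10, max_bisect=200):
    """Level-shifted Newton step k = -(H + lam*I)^{-1} g, with lam >= 0
    chosen so that ||k|| <= delta.  At lam = 0 (H positive definite)
    this is the plain Newton step; as lam grows, k turns toward the
    steepest-descent direction."""
    n = len(g)
    step = lambda lam: -np.linalg.solve(H + lam * np.eye(n), g)
    # lam must at least make H + lam*I positive definite.
    lam_lo = max(0.0, -np.linalg.eigvalsh(H)[0] + 1e-8)
    k = step(lam_lo)
    if np.linalg.norm(k) <= delta:
        return k
    # ||k(lam)|| decreases monotonically with lam: bracket, then bisect.
    lam_hi = lam_lo + 1.0
    while np.linalg.norm(step(lam_hi)) > delta:
        lam_hi *= 2.0
    for _ in range(max_bisect):
        lam = 0.5 * (lam_lo + lam_hi)
        if np.linalg.norm(step(lam)) > delta:
            lam_lo = lam
        else:
            lam_hi = lam
        if lam_hi - lam_lo < tol:
            break
    return step(lam_hi)

# Toy quadratic model with a diagonal Hessian (made-up numbers).
g_toy = np.array([1.0, 1.0])
H_toy = np.diag([1.0, 10.0])
k_newton = trust_region_step(g_toy, H_toy, delta=10.0)  # inside the region
k_short = trust_region_step(g_toy, H_toy, delta=0.5)    # hits the boundary
```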
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Results and discussion}