From 34b89845fdcf34762bf3efc2258a5b2194518b63 Mon Sep 17 00:00:00 2001
From: Pierre-Francois Loos
Date: Tue, 20 Jul 2021 14:26:18 +0200
Subject: [PATCH] 1st draft of Sec III

---
 Manuscript/Ec.tex | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/Manuscript/Ec.tex b/Manuscript/Ec.tex
index 2396c07..4bac422 100644
--- a/Manuscript/Ec.tex
+++ b/Manuscript/Ec.tex
@@ -205,21 +205,21 @@ where $\hH$ is the (non-relativistic) electronic Hamiltonian,
 \label{eq:Psivar}
 \Psivar^{(k)} = \sum_{I \in \cI_k} c_I^{(k)} \ket*{I}
 \end{equation}
-is the variational wave function, $\cI_k$ is the set of internal determinants $\ket*{I}$ and $\cA_k$ is the set of external determinants (or perturbers) $\ket*{\alpha}$ which do not belong to the variational space but are linked to it via a nonzero matrix element, \ie, $\mel*{\Psivar^{(k)}}{\hH}{\alpha} \neq 0$.
+is the variational wave function, $\cI_k$ is the set of internal determinants $\ket*{I}$, and $\cA_k$ is the set of external determinants (or perturbers) $\ket*{\alpha}$ that do not belong to the variational space at the $k$th iteration but are linked to it via a nonzero matrix element, \ie, $\mel*{\Psivar^{(k)}}{\hH}{\alpha} \neq 0$.
 The sets $\cI_k$ and $\cA_k$ define, at the $k$th iteration, the internal and external spaces, respectively.
 In the selection step, the perturbers corresponding to the largest $\abs*{e_{\alpha}^{(k)}}$ values are then added to the variational space at iteration $k+1$.
 In our implementation, the size of the variational space is roughly doubled at each iteration.
 Hereafter, we label these iterations over $k$, which consist in enlarging the variational space, as \textit{macroiterations}.
 In practice, $\Evar^{(k)}$ is computed by diagonalizing the $\Ndet^{(k)} \times \Ndet^{(k)}$ CI matrix with elements $\mel{I}{\hH}{J}$ via Davidson's algorithm. \cite{Davidson_1975}
 The magnitude of $\EPT^{(k)}$ provides, at iteration $k$, a qualitative idea of the ``distance'' to the FCI limit. \cite{Garniron_2018}
-We then linearly extrapolate, using large variational space, the CIPSI energy to $\EPT = 0$ (which effectively corresponds to the FCI limit).
+We then linearly extrapolate, using large variational wave functions, the CIPSI energy to $\EPT = 0$ (which effectively corresponds to the FCI limit).
 Further details concerning the extrapolation procedure are provided below (see Sec.~\ref{sec:res}).
 
 Orbital optimization techniques at the SCI level are theoretically straightforward but practically challenging.
 Here, we detail our orbital optimization procedure within the CIPSI algorithm and we assume that the variational wave function is normalized, \ie, $\braket*{\Psivar}{\Psivar} = 1$.
 As stated in Sec.~\ref{sec:intro}, $\Evar$ depends not only on the CI coefficients $\{ c_I \}_{1 \le I \le \Ndet}$ [see Eq.~\eqref{eq:Psivar}] but also on the orbital rotation parameters $\{\kappa_{pq}\}_{1 \le p,q \le \Norb}$.
-Here, we have chosen to optimise separately the CI and orbital coefficients by alternatively diagonalizing the CI matrix after each selection step and then rotating the orbitals until the variational energy for a given number of determinants is minimal.
+Here, motivated by cost-saving arguments, we have chosen to optimize the CI and orbital coefficients separately, by alternately diagonalizing the CI matrix after each selection step and then rotating the orbitals until the variational energy for a given number of determinants is minimal.
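To make the control flow of this alternating scheme concrete, the following minimal Python sketch chains the macroiterations, the orbital-rotation microiterations, and the final linear extrapolation. It is only a schematic illustration of the procedure described above, not the actual implementation: the four injected callables (diagonalize_ci, optimize_orbitals, compute_pt2, select) are hypothetical placeholders for the real electronic-structure kernels, and the number of extrapolation points is an arbitrary illustrative choice.

    import numpy as np

    def cipsi_macroiterations(diagonalize_ci, optimize_orbitals, compute_pt2,
                              select, dets, n_det_max, n_fit=3):
        """Schematic CIPSI loop with alternating orbital optimization.

        The four callables are hypothetical placeholders for the actual
        electronic-structure kernels (Davidson diagonalization, orbital
        rotations, second-order perturbative correction, and determinant
        selection); only the control flow is meant to be illustrative.
        """
        e_var_hist, e_pt2_hist = [], []
        while len(dets) <= n_det_max:
            # Macroiteration: diagonalize the CI matrix in the current space...
            e_var, coeffs = diagonalize_ci(dets)
            # ...then run microiterations that rotate the orbitals at fixed
            # determinant set until the variational energy is minimal
            e_var, coeffs = optimize_orbitals(dets, coeffs)
            # |E_PT2| gives a qualitative "distance" to the FCI limit
            e_pt2 = compute_pt2(dets, coeffs)
            e_var_hist.append(e_var)
            e_pt2_hist.append(e_pt2)
            # Selection: add the perturbers with the largest |e_alpha|,
            # roughly doubling the size of the variational space
            dets = select(dets, coeffs, n_new=len(dets))
        # Linear extrapolation of E_var against E_PT2 to E_PT2 = 0, keeping
        # only the largest variational spaces (assumes n_fit >= 2 points)
        slope, e_extrapolated = np.polyfit(e_pt2_hist[-n_fit:],
                                           e_var_hist[-n_fit:], 1)
        return e_extrapolated

Injecting the kernels as arguments keeps the sketch self-contained while making explicit which quantities each step consumes and produces.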
 To perform these orbital rotations, we conveniently rewrite the variational energy as
 \begin{equation}
 \label{eq:Evar_c_k}
@@ -249,7 +249,7 @@ Their elements are explicitly given by the following expressions: \cite{Henderso
 \\
 &= \sum_{\sigma} \mel{\Psivar}{\comm*{\cre{p\sigma} \ani{q\sigma} - \cre{q\sigma} \ani{p\sigma}}{\hH}}{\Psivar}
 \\
-&= \cP_{pq} \qty[ \sum_r \left( h_p^r \ \gamma_r^q - h_r^q \ \gamma_p^r \right) + \sum_{rst} \qty( v_{pt}^{rs} \Gamma_{rs}^{qt} - v_{rs}^{qt} \Gamma_{pt}^{rs} ) ]
+&= \cP_{pq} \qty[ \sum_r \left( h_p^r \ \gamma_r^q - h_r^q \ \gamma_p^r \right) + \sum_{rst} \qty( v_{pt}^{rs} \Gamma_{rs}^{qt} - v_{rs}^{qt} \Gamma_{pt}^{rs} ) ],
 \end{split}
 \end{equation}
 and
@@ -276,7 +276,7 @@ and
 & \phantom{\cP_{pq} \cP_{rs} \Bigg\{} + \sum_{uv} (v_{pr}^{uv} \Gamma_{uv}^{qs} + v_{uv}^{qs} \Gamma_{ps}^{uv})
 \\
 & \phantom{\cP_{pq} \cP_{rs} \Bigg\{} - \sum_{tu} (v_{pu}^{st} \Gamma_{rt}^{qu}+v_{pu}^{tr} \Gamma_{tr}^{qu}+v_{rt}^{qu}\Gamma_{pu}^{st} + v_{tr}^{qu}\Gamma_{pu}^{ts})]
-\Bigg\}
+\Bigg\},
 \end{split}
 \end{equation}
 where $\delta_{pq}$ is the Kronecker delta, $\cP_{pq} = 1 - (p \leftrightarrow q)$ is a permutation operator,
@@ -284,7 +284,7 @@ where $\delta_{pq}$ is the Kronecker delta, $\cP_{pq} = 1 - (p \leftrightarrow q
 \begin{gather}
 \gamma_p^q = \sum_{\sigma} \mel{\Psivar}{\hat{a}_{p \sigma}^{\dagger} \hat{a}_{q \sigma}^{}}{\Psivar},
 \\
-\Gamma_{pq}^{rs} = \sum_{\sigma \sigma'} \mel{\Psivar}{\cre{p\sigma} \cre{r\sigma'} \ani{s\sigma'} \ani{q\sigma}}{\Psivar},
+\Gamma_{pq}^{rs} = \sum_{\sigma \sigma'} \mel{\Psivar}{\cre{p\sigma} \cre{r\sigma'} \ani{s\sigma'} \ani{q\sigma}}{\Psivar}
 \end{gather}
 \end{subequations}
 are the elements of the one- and two-electron density matrices, and
@@ -294,7 +294,7 @@ are the elements of the one- and two-electron density matrices, and
 h_p^q = \int \MO{p}(\br) \, \hh(\br) \, \MO{q}(\br) d\br,
 \\
 \label{eq:two}
-v_{pq}^{rs} = \iint \MO{p}(\br_1) \MO{q}(\br_2) \frac{1}{\abs*{\br_1 - \br_2}} \MO{r}(\br_1) \MO{s}(\br_2) d\br_1 d\br_2.
+v_{pq}^{rs} = \iint \MO{p}(\br_1) \MO{q}(\br_2) \frac{1}{\abs*{\br_1 - \br_2}} \MO{r}(\br_1) \MO{s}(\br_2) d\br_1 d\br_2
 \end{gather}
 \end{subequations}
 are the one- and two-electron integrals, respectively.
@@ -307,13 +307,13 @@ It is also worth pointing out that, after each orbital rotation, the one- and tw
 %\begin{equation}
 %	\Evar = \sum_{pq} h_p^q \gamma_p^q + \frac{1}{2} \sum_{pqrs} v_{pq}^{rs} \Gamma_{pq}^{rs},
 %\end{equation}
-\titou{To enhance convergence, we here employ a variant of the Newton-Raphson method known as ``trust region''. \cite{Nocedal_1999}
-This popular variant defines a region where the quadratic approximation \eqref{eq:EvarTaylor} is an adequate representation of the objective energy function \eqref{eq:Evar_c_k} and it evolves during the optimization process in order to preserve the adequacy via a constraint on the step size $\norm{\bk^{(\ell+1)}} \leq \Delta^{(\ell)}$, where $\Delta^{(\ell)}$ is the trust radius at the $\ell$th microiteration.
-By introduction a Lagrange multiplier $\lambda$, one obtains $\bk^{(\ell+1)} = - (\bH^{(\ell)} + \lambda \bI)^{-1} \cdot \bg^{(\ell)}$.
+To enhance the convergence of the microiteration process, we employ a variant of the Newton-Raphson method known as ``trust region''. \cite{Nocedal_1999}
+This popular variant defines a region in which the quadratic approximation \eqref{eq:EvarTaylor} is an adequate representation of the objective energy function \eqref{eq:Evar_c_k}; this region evolves during the optimization process so as to remain adequate, via a constraint on the step size that prevents overstepping, \ie, $\norm{\bk^{(\ell+1)}} \leq \Delta^{(\ell)}$, where $\Delta^{(\ell)}$ is the trust radius at the $\ell$th microiteration.
+By introducing a Lagrange multiplier $\lambda$ to control the trust-region size, one obtains $\bk^{(\ell+1)} = - (\bH^{(\ell)} + \lambda \bI)^{-1} \cdot \bg^{(\ell)}$.
 The addition of the level shift $\lambda \geq 0$ removes the negative eigenvalues and ensures the positive definiteness of the shifted Hessian matrix, thereby reducing the step size.
-By choosing the right $\lambda$ the step size is constraint into a hypersphere of radius $\Delta^{(\ell)}$.
-In addition, the evolution of $\Delta^{(\ell)}$ during the optimization and the use of a condition to cancel a step ensure the convergence of the algorithm.
-More details can be found in Ref.~\onlinecite{Nocedal_1999}.}
+By choosing the right value of $\lambda$, one constrains the step within a hypersphere of radius $\Delta^{(\ell)}$; as $\lambda$ grows, the step evolves from the Newton direction at $\lambda = 0$ toward the steepest-descent direction.
+The evolution of the trust radius during the optimization, together with a condition that cancels any step for which the energy rises, ensures the convergence of the algorithm.
+More details can be found in Ref.~\onlinecite{Nocedal_1999}.
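As an illustration of this level-shifting construction, the following self-contained Python sketch computes the restricted step by bracketing and bisecting on $\lambda$. The bracketing strategy and tolerances are illustrative choices, not necessarily those of the actual implementation; the trust-radius update and the step-cancellation test are assumed to be handled by the surrounding microiteration loop.

    import numpy as np

    def trust_region_step(grad, hess, radius, tol=1e-10):
        """Level-shifted Newton step k = -(H + lambda*I)^(-1) g, ||k|| <= radius.

        A minimal sketch of the trust-region step described in the text,
        assuming a symmetric Hessian; illustrative, not production code.
        """
        n = len(grad)
        # Smallest shift making the shifted Hessian positive definite
        lam_low = max(0.0, -np.linalg.eigvalsh(hess)[0] + tol)
        step = -np.linalg.solve(hess + lam_low * np.eye(n), grad)
        if np.linalg.norm(step) <= radius:
            return step  # the (quasi-)Newton step already lies inside the region
        # ||k(lambda)|| decreases monotonically with lambda, so bracket a
        # shift that brings the step inside the hypersphere, then bisect
        lam_high = lam_low + 1.0
        while np.linalg.norm(np.linalg.solve(hess + lam_high * np.eye(n),
                                             grad)) > radius:
            lam_high = 2.0 * lam_high
        while lam_high - lam_low > tol:
            lam = 0.5 * (lam_low + lam_high)
            step = -np.linalg.solve(hess + lam * np.eye(n), grad)
            if np.linalg.norm(step) > radius:
                lam_low = lam   # still overstepping: increase the shift
            else:
                lam_high = lam  # inside the region: try a smaller shift
        return -np.linalg.solve(hess + lam_high * np.eye(n), grad)

After each such step, the surrounding loop compares the predicted and actual energy changes to decide whether to accept the step and how to update $\Delta^{(\ell)}$, as described above.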
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 \section{Results and discussion}