saving work

This commit is contained in:
Pierre-Francois Loos 2021-07-20 10:07:15 +02:00
parent e419811229
commit 409cfbc1cd


@@ -207,7 +207,7 @@ where $\hH$ is the (non-relativistic) electronic Hamiltonian,
\end{equation}
is the variational wave function, $\cI_k$ is the set of internal determinants $\ket*{I}$ and $\cA_k$ is the set of external determinants (or perturbers) $\ket*{\alpha}$ which do not belong to the variational space but are linked to it via a nonzero matrix element, \ie, $\mel*{\Psivar^{(k)}}{\hH}{\alpha} \neq 0$.
The sets $\cI_k$ and $\cA_k$ define, at the $k$th iteration, the internal and external spaces, respectively.
In the selection step, the perturbers corresponding to the largest $\abs*{e_{\alpha}^{(k)}}$ values are then added to the variational space at iteration $k+1$.
In our implementation, the size of the variational space is roughly doubled at each iteration.
Hereafter, we label these iterations over $k$, which consist in enlarging the variational space, as \textit{macroiterations}.
In practice, $\Evar^{(k)}$ is computed by diagonalizing the $\Ndet^{(k)} \times \Ndet^{(k)}$ CI matrix with elements $\mel{I}{\hH}{J}$ via Davidson's algorithm. \cite{Davidson_1975}
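To make the selection procedure more concrete, we provide below a purely illustrative Python sketch of the CIPSI macroiterations; the helper routines \texttt{davidson}, \texttt{external\_space}, and \texttt{pt2\_contribution} are hypothetical placeholders and do not correspond to our actual implementation.
\begin{verbatim}
# Illustrative sketch of CIPSI macroiterations (not our actual code).
# davidson, external_space and pt2_contribution are hypothetical helpers.
def cipsi_macroiterations(initial_dets, n_macro):
    internal = list(initial_dets)                 # internal space I_k
    for k in range(n_macro):
        e_var, coeffs = davidson(internal)        # E_var^(k) via Davidson
        perturbers = external_space(internal)     # external space A_k
        e_alpha = {a: pt2_contribution(a, internal, coeffs, e_var)
                   for a in perturbers}
        # selection step: the perturbers with the largest |e_alpha|
        # enter the variational space, roughly doubling its size
        ranked = sorted(perturbers, key=lambda a: -abs(e_alpha[a]))
        internal += ranked[:len(internal)]
    return e_var, internal
\end{verbatim}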
@@ -219,7 +219,8 @@ Orbital optimization techniques at the SCI level are theoretically straightforwa
Here, we detail our orbital optimization procedure within the CIPSI algorithm and we assume that the variational wave function is normalized, \ie, $\braket*{\Psivar}{\Psivar} = 1$.
As stated in Sec.~\ref{sec:intro}, $\Evar$ depends on both the CI coefficients $\{ c_I \}_{1 \le I \le \Ndet}$ [see Eq.~\eqref{eq:Psivar}] but also on the orbital rotation parameters $\{\kappa_{pq}\}_{1 \le p,q \le \Norb}$.
Here, we have chosen to optimize separately the CI and orbital coefficients by alternately diagonalizing the CI matrix after each selection step and then rotating the orbitals until the variational energy for a given number of determinants is minimal.
To do so, we conveniently rewrite the variational energy as
\begin{equation}
\label{eq:Evar_c_k}
\Evar(\bc,\bk) = \mel{\Psivar}{e^{\hk} \hH e^{-\hk}}{\Psivar},
@@ -289,8 +290,10 @@ where $\delta_{pq}$ is the Kronecker delta, $\cP_{pq} = 1 - (p \leftrightarrow q
are the elements of the one- and two-electron density matrices, and
\begin{subequations}
\begin{gather}
\label{eq:one}
h_p^q = \int \MO{p}(\br) \, \hh(\br) \, \MO{q}(\br) d\br,
\\
\label{eq:two}
v_{pq}^{rs} = \iint \MO{p}(\br_1) \MO{q}(\br_2) \frac{1}{\abs*{\br_1 - \br_2}} \MO{r}(\br_1) \MO{s}(\br_2) d\br_1 d\br_2.
\end{gather}
\end{subequations}
@@ -299,17 +302,18 @@ are the one- and two-electron integrals, respectively.
Because the size of the CI space is much larger than the orbital space, for each macroiteration, we perform multiple \textit{microiterations}, which consist in iteratively minimizing the variational energy \eqref{eq:Evar_c_k} with respect to the $\Norb(\Norb-1)/2$ independent orbital rotation parameters.
Microiterations are stopped when a stationary point is found, \ie, $\norm{\bg}_\infty < \tau$, where $\tau$ is a user-defined threshold which has been set to $10^{-3}$ a.u.~in the present study, and a new CIPSI selection step is performed.
Note that tight convergence is not critical here as a new set of microiterations is performed at each macroiteration and a new production CIPSI run is performed from scratch using the final set of orbitals.
It is also worth pointing out that, after each orbital rotation, the one- and two-electron integrals defined in Eqs.~\eqref{eq:one} and \eqref{eq:two} have to be updated for the next iteration.
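As a purely illustrative example, the following sketch outlines one possible structure for these microiterations, including the integral update; the routines \texttt{orbital\_gradient}, \texttt{orbital\_hessian}, \texttt{trust\_region\_step} (sketched further below), and \texttt{unpack\_antisym} are hypothetical placeholders, not our actual code.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Illustrative microiteration loop; orbital_gradient, orbital_hessian,
# trust_region_step and unpack_antisym are hypothetical helpers.
def microiterations(h, v, dm1, dm2, tau=1e-3, delta=1.0):
    while True:
        g = orbital_gradient(h, v, dm1, dm2)
        if np.max(np.abs(g)) < tau:        # ||g||_inf < tau: stationary point
            return h, v
        hess = orbital_hessian(h, v, dm1, dm2)
        k = trust_region_step(hess, g, delta)
        u = expm(-unpack_antisym(k))       # orbital rotation U = exp(-kappa)
        h = u.T @ h @ u                    # update one-electron integrals
        v = np.einsum('pi,qj,pqrs,rk,sl->ijkl',
                      u, u, v, u, u)       # update two-electron integrals
\end{verbatim}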
%\begin{equation}
% \Evar = \sum_{pq} h_p^q \gamma_p^q + \frac{1}{2} \sum_{pqrs} v_{pq}^{rs} \Gamma_{pq}^{rs},
%\end{equation}
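For illustration, the variational energy can be assembled from the integrals and density matrices defined above by two tensor contractions, $\Evar = \sum_{pq} h_p^q \gamma_p^q + \frac{1}{2} \sum_{pqrs} v_{pq}^{rs} \Gamma_{pq}^{rs}$; the NumPy sketch below assumes arrays \texttt{h}, \texttt{v}, \texttt{dm1}, and \texttt{dm2} storing $h_p^q$, $v_{pq}^{rs}$, $\gamma_p^q$, and $\Gamma_{pq}^{rs}$ with matching index conventions (the array names are illustrative).
\begin{verbatim}
import numpy as np

# Minimal sketch: assemble E_var from the integrals and density
# matrices defined above (array names are illustrative).
def variational_energy(h, v, dm1, dm2):
    e_one = np.einsum('pq,pq->', h, dm1)            # sum_pq h_p^q gamma_p^q
    e_two = 0.5 * np.einsum('pqrs,pqrs->', v, dm2)  # 1/2 sum_pqrs v Gamma
    return e_one + e_two
\end{verbatim}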
\titou{To enhance convergence, we here employ a variant of the Newton-Raphson method known as ``trust region''. \cite{Nocedal_1999}
This popular variant defines a region where the quadratic approximation \eqref{eq:EvarTaylor} is an adequate representation of the objective energy function \eqref{eq:Evar_c_k}, and this region evolves during the optimization process in order to preserve this adequacy via a constraint on the step size $\norm{\bk^{(\ell+1)}} \leq \Delta^{(\ell)}$, where $\Delta^{(\ell)}$ is the trust radius at the $\ell$th microiteration.
By introducing a Lagrange multiplier $\lambda$, one obtains $\bk^{(\ell+1)} = - (\bH^{(\ell)} + \lambda \bI)^{-1} \cdot \bg^{(\ell)}$.
The addition of the level shift $\lambda \geq 0$ removes the negative eigenvalues and ensures the positive definiteness of the Hessian matrix, while reducing the step size.
By choosing the appropriate $\lambda$, the step is constrained within a hypersphere of radius $\Delta^{(\ell)}$.
In addition, the evolution of $\Delta^{(\ell)}$ during the optimization and the use of a condition to cancel a step ensure the convergence of the algorithm.
More details can be found in Ref.~\onlinecite{Nocedal_1999}.}
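As a minimal illustration of this level-shifted Newton step, the sketch below raises $\lambda$ until the step fits within the trust radius; the initial eigenvalue shift and the geometric update of $\lambda$ are simple choices made here for illustration only, not necessarily those of our implementation.
\begin{verbatim}
import numpy as np

# Illustrative trust-region step: k = -(H + lambda*I)^(-1) g with
# ||k|| <= delta; lambda is raised until the constraint is met.
def trust_region_step(hess, g, delta):
    n = len(g)
    lam = 0.0
    e_min = np.linalg.eigvalsh(hess)[0]   # most negative eigenvalue
    if e_min <= 0.0:
        lam = -e_min + 1e-6               # make H + lam*I positive definite
    k = -np.linalg.solve(hess + lam * np.eye(n), g)
    while np.linalg.norm(k) > delta:      # larger lam => smaller step
        lam = max(2.0 * lam, 1e-4)
        k = -np.linalg.solve(hess + lam * np.eye(n), g)
    return k
\end{verbatim}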
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Results and discussion}