Anthony Scemama 2024-06-19 14:22:19 +02:00
parent 24f2b44dae
commit 0643fb7008

@ -190,7 +190,7 @@ In the algorithm proposed by Rendell\cite{rendell_1991}, for each given triplet
\subsection{Stochastic formulation}
We propose an algorithm influenced by the semi-stochastic approach introduced in Ref.~\citenum{garniron_2017}, originally developed for computing the Epstein-Nesbet second-order perturbation correction to the energy.
The perturbative triples correction is expressed as a sum of corrections, each indexed solely by virtual orbitals:
\begin{equation}
@ -212,7 +212,7 @@ P^{abc} = \frac{1}{\mathcal{N}} \frac{1}{\max \left(\epsilon_{\min}, \epsilon_a
where $\mathcal{N}$ normalizes the sum such that $\sum_{abc} P^{abc} = 1$, and $\epsilon_{\min}$ is an arbitrary minimal denominator to ensure that $P^{abc}$ does not diverge. In our calculations, we have set $\epsilon_{\min}$ to 0.2~a.u.
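As an illustration, a minimal Python sketch of this sampling distribution is given below; the array \texttt{eps\_virt} of virtual orbital energies, the restriction to triplets $a \le b \le c$, and the helper name \texttt{triples\_distribution} are choices made for this sketch rather than details of our implementation.
\begin{verbatim}
import numpy as np

def triples_distribution(eps_virt, eps_min=0.2):
    """Normalized distribution P^{abc} over triplets of virtual
    orbitals, with the denominator floored at eps_min (a.u.)."""
    n = len(eps_virt)
    # Enumerate triplets a <= b <= c; the exact index range is
    # fixed by the definition of E^{abc} (assumed here).
    triplets = [(a, b, c) for a in range(n)
                          for b in range(a, n)
                          for c in range(b, n)]
    w = np.array([1.0 / max(eps_min, eps_virt[a] + eps_virt[b] + eps_virt[c])
                  for a, b, c in triplets])
    return triplets, w / w.sum()  # ensures sum_abc P^{abc} = 1
\end{verbatim}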
The perturbative contribution is then evaluated as an average over $M$ samples:
\begin{equation}
E_{(T)} = \left\langle \frac{E^{abc}}{P^{abc}} \right \rangle_{P^{abc}} =
\lim_{M \to \infty} \sum_{abc} \frac{n^{abc}}{M} \frac{E^{abc}}{P^{abc}},
\end{equation}
where $n^{abc}$ is the number of times the triplet $(a,b,c)$ was drawn with probability $P^{abc}$.
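The estimator itself can then be sketched as follows, reusing the hypothetical \texttt{triples\_distribution} helper above; \texttt{compute\_E\_abc} stands in for the evaluation of a single contribution $E^{abc}$ and is not part of our code:
\begin{verbatim}
def stochastic_triples(eps_virt, compute_E_abc, n_samples, eps_min=0.2):
    """Monte Carlo estimate of E_(T): average of E^{abc} / P^{abc}
    over n_samples triplets drawn from P^{abc}."""
    rng = np.random.default_rng()
    triplets, prob = triples_distribution(eps_virt, eps_min)
    counts = rng.multinomial(n_samples, prob)  # counts[i] = n^{abc}
    total = 0.0
    for i in np.flatnonzero(counts):  # triplets actually drawn
        a, b, c = triplets[i]
        total += counts[i] * compute_E_abc(a, b, c) / prob[i]
    return total / n_samples
\end{verbatim}
Only the triplets that are actually drawn require the expensive evaluation of $E^{abc}$, which is where the stochastic formulation can save work compared to the deterministic sum.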
@ -383,7 +383,7 @@ Our methodology proves especially advantageous for scenarios requiring the
aggregation of numerous CCSD(T) energies, such as neural network training or
the exploration of potential energy surfaces.
In a recent article, Ceperley \textit{et al.} highlight the pivotal role of Quantum Monte
Carlo (QMC) in generating data for constructing potential energy surfaces.\cite{ceperley_2024}
The study suggests that stochastic noise inherent in QMC can facilitate machine
learning model training, demonstrating that models can benefit from numerous,
less precise data points. These findings are supported by an analysis of
@ -434,11 +434,11 @@ On the ARM architecture, we utilized the \textsc{ArmPL} library for BLAS operati
\begin{ruledtabular}
\begin{tabular}{lccccccc}
CPU & $N_{\text{cores}}$ & Shared L3 cache & $V$ & $F$ & Memory Bandwidth & Peak DP & Measured performance \\
& & (MB) & & (GHz) & (GB/s) & (GFlop/s) & (GFlop/s) \\
\hline
\textsc{EPYC} 7513 & $2 \times 32$ & $2\times 128$ & 4 & 2.6 & 409.6 & 2~662 & 1~576 \\
Xeon Gold 6130 & $2 \times 16$ & $2\times 22$ & 8 & 2.1 & 256.0 & 2~150 & 667 \\ % 239.891
ARM Q80 & $80$ & $32$ & 2 & 2.8 & 204.8 & 1~792 & 547 \\ % 292.492
\end{tabular}
\end{ruledtabular}
\caption{\label{tab:flops} Average performance of the code measured as the number of double precision (DP) floating-point operations per second (Flop/s) on different machines.}
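As a consistency check, the peak DP column of Table~\ref{tab:flops} can be reproduced from the other columns if $V$ is read as the SIMD width in double-precision elements and each core is assumed to carry two fused multiply-add (FMA) units, each FMA counting as two flops; these micro-architectural assumptions are ours, not data from the table:
\begin{verbatim}
def peak_dp_gflops(n_cores, freq_ghz, v_doubles, fma_units=2):
    """Peak DP GFlop/s = cores x GHz x SIMD doubles x FMA units
    x 2 flops per FMA (assumed micro-architecture)."""
    return n_cores * freq_ghz * v_doubles * fma_units * 2

assert round(peak_dp_gflops(2 * 32, 2.6, 4)) == 2662  # EPYC 7513
assert round(peak_dp_gflops(2 * 16, 2.1, 8)) == 2150  # Xeon Gold 6130
assert round(peak_dp_gflops(80, 2.8, 2)) == 1792      # ARM Q80
\end{verbatim}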