From b300815680ca9cc79328127e9413c52c3aba353f Mon Sep 17 00:00:00 2001
From: Pierre-Francois Loos
Date: Mon, 18 Nov 2019 10:58:14 +0100
Subject: [PATCH] minor corrections on Toto part

---
 Manuscript/ExPerspective.tex | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/Manuscript/ExPerspective.tex b/Manuscript/ExPerspective.tex
index 15b073a..24cc3f7 100644
--- a/Manuscript/ExPerspective.tex
+++ b/Manuscript/ExPerspective.tex
@@ -318,8 +318,9 @@ It would surely stimulate further theoretical developments in excited-state meth
 %%%%%%%%%%%%%%%%%
 %%% COMPUTERS %%%
 %%%%%%%%%%%%%%%%%
-\alert{
-To keep on with Moore's ``Law'' in the early 2000's, the processor designers had no other choice than to propose multi-core chips to avoid an explosion of the energy requirements.
+For someone who has never worked with SCI methods, it might be surprising that one can compute near-FCI excitation energies for molecules as large as benzene.\cite{Chi18,Loo19c,Loo20}
+This is mainly due to specific implementation choices, as explained below.
+Indeed, to keep up with Moore's ``Law'' in the early 2000s, processor designers had no choice but to turn to multi-core chips to avoid an explosion of the energy requirements.
 Increasing the number of floating-point operations per second (flop/s) by doubling the number of CPU cores only doubles the energy consumption, whereas doubling the clock frequency multiplies it by a factor close to 8, since the dissipated power grows roughly as the cube of the clock frequency.
 This bifurcation in hardware design forced a \emph{change of paradigm}\cite{Sut05} in the design and implementation of computational algorithms. A large degree of parallelism is now required to benefit from a significant acceleration.
 Fifteen years later, the community has made a considerable effort to redesign its methods around parallel-friendly algorithms.\cite{Val10,Cle10,Gar17b,Pen16,Kri13,Sce13}
@@ -331,7 +332,7 @@ the Hamiltonian matrix elements over arbitrary determinants.
 Then massive parallelism can be harnessed to compute the second-order perturbative correction with semi-stochastic algorithms,\cite{Gar17b,Sha17} and to perform the sparse matrix multiplications required in Davidson's algorithm to find the eigenvectors associated with the lowest eigenvalues.
 Block-Davidson methods can require a large amount of memory, and the recent introduction of byte-addressable non-volatile memory as a new tier in the memory hierarchy\cite{Pen19} will enable SCI calculations on larger molecules.
 The next generation of supercomputers will generalize the use of accelerators (graphics processing units, GPUs), leading to a new software crisis.
-Fortunately, some authors have already prepared this transition.\cite{Dep11,Kim18,Sny15,Ufi08,Kal17} }
+Fortunately, some authors have already prepared this transition.\cite{Dep11,Kim18,Sny15,Ufi08,Kal17}

 %%%%%%%%%%%%%%%%%%
 %%% CONCLUSION %%%
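
A note on the ``factor close to 8'' quoted in the first hunk: a minimal worked version of the standard argument, assuming the usual CMOS dynamic-power model (the symbols $P$, $\alpha$, $C$, $V$, and $f$ do not appear in the manuscript and are introduced here only for illustration):

% Sketch of the energy-scaling argument behind ``a factor close to 8''.
% P: dynamic power, alpha: activity factor, C: switched capacitance,
% V: supply voltage, f: clock frequency.
\begin{equation}
  P \approx \alpha C V^2 f , \qquad V \propto f \quad\Longrightarrow\quad P \propto f^3 ,
\end{equation}

so that doubling $f$ multiplies $P$ by $2^3 = 8$, while doubling the number of cores at fixed $f$ only doubles $P$.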
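
For the Davidson step mentioned in the second hunk, a minimal sketch of a generic, textbook Davidson iteration for the lowest eigenvalue of a large sparse symmetric matrix, in which the sparse matrix-vector products are the step that parallelizes well. This is an illustration in Python/SciPy, not the authors' SCI implementation; the function name, its parameters, and the toy matrix are all invented for the example.

import numpy as np
import scipy.sparse as sp

def davidson_lowest(H, n_iter=50, tol=1e-8, max_space=20):
    """Lowest eigenpair of a sparse symmetric matrix H (hypothetical helper)."""
    n = H.shape[0]
    diag = H.diagonal()                       # diagonal, reused as preconditioner
    b = np.zeros(n)
    b[np.argmin(diag)] = 1.0                  # start on the smallest diagonal element
    V = b.reshape(n, 1)                       # orthonormal subspace basis
    for _ in range(n_iter):
        W = H @ V                             # sparse matvecs: the parallel hot spot
        S = V.T @ W                           # small projected (Rayleigh) matrix
        theta, s = np.linalg.eigh(S)
        theta0, s0 = theta[0], s[:, 0]        # lowest Ritz pair
        x = V @ s0                            # Ritz vector in the full space
        r = W @ s0 - theta0 * x               # residual
        if np.linalg.norm(r) < tol:
            break
        denom = theta0 - diag                 # diagonal (Jacobi) preconditioner
        denom[np.abs(denom) < 1e-12] = 1e-12  # avoid division by zero
        t = r / denom
        t -= V @ (V.T @ t)                    # orthogonalize against the subspace
        t /= np.linalg.norm(t)
        if V.shape[1] >= max_space:           # collapse: restart from the Ritz vector
            V = x.reshape(n, 1)
        else:
            V = np.hstack([V, t.reshape(n, 1)])
    return theta0, x

# Toy usage on a random sparse symmetric matrix with a spread-out diagonal.
A = sp.random(1000, 1000, density=0.01, random_state=0)
H = ((A + A.T) / 2 + sp.diags(np.arange(1000, dtype=float))).tocsr()
val, vec = davidson_lowest(H)

A block variant carries several Ritz vectors per iteration to converge several eigenvalues at once, which is what makes the memory footprint an issue, as the patch notes.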