minor corrections on Toto part

Pierre-Francois Loos 2019-11-18 10:58:14 +01:00
parent 3d5cab0259
commit b300815680
@@ -318,8 +318,9 @@ It would surely stimulate further theoretical developments in excited-state methods.
 %%%%%%%%%%%%%%%%%
 %%% COMPUTERS %%%
 %%%%%%%%%%%%%%%%%
 \alert{For someone who has never worked with SCI methods, it might be surprising that one can compute near-FCI excitation energies for molecules as big as benzene. \cite{Chi18,Loo19c,Loo20}
-To keep on with Moore's ``Law'' in the early 2000's, the processor designers had no other choice than to propose multi-core chips to avoid an explosion of the energy requirements.
+This is mainly due to some specific implementation choices, as explained below.
+Indeed, to keep up with Moore's ``Law'' in the early 2000's, processor designers had no other choice than to propose multi-core chips to avoid an explosion of the energy requirements.
 Increasing the number of floating-point operations per second (flops) by doubling the number of CPU cores only doubles the energy consumption, while doubling the clock frequency multiplies it by a factor close to 8.
 This bifurcation in hardware design implied a \emph{change of paradigm}\cite{Sut05} in the implementation and design of computational algorithms: a large degree of parallelism is now required to benefit from a significant acceleration.
 Fifteen years later, the community has made a significant effort to redesign these methods with parallel-friendly algorithms.\cite{Val10,Cle10,Gar17b,Pen16,Kri13,Sce13}
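The ``factor close to 8'' in the hunk above follows from the standard CMOS dynamic-power model. A minimal sketch of the argument, assuming (as is conventional) that the supply voltage $V$ must grow roughly linearly with the clock frequency $f$:

% Dynamic power of a CMOS chip: P \propto C V^2 f, where C is the switched
% capacitance. With V \propto f, this gives P \propto f^3, hence
\begin{equation}
  \frac{P(2f)}{P(f)} \approx 2^3 = 8,
  \qquad\text{whereas}\qquad
  \frac{P_{\text{2 cores}}}{P_{\text{1 core}}} = 2.
\end{equation}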
@@ -331,7 +332,7 @@ the Hamiltonian matrix elements over arbitrary determinants.
 Then, massive parallelism can be harnessed to compute the second-order perturbative correction with semi-stochastic algorithms,\cite{Gar17b,Sha17} and to perform the sparse matrix multiplications required in Davidson's algorithm to find the eigenvectors associated with the lowest eigenvalues.
 Block-Davidson methods can require a large amount of memory, and the recent introduction of byte-addressable non-volatile memory as a new tier in the memory hierarchy\cite{Pen19} will enable SCI calculations on larger molecules.
 The next generation of supercomputers is going to generalize the presence of accelerators (graphics processing units, GPUs), leading to a new software crisis.
-Fortunately, some authors have already prepared this transition.\cite{Dep11,Kim18,Sny15,Ufi08,Kal17} }
+Fortunately, some authors have already prepared this transition.\cite{Dep11,Kim18,Sny15,Ufi08,Kal17}
 %%%%%%%%%%%%%%%%%%
 %%% CONCLUSION %%%
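For readers unfamiliar with the semi-stochastic PT2 evaluation cited in the second hunk (Refs. \cite{Gar17b,Sha17}): the sum over external determinants is split into a deterministic part gathering the largest terms and a Monte-Carlo estimate of the long tail. Below is a minimal Python sketch of that splitting only, not the published algorithm; the function name, the contrib array of precomputed per-determinant contributions, and all parameters are hypothetical.

```python
import numpy as np

def pt2_semistochastic(contrib, det_frac=0.1, n_samples=1_000, rng=None):
    """Estimate sum(contrib) semi-stochastically (toy sketch).

    contrib : per-determinant PT2 contributions e_a = |<a|H|Psi>|^2 / (E - E_a),
              assumed precomputed here purely for illustration.
    """
    rng = rng or np.random.default_rng()
    order = np.argsort(-np.abs(contrib))           # largest |e_a| first
    n_det = max(1, int(det_frac * contrib.size))
    det, sto = order[:n_det], order[n_det:]

    e_det = contrib[det].sum()                     # deterministic part
    if sto.size == 0:
        return e_det, 0.0

    w = np.abs(contrib[sto])
    p = w / w.sum()                                # importance sampling of the tail
    idx = rng.choice(sto.size, size=n_samples, p=p)
    est = contrib[sto][idx] / p[idx]               # unbiased: E[est] = sum of tail
    return e_det + est.mean(), est.std(ddof=1) / np.sqrt(n_samples)

# Fake contributions, just to exercise the estimator:
rng = np.random.default_rng(0)
contrib = -rng.exponential(1e-4, size=100_000)
e_pt2, err = pt2_semistochastic(contrib, rng=rng)
```

Increasing n_samples shrinks the statistical error bar, while increasing det_frac shifts work from the stochastic to the deterministic part; the exact trade-off is the crux of the published schemes.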
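The Davidson step itself is dominated by sparse matrix products against a small search space. A toy single-vector Davidson for the lowest eigenpair, assuming a symmetric, diagonally dominant H as is typical of SCI Hamiltonians (davidson_lowest and its parameters are illustrative, not Quantum Package code):

```python
import numpy as np
import scipy.sparse as sp

def davidson_lowest(H, n_iter=100, tol=1e-8, max_space=20):
    """Lowest eigenpair of a sparse symmetric matrix H (toy sketch)."""
    n = H.shape[0]
    diag = H.diagonal()
    v0 = np.zeros(n)
    v0[np.argmin(diag)] = 1.0                  # start on the lowest diagonal
    V = v0[:, None]                            # orthonormal search space
    for _ in range(n_iter):
        W = H @ V                              # sparse matrix products
        theta, s = np.linalg.eigh(V.T @ W)     # small projected problem
        theta, s = theta[0], s[:, 0]           # lowest Ritz pair
        x = V @ s
        r = W @ s - theta * x                  # residual vector
        if np.linalg.norm(r) < tol:
            break
        d = diag - theta
        t = r / np.where(np.abs(d) > 1e-8, d, 1e-8)  # Davidson preconditioner
        t -= V @ (V.T @ t)                     # orthogonalize against V
        norm_t = np.linalg.norm(t)
        if norm_t < 1e-12:
            break
        V = np.hstack([V, (t / norm_t)[:, None]])
        if V.shape[1] > max_space:             # restart to bound memory
            V = x[:, None]
    return theta, x

# A random sparse test Hamiltonian with a dominant diagonal:
A = sp.random(2000, 2000, density=1e-3, random_state=1)
H = 0.01 * (A + A.T) + sp.diags(np.arange(1.0, 2001.0))
E0, psi0 = davidson_lowest(H.tocsr())
```

In the block variant, V holds many trial vectors at once and dominates the memory footprint, which is why a byte-addressable non-volatile tier (e.g., backing V with a numpy.memmap array) is attractive for the larger calculations mentioned in the hunk above.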