Toto
parent 05f202a75e
commit 80c8d33f2d
@@ -332,19 +332,21 @@ It would surely stimulate further theoretical developments in excited-state meth

%%% COMPUTERS %%%
%%%%%%%%%%%%%%%%%
\alert{
To keep up with Moore's ``Law'' in the early 2000s, processor designers had no choice but to switch to multi-core chips in order to avoid an explosion of the energy requirements.
Increasing the number of floating-point operations per second (flop/s) by doubling the number of CPU cores only doubles the energy consumption, whereas doubling the clock frequency multiplies it by a factor close to 8.
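The factor of 8 can be rationalized with a back-of-the-envelope sketch, assuming the usual dynamic-power model for CMOS circuits (with $C$ the switched capacitance, $V$ the supply voltage, and $f$ the clock frequency) and a supply voltage that must grow roughly linearly with the frequency:
\begin{equation*}
  P_{\text{dyn}} \approx C \, V^2 f , \qquad V \propto f
  \;\Longrightarrow\; P_{\text{dyn}} \propto f^3 ,
\end{equation*}
so doubling $f$ multiplies the dissipated power by $2^3 = 8$, whereas doubling the number of cores at fixed frequency merely doubles it.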
This bifurcation in hardware design implied a \emph{change of paradigm}\cite{Sut05} in the design and implementation of computational algorithms: a large degree of parallelism is now required to obtain a significant acceleration.
Fifteen years later, the community has made a significant effort to redesign the methods with parallel-friendly algorithms.\cite{Val10,Cle10,Gar17b,Pen16,Kri13,Sce13}
In particular, the change of paradigm that made it possible to reach FCI accuracy with SCI methods came from the use of determinant-driven algorithms, which had long been considered inefficient with respect to integral-driven algorithms.
The first important element making these algorithms efficient is the introduction of new bit manipulation instructions (BMI) in modern processors, which enable an extremely fast evaluation of the Slater-Condon rules\cite{Sce13b} for the direct calculation of the Hamiltonian matrix elements between arbitrary determinants.
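As an illustration of why BMI matters here (a minimal sketch with illustrative names and orbital occupations, not the actual code of Ref.~\cite{Sce13b}): if each determinant is stored as a bit string in which set bits mark occupied spin-orbitals, the degree of excitation between two determinants is half the population count of the exclusive OR of their bit strings, and the population count is a single hardware instruction on modern processors.
\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

/* Degree of excitation between two determinants stored as 64-bit
   strings (one bit per spin-orbital, set = occupied).  The XOR keeps
   only the orbitals that differ; __builtin_popcountll compiles to the
   POPCNT instruction on modern x86-64 processors. */
static int excitation_degree(uint64_t det_i, uint64_t det_j)
{
    return __builtin_popcountll(det_i ^ det_j) / 2;
}

int main(void)
{
    uint64_t d1 = 0x0F; /* orbitals 0-3 occupied           */
    uint64_t d2 = 0x1D; /* orbital 1 replaced by orbital 4 */
    printf("%d\n", excitation_degree(d1, d2)); /* prints 1 */
    return 0;
}
\end{verbatim}
The degree (0, 1, or 2; higher degrees give a vanishing matrix element) then directly selects which Slater-Condon formula applies.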
Then, massive parallelism can be harnessed to compute the second-order perturbative correction with semi-stochastic algorithms,\cite{Gar17b,Sha17} and to perform the sparse matrix-vector multiplications required by Davidson's algorithm to find the eigenvectors associated with the lowest eigenvalues.
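For concreteness, a minimal shared-memory sketch (with illustrative function and variable names, not the implementation of Refs.~\cite{Gar17b,Sha17}) of the sparse matrix-vector product that dominates each Davidson iteration, with the nonzero Hamiltonian elements stored in compressed sparse row (CSR) format and the loop over determinants distributed with OpenMP:
\begin{verbatim}
#include <stddef.h>

/* y = H x with H in CSR format: row_ptr has n+1 entries, and
   col_idx/val hold the nonzero elements of row i.  Rows are
   independent, so the outer loop parallelizes with one pragma. */
void h_matvec(size_t n, const size_t *row_ptr, const size_t *col_idx,
              const double *val, const double *x, double *y)
{
    #pragma omp parallel for schedule(dynamic, 64)
    for (size_t i = 0; i < n; i++) {
        double s = 0.0;
        for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            s += val[k] * x[col_idx[k]];
        y[i] = s;
    }
}
\end{verbatim}
The dynamic schedule compensates for the very uneven number of nonzero elements per row, i.e., per determinant.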
Storing
%A major drawback of determinant-driven algorithms is that they make random accesses to the electron repulsion integrals (ERI) expressed in the basis of MOs.
%Therefore, to make the implementation efficient it is desirable to have all the ERI in memory, which limits the applicability of the method.
The next generation of supercomputers will generalize the presence of accelerators (graphics processing units, GPUs), leading to a new software crisis.
Fortunately, some authors have already prepared this transition.\cite{Dep11,Kim18,Sny15,Ufi08,Kal17}
}

%%%%%%%%%%%%%%%%%%
%%% CONCLUSION %%%