- Implement the basic structure of the mpi lib,
  with specializations for arrays, basic types, and std::vector
- adapted the array class for the lazy mpi mechanism
- passes tests on arrays:
  - scatter, gather on array<long,2>, array<complex,2>, etc.
- broadcast
- split into several files for readability
- std::vector support is coded but not tested.
- generic mechanism implemented and tested (mpi_generic test)
- added several tests for the mpi lib.
- TODO: more tests, doc...
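  A minimal usage sketch of the above (the communicator type, the function names
  triqs::mpi::scatter/gather/broadcast and the lazy-assignment spelling are
  assumptions here, not the confirmed API):

      // hedged sketch, not the exact TRIQS API
      #include <mpi.h>
      // #include <triqs/arrays.hpp>, <triqs/mpi.hpp>   (assumed headers)
      using triqs::arrays::array;
      int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        triqs::mpi::communicator world;            // assumed name
        array<long, 2> A(8, 4);
        A() = world.rank();
        array<long, 2> B = triqs::mpi::scatter(A); // lazy: assignment triggers the MPI call
        array<long, 2> C = triqs::mpi::gather(B);  // gathered back on the root
        triqs::mpi::broadcast(C);                  // root's data sent to all nodes
        MPI_Finalize();
      }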
- For users, the only change is:
  H5::H5File in apps is to be replaced by triqs::h5::file, with the same API
  (see the before/after sketch at the end of this list).
- using only the C API, because:
  - it is cleaner, better documented, and has more examples.
  - it is the native hdf5 interface.
- it simplifies the installation, e.g. on Mac: hdf5 is
  usually installed without the C++ interface, which is optional
  (e.g. in EPD et al., and in brew by default).
  It also avoids the infamous mpi + hdf5_cpp bug, for which we have no clean solution.
- clean the notion of the parent of a group: not needed, given the better iterate function in the C LT API.
- modified doc: no need for the C++ bindings any more.
- modified cmake to avoid requiring the C++ bindings.
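  For example, the user-side change is just the following (the 'r' open-mode
  spelling for triqs::h5::file is an assumption; the rest of the API is unchanged):

      // before: the hdf5 C++ bindings
      H5::H5File f("data.h5", H5F_ACC_RDONLY);
      // after: the triqs wrapper over the hdf5 C API
      triqs::h5::file f("data.h5", 'r');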
- Use a new buffered_function to replace the complicated generator code from ALPS
  (sketch below).
- Clean the implementation of the random_generator
- update the documentation
- update to the new python wrapper (this could not be done with the previous
  version, because of the lack of a move constructor).
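  A sketch of the buffered_function idea (illustrative, not the exact TRIQS code):
  wrap a generator, precompute a buffer of values, and refill it when exhausted.

      #include <functional>
      #include <vector>

      template <typename T> class buffered_function {
        std::function<T()> f;   // the underlying generator
        std::vector<T> buffer;
        std::size_t index;      // next value to hand out
        public:
        buffered_function(std::function<T()> f, std::size_t size = 1000)
           : f(std::move(f)), buffer(size), index(size) {}
        T operator()() {
          if (index == buffer.size()) {   // buffer exhausted: refill in one pass
            for (auto &x : buffer) x = f();
            index = 0;
          }
          return buffer[index++];
        }
      };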
- Make more general constructors for the gf:
  gf(mesh, target_shape_t); see the example after this list.
- remove the old make_gf for the basic gf.
- removed the non-generic 2-variable gf.
- clean evaluator
- add tensor_valued
- add a simple vertex test.
- clean specializations
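  A construction example under the new scheme (the mesh-constructor arguments are
  assumptions based on the description above; tensor_valued<4> is the vertex-like case):

      using namespace triqs::gfs;
      double beta = 10;
      // matrix-valued gf on a Matsubara mesh, 2x2 target
      auto g  = gf<imfreq>{ {beta, Fermion, 1024}, {2, 2} };
      // tensor-valued gf, e.g. for a simple vertex test
      auto g4 = gf<imfreq, tensor_valued<4>>{ {beta, Fermion, 64}, {2, 2, 2, 2} };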
- Fix bug introduced in 1906dc3:
  the gf was not resized in the new version of operator= (see sketch below).
- Fix make_singularity in gf.hpp
- clean the resize in operator=
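  The pattern being fixed, roughly (an illustrative sketch, not the actual code;
  the assign helper name is hypothetical):

      // operator= must first resize the lhs to the rhs mesh / target shape,
      // otherwise assignment into a default-constructed gf writes out of bounds.
      template <typename RHS> gf &operator=(RHS const &rhs) {
        resize(rhs.mesh(), rhs.target_shape()); // this call was forgotten in 1906dc3
        assign_impl(*this, rhs);                // hypothetical assign helper
        return *this;
      }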
- update h5 read/write for block gf
- changed the general trait a bit, to save *all* the gf.
- allows a more general specialization, and then a correct one for blocks.
- NOT FINISHED: need to save the block indices for python.
  How to reread?
  Currently it reads the block names and reconstitutes the mesh from them.
  Is that sufficient? (See the read/write sketch below.)
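  Read/write sketch, given a block gf G (h5_write/h5_read are the generic entry
  points; the 'w'/'r' open-mode spellings are assumptions):

      triqs::h5::file f("bgf.h5", 'w');
      h5_write(f, "G", G);     // one subgroup per block, keyed by block name
      // ... later, the block names are read back and the mesh reconstituted:
      triqs::h5::file f2("bgf.h5", 'r');
      h5_read(f2, "G", G);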
- clean block constructors
- block constructors are the simplest possible: an int for the number of blocks;
  the rest is in free factories (see the sketch below).
- fixed the generic constructor from GfType for the regular type:
  it is enabled iff GfType models ImmutableGreenFunction.
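  Sketch of the simplified construction (the factory name make_block_gf and its
  argument order are assumptions):

      // simplest constructor: just the number of blocks
      auto B1 = block_gf<imfreq>{3};
      // everything else via free factories, e.g. named blocks from a given gf g
      auto B2 = make_block_gf({"up", "down"}, {g, g});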
- multivar: fix the linear index in C order, and the h5 format
  - the linear index is now correctly flattened in C (row-major) mode
    (it was in Fortran mode), using a simple reverse of the tuple in the folding;
    see the sketch below.
  - fix the h5 read/write of the multivar functions,
    in order to write an array of dimension (# variables + dim of target),
    i.e. without flattening the indices of the meshes.
    This is easier for later data analysis, e.g. in Python.
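  For reference, the C-order (row-major) flattening that the reversed fold computes
  (a self-contained illustration, not the library code):

      #include <array>
      #include <cstddef>

      // idx = ((i0*n1 + i1)*n2 + i2)... : the last index varies fastest.
      template <std::size_t R>
      std::size_t linear_index_c(std::array<std::size_t, R> idx,
                                 std::array<std::size_t, R> len) {
        std::size_t flat = 0;
        for (std::size_t k = 0; k < R; ++k) flat = flat * len[k] + idx[k];
        return flat;
      }
      // e.g. idx {1,2} in a 3x4 array -> 1*4 + 2 = 6
      // (the old Fortran-order fold would give 1 + 3*2 = 7)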
- merge matrix/tensor_valued. improve factories
- matrix_valued is now = tensor_valued<2>
  (this simplifies the generic code for h5); see the sketch below.
- factories_one_var -> factories: this is the generic case ...
  only a few specializations remain, and the code is simpler.
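  Conceptually (a sketch of the merge, not the actual declarations):

      // one generic tensor target; the matrix case is just rank 2,
      // so the h5 code only needs the tensor_valued<R> path.
      template <int R> struct tensor_valued { static constexpr int rank = R; };
      using matrix_valued = tensor_valued<2>;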
- clef expression call with rvalue for *this
- generalize matrix_proxy to tensor and clean
- clean exception catch in tests
- exception catching is needed in tests,
  because the silly OS X does not print anything, just "exception occurred".
  Very convenient for the developer...
- BUT one MUST add return 1, or make test will *pass*!!
- --> systematically replace the catch by a macro TRIQS_CATCH_AND_ABORT
  which returns a non-zero error code (sketch below).
- exception: curry_and_fourier, which does not work at this stage
  (incompatible meshes).
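  A sketch of such a macro (illustrative):

      #include <iostream>
      #include <stdexcept>

      // catch, print, and *return a non-zero code* so that make test fails
      #define TRIQS_CATCH_AND_ABORT                            \
        catch (std::exception const &e) {                      \
          std::cerr << "exception: " << e.what() << std::endl; \
          return 1; /* without this, the test would pass! */   \
        }

      int main() {
        try {
          // ... test body ...
        }
        TRIQS_CATCH_AND_ABORT;
      }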
- gf: clean the draft of the two-time gf
- comment out the python interface for the moment.
- rm useless tests
- a thin layer, using boost::mpi a bit (mostly for the communicator),
  along the lines discussed in #12.
- implemented reduce, allreduce, bcast for arrays, simple scalars,
  and any custom type that supports boost serialization.
- Custom types: the operations are done recursively on the members.
  No change is needed in the class to use these mpi routines, as long as
  a serialize function is defined (see the sketch at the end of this list).
- For arrays of basic types (int, double, ...), a direct call to the MPI C API,
  which also works for views (as long as they are contiguous).
- For arrays of more complex types, we revert to boost::mpi.
- Added a simple test.
- Work still in progress :
- missing a simple scatter/gather for the arrays
- need more tests & API thinking.
- dispatch array code to array lib
- reduce is "sum" only; do we need more?
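  Sketch of the custom-type case (the triqs::mpi::bcast spelling is an
  assumption; the serialize member is standard boost serialization):

      #include <boost/serialization/serialization.hpp>

      struct parameters {
        double beta;
        int n_iw;
        template <class Archive>
        void serialize(Archive &ar, const unsigned int /*version*/) {
          ar & beta & n_iw;   // members are visited recursively
        }
      };

      // usage sketch:
      //   parameters p{10.0, 1024};
      //   triqs::mpi::bcast(p, world);   // falls back to boost::mpi internally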