mirror of https://github.com/triqs/dft_tools synced 2025-01-12 22:18:23 +01:00

Work on doc

This commit is contained in:
Olivier Parcollet 2013-08-30 11:07:39 +02:00
parent 5a12b7eeb5
commit 741829909f
74 changed files with 162 additions and 4260 deletions

View File

@ -1,57 +0,0 @@
.. highlight:: c
Assignment
=========================
The `value classes` and the `view classes` have a quite general assignment operator.
We will illustrate it on the `array` class; it is the same for `matrix` and `vector`.
* **Syntax** : the syntax is the same in both cases::
template<typename RHS> array & operator=(const RHS & X);
template<typename RHS> array_view & operator=(const RHS & X);
* **What can be RHS ?**
RHS can be ... anything that models the :ref:`ImmutableCuboidArray` concept !
e.g. : array, array_view, matrix, matrix_view,
but also formal expressions (e.g. A+B), or any custom object of your choice.
* **Behaviour**
================= ================================================================ ======================================================================
Topic             value class : array, matrix, vector                             view : array_view, matrix_view, vector_view
================= ================================================================ ======================================================================
RHS ...           - RHS models the concept :ref:`ImmutableCuboidArray`.           - RHS models :ref:`ImmutableCuboidArray` [or less ? : RHS can be
                                                                                    evaluated in the domain_type::value_type, no domain needed.].
Effect            - array is resized to have a domain X.domain()                  - View's domain must match X's domain or behaviour is undefined.
                  - array is filled with the evaluation of X.                     - view is filled with the evaluation of X.
================= ================================================================ ======================================================================
* By evaluation of X, we mean :
- a copy if X is an array.
- computing the value if X is an expression.
Compound operators (+=, -=, *=, /=)
-------------------------------------------------
* **Syntax**
The syntax is natural ::
template<typename RHS> array & operator += (const RHS & X);
template<typename RHS> array & operator -= (const RHS & X);
template<typename RHS> array & operator *= (const Scalar & S);
template<typename RHS> array & operator /= (const Scalar & S);
* **What can be RHS ?**
Same as for the = operator.
* **Behaviour**
Similar to the = operator, with the obvious changes.

View File

@ -18,17 +18,14 @@ i.e. whether one can or not modify a(i,j).
* An `Immutable` array is just a pure function on the domain of definition.
a(i,j) returns an int, or an int const &, that cannot be modified (hence immutable).
* A `Mutable` array is, on the other hand, a piece of memory, addressed by the indices,
which can be modified. a(i,j) can return an int &.
The formal definition is given below
.. note::
* A `Mutable` array is an Immutable array, which can also be modified. Non const objects return a reference,
e.g. a(i,j) can return an int &. Typically this is a piece of memory, with an integer coordinate system on it.
The tag (made by derivation) is used to quickly recognize that an object
models a given concept, due to the current lack of concept support in C++.
It is used e.g. to detect to which algebra (Array or Matrix/Vector) the object belongs...
The main point here is that `Immutable` array is a much more general notion :
a formal expression made of array (E.g. A + 2*B) models this concept, but not the `Mutable` one.
Most algorithms only use the `Immutable` notion, when they are pure (mathematical) functions
that return something depending on the value of an object, without side effects.
.. _ImmutableCuboidArray:
@ -37,9 +34,9 @@ ImmutableCuboidArray
* **Purpose** :
The most abstract definition of something that behaves like an immutable array.
The most abstract definition of something that behaves like an immutable array on a cuboid domain.
* it has a domain (hence a rank).
* it has a cuboid domain (hence a rank).
* it can be evaluated on any value of the indices in the domain
* NB : It does not need to be stored in memory. A formal expression, e.g., models this concept.
@ -220,7 +217,7 @@ Example :
auto d = immutable_diagonal_matrix_view<int>{a};
std::cout << "domain = " << d.domain()<< std::endl;
std::cout << "d = "<< d << std::endl;
std::cout << "2*d = "<< matrix<int>(2*d) << std::endl;
std::cout << "2*d = "<< make_matrix(2*d) << std::endl;
std::cout << "d*d = "<< matrix<int>(d*d) << std::endl;
}

View File

@ -92,6 +92,10 @@ Example
}
.. toctree::
:hidden:
range_ell
.. highlight:: c

View File

@ -52,7 +52,7 @@ NB: Rank is only present for array, since matrix have rank 2 and vector rank 1.
:hidden:
template_parameters
Member types
--------------------------------------
@ -108,7 +108,7 @@ Member functions
reg_constructors
reg_assign
comp_ops
compound_ops
call
resize
STL
@ -132,4 +132,4 @@ Non-member functions
:hidden:
stream
swap

View File

@ -38,6 +38,12 @@ where triqs::ull_t is the type defined by :
Views offer a strong guarantee of the existence of the data, like a std::shared_ptr.
Cf :ref:`arr_view_memory` for details.
.. toctree::
:hidden:
memory
Template parameters
----------------------------

View File

@ -1,39 +0,0 @@
The central concept : HasImmutableArrayInterface
=======================================================
The central concept used by the library (both in the colloquial and in the technical sense of the word)
is the notion of **Immutable Array Interface**.
This is just a formal definition of what an object has to implement ("which concept it has to model")
to look like an array.
Basically, the answer is simple: an array is a (discrete) map from a domain of indices into an algebra
(typically R or C). Therefore it has to define :
* a domain of indices, which can be enumerated, e.g. cartesian products of integer ranges.
* an evaluation method [] so that a[ I ] has a `value_type` type for any index I in the domain.
The precise concept is defined at :ref:`HasImmutableArrayInterface`.
Examples :
* a matrix has a domain of range(0,n1) x range(0,n2).
* a general array has a domain of dimension rank and values in R, C, other arrays...
* a triangular matrix has a more complex domain...
* any algebraic expression of arrays or matrices is like this (and those are certainly NOT mutable objects !).
* What is then a (partial) view/slice of an array ?
Simply another map on the same data.
* Do you want to see a 4-index array as a matrix of double indices ?
Simply change the map...
* Why is this concept useful ?
Because if you program another object of your own, which models it,
it is guaranteed to interact properly with arrays....

View File

@ -13,6 +13,7 @@ in a way suitable for studying/reading/maintening...
:numbered:
concepts
strategy
storage
indexmaps
isp
@ -24,4 +25,6 @@ in a way suitable for studying/reading/maintening...
hdf5
reinterpretation
python
cuboid_formula
slicing

View File

@ -1,107 +0,0 @@
.. highlight:: c
Partial views
==================================
Various kind of partial views and slices can be made on arrays and matrices.
* A `partial view` is defined as a view of a restricted portion of the array, while
a `slice` is, strictly speaking, a partial view of lower dimension than the original array,
e.g. a column of a matrix.
* Partial views use the ( ) operator, like the evaluation of the array::
array<T,2> A(10,10); // defines an array
A(1, range(0,2) ) // 1d slice
A(1, range()) // 1d slice taking all the second dim
A(range(0,10,2), range(0,10,2)) // a 2d slice viewing every element with even coordinates.
array_view<T,1> SL = A(0,range(0,3)); // naming the view. No data copied here !
array_view<T,1> SL ( A(0,range(0,3))); // same thing !
auto SL = A(0,range(0,3)); // even simpler with C++11.
// CAREFUL : this is a weak view !!!! -> to be explained.
* **Return type** :
* Partial views of array or array_view return an array_view.
* Partial views of vector or vector_view return a vector_view.
* 2d partial views of matrix or matrix_view return matrix_view.
* BUT : (1d) slices of matrix or matrix_view return vector_view.
* 0d slices of anything are converted to the `value_type` of the array.
Memory Weak view
^^^^^^^^^^^^^^^^^^^^^
The `range` type
^^^^^^^^^^^^^^^^^^^^^
`range` mimics the python `range`. It can be constructed with :
* no argument : it then takes the whole set of indices in the dimension (like `:` in python) ::
A(range(), 0) // take the first column of A
* two arguments to specify a range ::
A(range (0,3), 0) // means A(0,0), A(1,0), A(2,0)
.. warning::
the second element is excluded : range(0,3) is 0,1,2, like in Python.
* three arguments : a range with a step ::
A(range(0,4,2), 0) // means A(0,0), A(2,0)
The `ellipsis` type
^^^^^^^^^^^^^^^^^^^^^^
* Ellipsis can be provided in place of `range`, as in python. The type `ellipsis` is similar to range
except that it is implicitly repeated as many times as necessary.
* Example:
.. compileblock ::
#include <triqs/arrays.hpp>
using triqs::arrays::array; using triqs::arrays::ellipsis;
int main(){
array<long,4> B(2,3,4,5) ;
B(0,ellipsis(),3) ; // same as B(0, range(),range(), 3 )
B(0,ellipsis(),2,3); // same as B(0, range(), 2, 3 )
B(ellipsis(),2,3) ; // same as B( range(),range(), 2, 3 )
}
* NB : there can be at most one ellipsis per expression (otherwise it would be meaningless).
* Example of usage :
Ellipsis are useful to write generic algorithms. For example, imagine that you want to sum
arrays on their first index :
.. compileblock ::
#include <triqs/arrays.hpp>
using triqs::arrays::array; using triqs::arrays::ellipsis;
// a generic function that sums arrays, array_views or in fact anything
// with the right concept on its first dimension
template<typename ArrayType>
array<typename ArrayType::value_type, ArrayType::rank-1> sum0 (ArrayType const & A) {
array<typename ArrayType::value_type, ArrayType::rank-1> res = A(0,ellipsis());
for (size_t u =1; u< first_dim(A); ++u) res += A(u,ellipsis());
return res;
}
// test
int main(){
array<double,2> A(5,2); A()=2;
array<double,3> B(5,2,3); B() = 1;
std::cout<< sum0(A) << sum0(B) <<std::endl;
}

View File

@ -1,18 +0,0 @@
FAQ
======
Should I use array_view or array & as the return type of a function ?
----------------------------------------------------------------------
It depends...
* array & is slightly quicker, since creating a view means copying the index systems (lengths and strides)
and a shared_ptr.
* BUT array_view keeps the data alive through its reference counting.
So except in very critical parts of the code, the recommendation is to return a view.

View File

@ -1,20 +0,0 @@
.. highlight:: c
The HDF5 interface
######################
The array classes (`array`, `matrix`, `vector`) have a simple interface to the HDF5 files.
.. toctree::
:maxdepth: 1
h5_rw
h5_stack
h5_proxy
h5_complex
You can also get the :arrays_doxy:`full C++ documentation<triqs::arrays::h5>` for these classes and functions.

View File

@ -1,44 +0,0 @@
.. highlight:: c
MPI and serialization
##########################################
Serialization
============================
The `value classes` and the views are compatible with the Boost Serialization library.
MPI
============================
The `value classes` (array, matrix, vector) can be broadcast, reduced, etc.,
with the boost.mpi library.
Example::
#include <triqs/arrays.hpp>
#include <boost/mpi.hpp>
#include <iostream>
using namespace triqs::arrays;
namespace mpi=boost::mpi;
int main(int argc, char* argv[]) {
mpi::environment env(argc, argv);
mpi::communicator world;
array<long,2> A (2,2), B(2,2),C(2,2);
for (int i =0; i<2; ++i)
for (int j=0; j<2; ++j)
{ A(i,j) = (1+world.rank())*(10*i+ j);}
if (world.rank() ==0) std::cout<<" A = "<<A<<std::endl;
boost::mpi::reduce (world, A,C, std::plus<array<long,2> >(),0);
int s= world.size();
if (world.rank() ==0) std::cout<<" C = "<<C<< " should be "<< array<long,2>( (s*(s+1)/2) * A) <<std::endl;
return 0;
}

View File

@ -1,151 +0,0 @@
.. highlight:: c
Interoperability with numpy arrays in Python
===================================================================
.. warning::
This section assumes that the reader has minimal knowledge of Python/C++ interface building with boost.python.
.. warning::
In order to activate python support, the macro TRIQS_ARRAYS_WITH_PYTHON_SUPPORT must be defined.
The array, matrix, vector and their views are fully interoperable with the numpy array objects in python.
We need to distinguish between :
* the conversion of the object between C++ and python, which can happen anywhere in the code.
* the interfacing or wrapping of the function and classes, which is done at the interface of the languages.
The conversion process is largely independent of the wrapping technique (e.g. boost.python, swig).
But the wrapping technique of course uses the converters to convert the object at the interface between Python and C++.
Converters
--------------------------
The conversion between a C++ object and a numpy consists of two converters :
* The C to Python conversion :
It returns a boost::python::object, the opaque description of a python object in C
It is handled by the C_to_Py::convert<T> ::
boost::python::object C_to_Py::convert<T>::invoke (T const & x); // convert T into a python object
* The opposite conversion consists in constructing a C++ object from the boost::python::object::
Py_to_C::convert<T>::possible(boost::python::object x); // returns true iff the conversion of x into a T is possible
T Py_to_C::convert<T>::invoke(boost::python::object x); // do it !
* T can be :
* array, array_view, matrix, matrix_view, vector, vector_view
* [this is not array lib but triqs] any std::vector, std::pair, boost::unordered_map of those, recursively.
---> cf. the documentation of these converters
Copy vs View
-----------------------
The python conversion of the array and array_view follows the policy of the C++ classes :
* `value classes` (array, matrix, vector) **always** make copies :
* when returning to python : they present a fresh copy of the data.
* when being constructed from python data, they make their own copy.
* `view classes` **never** make copies :
* when returning to python : they return a numpy, which is a view of their data.
By doing this, they transfer ownership of the data to the python interpreter.
Cf memory management.
* when being constructed from python data, they make a view.
If this is not possible (e.g. the python object is not a numpy but a list, or the types are not exactly the same),
they throw an exception (`triqs::runtime_error`).
Interfacing, wrapping
---------------------------
* boost.python
These converters can be registered to build regular boost.python converter, using, e.g. ::
triqs::python_tools::register_converter< triqs::arrays::array<double,2> > ();
* swig : not implemented [ to do : add the typemap]
Examples
-----------------------
Example 1 : wrapping a simple function with Boost Python
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* The problem we want to solve :
- write a simple function in C++ taking an array, and compute something from it.
- wrap this function and use it from python.
* The function ::
array_view<double,2> f( array_view<double,2> A) {
A(0,0) = 34; // do something
return A;
}
* The wrapping code ::
#include <boost/python.hpp>
#include <boost/python/def.hpp>
BOOST_PYTHON_MODULE(_testarray) {
using namespace boost::python;
triqs::python_tools::register_converter< triqs::arrays::array_view<double,2> > ();
def("f", f);
}
* Use it in a python code.
.. code-block:: python
import numpy,_testarray
a=numpy.array([[1.0,2],[3,4]])
_testarray.f(a)
Example 2 : explicit conversion with Boost Python
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* A variant: we can be more explicit in the type conversion
(in this case, we don't need to register the converter, since we do the job ourselves).
* The function ::
boost::python::object f( boost::python::object M ) {
array_view<double,2> A(M.ptr()); // make a view of M. Throws if impossible
A(0,0) = 100; // do something
return M; // return it
}
* The wrapping code ::
#include <boost/python.hpp>
#include <boost/python/def.hpp>
BOOST_PYTHON_MODULE(_testarray) {
using namespace boost::python;
def("f", f);
}
* Use it in a python code.
.. code-block:: python
import numpy,_testarray
a=numpy.array([[1.0,2],[3,4]])
print _testarray.f(a)

View File

@ -1,2 +0,0 @@
.. highlight:: c

View File

@ -1,92 +0,0 @@
.. highlight:: c
Compatibility with STL : iterators, containers and algorithms
##################################################################
The arrays are compatible with STL containers and algorithms.
Iterators
================
Standard iterators are provided that model the boost Mutable_ForwardIterator and ForwardIterator concepts
(and hence are STL compliant).
The iterator also implements two additional methods :
* it can be cast to a bool : it is true iff the iteration is not at end.
* it.indices() : returns a const & to the indices at the current point of the iteration.
Examples::
array<long,2> A (2,3);
for (auto it = A.begin(); it; ++it) *it =it.indices()[0] + 10 *it.indices()[1] ;
//for (array<long,2>::iterator it = A.begin(); it; ++it) *it =it.indices()[0] + 10 *it.indices()[1] ;
Indices generators
================================
The domain has an index generator that enumerates the indices in the domain.
Examples::
array<long,2> A (2,3);
for (auto it = A.indexmap().domain().begin(); it; ++it) cout<<" "<<*it<<endl;
The indices can also be produced in a custom order ::
typedef IndexMaps::cuboid_index_generator< array<long,2>::indexmap_type::domain_type, Permutations::permutation<0,1> > it_type;
for (it_type it(A.indexmap().domain()); it; ++it) cout<<" "<<*it<<endl;
.. warning::
improve this : too long, by passing a parameter ... of the Permutation type.... with default, not used, to begin....
something like :
for (auto it = A.indexmap().domain().begin(permutation<0,1>()); it; ++it) cout<<" "<<*it<<endl;
Using STL containers and algorithms
===================================================================
* `Regular classes` have a standard value semantics, which enable them
to be used in STL containers, like vector, list, map, unordered_map ...
.. warning::
Views cannot be used in STL containers: they have no default constructor.
* For example, one can make a vector of arrays ... ::
array<long,2> A (2,3);
std::vector<array<long,2> > VV;
VV.push_back(A);
* ... or a map ::
std::map<string, array<long,2> > MAP;
MAP["1"] = A;
* We can put a std::vector in an array ... ::
std::vector<int> V (10);
array<int,1 > B(V.size()), C(V.size());
std::copy(V.begin(),V.end(),B.begin());
* ... or in reverse ::
std::copy(B.begin(),B.end(),V.begin());
* ... or use other algorithms of std::
bool te(int x) { return (x<25);}
//....
cout<<" max(B) "<< *std::max_element(B.begin(),B.end())<<endl;
std::replace_if (B.begin(), B.end(), te, 0);
std::swap(B,C);
.. warning::
It works, but it may not be optimal. For example, swap is certainly not optimal [ to be improved].

View File

@ -1,98 +0,0 @@
.. highlight:: c
array and matrix/vector algebra
=======================================================
Arrays and matrices can be combined in formal algebraic expressions, which model the :ref:`HasImmutableArrayInterface` concept.
These algebraic expressions can therefore be used as the RHS of assignments (SEE) or in array/matrix constructors.
.. warning::
To use this technique, you have to include the <arrays/expressions/array_algebra.hpp> or <arrays/expressions/matrix_algebra.hpp>
files.
For example ::
array<long,2> A (2,2), B(2,2),C;
C= A + 2*B;
array<long,2> D( A+ 2*B);
array<double,2> F( 0.5 * A); // Type promotion is automatic
The technique is called `expression templates`. It allows the elimination of temporaries, so that the clear
and readable code::
Z= A + 2*B + C/2;
is in fact rewritten by the compiler into ::
for (i,j,...) in the indices of A, B, C :
Z(i,j) = A(i,j) + 2* B(i,j) + C(i,j)/2
instead of making a chain of temporaries (C/2, 2*B, 2*B + C/2...) that "ordinary" object-oriented programming would produce.
As a result, the produced code is as fast as if you were writing the loop yourself,
but with several advantages :
* It is more **compact** and **readable** : you don't have to write the loop, and the index ranges are computed automatically.
* It is much better for **optimization** :
* What you want is to tell the compiler/library to compute this expression, not *how* to do it optimally on a given machine.
* For example, since the memory layout is decided at compile time, the library can traverse the data
in an optimal way, allowing machine-dependent optimization.
* The library can perform easy optimisations, e.g. for vector it will use blas if possible.
Arrays vs matrices
----------------------
Because their multiplication is not the same, arrays and matrices do not form the same algebra.
Mixing them in an expression would be meaningless, so it is not allowed ::
array<long,2> A;
matrix<long> M;
M + A; // --> ERROR
However, you can always make a matrix_view from an array of rank 2 ::
make_matrix_view(A) + M; //--> OK.
.. note::
Making view is very cheap, it only copies the index systems.
Expressions are lazy
---------------------------
This means that constructing an expression is separated from evaluating it ::
auto e = A + 2*B; // expression, purely formal, no computation is done
cout<< e <<endl ; // prints the expression
cout<< e(1,2) <<endl ; // evaluates just at a point
cout<< e.domain() <<endl ; // just computes its domain
array<long,2> D(e); // now really makes the computation and store the result in D.
The expression type is complicated (the expression is stored in the C++ type), so we use here
the C++0x `auto` to make simple things simple...
FAQ
----------
Where can expressions be used ?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Expressions model the :ref:`HasImmutableArrayInterface` concept, so they can be used *anywhere*
an object modeling this concept is accepted, e.g. :
* array, matrix construction
* operator =, +=, -=, ...
They behave like an immutable array : they have a domain, they can be evaluated.
When the expression is assigned to `C` as in the previous example,
the compiler just needs to compute the new domain for `C`, resize it and fill it by evaluating the expression.
What is the cost of this technique ?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Thanks to the Boost Proto library, this can be done :
* with an acceptable increase in compilation time (try it !).
* in about *2 pages of readable code* !

View File

@ -1,361 +0,0 @@
.. highlight:: c
array and array_view
============================
array and array_view are the classes for a standard d-dimensional cuboid array
and the corresponding view.
Template parameters
----------------------------
* The class has four template parameters (same for array_view).
.. code-block:: c
array<ValueType, // The type of the element of the array
Rank, // int : the rank of the array
IndexOrderTag=Tag::C, // The ordering in memory : can be Tag::C, Tag::Fortran or a permutation
StorageTag=Tag::shared_block // The storage : Tag::shared_block or Tag::numpy
>
============================ ========================== =======================================
Template parameter Access in the class Meaning
============================ ========================== =======================================
ValueType value_type The type of the element of the array
Rank rank The rank of the array *[int]*
IndexOrderTag indexmap_type The ordering in memory : can be Tag::C, Tag::Fortran or a permutation
StorageTag storage_type The storage : Tag::shared_block or Tag::numpy
============================ ========================== =======================================
* Various IndexOrderTag are possible :
================= ====================================================================================================
IndexOrderTag Meaning
================= ====================================================================================================
Tag::C C-style order *[default]*
Tag::Fortran Fortran-style order
P - P is a permutation
- Determined by a permutation P at compile time. Explain here the permutation, the convention.
================= ====================================================================================================
* Two possible storages :
================== ============================================================================
StorageTag Meaning
================== ============================================================================
Tag::shared_block a (shared_ptr on a) C++ block *[default]*
Tag::numpy stored in a numpy array, in which case the array is also a numpy array
and can be read from a numpy, returned as a numpy, sliced into a numpy, etc...
================== ============================================================================
.. _array_constructors:
Constructors
-----------------
Intentionally, array and array_view have only a few constructors :
========================================== ===========================================================================================
Constructors of array Comments
========================================== ===========================================================================================
array() - empty array of size 0
array(size_t, ...., size_t) - from the dimensions
array(cuboid_domain<rank> const &) - a new array with the corresponding cuboid
array(const array &) - copy construction
array(const T & X) - Type T models the :ref:`HasImmutableArrayInterface` concept.
- X must have the appropriate domain (checked at compile time).
- Enabled iff has_immutable_array_interface<T>::value == true.
- Constructs a new array of domain X.domain() and fills it with evaluation of X.
========================================== ===========================================================================================
====================================================================== =======================================================================================
Constructors of array_view Comments
====================================================================== =======================================================================================
array_view(indexmap_type const & I, S_type const &) from a couple of indexmap I and storage of type S_type
array_view(const T & X) T is any type such that X.indexmap() and X.storage(). Models ISP ...
====================================================================== =======================================================================================
array_view are typically constructed by slicing (Cf below).
* Examples ::
array<int,2> A(10,2);
array<int,2,Tag::Fortran> Af (2,2);
//Higher dim, custom order :
array<long, 3, Permutations::permutation<2,1,0> > A0 (2,3,4);
array<long, 3, Permutations::permutation<0,1,2> > A1 (2,3,4);
array<long, 3, Permutations::permutation<1,0,2> > A2 (2,3,4);
array<long, 3 > A3 (2,3,4);
array<long, 3, Tag::Fortran > A4 (2,3,4);
Access to data, domain, simple evaluation, ...
--------------------------------------------------------
array, array_view, matrix, matrix_view, vector, vector_view model HasImmutableArrayInterface.
Assignment & Copy
--------------------
Every class comes in two flavors: C and C_view (with C = array, matrix, vector, etc...).
These two flavors differ in the way they handle their data in construction, copy construction and assignment.
Basically, C owns its data, while C_view is only a view.
array, matrix, vector
^^^^^^^^^^^^^^^^^^^^^^^^
They own their data. In many aspects, they are similar to std::vector.
* The data are contiguous in memory.
* Constructors and copy constructors all create a new memory block. If needed, they
make a *true* copy of the data.
* The assignment operator may create a new Storage if the sizes do not match.
* As a result, pointers to the data and references to the storage are invalid after assignment.
* They can be resized, again invalidating all references.
array_view, matrix_view, vector_view
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
These classes do not own their data, but only present a view of them.
* The data may not be contiguous in memory (e.g. if the view is the result of a slice).
* Constructors only make another view of the data.
* They *never* copy data, so they are quite quick.
In particular, the copy constructor makes a shallow copy (i.e. returns another view).
* The assignment operator just copies data into the view. Behaviour is undefined if the
size of the view is too small (define the macro ARRAY_CHECK for dynamical debugging checks).
* Pointers to data taken from the views are still valid after assignment.
* Views cannot be resized.
.. warning:: **Memory management**
Views carry a reference to the memory block they view,
which guarantees that memory will not be
deallocated before the destruction of the view.
Indeed, the Storage types incorporate a reference counting mechanism,
either using boost::shared_ptr for the C++ arrays, or using the python references
for the numpy storage.
The memory block will be deallocated when its array and all the array_views
pointing to it or to a portion of it have been destroyed, and only at that moment.
Examples::
array<int,2> *A = new array<int,2> (2,3); // create an array A
array_view<int,2> B(*A); // making a view
delete A; // A is gone...
cout<<B<<endl; // ok, but B (and the data) is still alive
Operator =
------------
array, matrix, vector
^^^^^^^^^^^^^^^^^^^^^^^^
//.. cpp:member::
template<typename RHS> array & operator=(const RHS & X);
* RHS models HasImmutableArrayInterface.
* array is first resized to have a domain X.domain(), and then filled
with the evaluation of X (e.g. a copy if X is an array, computing the value if X is an expression).
array_view, matrix_view, vector_view
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
//.. cpp:function::
template<typename RHS> array_view & operator=(const RHS & X);
* RHS models HasImmutableArrayInterface [ or less ? : RHS can be evaluated in the domain_type::value_type, no domain needed.].
* Dimension of the view must match or behaviour is undefined.
Iterators and interaction with STL containers and algorithms
----------------------------------------------------------------
STL compliant iterators are provided, hence STL algorithms work...
Examples::
array<long,2> A (2,3);
// first print the index generator
for (array<long,2>::indexmap_type::domain_type::generator it = A.indexmap().domain().begin(); !it.at_end(); ++it)
cout<<" "<<*it<<endl;
//A(i,j) = i + 10*j
for (array<long,2>::iterator it = A.begin(); !it.at_end(); ++it)
{ *it =it.indices().get<0>() + 10 *it.indices().get<1>() ; }
int u=0;
for (array<long,2>::iterator it = A.begin(); !it.at_end(); ++it,++u) { *it =u; }
array<long,2> A (2,3);
std::vector<array<long,2> > VV; VV.push_back(A);
map<string, array<long,2> > MAP; MAP["1"] = A;
// Trying to put a vector in an array
std::vector<int> V (10);
array<int,1 > B(V.size()), C(V.size());
for (unsigned int i =0; i<10; ++i) V[i] = 10+i;
std::copy(V.begin(),V.end(),B.begin());
std::copy(B.begin(),B.end(),V.begin());
cout<<" Number of elements <25 : "<< std::count_if(B.begin(), B.end(),te)<<endl;
cout<<" max(B) "<< *std::max_element(B.begin(),B.end())<<endl;
std::replace_if (B.begin(), B.end(), te, 0);
std::swap(B,C);
Slicing (or partial view)
-------------------------------------------------
Slicing or partial view consists in presenting a view of a sub-part of the array.
Examples::
array<long,2> A (2,3);
array_view<long,1> SL( A(range(0,2),0));
array_view<long,1> SL2( A(1,range(0,2)));
It is the standard way to produce a view.
NB :
* we use here the python convention: range(0,3) is 0:3, i.e. 0,1,2 NOT 0,1,2,3.
* Todo : in fact we should wrap the range to python::slice for interoperability with python.
Serialization
-------------------------------------------------
* Boost.serialization
* Boost.mpi
Examples::

   array<long,2> A (2,2), B(2,2), C(2,2);
   boost::mpi::reduce (world, A, C, std::plus<array<long,2> >(), 0);
* HDF5 (ALPS), e.g.

Examples::

   array<long,2> A (2,3), B, vc;
   array<long,2,Tag::Fortran> Af, Bf, vf;

   alps::hdf5::oarchive ar1("data.h5");
   ar1 << alps::make_pvp("Tableau", A);
   ar1 << alps::make_pvp("Tableau2", Af);
   ar1 << alps::make_pvp("Tableau_view", A(range(),range(1,3)));

   alps::hdf5::iarchive ar2("data.h5");
   ar2 >> alps::make_pvp("Tableau", B);
   ar2 >> alps::make_pvp("Tableau", Bf);
   ar2 >> alps::make_pvp("Tableau_view", vc);
   ar2 >> alps::make_pvp("TableauC", C);
blas/lapack interface
-------------------------------------------------
* matrix, vector and their views are interfaced with blas/lapack, via boost::numeric::bindings.
* If needed (for a view), a temporary (and silent) copy is made to reorganize the
  data before calling blas/lapack (impl: cache class).
  Of course, performance is degraded, but the code stays simple...
Examples::

   namespace blas = boost::numeric::bindings::blas;
   namespace lapack = boost::numeric::bindings::lapack;

   triqs_arrays::vector<std::complex<double> > V(5), V2(5);
   triqs_arrays::vector<double> V3(2);
   triqs_arrays::matrix<double,'F' > M1(2,2), M2(2,2), M3(2,2);

   blas::axpy(2.0, V, V2);
   blas::gemm(1.0, M1, M2, 1.0, M3);
   blas::ger(1.0, V3, V3, M2);

   // invert
   triqs_arrays::vector<int> ipiv(2);
   lapack::getrf(M1, ipiv);
   lapack::getri(M1, ipiv);
Transparent use of python arrays
-------------------------------------------------
* If the storage is Tag::numpy, the memory block is allocated/viewed through the numpy interface.
* One can mix arrays with any storage in expression (they have the same concepts).
* boost python converters are enabled for these arrays into numpy and for their views [impl: broken for views].
Expression
-------------------------------------------------
Simple expressions are made using boost.proto.
Examples ::

   array<long,2> A (2,2), B(2,2), C;
   C = A + 2*B;
   array<long,2> D( A + 2*B);

   // or even in C++0x :
   auto e = A + 2*B;          // expression, purely formal
   array<long,2> D2(e);       // really makes the computation
   cout<< e <<endl ;          // prints the expression
   cout<< e(1,2) <<endl ;     // evaluates just at a point
   cout<< e.domain() <<endl ; // just computes the domain
   array<long,2> A (2,2), B(2,2), C(2,2);
   C = A + 2*B;
   C = std::plus<array<long,2> >()(A,B);

   C = A + Transpose(B);     // Transpose(X) returns a lazy object that models HasImmutableArrayInterface.
   C = A + Transpose(B + B); // X can also be an expression...
   C = Transpose(B);
   array<double,2> F( 0.5 * A); // Type promotion is automatic

   // non square
   array<long,2> R(2,3), Rt(3,2);
   cout<<" R = "<< array<long,2>(Transpose(R)) <<endl;

   // mapping any function
   C = map_expr(&sqr, A);
   cout<<" C = "<< map_expr(&sqr, A, "SQR")<<" = "<<C<<endl;

   // matmul as expression; the Oi are 'C' or 'F'
   matrix<double,O1> M1(2,2); matrix<double,O2> M2(2,2); matrix<double,O3> M3;

   // The central instruction : note that matmul returns a lazy object
   // that has ImmutableArray interface, and defines a specialized version of assignment.
   // As a result this is equivalent to some matmul_with_lapack(M1,M2,M3) : there is NO intermediate copy.
   M3 = matmul(M1,M2);
See expression.hpp.
At the moment, only +,-, * and / by scalar are implemented.
An expression models HasImmutableArrayInterface, i.e. :
* It has a domain (computed from the expression)
* It can be evaluated.
It is then easy to mix them with other objects
that model the same concept. See e.g. expr2.cpp (map_expr) for examples.
* *Multiplication* :
  not decided yet, since it differs for arrays and matrices.
  Two choices :

  * Do not add * for arrays and matrices (use matmul e.g.) and allow mixing array and matrix,
    e.g. add an array<int,2> and a matrix<int>.
  * Add the *, but then use different expressions for array and matrix/vector,
    so that one cannot mix them.
    In that case, it is however trivial to write e.g. M + matrix_view<int>(A) if A is an array.
@ -1,120 +0,0 @@
.. highlight:: c
array and array_view : the general cuboid n-dimensional array
====================================================================================================================
array and array_view :
* are the standard d-dimensional cuboid array and the corresponding view.
* allow **custom memory ordering** at compile time.
* model the :ref:`HasImmutableArrayInterface` concept.
Template parameters
----------------------------
* The classes have three template parameters.
.. code-block:: c
template <typename ValueType, int Rank, typename Opt= Option::Default > class array_view;
template <typename ValueType, int Rank, typename Opt= Option::Default > class array;
============================ ================================== ========================== ====================================================================
Template parameter           Accepted type                      Access in the class        Meaning
============================ ================================== ========================== ====================================================================
ValueType                    normally a scalar, but any         value_type                 The type of the elements of the array
                             default constructible type (?)
Rank                         int                                rank                       The rank of the array
Opt                          Option::options< ...>              opt_type                   Compile time options, see below.
============================ ================================== ========================== ====================================================================
Options template parameters are described :ref:`here <option_template>`.
.. _array_constructors:
Constructors of array
---------------------------
* Intentionally, array and array_view have only a few constructors :
========================================== ===========================================================================================
Constructors of array Comments
========================================== ===========================================================================================
array() - Empty array of size 0
array(size_t, ...., size_t) - From the dimensions [number of parameters checked at compile time].
Does **NOT** initialize the array elements to 0 ! (see compile time option)
Use my_array() = 0 to do it explicitly.
array(cuboid_domain<rank> const &) - New array with the corresponding domain
array(const array &) - Copy construction
array(PyObject * X) - Construct a new array from the Python object X.
NB : X is a borrowed reference; the array does not affect its reference count.
- it takes a **copy** of the data of X (or of numpy(X)) into C++.
- X is first transformed into a numpy array by the numpy python lib itself
(as if you called numpy.array(X)) if X is not a numpy array or is an array of the wrong type
(e.g. you construct an array<double,2> from a numpy array of int....), and it is then
copied into the array.
array(const T & X) - Type T models the :ref:`HasImmutableArrayInterface` concept.
- X must have the appropriate domain (checked at compile time).
- Constructs a new array of domain X.domain() and fills it with evaluation of X.
========================================== ===========================================================================================
.. warning::
   The constructor from the dimensions does **NOT** initialize the array to 0
   (because it may not be optimal).
   If needed, do it explicitly (for an array a) with `a() = 0;`
* Examples ::

   array<int,2> A(10,2);
   A() = 0; // init the array to 0
   array<int,2, Option::Fortran> Af (2,2);

   // Higher dim, custom order :
   array<long, 3, Option::options<Option::memory_order <2,1,0> > > A0 (2,3,4);
   array<long, 3, Option::options<Option::memory_order <0,1,2> > > A1 (2,3,4);
   array<long, 3, Option::options<Option::memory_order <1,0,2> > > A2 (2,3,4);
   array<long, 3 > A3 (2,3,4);
   array<long, 3, Option::Fortran > A4 (2,3,4);

   // if Obj is a boost::python object
   array<double,2> Apy (Obj.ptr());
.. _array_view_constructors:
Constructors of array_views
----------------------------------------------
Automatic construction
^^^^^^^^^^^^^^^^^^^^^^^^^^^
array_views are normally constructed automatically, by making (partial) views, cf :ref:`Slicing`, e.g. ::

   array<int,2> A(2,2);
   A(range(),2) ; // --> this makes a view...
   A() ;          // --> this makes a view over the full array
Explicit construction
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To explicitly make a view of an array, use make_view or the () operator ::

   array<int,2> A(2,2);
   make_view(A);       // -> a view...
   make_view(A) = 13 ; // to assign, e.g.
   A() = 13;           // same thing...
====================================================================== =====================================================================================================
Constructors of array_view Comments
====================================================================== =====================================================================================================
array_view(const array_view &) - Copy construction (shallow copy)
array_view(const T & X) - `[Advanced]` T is any type such that X.indexmap() and X.storage() can be used to construct a view.
array_view(indexmap_type const & I, S_type const &) - `[Advanced]` From a couple of indexmap I and storage of type S_type.
====================================================================== =====================================================================================================
@ -1,91 +0,0 @@
.. highlight:: c
Assignment
=========================
The `value classes` and the `view classes` have a quite general assignment operator (=),
and corresponding compound operators (+=, -=).
We will illustrate it on the `array` class; it is the same for `matrix` and `vector`.
Assignment operator (=)
-------------------------------------------------
The `value classes` and the `view classes` have a quite general assignment operator.
* **Syntax** : the syntax is the same in both cases::

   template<typename RHS> array & operator=(const RHS & X);
   template<typename RHS> array_view & operator=(const RHS & X);
* **What can be RHS ?**

  RHS can be ... anything that models the HasImmutableArrayInterface concept !

  e.g. : array, array_view, matrix, matrix_view,
  but also a formal expression (e.g. A+B), or any custom object of your choice.
* **Behaviour**
================= ======================================================================= ======================================================================================
Topic value class : array, matrix, vector view: array_view, matrix_view, vector_view
================= ======================================================================= ======================================================================================
RHS ... - RHS models the concept `HasImmutableArrayInterface`. - RHS models HasImmutableArrayInterface
[or less ? : RHS can be evaluated in the domain_type::value_type, no domain needed.].
Effect - array is resized to have a domain X.domain() - View's domain must match X's domain or behaviour is undefined.
- array is filled with the evaluation of X. - array is filled with the evaluation of X.
================= ======================================================================= ======================================================================================
* By evaluation of X, we mean :
- a copy if X is an array.
- computing the value if X is an expression.
* **Optimisation** `[advanced]`

  In some cases, we know that optimisation is possible.
  If the RHS class derives from `Tag::has_special_assign`, then the instruction ::

   A = rhs;

  will be rewritten by the compiler as ::

   rhs.assign_invoke(A);

  An example is the matmul class, which stores the ref. to the objects to be multiplied.
  If A and B are matrices, writing ::

   C = matmul(A,B); // (1)

  is rewritten as ::

   matmul(A,B).assign_invoke(C);

  which is then itself rewritten into (without all the messy details) ::

   call_lapack( A, B, C); // (2)

  The point is that :

  * the user code is (1), very simple and readable
  * but the compiler compiles (2), which eliminates temporaries...
Compound operators (+=, -=, * =, /=)
-------------------------------------------------
* **Syntax**

  The syntax is natural ::

   template<typename RHS> array & operator += (const RHS & X);
   template<typename RHS> array & operator -= (const RHS & X);
   template<typename RHS> array & operator *= (const Scalar & S);
   template<typename RHS> array & operator /= (const Scalar & S);

* **Behaviour**

  - Domains must match.
  - X is evaluated and added term by term.
To be written : special operators.
@ -1,23 +0,0 @@
Basic classes and constructors
=================================================================
The library provides several interoperable forms of arrays : array, matrix and vector.
.. toctree::
:maxdepth: 2
arrays
matrix
vector
options
memory_management
basic_interop
.. note:: Why several classes ?

   Because their algebras differ, e.g. a matrix<T> is not really an array<T,2> :
   the multiplication for a matrix is the matrix multiplication, while it is element-wise for the array.
@ -1,21 +0,0 @@
Interoperability
---------------------------------------------------
* These classes are largely similar and are interoperable.
* One can construct a matrix_view from an array, e.g. (provided the rank of the array is 2, of course...).
* Why several classes then ? The reason is that their algebras (i.e. their behaviour under operations) differ.
  For example, a matrix<T> is not really an array<T,2> :
  the multiplication for a matrix is the matrix multiplication, while it is element-wise for the array.
  The expression template implements two different algebras, one for arrays, one for matrices and vectors.

TODO : write the example and code the make_matrix_view function....

Example::

   array<double,3> A; matrix<double> M;
   auto res = matrix_view<double> ( A(0,range(),range()) ) + M;
@ -1,60 +0,0 @@
.. highlight:: c
Linear Algebra with matrices and vectors
===============================================
* Matrices and vectors are interfaced with the blas/lapack library.
* Moreover, common operations can be programmed efficiently
  and simply with expressions.
Use of special expression
-----------------------------
Code::

   #include <iostream>
   #include "expressions/matmul.hpp"
   using namespace triqs_arrays;
   using Expression::matmul;

   int main(int argc, char **argv) {
    matrix<double> M1(2,2), M2(2,2), M3;
    for (int i =0; i<2; ++i) for (int j=0; j<2; ++j) { M1(i,j) = i+j; M2(i,j) = 1 + i -j ; }

    // The central instruction : note that matmul returns a lazy object
    // that has ImmutableArray interface, and defines a specialized version of assignment.
    // As a result this is equivalent to matmul_with_lapack(M1,M2,M3) : there is NO intermediate copy.
    M3 = matmul(M1,M2);
    std::cout<<"M3 = "<<M3<<std::endl;
   }
General blas-Lapack interface for matrices and vectors
-------------------------------------------------------------------
* matrix, vector and their views are interfaced with blas/lapack, via boost::numeric::bindings.
* If needed (for a view), a temporary (and silent) copy is made to reorganize the
  data before calling blas/lapack. Of course, performance is degraded, but the code stays simple...
* TO DO : add a compile warning for such performance hits...

Examples::

   namespace blas = boost::numeric::bindings::blas;
   namespace lapack = boost::numeric::bindings::lapack;

   triqs_arrays::vector<std::complex<double> > V(5), V2(5);
   triqs_arrays::vector<double> V3(2);
   triqs_arrays::matrix<double,'F' > M1(2,2), M2(2,2), M3(2,2);

   blas::axpy(2.0, V, V2);
   blas::gemm(1.0, M1, M2, 1.0, M3);
   blas::ger(1.0, V3, V3, M2);

   // invert
   triqs_arrays::vector<int> ipiv(2);
   lapack::getrf(M1, ipiv);
   lapack::getri(M1, ipiv);
@ -1,39 +0,0 @@
The central concept : HasImmutableArrayInterface
=======================================================
The central concept used by the library (both in the colloquial and in the technical sense of the word)
is the notion of **Immutable Array Interface**.
This is just a formal definition of what an object has to implement ("which concept it has to model")
to look like an array.

Basically, the answer is simple: an array is a (discrete) map from a domain of indices into an algebra
(typically R or C). Therefore it has to define :

* a domain of indices, which can be enumerated, e.g. cartesian products of integer ranges.
* an evaluation method [] so that a[ I ] has a `value_type` type for any index I in the domain.

The precise concept is defined at :ref:`HasImmutableArrayInterface`.
Examples :

* a matrix has a domain of range(0,n1) x range(0,n2).
* a general array has a domain of dimension rank and values in R, C, other arrays...
* a triangular matrix has a more complex domain...
* any algebraic expression of arrays or matrices is like this (and those are certainly NOT mutable objects !).

* What is then a (partial) view/slice of an array ?
  Simply another map on the same data.

* Do you want to see a 4-index array as a matrix of double indices ?
  Simply change the map...

* Why is this concept useful ?
  Because if you program another object of your own which models it,
  it is guaranteed to interact properly with arrays....
@ -1,21 +0,0 @@
.. highlight:: c
Compound operators (+=, -=, *=, /=)
======================================
The `value classes` and the `view classes` behave similarly.
We will illustrate it on the `array` class; it is the same for `matrix` and `vector`.
* **Syntax**

  The syntax is natural ::

   template<typename RHS> array & operator += (const RHS & X);
   template<typename RHS> array & operator -= (const RHS & X);
   template<typename RHS> array & operator *= (const Scalar & S);
   template<typename RHS> array & operator /= (const Scalar & S);

* **Behaviour**

  - Domains must match.
  - X is evaluated and added term by term.
@ -1,33 +0,0 @@
Array library
****************
.. warning::
This library is beta.
This manual is not finished: work in progress.
.. highlight:: c
.. toctree::
:maxdepth: 1
:numbered:
introduction
getting_started
view_or_not_view
basic_classes
slicing
shape
debug
assignment
algebras
foreach
functional
STL
H5
Interop_Python
IO
blas_lapack
FAQ
design/contents
@ -1,48 +0,0 @@
.. highlight:: python
.. _Debug:
Bound checking and debug macros
===================================
To be fast, by default, no checks are made on the indices while accessing elements or slicing.
However, checks can be activated in two ways :

* Adding the `Tag::BoundCheck` option (in the options<...> template of the array, matrix, vector)
* Defining the debug macro TRIQS_ARRAYS_ENFORCE_BOUNDCHECK, which switches the default option from `Tag::NoBoundCheck` to `Tag::BoundCheck`
  for all arrays, matrices and vectors.

In both cases, if the indices are not within the domain of definition, an exception triqs::arrays::key_error
will be thrown. Its .what() returns the file and line where the exception occurred, with the full C++ stack,
e.g. ::
   BOX>./bound_check_nopy
   catching key error in B
   triqs::arrays key_error at ..../triqs_source/triqs/arrays/test/C++/./src/IndexMaps/cuboid/./cuboid_domain.hpp : 104
   Trace is :
   ./bound_check_nopy void triqs::arrays::IndexMaps::cuboid_domain<2>::assert_key_in_domain<boost::tuples::tuple<int, int> > \
   (boost::tuples::tuple<int, int> const&) const 0x181 [0x403e11]
   ./bound_check_nopy main 0x916 [0x403016]
   /lib/libc.so.6 __libc_start_main 0xfd [0x7f389e6abc4d]
   ./bound_check_nopy [0x4023c9]
   key out of domain
   key [1] = 3 is not within [0,3[
Further information on the line of the error in the stack can be retrieved with the addr2line utility `[linux only]`,
which retrieves the source file and line from the address of the function (if you compiled with -g). The script in TRIQS_SOURCE_DIR/dev_tools/analyse_stack.py
does it automatically for you (when compiled with -g) ::

   BOX> ./bound_check_nopy 2>&1 | analyse_stack.py
   ....
   ./bound_check_nopy main 0x8a7 [0x403477]
   ---> /home/parcolle/triqs_source/triqs/arrays/test/C++/bound_check.cpp:83
@ -1,270 +0,0 @@
Concepts
=============================================================
BoostSerializable
-------------------------------------------------
* defines serialize for the boost serialization library
Printable
-------------------------------------------------
====================================================== ===========================================================
Elements Comment
====================================================== ===========================================================
std::ostream & operator << (std::ostream & out, ...) Printing
====================================================== ===========================================================
Domain
-------------------------------------------------
* **Purpose** : The domain of definition of the array, i.e. the possible values of the indices.
* **Refines** : BoostSerializable, Printable
* **Definition** :
=========================================== ===========================================================
Elements Comment
=========================================== ===========================================================
* static const unsigned int rank rank
* index_value_type type of the multi-index
* default constructor, copy construction
* operator =
* operator ==, !=
* size_t number_of_elements() const number of elements
* generator type of the IndexGenerator that generates the indices
* begin() const/ end() const a generator at start/end
=========================================== ===========================================================
* **Examples** :
Typically, this is a multi-index for an array, matrix, ...., e.g.
* Cuboid<rank> : standard hyperrectangular arrays. This is little more than the tuple of the lengths.
* triangle, .... ?
.. _HasImmutableArrayInterface:
HasImmutableArrayInterface
-------------------------------------------------
* **Purpose** : The most abstract definition of something that behaves like an immutable array.

  * it has a domain (hence a rank).
  * it can be evaluated on any value of the indices in the domain.
  * By combining the generator of the domain with the evaluation, it is therefore easy to
    iterate over the values of such an object.
  * NB : It does not need to be stored in memory. A formal expression, e.g., models this concept.
* **Definition** ([A|B] denotes that the return type maybe A or B).
================================================================================================== =============================================================
Elements Comment
================================================================================================== =============================================================
value_type Type of the element of the array
domain_type Type of the domain.
[domain_type const & | domain_type] domain() const Access to the domain.
[ value_type | value_type const &] operator[] (domain_type::index_value_type const &) const Evaluation.
================================================================================================== =============================================================
* **Examples** :
* array, array_view, matrix, matrix_view, vector, vector_view (all implemented as Indexmap_Storage_Pair)
* array expressions (in which case the returns are not const & since they are computed, not stored).
IndexGenerator
-------------------------------------------------
* **Purpose** : Generate the indices of a domain.
* **Definition** :
============================================================== ==================================================================================================
Elements Comment
============================================================== ==================================================================================================
domain_type Type of the domain whose indices are generated.
default construction, copy construction
construction from (domain_type const &, bool atend=false)
IndexGenerator & operator=(const IndexGenerator & )
operator ++
operator =
operator ==, !=
domain_type::index_value_type const & operator * () const Access to the value of the multi-index
bool at_end() const True iff the generator has reached the last value
============================================================== ==================================================================================================
* **Examples** :
* cuboid_index_generator
IndexMap
-------------------------------------------------
* **Purpose** :
* Store the mapping of the index domain to a linear array.
* It is basically a function : indices --> 1d position, the bijection between the indices
of an element and its position in the memory block.
* **Refines** : BoostSerializable, Printable
* **Definition** :
======================================================================== ==================================================================================================
Elements Comment
======================================================================== ==================================================================================================
* domain_type The type of the domain.
* domain_type const & domain() const The domain.
* default constructor, copy construction Cpy is a true copy.
* can be constructed from domain_type const &
* size_t operator[] (domain_type::index_value_type const & key ) const The mapping itself.
* iterator A type modeling IndexMapIterator, which is the optimal memory traversal.
NB : the order of indices is chosen for optimal traversal of memory; it
does not need to be "natural".
cuboid_map also provides a natural_iterator for that purpose.
======================================================================== ==================================================================================================
* The type also has to define two free functions and to specialize a template :
========================================================== ==================================================================================================
Elements Comment
========================================================== ==================================================================================================
* bool compatible_for_assignment (M1, M2) Returns whether an array/matrix/vector with map M1 can be equated to a array/matrix/vector with
map M2
* bool raw_copy_possible (M1, M2) Is the assignment of an array/matrix/vector with map M2 into an array/matrix/vector with map M1
doable with raw copy
* struct indexmap_iterator_adapter< It, I > Metafunction :
- I is the IndexMap class
- It any similar IndexMapIterator which returns (in ::type) the IndexMapIterator on I
with the same order traversal as It.
Example : It is a IndexMapIterator on I1 stored in C order, I is in Fortran order,
the result will be an IndexMapIterator on I that presents the data of I in C order
This is used in copying array with different indexmaps.
========================================================== ==================================================================================================
* **Examples** :
* cuboid_map<IterationOrder> : a map of the cuboid indices in a fixed order in memory.
IndexMapIterator
-------------------------------------------------
* **Purpose** :
* A basic iterator on an IndexMap which can be dereferenced into the shift of successive elements compared to the start of the memory block.
* These iterators are kept as simple as possible, so that it is easy to implement new indices maps and their iterators.
* NB : In particular, they are *not* necessarily STL-compliant. The array_iterator class will
  take such an iterator and a Storage and produce a true, STL-compliant iterator on the array (iterator_adapter).
* **Definition** :
========================================================== ==================================================================================================
Elements Comment
========================================================== ==================================================================================================
indexmap_type The index_map on which the iterator is iterating
domain_type Type of the domain whose indices are generated.
default construction, copy construction
construction from (domain_type const &, bool atend=false)
IndexMapIterator & operator=(const IndexMapIterator & )
IndexMapIterator & operator ++
operator ==, !=
std::ptrdiff_t operator*() const Dereference as a shift from the beginning of the array
domain_type::index_value_type const & indices () const Access to the value of the multi-index at the iterator position
bool at_end() const True iff the generator has reached the last value (in practice quicker than it == XX.end()).
========================================================== ==================================================================================================
* **Example(s)** :
* cuboid_map_iterator
Storage
-------------------------------------------------
* **Purpose** :
* The storage of the array in memory, e.g. plain C++ array, a numpy, etc...
* A Storage keeps the reference to the memory block where the array is stored.
* NB : This memory block can typically be shared between various arrays and views,
  so the Storage is just a reference. The memory is deallocated only
  when all storages referencing it have been destroyed.
* **Refines** : BoostSerializable
* **Definition** :
====================================================== ==================================================================================================
Elements Comment
====================================================== ==================================================================================================
value_type Type of the element stored, e.g. int, const int, double, const double, ...
default construction Makes a storage of size 0
copy construction a shallow copy (another reference to the same data).
the copy construction is possible from another storage of the same value_type
up to the const qualifier.
The construction of a storage with value_type=T from a storage with value_type const T
is forbidden at compile time.
void operator = (const STO &) A shallow copy of the reference to the data.
clone() const Create a clone of the data.
const_clone() const Create a clone of the data with const value_type (e.g. int--> const int).
void raw_copy_from(const STO & X) Copy all the data from X to *this. Behaviour undefined if sizes do not match.
size_t size() const Number of elements in the storage
value_type & operator[](size_t p) const Access to the data. Behaviour is undefined if empty()==true.
====================================================== ==================================================================================================
* **Examples** :
* shared_block : basically a shared_ptr to a basic C++ array, dressed to model the Storage concept.
* numpy : a dressing of a python numpy object to model the Storage concept.
StorageOrder concept
-------------------------------------------------
* **Purpose** :
* Store the order of indices in memory.
* Can be fixed at compile time, or dynamically (not implemented).
* **Refines** : BoostSerializable
* **Definition** :
====================================================== ==================================================================================================
Elements Comment
====================================================== ==================================================================================================
size_t index_number(size_t i)
static unsigned int rank
default construction
copy construction
bool is_Fortran() const Is it Fortran-style ordering ?
bool is_C() const Is it C-style ordering ?
====================================================== ==================================================================================================
* The type also has to define the == operator :
========================================================== ==================================================================================================
Elements Comment
========================================================== ==================================================================================================
Operator == Defined between any of the ordering.
========================================================== ==================================================================================================
* **Examples** ::
storage_order_C <rank> // canonical C ordering
storage_order_fortran <rank> // canonical Fortran ordering
storage_order_custom <P> // custom storage given by a permutation P
* Details on the custom order :
* P = [p_1,... p_r] : p_1 is the fastest index, p_r the slowest.
* Example::
storage_order_C <rank> == storage_order_custom< Permutations::reverse_identity<rank> >
storage_order_fortran <rank> == storage_order_custom< Permutations::identity<rank> >
storage_order_custom <Permutations::permutation<0,2,1> > // the fastest index in memory is 0, then 1, then 2.

View File

@ -1,10 +0,0 @@
Design & Implementation
=========================
.. toctree::
:maxdepth: 2
strategy
slicing
concepts
cuboid_formula

View File

@ -1,60 +0,0 @@
.. _cuboid_formula:
Cuboid formula
======================
* **Notations** :
* R = rank
* index : (i0,...,i_{R-1})
* Lengths : (L0, ... , L_{R-1})
* Strides : (S0, ... , S_{R-1})
* position = start_shift + \sum_{j=0}^{R-1} i_j S_j
* **Strides for compact cuboid** :
If :math:`\sigma(i)` is the i-th index in memory (0 the first, R-1 the last one), we have
.. math::
:nowrap:
\begin{align*}
S_{\sigma(0)} &= 1\\
S_{\sigma(1)} &= L_{\sigma(0)} S_{\sigma(0)} = L_{\sigma(0)}\\
S_{\sigma(i)} &= L_{\sigma(i-1)} S_{\sigma(i-1)} = \prod_{j=0}^{i-1} L_{\sigma(j)}
\end{align*}
* **Slicing the cuboid** :
* Argument of the slice : (R0, R1, ..., R_{R-1}), with R_i = range : (start,end,step)
for the moment, if R_i = E_i, a number, consider it as a special range of length 1.
* new index (k0,...., k_{R-1}).
i_u = R_u.start + k_u * R_u.step
* Compute the new strides : Sn_j
.. math::
:nowrap:
\def\offset{\text{offset}}
\begin{align*}
position &= \offset + \sum_{j=0}^{R-1} i_j S_j \\
&= \offset + \sum_{j=0}^{R-1} (R_j.start + k_j * R_j.step) S_j \\
&= (\offset + \sum_{j=0}^{R-1} R_j.start \, S_j ) + \sum_{j=0}^{R-1} k_j (S_j * R_j.step)
\\
\offset_\text{new} &= \offset + \sum_{j=0}^{R-1} R_j.start \, S_j \\
&= position( R_j.start) \\
Sn_j &= S_j * R_j.step
\end{align*}
* Now : for the cases R_i = E_i, just drop the index.

View File

@ -1,51 +0,0 @@
.. _DesignSlicing:
Slicing
=============================================================
In this section, IM denotes some IndexMaps.
* We refer here to any partial view of the arrays as "slicing", i.e. subarrays, true slices, etc...
A slicing is any (meta-)function that takes an indexmap and returns another one.
* It is a meta-function that computes the type of the resulting indexmap
* It is a function that computes the resulting indexmap.
* The array/matrix/vector classes and their views, when called with the () operator, will :
* forward all the arguments and their types to IndexMaps::slice, to compute the new indexmap IM2.
* If IM2 is of dimension 0, return a value_type & or const & (it is a simple number, not an array).
* Otherwise : return a new view, made of IM2 and the same data as for the original object.
* Possible slices are defined by the IndexMap type.
Slicing a class C with IndexMap I1 produces a class C2_view, with IndexMap I2,
i.e. a new sliced IndexMap on the same data.
* **Examples** :
* array and array_view can be sliced::

   array<T,2> A(10,10);              // defines an array
   A(1,2)                            // element access
   A ( 1, range(0,2) )               // 1d slice
   A( range(0,10,2), range(0,10,2))  // a 2d slice viewing all elements with even coordinates
* matrix, matrix_view when sliced, return vector_view or matrix_view.
* One can be much more general : e.g. slicing the diagonal of a matrix, etc...
* Implementation is entirely done in the IndexMaps classes, by specializing 2 templates
(basically by writing the function IM1-> IM2).
The rest is done by indexmap_storage_pair class, which will compute the correct view class
depending on the view class and IM2 (view_from_tag_I_S template).
::
//In namespace IndexMaps::result_of
template<typename IM, typename ArgsTuple>
struct slice< IM, ArgsTuple> { typedef IM2 type; };
//In namespace IndexMaps :
template<typename IM, typename ArgsTuple>
typename result_of::slice<IM,ArgsTuple>::type slice(IM const &, ArgsTuple args);

View File

@ -1,62 +0,0 @@
.. _Design:
Strategy
=============================================================
All the classes are a combination of a system of indices (called IndexMap I in the following)
and a physical storage S in the computer (a block of memory), denoted as an IndexMap_Storage_Pair (I,S).
* I models the IndexMap concept [REF below].
* It is the bijection between a set of indices and positions in the memory block. It can be thought of as a coordinate system on the (linear) memory block.
* Various types of indices are possible (only the first is implemented now).
* cuboid (the standard hypercubic array, the only one currently implemented)
* triangular arrays
* band matrix
* multi-indices, with indices made of pair<int,int> e.g.
* S models the Storage concept [REF].
* It is a handle to the memory block containing the actual data.
* It can be e.g.:
* a C++ shared pointer to a memory block.
* a reference to a numpy array.
This design has several consequences :
* **Interoperability** : classes are widely interoperable, e.g. one can add an array and a matrix (if dimensions match, of course).
One can also add a python numpy array and a C++ array without any further coding.
* It is straightforward to construct a matrix_view<T> from an array<T,2>, since it is the same couple <I,S>,
just interpreted differently.
* It is easy to view an array<T,4> as a matrix by gathering indices (properly ordered in memory) :
one just has to provide a new IndexMap I2 to see the same data.
[ --> very useful for vertex computation in many body...]
* Slicing, or partial view is very natural : it is just a function on indexmaps : I--> I2,
independently of any storage.
Quick guide through the implementation
=============================================================
The implementation is divided into basically 4 parts :
* Storages : implements two storages shared_block and numpy
* IndexMaps : implements cuboid index map, its domain and iterators
* impl/indexmap_storage_pair.hpp : the basic implementation class for all user class.
It is basically just a couple of an indexmap and a storage, with shallow copy.
It also forwards the slices to the indexmap and constructs the correct views.
* upper level :
* user class : array, array_view, matrix, matrix_view, vector, vector_view
* expression.hpp : boost proto expression
* numpy_interface.hpp : helper to get numpy into array
* lapack/blas interface
* hdf5 support.

View File

@ -1,104 +0,0 @@
.. highlight:: c
.. _arith_expression:
Arithmetic Expressions
-------------------------------------------------
* **Definition** :
By `expression`, we mean here an object, typically representing a mathematical expression,
which models the :ref:`HasImmutableArrayInterface` concept.
* **Use** :
Expression can be used :
- as RHS (Right Hand Side) of various operators (=, +=, -=, ...)
- at any place where an `expression` is expected.
* **How to make expression ?**
Expressions can be built easily in various ways :
- Using arithmetic operators, e.g. A + 2*B
Examples ::
array<long,2> A (2,2), B(2,2),C;
C= A + 2*B;
array<long,2> D( A+ 2*B);
array<double,2> F( 0.5 * A); // Type promotion is automatic
// or even in C++0x :
auto e = A + 2*B; // expression, purely formal
array<long,2> D(e); // really makes the computation
cout<< e <<endl ; // prints the expression
cout<< e(1,2) <<endl ; // evaluates just at a point
cout<< e.domain() <<endl ; // just computes the domain
- Using predefined expressions :
- matmul
The code::
matrix<double> M1(2,2), M2(2,2), M3;
M3 = matmul(M1,M2);
will do the following :
- matmul returns a lazy object modelling the :ref:`Expression` concept.
- as a result, this is compiled as some `matmul_with_lapack(M1,M2,M3)` : **there is NO intermediate copy**.
- mat_vec_mul
- Building custom expressions. The transpose example...
Describe the concept, what to implement, etc....
- Transpose::
array<long,2> A (2,2), B(2,2),C(2,2);
C= A + 2*B;
C = A + Transpose(B); // Transpose(X) returns a lazy object that models HasImmutableArrayInterface.
C = A + Transpose(B + B); // X can also be an expression...
C = Transpose(B); //
// non square
array<long,2> R(2,3),Rt(3,2);
cout<<" R = "<< array<long,2>(Transpose(R)) <<endl;
// mapping any function
C = map_expr(&sqr,A);
cout<<" C = "<< map_expr(&sqr,A,"SQR")<<" = "<<C<<endl;
// matmul as expression Oi are 'C' or 'F'
matrix<double,O1> M1(2,2); matrix<double,O2> M2(2,2); matrix<double,O3> M3;
// The central instruction : note that matmul returns a lazy object
// that has ImmutableArray interface, and defines a specialized version of assignment
// As a result this is equivalent to some matmul_with_lapack(M1,M2,M3) : there is NO intermediate copy.
M3 = matmul(M1,M2);
See expression.hpp.
At the moment, only +,-, * and / by scalar are implemented.
An expression models HasImmutableArrayInterface, i.e. :
* It has a domain (computed from the expression)
* It can be evaluated.
It is then easy to mix them with other objects,
that model the same concept. See e.g. expr2.cpp (map_expr) for examples.
* *Multiplication* :
not decided, since it is not the same for array or matrices.
Two choices :
* Do not add \* for array, matrices (use matmul e.g.) and allow mixing array, matrix
e.g. add an array<int,2> and a matrix <int>
* Add the \*, but then do different expression for array and matrix/vector,
then one can not mix them.
In that case, it is however trivial to say e.g. M + matrix_view<int>(A) if A is an array.

View File

@ -1,59 +0,0 @@
.. highlight:: c
.. _custom_expression:
Custom Expressions
-------------------------------------------------
- Building custom expressions. The transpose example...
Describe the concept, what to implement, etc....
- Transpose::
array<long,2> A (2,2), B(2,2),C(2,2);
C= A + 2*B;
C = A + Transpose(B); // Transpose(X) returns a lazy object that models HasImmutableArrayInterface.
C = A + Transpose(B + B); // X can also be an expression...
C = Transpose(B); //
// non square
array<long,2> R(2,3),Rt(3,2);
cout<<" R = "<< array<long,2>(Transpose(R)) <<endl;
// mapping any function
C = map_expr(&sqr,A);
cout<<" C = "<< map_expr(&sqr,A,"SQR")<<" = "<<C<<endl;
// matmul as expression Oi are 'C' or 'F'
matrix<double,O1> M1(2,2); matrix<double,O2> M2(2,2); matrix<double,O3> M3;
// The central instruction : note that matmul returns a lazy object
// that has ImmutableArray interface, and defines a specialized version of assignment
// As a result this is equivalent to some matmul_with_lapack(M1,M2,M3) : there is NO intermediate copy.
M3 = matmul(M1,M2);
See expression.hpp.
At the moment, only +,-, * and / by scalar are implemented.
An expression models HasImmutableArrayInterface, i.e. :
* It has a domain (computed from the expression)
* It can be evaluated.
It is then easy to mix them with other objects,
that model the same concept. See e.g. expr2.cpp (map_expr) for examples.
* *Multiplication* :
not decided, since it is not the same for array or matrices.
Two choices :
* Do not add \* for array, matrices (use matmul e.g.) and allow mixing array, matrix
e.g. add an array<int,2> and a matrix <int>
* Add the \*, but then do different expression for array and matrix/vector,
then one can not mix them.
In that case, it is however trivial to say e.g. M + matrix_view<int>(A) if A is an array.

View File

@ -1,22 +0,0 @@
.. highlight:: c
.. _predef_expression:
Predefined Expressions
-------------------------------------------------
- Using predefined expressions :
- matmul
The code::
matrix<double> M1(2,2), M2(2,2), M3;
M3 = matmul(M1,M2);
will do the following :
- matmul returns a lazy object modelling the :ref:`Expression` concept.
- as a result, this is compiled as some `matmul_with_lapack(M1,M2,M3)` : **there is NO intermediate copy**.
- mat_vec_mul

View File

@ -1,48 +0,0 @@
.. highlight:: c
to be removed...
-------------------------------------------------
* **Definition** :
By `expression`, we mean here an object, typically representing a mathematical expression,
which models the :ref:`HasImmutableArrayInterface` concept.
* **Use** :
Expression can be used :
- as RHS (Right Hand Side) of various operators (=, +=, -=, ...)
- at any place where an `expression` is expected.
* **How to make expression ?**
Expressions can be built easily in various ways :
- Using :ref:`arith_expression`, e.g. A + 2*B
- Using predefined expressions, e.g. for Linear Algebra operations.
For example, in the code::
matrix<double> M1(2,2), M2(2,2), M3;
M3 = matmul(M1,M2);
`matmul` returns an expression, a lazy object modelling the :ref:`Expression` concept.
REF :
- Building :ref:`custom_expression`.
.. toctree::
:maxdepth: 1
expr_arith
expr_predef
expr_custom

View File

@ -1,7 +0,0 @@
.. highlight:: c
Iterating on an array : foreach
========================================================
TO BE WRITTEN

View File

@ -1,78 +0,0 @@
.. highlight:: c
Functional constructs : map, fold, ...
========================================================
map
--------
* **Purpose** :
map promotes any function into an `array function`, acting term by term.
* **Syntax** :
If `f` is a function, or a function object (SEE : ref to function object lib)::
f : ValueType1 --> ValueType2
Then map(f) is a function::
ReturnType map(f) ( ArgType const & A)
where ArgType models the :ref:`HasImmutableArrayInterface` concept
* with value_type == ValueType1
and ReturnType models the :ref:`HasImmutableArrayInterface` concept
* with the same domain as ArgType
* with value_type == ValueType2
* Examples::
double f(int i) { return i*10;}
int main() {
auto F = map(f);
array<int,2> A(2,2);
array<double,2> B;
A() =2;
B = F(A);
// works also with expressions of course
C = F( 2*A );
C = C + 3* F(2*A); // ok that is just an example...
}
* Some cases require an explicit cast, e.g. for the standard abs function (defined in XXXX); otherwise the compiler does not know which overload of std::abs you mean ::
auto Abs = map( static_cast< double (*)(double)> (std::abs) );
fold
-------------------------------------------------
* **Purpose** :
fold implements folding (or reduction) over the array.
* **Syntax** :
If A is a type which models the :ref:`HasImmutableArrayInterface` concept
(e.g. an array<ValueType1,R, Opt> , a matrix, a vector, an expression, ...)
and `f` is a function, or a function object (SEE : ref to function object lib)::
ValueType2 f (A::value_type , ValueType2)
Then fold (f , A, ValueType2 init = ValueType2() ) = f( f( f( ... f( a(0,1), a(0,0)))))
.. warning::
The order of traversal is the same as foreach.
* **Example** : sum is implemented like this ::
template <class A>
typename A::value_type sum(A const & a) { return fold ( std::plus<typename A::value_type>()) (a); }

View File

@ -1,8 +0,0 @@
A little tour of the library
===================================
.. highlight:: c
TO BE WRITTEN : a few simple examples

View File

@ -1,14 +0,0 @@
.. highlight:: c
Issue with the complex numbers
============================================================================
There is no standard representation of the complex numbers `std::complex<T>` in hdf5.
The classes therefore represent them as arrays of `T` with one more dimension.
* This is compatible with the ALPS 2.0 storage.
* This is **not** compatible with the regular h5py storage for complex.
The conversion is made automatically by the HDFArchive class.

View File

@ -1,47 +0,0 @@
.. highlight:: c
h5::array_proxy : a simple proxy to the array in file
===========================================================
The principle is that `array_proxy` is a proxy to an array, possibly big, in the h5 file.
It has a domain, and can be assigned from/to an array and sliced.
This allows reading/writing only parts of a large array using the same slicing syntax
as for the array class itself (which is then translated into the hyperslab technique of the HDF5 library).
Tutorial
-----------
* Create a "big" array in the h5 file and write it slice by slice::
#include <triqs/arrays/h5/array_proxy.hpp>
using namespace triqs::arrays;
h5::H5File file( "ess.h5", H5F_ACC_TRUNC );
// first create a 'big' array in the file (no temporary is made in memory, it uses directly the HDF5 API).
const size_t N = 100;
h5::array_proxy<T,3,3> P( file, "Z", make_shape(N,2,2) );
// a slice...
array<double,2> the_slice (2,2); the_slice() = 1;
// write the large array slice by slice
for (int u = 0; u<N; ++u) P (u,range(), range()) = u* the_slice; // or any array expression
* Read a "big" array slice by slice::
#include <triqs/arrays/h5/array_proxy.hpp>
using namespace triqs::arrays;
h5::H5File file ("ess.h5", H5F_ACC_RDONLY );
h5::array_proxy<T,3,3> P(file, "A");
array<T,2> a_slice ;
for (int u = 0; u<N; ++u) a_slice = P (u,range(), range()) ;
Reference
------------
You can also get the :arrays_doxy:`full C++ documentation<triqs::arrays::h5::array_proxy>` for this class.

View File

@ -1,20 +0,0 @@
.. highlight:: c
Simple read/write operations of an array (or a view)
============================================================================
Given an array (or an array_view), the functions `h5::write` and `h5::read` write and read it to/from the file
or any subgroup thereof. For example :
.. literalinclude:: examples_code/h5_rw.cpp
Note that :
* The data in the hdf5 file are stored in C order.
* The data is reordered in Fortran (or custom) order if necessary when reading/writing.
* It also works with matrix and vector (but the storage is the same in HDF5).
* It also works with the corresponding views. TO BE ILLUSTRATED.

View File

@ -1,42 +0,0 @@
.. highlight:: c
h5::array_stack : stacking arrays or scalars
================================================================
h5::array_stack writes a sequence of arrays of the same shape (or of scalars) into an hdf5 array with one more dimension, unlimited in the stacking direction.
It is typically used to store a Monte-Carlo data series for later analysis.
* If the base of the stack is an array of rank R, the resulting hdf5 array will be of rank R+1.
* If the base of the stack is a simple number (double, int, ...), R=0.
* The syntax is simple :
* The << operator piles up an array/scalar onto the stack.
* The ++ operator advances by one slice in the stack.
* The () operator returns a view on the current slice of the stack.
* The stack is buffered in memory (`bufsize` parameter), so that file access does not happen too often.
* NB : beware of complex numbers ---> REF TO COMPLEX
Reference
------------
Here is the :arrays_doxy:`full C++ documentation<triqs::arrays::h5::array_stack>` for this class.
Tutorial
-----------
A simple example with a stack of double:
.. literalinclude:: examples_code/h5_stack_ex_sca.cpp
A simple example with a stack of array of rank 2 :
.. literalinclude:: examples_code/h5_stack_ex.cpp

View File

@ -1,67 +0,0 @@
Motivation
===========================
.. highlight:: c
This library provides multi-dimensional arrays
for numerical computations, with the following characteristics/goals :
* **Simplicity of use**.
Arrays must be as simple to use as in python (numpy) or Fortran.
This library is designed to be used by physicists, not only by professional programmers.
We do *a lot* of array manipulations, and we want to maintain *readable* code.
Examples::
using namespace triqs::arrays; // All classes of this library are here. We will omit this line in the following.
array<int,3> A(4,2,3); // creating an array
A(0,1,2) // access an element
A(range(1,2), range(), 4) // a partial_view (or slice).
* Various interoperable forms of arrays :
* array : the general cuboid n-dimensional array, with **custom memory ordering** at compile time.
* matrix : a matrix class, with C or Fortran order at compile time.
* vector : a vector class.
* **Genericity, abstraction and performance** :
We want to have simple, readable code, with the same speed as (or better than) manually written low-level code.
Most optimisations should be delegated to the library and the compiler.
For example, we allow the use of **expression templates** for the algebra of arrays or matrices ::
array<int,2> A(4,2),B(4,2),C;
// fill A, B....
C = 2*A + B/2;
Expression templates allow the elimination of temporaries, hence compact and readable code
with the speed of manually written loops. Thanks to the boost::proto library, this can nowadays be achieved
while maintaining a reasonable compilation time, with short and readable implementation code.
* **Compilation time** is (as far as tested) kept under control.
* **Complete interoperability with python numpy arrays**.
This library is used a lot with mixed C++/python codes.
It handles quick conversion between the C++ and python world, e.g. :
* work on a view of a numpy,
* create an array in C++, and return it as a numpy array.
* mix the various kind of arrays transparently in C++ expressions.
* **HDF5** : simple interface to hdf5 library for an easy storing/retrieving into/from HDF5 files.
* **STL compliance**.
The arrays are compliant with STL (algorithms, std::vector<array<T,N> >, iterators, map, etc...).
* **Simple interface to Blas/Lapack** for matrices, vectors.
* **MPI** : compatibility with boost::mpi interface.

View File

@ -1,101 +0,0 @@
.. highlight:: c
matrix and matrix_view
============================
matrix and matrix_view :
* are the class for matrix and the corresponding view.
* allow **custom memory ordering** (C or Fortran) at compile time.
* model the :ref:`HasImmutableArrayInterface` concept.
Template parameters
----------------------------
* The classes have two template parameters.
.. code-block:: c
template <typename ValueType, typename Opt = Option::Default> class matrix_view;
template <typename ValueType, typename Opt = Option::Default> class matrix;
============================ ================================== ========================== ====================================================================
Template parameter Accepted type Access in the class Meaning
============================ ================================== ========================== ====================================================================
ValueType normally a scalar, but any default value_type The type of the element of the array
constructible type (?).
Opt Option::options< ...> opt_type Compile time options, see below.
============================ ================================== ========================== ====================================================================
Options template parameters are described :ref:`here <option_template>`.
The memory order must obviously be only C or Fortran (memory_order is fine, but useless here...).
.. _matrix_constructors:
Constructors
-----------------
Intentionally, matrix and matrix_view have only a few constructors :
========================================== ===========================================================================================
Constructors of matrix Comments
========================================== ===========================================================================================
matrix() - empty matrix of size 0
matrix(size_t, size_t) - from the dimensions. Does NOT initialize the elements of matrix to 0 !
matrix(const matrix &) - copy construction
matrix (PyObject * X) - Construct a new matrix from the data of the Python object X.
NB : X is a borrowed reference; matrix does not affect its reference count.
- it takes a **copy** of the data of X (or of numpy(X)) into C++.
- X is first transformed into a numpy by the python numpy lib itself
(as if you call numpy.array(X)) if X is not a numpy array or an array of the wrong type
(e.g. you construct an matrix<double> from a numpy of int....).
matrix(const T & X) - Type T models the :ref:`HasImmutableArrayInterface` concept.
- X must have the appropriate domain dimension (checked at compile time).
- Constructs a new matrix of domain X.domain() and fills it with evaluation of X.
========================================== ===========================================================================================
* Examples ::
matrix<double> Mc(2,2);
matrix<double,Option::Fortran> Mf(2,2);
.. warning::
The constructor from the dimensions does **NOT** initialize the matrix to 0
(because it may not be optimal).
If needed, do it explicitly with `a()=0`;
Constructors of matrix_views
----------------------------------------------
Automatic construction
^^^^^^^^^^^^^^^^^^^^^^^^^^^
matrix_views are normally constructed automatically by making (partial) views, see :ref:`Slicing`, e.g. ::
matrix<int> A(2,2);
A(range(),2) ; // --> this makes a view...
A() ; // --> this makes a view over the full array
Explicit construction
^^^^^^^^^^^^^^^^^^^^^^^^
To explicitly make a view of a matrix, use make_view or () ::
matrix<int> A(2,2);
make_view(A) //-> a view...
make_view(A) = 13 ; // to assign e.g.
A() = 13; // idem
====================================================================== ===========================================================================================================
Constructors of matrix_view Comments
====================================================================== ===========================================================================================================
matrix_view(const matrix_view &) - Copy construction (shallow copy)
matrix_view(const T & X) - `[Advanced]` T is any type such that X.indexmap() and X.storage() can be used to construct a view.
====================================================================== ===========================================================================================================

View File

@ -1,40 +0,0 @@
Memory management
========================
.. highlight:: c
* `View classes` contain a **reference counting** system to the memory block they view.
This guarantees that memory will not be deallocated before the destruction of the view.
The memory block will be deallocated when the array and all array_views
pointing to it or to a portion of it have been destroyed, and only at that moment.
Example::
array<int,2> *p = new array<int,2> (2,3); // create an array p
array_view<int,2> B(*p); // making a view
delete p; // p is gone...
B(0,0) = 314; cout<<B<<endl; // B (and the data) is still alive
* **Python compatibility**
The reference counting system is *compatible* with the Python reference counting (but distinct),
if you compile with python support of course.
As long as you write pure C++ code, you are basically using a shared_ptr to your data block.
No python is involved.
But, if you return your view into a numpy array in python, ownership of your data
is automatically transferred to the python interpreter::
boost::python::object f() {
array<int,2> res;
/// I compute res
return boost::python::object(array_view<int,2>(res)); // Cf section for conversion...
}
The interpreter then takes the responsibility of destroying the data when needed (here, long after f has returned,
when the returned python object is cleaned up).

View File

@ -1,79 +0,0 @@
.. highlight:: c
.. _option_template:
Options template parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The `Option::options` template can take several parameters that determine various aspects of `array` (resp. `matrix`, `vector`) at **compile time**.
These parameters can be given in any order in the `options` template, but each at most once (they all have a default, which can also be changed
using macros for maximum flexibility).
These options parameters determine :
* the storage order of indices in memory at **compile time**.
=============================== ===========================================================================================
Template parameter Meaning
=============================== ===========================================================================================
`Tag::C` `[default]` C-style order
`Tag::Fortran` Fortran-style order ( NB the base index is still 0, not 1 !)
memory_order<i0,i1,i2,... > The tuple <i0,i1,i2...> of size rank determines the storage order of indices in memory
memory_order_p<P> P is a triqs::arrays::Permutations::permutation that determines the storage order in memory
P[0] is the fastest index, P[rank - 1] the slowest index
=============================== ===========================================================================================
Example of custom order ::
custom<Permutation<0,1,2> > // same as index_order_Fortran, the first index is the fastest
custom<Permutation<2,1,0> > // same as index_order_C, the last index is the fastest
custom<Permutation<1,2,0> > // if indices are (i,j,k) : index j is fastest, then k, then i
* whether bound check are done while accessing array elements or slicing the arrays.
=============================== ====================================================================================================
Template parameter Meaning
=============================== ====================================================================================================
`Tag::NoBoundCheck` `[default]` Do no bound checking at all. This is the default, for speed !
`Tag::BoundCheck` Do bound checks. Boundary errors throw triqs::arrays::key_error exception (See ??)
=============================== ====================================================================================================
The macro `TRIQS_ARRAYS_ENFORCE_BOUNDCHECK` changes the default to `Tag::BoundCheck` (for debugging purposes).
Rationale : one may want to activate bound checks for some arrays and catch the exception, while keeping faster unchecked arrays elsewhere.
* whether arrays are initialized at construction.
=============================== ====================================================================================================
Template parameter Meaning
=============================== ====================================================================================================
`Tag::no_init` `[default]` No initialization of the array at construction (fastest).
`Tag::default_init` Initialize with the default constructor of ValueType
`Tag::nan_inf_init`             Initialize to NaN for floating types and std::numeric_limits<ValueType>::infinity() for integer types
                                (as in the D language), mainly to detect missing-initialization errors easily.
=============================== ====================================================================================================
Note that :
* The macro `TRIQS_ARRAYS_ENFORCE_INIT_NAN_INF` changes the default to `Tag::nan_inf_init`.
* The macro `TRIQS_ARRAYS_ENFORCE_INIT_DEFAULT` changes the default to `Tag::default_init`.
* Several simple aliases are defined in the namespace Option for the most common cases :
=============================== ===============================
Option aliases
=============================== ===============================
Option::C Option::options<`Tag::C`>
Option::Default Option::options<`Tag::C`>
Option::Fortran Option::options<`Tag::Fortran>`
=============================== ===============================
* Examples ::
array<long, 3, Option::options<Option::memory_order <2,1,0> > > A0 (2,3,4);
array<long, 3, Option::options<Option::memory_order <0,1,2> > > A1 (2,3,4);
array<long, 3, Option::options<Option::memory_order <1,0,2> > > A2 (2,3,4);
array<long, 3, Option::Fortran > A4 (2,3,4);

View File

@ -1,18 +0,0 @@
.. highlight:: c
::
namespace Py_to_C {
template<typename T> struct convert{
static bool possible (boost::python::object x); // returns true iff the conversion of x into a T is possible
static T invoke(boost::python::object x); // do it !
};
}
namespace C_to_Py {
template<typename T> struct convert {
static boost::python::object invoke (T const & x); // convert T into a python object
};
}

View File

@ -1,33 +0,0 @@
.. _Shape:
Shape, resize
==================================
Shape
--------------------
* array, matrix and vector have a method shape() that returns a `shape_type` object
i.e. a mini_vector<size_t,rank>
* Tutorial::
array<double,2> A(2,3);
A.shape()[0] == 2;
A.shape()[1] == 3;
Resize
--------
* The `value class` array, matrix and vector can be resized::
array<double,2> A(2,3);
A.resize ( make_shape (5,5) )
matrix<double> M;
M.resize( 3,3);
vector<double> V;
V.resize(10);

View File

@ -1,101 +0,0 @@
.. highlight:: c
.. _Slicing:
Element access & partial views
==================================
Element access
--------------------
* The elements of the arrays can be naturally accessed using the ( ) operator::
array<T,2> A(10,10); // defines an array
A(1,2) = 10; // element access.
Partial view, slices
------------------------
Various kind of partial views and slices can be made on arrays and matrices.
* A `partial view` is defined as a view of a restricted portion of the array while
a `slice` is strictly speaking a partial view of a lower dimension of the original array,
e.g. a column of a matrix.
* Partial views use the ( ) operator, like the evaluation of the array::
array<T,2> A(10,10); // defines an array
A ( 1, range(0,2) ) // 1d slice
A ( 1, range() ) // 1d slice taking all the second dim
A( range(0,10,2), range(0,10,2)) // a 2d slice viewing every element with even coordinates.
array_view<T,1> SL( A(0,range(0,3))); // naming the view
auto SL = A(0,range(0,3)); // C++11: the same view, using auto
* **Return type** :
* Partial views of array or array_view return an array_view.
* Partial views of vector or vector_view return a vector_view.
* 2d partial views of matrix or matrix_view return matrix_view.
* BUT : (1d) slices of matrix or matrix_view return vector_view.
* 0d slices of anything are converted to the `value_type` of the array.
The `range` type
^^^^^^^^^^^^^^^^^^^^^
`range` mimics the python `range`. It can be constructed with :
* no argument : it then takes the whole set of indices in the dimension (like `:` in python) ::
A(range(), 0) // take the first column of A
* two arguments to specify a range ::
A(range (0,3), 0) // means A(0,0), A(1,0), A(2,0)
.. warning::
the second element is excluded : range(0,3) is 0,1,2, like in Python.
* three arguments : a range with a step ::
A(range(0,4,2), 0) // means A(0,0), A(2,0)
The `ellipsis` type
^^^^^^^^^^^^^^^^^^^^^^
* Ellipsis can be provided in place of `range`, as in python. The type `ellipsis` is similar to range
except that it is implicitly repeated as many times as necessary.
* Example ::
array<long,4> B(2,3,4,5) ;
B(0, ellipsis(), 3 ) ; // same as B(0, range(),range(), 3 )
B(0, ellipsis(),2, 3 ) ; // same as B(0, range(), 2, 3 )
B( ellipsis(),2, 3 ) ; // same as B( range(),range(), 2, 3 )
* NB : there can be at most one ellipsis per expression (otherwise it would be meaningless).
* Example of usage :
Ellipsis are useful for writing generic algorithms. For example, imagine that you want to sum
arrays over their first index ::
template<typename ArrayType>
array_view<typename ArrayType::value_type, ArrayType::rank-1> sum0 (ArrayType const & A) {
array<typename ArrayType::value_type, ArrayType::rank-1> res = A(0,ellipsis());
for (size_t u =1; u< A.shape()[0]; ++u) res += A(u,ellipsis());
return res;
}
///.....
array<double,2> A(5,2);
array<double,3> B(5,2,3);
std::cout<< sum0(A) << sum0(B) <<std::endl;

View File

@ -1,98 +0,0 @@
.. highlight:: c
vector and vector_view
============================
vector and vector_view :
* are the class for a vector and the corresponding view.
* model the :ref:`HasImmutableArrayInterface` concept.
Template parameters
----------------------------
* The classes have two template parameters.
.. code-block:: c
template <typename ValueType, typename Opt = Option::Default> class vector_view;
template <typename ValueType, typename Opt = Option::Default> class vector;
============================ ================================== ========================== ====================================================================
Template parameter Accepted type Access in the class Meaning
============================ ================================== ========================== ====================================================================
ValueType normally a scalar, but any default value_type The type of the element of the vector
constructible type.
Opt Option::options< ...> opt_type Compile time options, see below.
============================ ================================== ========================== ====================================================================
Options template parameters are described :ref:`here <option_template>`.
The memory order is obviously useless here...
.. _vector_constructors:
Constructors
-----------------
Intentionally, vector and vector_view have only a few constructors :
========================================== ===========================================================================================
Constructors of vector Comments
========================================== ===========================================================================================
vector() - empty vector of size 0
vector(size_t) - from the dimension. Does **NOT** initialize the elements of the vector to 0 !
vector(const vector &) - copy construction
vector (PyObject * X) - Construct a new vector from the data of the Python object X.
NB : X is a borrowed reference; vector does not affect its reference count.
- it takes a **copy** of the data of X (or of numpy(X)) into C++.
- X is first transformed into a numpy by the python numpy lib itself
(as if you call numpy.array(X)) if X is not a numpy array or an array of the wrong type
(e.g. you construct a vector<double> from a numpy array of int....).
vector(const T & X) - Type T models the :ref:`HasImmutableArrayInterface` concept.
- X must have the appropriate domain (checked at compile time).
- Constructs a new vector of domain X.domain() and fills it with evaluation of X.
========================================== ===========================================================================================
* Examples ::
vector<int> A(10);
A()=0; // init A to 0
.. warning::
The constructor from the dimension does **NOT** initialize the vector to 0
(because it may not be optimal).
If needed, do it explicitly with `A()=0`;
Constructors of vector_views
----------------------------------------------
Automatic construction
^^^^^^^^^^^^^^^^^^^^^^^^^^^
vector_view are normally automatically constructed by making (partial) views, cf :ref:`Slicing`, e.g. ::
   tqa::vector<int> A(10);
   A(range(0,2)) ; // --> this makes a view...
   A() ; // --> this makes a view over the full vector
Explicit construction
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To explicitly make a view of a vector, use make_view ::
   vector<int> A(10);
   make_view(A) //-> a view...
   make_view(A) = 13 ; // to assign e.g.
====================================================================== ===========================================================================================================
Constructors of vector_view Comments
====================================================================== ===========================================================================================================
vector_view(const vector_view &)                                       - Copy construction (shallow copy)
vector_view(const T & X) - `[Advanced]` T is any type such that X.indexmap() and X.storage() can be used to construct a view.
====================================================================== ===========================================================================================================

View File

@ -1,80 +0,0 @@
.. highlight:: c
A fundamental distinction : value classes vs view classes
=================================================================
A fundamental distinction in TRIQS is the difference between the `value class`
and the corresponding `view classes`.
* The **value classes** : e.g. array, matrix, vector, ...
* They own their data.
* They have a `value semantic`.
* When copied, assigned, they make deep copy.
* They are default constructible.
* They resize themselves under assignment if necessary (like e.g. std::vector).
* They can be put in standard STL containers.
* The **view classes** : e.g. array_view, matrix_view, vector_view, ...
* They are just partial views of the corresponding value class.
* They do not own their data.
* They have a `view semantic`, with reference counting to the data.
* They never make deep copies, whether in copying, constructing, or transforming to Python.
* They can not be resized.
* They are useful to work on arrays, matrices, or on some part thereof, without making copies
(e.g. on a slice of an array, a column of a matrix).
Each class is therefore provided in two flavors, and it is crucial for the user
to learn and understand their difference in order to use them appropriately.
This distinction extends to more complex objects in TRIQS, such as Green functions.
Behaviour comparison table between value and view classes
------------------------------------------------------------
A detailed comparison of the behaviour of the two types of classes is given in the following table.
=================== ======================================================================= ======================================================================================
Topics value class (e.g. array, matrix, vector) view (e.g. array_view, matrix_view, vector_view)
=================== ======================================================================= ======================================================================================
Constructors - Constructors create a new fresh memory block. - Constructors only make another view of the data.
Default constructor - The default constructor creates an empty array - No default constructor (what would it view ?).
Copy constructors - Create a new fresh memory block. - Make another view of the data (shallow copy).
- Make a *true* copy of the data. - *Never* copy data, so they are quite quick.
Assignment (=) - The assignment operator creates a new data block if sizes mismatch. - The assignment operator just copies data into the view.
- Hence, assignment never fails for size reasons Behaviour is undefined if the size of the view is too small.
(unless there is a memory allocation exception of course) (it throws if the array has option `Tag::BoundCheck` or if
`TRIQS_ARRAYS_ENFORCE_BOUNDCHECK` is defined, Cf below).
Resizing - Can be resized, invalidating all references/pointers to the data. - Cannot be resized.
Invalidation - References/pointers to the data may become invalid after resize, - References/pointers to the data are still valid after assignment.
or assignment.
STL - Can be used with STL containers and algorithms - Cannot be put in STL containers, but have STL-compliant iterators.
Data Contiguity - The data are contiguous in memory. - The data may not be contiguous in memory (e.g. a slice).
=================== ======================================================================= ======================================================================================
An example
--------------
A simple example ::

   // Value class version: returns a new array; safe (the copy is elided or moved).
   array<int,2> f(size_t n) {
     array<int,2> res(n,n);
     res() = 0;
     return res;
   }

   // View version: DANGER, returns a view of a local array destroyed on return.
   array_view<int,2> g(size_t n) {
     array<int,2> res(n,n);
     res() = 0;
     return res; // the view dangles !
   }

View File

@ -7,52 +7,77 @@ gf and gf_view
gf is the Green function container, with its view gf_view, defined as ::
template<typename Variable, typename Target=matrix_valued, typename Opt=void> class gf; // regular type
template<typename Variable, typename Target=matrix_valued, typename Opt=void> class gf_view; // view type
In this section, we describe all common properties of gf.
In practice, one works with one of the specializations described below (:ref:`gf_special`),
with their specific aspects (domains, interpolation methods, ....).
Template parameters
----------------------------
* **Variable** determines the domain of the Green function
* **Target** determines the value of the Green function
* **Opt** is currently not used [default to void]. It may be used to differentiate different Green functions
with the same Variable, Target but different behaviour (e.g. interpolation, ...).
They can have the following values :
+--------------------------+--------------------------------------------+
| Variable | Meaning |
+==========================+============================================+
| gf_imfreq | Imaginary Matsubara frequency |
+--------------------------+--------------------------------------------+
| imtime | Imaginary Matsubara time |
+--------------------------+--------------------------------------------+
| refreq | Real frequency |
+--------------------------+--------------------------------------------+
| retime | Real time |
+--------------------------+--------------------------------------------+
| legendre | Legendre polynomial representation |
+--------------------------+--------------------------------------------+
| block_index | Block Green function |
+--------------------------+--------------------------------------------+
| cartesian_product<Gs...> | Cartesian product of gf<Gs> ... functions. |
+--------------------------+--------------------------------------------+
+-------------------------+-----------------------------------------------------+
| Target | Meaning |
+=========================+=====================================================+
| matrix_valued [default] | The function is matrix valued. |
+-------------------------+-----------------------------------------------------+
| scalar_valued | The function is scalar valued (double, complex...). |
+-------------------------+-----------------------------------------------------+
.. _gf_special:
Specializations
-------------------
+-----------------+------------------+--------------------+----------------------------+
| Variable\Target | matrix_valued | scalar_valued | G (another Green function) |
+=================+==================+====================+============================+
| imfreq | :doc:`gf_imfreq` | :doc:`gf_imfreq_s` | |
+-----------------+------------------+--------------------+----------------------------+
| imtime | :doc:`gf_imtime` | :doc:`gf_imtime_s` | |
+-----------------+------------------+--------------------+----------------------------+
| refreq | :doc:`gf_refreq` | :doc:`gf_refreq_s` | |
+-----------------+------------------+--------------------+----------------------------+
| retime | :doc:`gf_retime` | :doc:`gf_retime_s` | |
+-----------------+------------------+--------------------+----------------------------+
| block_index | | | :doc:`block_gf<gf_block>` |
+-----------------+------------------+--------------------+----------------------------+
.. toctree::
:hidden:
:maxdepth: 1
gf_imfreq
gf_imfreq_s
gf_refreq
gf_imtime
gf_retime

View File

@ -2,7 +2,7 @@
.. _gf_imfreq:
gf<imfreq>
==========================================================
This is a specialisation of :ref:`gf_and_view` for imaginary Matsubara frequencies.
@ -11,6 +11,7 @@ Domain & mesh
----------------
Singularity
-------------
@ -22,7 +23,7 @@ Factories
The factories are ::
make_gf(gf_mesh<imfreq,Opt> m, matrix_shape_t shape, local::tail_view t = local::tail(shape) )
make_gf(double beta, statistic_enum S, matrix_shape_t shape, size_t Nmax = 1025, local::tail_view t = local::tail(shape) )

View File

@ -0,0 +1,79 @@
.. highlight:: c
.. _gf_imfreq_s:
gf<imfreq, scalar_valued>
==========================================================
This is a specialisation of :ref:`gf_and_view` for imaginary Matsubara frequencies.
Domain & mesh
----------------
Singularity
-------------
:ref:`gf_tail`.
Factories
-------------
The factories are ::
make_gf(mesh<imfreq,Opt> m, local::tail_view t = local::tail({1,1}) )
make_gf(double beta, statistic_enum S, size_t Nmax = 1025, local::tail_view t = local::tail({1,1}) )
Interpolation method
---------------------
None
Data storage
---------------
* `data_t` : 1d array (C ordered) of complex<double>.
* g.data()(i) is the value of g for the i-th point of the mesh.
HDF5 storage convention
---------------------------
h5 tag : `ImFreq_s`
Examples
---------
.. compileblock::
#include <triqs/gfs/imfreq.hpp>
using namespace triqs::gfs;
int main() {
double beta=1; // inverse temperature
size_t n_freq=5; // number of Matsubara frequencies
auto GF = make_gf<imfreq,scalar_valued>(beta, Fermion, n_freq);
};
An alternative declaration with an explicit construction of the underlying mesh:
.. compileblock::
#include <triqs/gfs/imfreq.hpp>
using namespace triqs::gfs;
int main(){
double beta=10;
int Nfreq =100;
auto GF = make_gf<imfreq,scalar_valued>(gf_mesh<imfreq>{beta,Fermion,Nfreq});
// auto GF2 = make_gf<imfreq,scalar_valued>({beta,Fermion,Nfreq});
}

View File

@ -1,326 +0,0 @@
.. highlight:: c
The new C++11 standard
===============================
C++11 is the new C++ standard, approved in 2011 by the international C++ standardization committee.
It brings several improvements to C++.
The goal of this page is to highlight a few constructs, which may be useful for us in scientific computing
in order to make C++ more accessible and ease library writing.
A simple presentation of C++11 is given in `wikipedia <http://en.wikipedia.org/wiki/C%2B%2B11>`_.
Implementation
-----------------
The first thing to know is whether this standard is usable, i.e. implemented in compilers.
It turns out that recent versions of the main compilers (gcc, clang, icc) already implement most of the features,
and the implementations are progressing very rapidly.
A simple `page <http://wiki.apache.org/stdcxx/C%2B%2B0xCompilerSupport>`_ presents what is implemented in various compilers.
A detailed table of the implemented features with versions exists for `gcc <http://gcc.gnu.org/projects/cxx0x.html>`_
and for `clang <http://clang.llvm.org/cxx_status.html>`_.
.. note::
To use C++11, pass the option -std=c++0x (all compilers), or -std=c++11 (gcc >= 4.7, clang >= 3.0).
Rationale for using C++11
----------------------------
C++11 adds some libraries (standardizing several boost libraries, e.g. shared_ptr),
but also, and most importantly, several new features in the language itself.
Some are basically *syntactic sugar*, some have a deep effect on the programming style.
Some of these features allow a much cleaner programming style with :
* functional programming : introduction of lambdas in particular, but also auto, decltype.
* metaprogramming (variadic templates, auto, decltype, etc.).
C++11 allows :
* Write simpler code for the User of libraries
* A lot of simplification in the writing of libraries themselves.
In my opinion the gain in clarity that one can achieve with C++11 vastly outweighs the
cost in portability (to old versions of compilers).
There are already many articles on C++11, and soon books, so I will only focus on
a few examples and basic explanations.
**Except when specified, the examples below compile with gcc 4.6 (Ubuntu 12.04 LTS default) and clang 3.1 (current stable)**
Rvalue references
-----------------------
Probably one of the most useful feature is the *move semantics*, or rvalue references.
The problem
^^^^^^^^^^^^^
*How to efficiently return a big object built by a function ?*
Example : I have a function that creates a new std::vector (or any array, Green functions).
How do I return it ?
A simple solution::
std::vector<T> f ( my_parameters) {
std::vector<T> res( vec_size);
// fill res
res[0]=1; //...
return res;
}
/// usage
std::vector <T> v = f(1,2); // construct a new v from the result of the call to f.
v = f(3,2); // reassign an already constructed v to the result of f.
Normally, this solution has a major defect : *res* is **copied** when returned from the function,
hence there is a (potentially huge) performance penalty in writing such simple code.
Most current compilers will already elide (remove) the copy in the first case, but it is not guaranteed.
And in the second case, there will be a copy.
This situation has led to several workaround idioms, obscuring the coding style in C++, like ::
// solution 1 : use pointers
std::vector<T> * f (my_parameters) { // <---- return a pointer
std::vector<T> * res = new std::vector<T>(vec_size); // <--- dynamical allocation
// fill res
(*res)[0] = 1; /// <---- not so nice to write
return res;
}
/// usage
std::vector <T> * v = f(1,2); // <--- put this in a shared_ptr ??
v = f(3,2); // <--- Oops : memory leak, the first object is never deleted !
This has a big problem too : who is going to take care of deallocating the pointer (see the shared_ptr section) ??
Another solution is ::
// solution 2 : pass by reference
void f( std::vector<T> & res, my_parameters) { // <---- pass a vector and change it
/// fill res....
}
// usage
std::vector<T> v;
f(v,1,2); // <------ not nice, not natural
The C++11 solution
^^^^^^^^^^^^^^^^^^^^^^
The **efficient** C++11 code is now **the simplest one** in this situation ::
std::vector<T> f (my_parameters) {
std::vector<T> res(vec_size);
res[0]=1; // fill res ...
return res;
}
/// usage
std::vector <T> v = f(1,2); // no copy (copy elision) !
v = f(2,3); // no copy (move operator =) !!
In that case, there will be no useless copy. The data of the temporary will be "stolen" by v.
Since the temporary is ... temporary, this is a good strategy. Note that :
* This is transparent in this code. The user of the vector object,
simply write the **simplest, natural code**. (it is a good idea still to understand the basic idea, see below).
* The charge of implementing the move semantics rely on the **library writer, not the user of the class**.
* Still, implementing this is not hard (see example below).
* A detailed series of (readable) articles on copy elision, and move semantics by D. Abrahams
can be found `here <http://cpp-next.com/archive/2009/08/want-speed-pass-by-value>`_.
An example of a move constructor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Let us now make a minimal vector class to illustrate the idea and how to implement move semantics.
In a situation like the one above, the vector (or anything that provides a **move constructor**) will
avoid the copy by *stealing the data from the returned temporary*.
The temporary will be discarded, so there is no problem in stealing its data.
Typically a class with move semantics will provide :
* a copy constructor, an operator =, as in C++03.
* a move constructor, a move operator=
.. compileblock::
#include <iostream>
#include <string.h>
class my_vector {
double * data; // a piece of memory
size_t size_; // size of the piece
public :
my_vector(int s=0): data(s>0 ? new double[s] : NULL ), size_(s){} // basic constructor. Just allocate memory
~my_vector() { if (data) delete[] data;} // free the pointer
my_vector(my_vector const & x) { // copy constructor
std::cout << "copy construction"<< std::endl;
size_ = x.size(); data = new double[size_];
memcpy(this->data, x.data, x.size()*sizeof(double)); // copy data in memory (low level C)
}
my_vector & operator=( my_vector const & x) { // normal = operator
std::cout <<" normal ="<< std::endl;
size_ = x.size(); if (data) delete[] data; data = new double[size_];
memcpy(this->data, x.data, x.size()*sizeof(double)); // copy data in memory (low level C)
return * this;
}
my_vector(my_vector && V) { // C++11 move constructor
std::cout <<" move constructor"<< std::endl;
data = V.data; V.data = NULL; // steal the data from V and leave V empty !
size_ = V.size_; V.size_ = 0;
}
my_vector & operator=( my_vector && V) { // C++11 move = operator
std::cout <<" Move ="<< std::endl;
std::swap(V.data, this->data); // swap data
std::swap(V.size_, this->size_);
return * this;
}
// ------ standard interface to vector (minimal) --------------------
double const & operator[](size_t i) const { return (data)[i];}
double & operator[](size_t i) { return (data)[i];}
size_t size() const { return size_;}
friend std::ostream & operator <<( std::ostream & out, my_vector const & v) {
for (size_t i =0; i<v.size(); ++i) { out << v[i] <<" ";}
return out;
}
};
// ------------- USAGE ----------------
my_vector make_vector(int s) { // a function that makes a vector
my_vector res(s);
for (int i =0; i<s; ++i) res[i] = i*10;
return res;
}
my_vector f(my_vector V) { // another one, that take a copy of V ...
V[0] = - V[0];
std::cout << "work done "<< std::endl;
return V;
}
int main () {
my_vector A(3); A[0] = 1; A[1] =2; A[2] =3;
my_vector B(3); B[0] = 10; B[1] =20; B[2] =30;
my_vector C(A); std::cout <<" C= "<< C << std::endl;
my_vector D(make_vector(4)); std::cout <<" D= "<< D << std::endl;
A = f(A); std::cout <<"A =f(A) : A= "<< A << std::endl;
A = f(B); std::cout <<"A = f(B) : A= "<< A << "B= "<< B<< std::endl;
std::swap(A,B); std::cout <<" swap: A= "<< A << "B= "<< B<< std::endl;
}
..
.. literalinclude:: babyvector.cpp
Explanation (quick) :
^^^^^^^^^^^^^^^^^^^^^^^
* && is a new kind of reference, called an *rvalue reference*, that binds to (unnamed) temporaries.
* ... see the literature for details ...
Other example
^^^^^^^^^^^^^^^
You may also explicitly ask for the move semantics, e.g. ::
{
A a (...); // building a temporary a
b = 2*a; // using a
c = std::move(a); // a is going to be destroyed. c "steals" its data ...
}
Automatic type deduction: auto, decltype
------------------------------------------------
In many situations, the compiler knows the type (better than you ?), so you can save time by asking the compiler
to deduce the type by itself ::
auto x = 3; // equivalent to int x= 3;
This becomes really powerful when the type (e.g. returned by a function) gets more complex::
auto res = my_function(1,2,3);
// res is of the return_type of the function for this overload.
In some cases (see Expression templates, in particular), it is the *only* way to deduce a
type which may be internal to the compiler (e.g. lambda) or to a library, e.g.::
array<int,2> A(2,3), B(2,3);
auto expr = A + 2*B;
for range loop
-----------------
For any object that has iterators (in fact it is more general ....), there is a simpler syntax to iterate over it.
Example ::
std::vector<int> V = {1,2,3};
for (auto & x : V) {
// do something with x ....(it is a reference)
}
Initializer list
------------------
Example ::
std::vector<int> V = {1,2,3};
triqs::arrays::matrix<double> M = {{1,2},{3,4}}; // not implemented yet....
Lambda
------------
As in python, C++11 now supports simple lambda functions ::
auto f = [](int x) { return x+1;};
int u = f(3); // use as a regular function...
* It still allows inlining.
* This is particularly useful with the STL algorithms, or similar iteration algorithms :
..
.. literalinclude:: count_if.cpp
.. compileblock::
#include <algorithm>
#include <iostream>
#include <vector>
int main()
{
std::vector<double> v = { -1, -2, -3, 4, -4, 3, -7, 8, 9, 10 };
int num_items1 = std::count_if(v.begin(), v.end(), [](double i) {return i>0;});
std::cout << "number of >0 : " << num_items1 << '\n';
int bound = 3;
num_items1 = std::count_if(v.begin(), v.end(), [&bound](double i) {return i>bound;});
std::cout << "number of >bound : " << num_items1 << '\n';
}
Variadic template
---------------------
Improvement of the standard library
---------------------------------------
Most notably :
* std::shared_ptr, and std::unique_ptr (see Smart Pointers).
* std::unordered_map
In most cases, this is a standardisation of some boost libraries.

View File

@ -1,13 +0,0 @@
Various ways to take a decision at compile time
###############################################
* Overloading : often forgotten. Use of tags
* std::conditional or mpl::if\_
* std::enable_if
In combination with a template <> + 1 default ...
* template technique : specialisation, traits ...

View File

@ -1,16 +0,0 @@
General C++ techniques and idioms
************************************************************************************
.. highlight:: c
.. toctree::
:maxdepth: 1
:numbered:
C++11
concepts
smart_ptr
functions
compile_time_decision

View File

@ -1,14 +0,0 @@
Functions become first class citizens ...
#################################################
* functional programming : another paradigm ...
* std::function : store a function in an object
* currying : std::bind, std::placeholders.
* Do not USE function pointers....
* Function objects : wrap functions into an objects....
* lambda : example of use, STL....

View File

@ -1,182 +0,0 @@
.. highlight:: c
Pointers : from the evil to the smart...
######################################################
Raw pointers
===============
Raw pointers are ordinary C/C++ pointers.
Pointers allow one to manipulate objects by reference (or indirectly).
They are used e.g. to allocate a new object on the heap ::
{
A * p = new A( constructor_arguments); // allocated and constructed on the heap.
A a( constructor_arguments); // constructed on the stack
delete p; // release memory
}
Raw pointers are the source of many bugs in C++. Two main dangers are :
* memory leak : when one forgets to release the memory allocated to the pointer::
{
A * p = new A( constructor_arguments); // allocated and constructed on the heap.
//
// forgot to delete. but p is gone....
}
* a more subtle memory leak ::
{
A * p = new A( constructor_arguments); // allocated and constructed on the heap.
// working with p...
delete p;
}
What happens if there is an exception during the work ??
The pointer is not deallocated (but all objects on the stack would be destroyed).
* dangling pointers : reusing a pointer after the object has been deleted ::
{
A * p = new A( constructor_arguments); // allocated and constructed on the heap.
//...
A *q = p; // a copy of the pointer is made somewhere
//...
delete p;
// ...
q->method(..) ; // DEAD : segfault.
}
To conclude :
* pointers are a low level object
* pointers induce many dangers...
The solution
====================
Smart pointers are a way to manipulate objects by reference with a strong guarantee
that such common problems will **never** happen.
The bottom line can be summarized as :
*Smart pointers are pointers with strong guarantee to avoid dangling references and memory leaks.*
In order to avoid pointer problems, the style recommendations are :
#. Try to avoid pointers ! Use objects by value, and objects like std::vector
which hide the dynamic allocation of memory from the user in a safe manner.
#. If you need a pointer to an object A which is pointed to by exactly one pointer
(a local object), then use std::unique_ptr (or boost::scoped_ptr if you cannot use C++11).
#. If you need an object A shared by multiple objects, i.e. pointed to by several pointers
use a std::shared_ptr.
#. Only when other options are not good, and in practice only at low level libraries,
use raw pointers to allocate objects, with new and delete.
std::unique_ptr
---------------------
Example ::
{
std::unique_ptr<A> p( new A( constructor_arguments) ); // allocated and constructed on the heap and stored.
//
// working with p. p has the semantics of a pointer :
// p-> method(), p->data, etc ...
// NO DELETE : when the std::unique_ptr goes out of scope and is destroyed, the
// underlying raw pointer is freed.
}
Note that :
* a unique_ptr **cannot be copied**. If you put it as a member of a class, this class can therefore not be copied.
* a unique_ptr **can be moved** (see lecture on move semantics).
* This writing is exception-safe. If an exception occurs in the treatment, then C++ guarantees
that all local objects are destroyed, hence p is destroyed and the underlying raw pointer is deleted.
* unique_ptr has little or no overhead in critical code.
std::shared_ptr
---------------------
(or boost::shared_ptr < C++11).
In the case where we want to **share** an object, i.e. have multiple pointers pointing to the same object,
we have to use std::shared_ptr.
A shared_ptr is no more than :
* a pointer to the object
* a reference counter, counting how many shared_ptr point to the object.
When shared_ptr are copied or assigned, the reference counter is updated accordingly :
* When a shared_ptr is created, copied, ref_counter is increased by 1.
* When a shared_ptr is deleted, ref_counter is decreased by 1.
When the last shared_ptr to an object is destroyed, the reference counter drops
to 0 and the pointed-to object is deleted.
Example ::
class A;
boost::shared_ptr<A> p (construction_parameters);
// work with p like a pointer
// p has pointer semantic. *p, p->method as for raw pointer
// NO delete
Example ::
class A;
boost::shared_ptr<A> p1 (construction_parameters);
boost::shared_ptr<A> p2 (construction_parameters);
// p1 ----> ( a1, ref_counter = 1 )
// p2 ----> ( a2, ref_counter = 1 )
{ // group start
boost::shared_ptr<A> p3 = p2;
// p1 ----> ( a1, ref_counter = 1 )
// p2 ----> ( a2, ref_counter = 2 )
// p3 ----> ( a2, ref_counter = 2 )
} // group ends ... p3 is deleted
// p1 ----> ( a1, ref_counter = 1 )
// p2 ----> ( a2, ref_counter = 1 )
p1 = p2;
// p1 ----> ( a2, ref_counter = 2 )
// p2 ----> ( a2, ref_counter = 2 )
// a1 has been destroyed.

View File

@ -1,30 +0,0 @@
Notes for TRIQS C++ contributor
*********************************
*Our goal : write clear, readable, high-level and yet efficient programs.*
The goal of these notes is to be :
* a starting point for learning modern C++ for scientific computation :
Traditionally, it is often said that genericity and abstraction (i.e. high-level programming)
are not compatible with the efficiency that we need in scientific computations.
The goal of these lectures is to show that we can achieve both with today's C++ and libraries.
* a style guide for the TRIQS project (C++ part).
Most of the content is more general (and independent of the TRIQS project).
These are a few notes on modern C++ for scientific computations.
The intended audience is physicists with a basic knowledge of C++.
These lectures do not focus on basic syntax, but on useful **idioms**.
.. highlight:: c
.. toctree::
:maxdepth: 1
:numbered:
basic/contents
patterns/contents
concepts/contents

View File

@ -1,25 +0,0 @@
Introduction
============
*Our goal : write clear, readable, high-level and yet efficient programs.*
This is the motto of the lectures.
Traditionally, it is often said that genericity and abstraction (i.e. high-level programming)
are not compatible with the efficiency that we need in scientific computations.
The goal of these lectures is to show that we can achieve both with today's C++ and libraries.
Why is it important?
--------------------
* Our codes change all the time. We want code reuse.
* more to come here
Goal of the lectures
--------------------
* Present some patterns and techniques which are useful to write high level and efficient codes.
* Fix the "pattern" design of TRIQS. One problem : one standardised solution.

View File

@ -1,31 +0,0 @@
User Manual : the complete (?) reference
*******************************************
.. warning::
This library is beta.
This manual is not finished: work in progress.
.. highlight:: c
.. toctree::
:maxdepth: 2
:numbered:
view_or_not_view
basic_classes
slicing
shape
debug
assignment
algebras
foreach
functional
STL
H5
Interop_Python
IO
blas_lapack
FAQ

View File

@ -1,133 +0,0 @@
.. highlight:: c
Assignment Delegation
===============================
Problem
------------
Given a class A which defines an `operator =`, we would like to
be able to implement various optimisations depending on the RHS (right hand side).
Normally, we can simply define various operators in class A ::
class A {
// ...
template<typename RHS> A & operator = (RHS rhs);
}
maybe with some enabler to choose for different types.
Normally, the RHS is expected to model some precise concept (e.g. array, Green function, etc...).
In other words, the RHS looks "like an A".
It is then easy to implement a generic version of operator = that uses just the concept.
We would also like to be able to implement various optimisations depending on the RHS.
These optimisations are decided at compile time, and maybe also at run time.
Of course, we could split the definition of operator = into cases with enable_if.
But we want a non-intrusive solution, i.e. to be able to add an optimisation
for a new type of RHS without modifying the class A.
Real life examples
^^^^^^^^^^^^^^^^^^^^^^^^
* array, matrix, vector, with RHS a generic proto expression. For example in triqs::arrays ::
M = inverse(M); // (1)
M = 2*inverse(M); // (2)
inverse returns an inverse_lazy object. We would like the inversion to be done in place, *without
a temporary*, in case (1), but obviously not for a general expression like (2).
* Green function, for example in the case ::
gf = fourier (gt) ;
The Fourier transform could be done directly in the gf data, without a temporary,
if this becomes a bottleneck in the code.
The point here is also that we can first implement a generic algorithm. If needed,
we know that we will later be able to remove this temporary *without any change in the user code*.
Solution
------------------
A simple solution is to delegate the assignment and simply use overloads, e.g. ::
template <typename LHS, typename RHS>
void assign_delegation (LHS & lhs, RHS const & rhs) {
// implement the generic algorithm
}
class A {
// ...
template<typename RHS> A & operator = (RHS const & rhs) { assign_delegation(*this, rhs); return *this; }
}
And then later when defining MyRHS_with_optimisation ::
class MyRHS_with_optimisation {
// .... implement the object ...
template <typename LHS>
friend void assign_delegation (LHS & lhs, MyRHS_with_optimisation const & rhs) {
// implement the special algorithm
}
};
**NB** : Adding the function as a friend in the class has 2 advantages over defining it outside:
* It is friend, hence it can access the data of rhs.
* It will be injected in the namespace where the class MyRHS_with_optimisation is defined, and therefore it will be found by ADL.
Variant
^^^^^^^^^^
With the same technique, one can also delegate the compound assignments : +=, -=, *=, /=.
Let us label these operators by a char : A, S, M, D. We can then differentiate between the various
cases by adding an additional dummy parameter of mpl::char_<X> type::
template <typename LHS, typename RHS, char OP>
void compound_assign_delegation (LHS & lhs, RHS const & rhs, mpl::char_<OP>) {
// implement the generic algorithm
}
class A {
template<typename RHS> A & operator += (RHS const & rhs) {
compound_assign_delegation(*this, rhs, mpl::char_<'A'>() );
return *this;
}
// and so on...
}
And then later when defining MyRHS_with_optimisation ::
class MyRHS_with_optimisation {
// .... implement the object ...
template <typename LHS>
friend void compound_assign_delegation (LHS & lhs, MyRHS_with_optimisation const & rhs, mpl::char_<'A'>) {
// implement the special algorithm
}
};
**NB** :
* One can then overload some or all of the operations for a given class.
* mpl::char_<'A'> is a different **type** than mpl::char_<'S'>, so the standard overload mechanism of C++ will
choose the right function at compile time (the `mpl::char_` object is tiny, and will be optimized away).
It is better than passing a simple char and making a switch at run time (because there would be no compile-time choice in that case).
* One could a priori also add a char OP in the template parameter list, but then in the MyRHS_with_optimisation class, we would need
to specialize a function, which results in less readable code, IMHO.
Examples
-----------
Used in triqs::arrays.

View File

@ -1,24 +0,0 @@
.. highlight:: c
Some useful idioms and patterns
#######################################
The goal of this list of patterns is to :
* illustrate some useful (?) patterns in C++.
* document TRIQS code and standardize the way things are implemented in TRIQS
.. toctree::
:maxdepth: 1
h5_read_write
lazy_objects
type_erasure
python
assign_delegation
proto
preproc
iterator

View File

@ -1,148 +0,0 @@
.. highlight:: c
HDF5 write/read convention
================================
All TRIQS objects that can be h5-serialized (i.e. put into an hdf5 file and reconstructed from it)
obey the convention below.
Convention
--------------
For each h5-serializable TRIQS object, one defines **free functions** *h5_read*, *h5_write* with the synopsis ::
// Write the object into the group or file F, either directly (for arrays, basic types)
// or by opening a subgroup and writing its components into this subgroup.
h5_write ( h5::group_or_file F, std::string const & Name, MyNiceObject const & obj);
// The inverse operation : it fills the object from the file. The object must exist beforehand.
h5_read ( h5::group_or_file F, std::string const & Name, MyNiceObject & obj);
* h5::group_or_file is a little opaque object that can be transparently constructed
from the two types H5::Group and H5::H5File (respectively the Group and the File in the HDF5 library).
* Recommended practice : **define these functions as friend functions of the class**.
*Rationale* : they can access the data and will be resolved by ADL.
Example :
----------
Suppose I have an object containing an array A, a double x and a string s:
.. compileblock::
#include <triqs/arrays.hpp>
#include <string>
namespace tqa= triqs::arrays;
class MyNiceObject {
tqa::array<double,2> A;
double x;
std::string s;
public :
MyNiceObject(): A(2,3), x(3), s("wonderful") { A() = 3;}
friend void h5_write (triqs::h5::group fg, std::string subgroup_name, MyNiceObject const & obj) {
auto g = fg.create_group(subgroup_name);
h5_write(g,"A",obj.A);
h5_write(g,"x",obj.x);
h5_write(g,"s",obj.s);
}
friend void h5_read (triqs::h5::group fg, std::string subgroup_name, MyNiceObject & obj){
auto g = fg.open_group(subgroup_name);
h5_read(g,"A",obj.A);
h5_read(g,"x",obj.x);
h5_read(g,"s",obj.s);
}
};
int main() {
H5::H5File file("ess.h5", H5F_ACC_TRUNC );
MyNiceObject a;
h5_write(file, "a1", a);
h5_write(file, "a2", a);
}
The generated file looks like::
> h5ls -r ess.h5
/ Group
/a1 Group
/a1/A Dataset {2, 3}
/a1/s Dataset {SCALAR}
/a1/x Dataset {SCALAR}
/a2 Group
/a2/A Dataset {2, 3}
/a2/s Dataset {SCALAR}
/a2/x Dataset {SCALAR}
and the full dump as ::
HDF5 "ess.h5" {
GROUP "/" {
GROUP "a1" {
DATASET "A" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 2, 3 ) / ( 2, 3 ) }
DATA {
(0,0): 3, 3, 3,
(1,0): 3, 3, 3
}
}
DATASET "s" {
DATATYPE H5T_STRING {
STRSIZE 9;
STRPAD H5T_STR_NULLTERM;
CSET H5T_CSET_ASCII;
CTYPE H5T_C_S1;
}
DATASPACE SCALAR
DATA {
(0): "wonderful"
}
}
DATASET "x" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SCALAR
DATA {
(0): 3
}
}
}
GROUP "a2" {
DATASET "A" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 2, 3 ) / ( 2, 3 ) }
DATA {
(0,0): 3, 3, 3,
(1,0): 3, 3, 3
}
}
DATASET "s" {
DATATYPE H5T_STRING {
STRSIZE 9;
STRPAD H5T_STR_NULLTERM;
CSET H5T_CSET_ASCII;
CTYPE H5T_C_S1;
}
DATASPACE SCALAR
DATA {
(0): "wonderful"
}
}
DATASET "x" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SCALAR
DATA {
(0): 3
}
}
}
}
}

View File

@ -1,47 +0,0 @@
.. highlight:: c
Lazy Objects
====================
PreCompute in Lazy Object
-----------------------------
* why a lazy object
* example
* idea : keep a view or a copy and then implement the concept.
* add possibly some optimisation
* When precomputations are needed.
--> LazyPreCompute pattern with the shared_ptr
Ex ::
class MyLazyObject {
// ..
private:
struct internal_data {
// contains the data for the precomputation
// e.g. std::vector<int> data;
internal_data(MyLazyObject const & P) {
// do the computation and fill the data
// data = ...;
}
};
friend struct internal_data; // internal_data can read the parent object P
mutable boost::shared_ptr<internal_data> _id; // holds the data
void activate() const { if (!_id) _id= boost::make_shared<internal_data>(*this);}
}
Comments :
* The data is held in a specialized struct `internal_data`, which contains the precomputation.
* The data is held by a shared_ptr, _id :
* The copy of MyLazyObject is then cheap and safe with shared_ptr.
* If the computation is not necessary, _id is empty, there is no memory usage.
* activate() activates the calculation of the internal data.
* _id is mutable : the precomputation is an implementation detail, so even a const object should be able to activate it safely.

View File

@ -1,57 +0,0 @@
Python bindings
#####################
All the idioms that we use from boost.python ....
Introduction to boost.python
==============================
* wrapping versus converting.
Example of wrapping a function, a class, with an array.
Using a more complex converters.
Wrapping enable a converter.
* converting : principle.
Ref to doc.
The simple triqs way to build a converter.
Example of range !
* Passing a pyobject.
* Difference between PyObject* and bpy::object
Reference counting.
* More sophisticated data structure.
* THE TRIQS tools : a thin layer (converter) for recursive conversion
(e.g. vector<T> , unordered_map, ....)
boost.python : a C++ API for python
======================================
Code injection
=================
The problem.
The solution : injection and documentation.
Examples
==========
Passing a dict for parameters in a routine.
Passing a list of gf_view into a vector of gf_view...
---> make it a const vector to avoid changing it....
Other options
==============
* Swig
* Cython

View File

@ -1,55 +0,0 @@
Type erasure : from static to dynamical polymorphism
############################################################
* What is the issue ?
* A basic example : std::function
* Example :
* A real life example : QMC class.
* The classical C++ solution with virtual functions
Polymorphism
================
Static polymorphism (template, generic programming)
---------------------------------------------------------
Dynamical polymorphism.
The crossing : why ? How ?
Concepts. When is the decision taken : at compile time or a run time ?
A simple example : std::function
Can handle any function or object which has the concept double(int,int), e.g.
It **erases** the type of the object.
The problem : find an envelope for an object with :
* the concept
* one type, that can handle any type.
---> Add Abrahams definitions (construct + =).
An example :
The problem is twofold :
* keep the underlying object alive
* call its methods without remembering its type.
--> show a solution
Use std::function + std::bind variant without lambdas.
Use shared_ptr<void>
* shared_ptr<void> the universal handle
--> Hum : pb : restore the value semantics !
* The traditional C++ way using 3 classes. Ugly...
The envelop as a concept checker ???
--> It does not compile if T does not model the concept.
Python : a general type eraser : PyObject * or boost::python::object.
2 frontiers : templates, C++ polymorphism, Python : in ease of use and in speed.

View File

@ -1,18 +0,0 @@
ViewValue pattern in TRIQS
##################################
* for array, matrix, vector, tail, gf of any kind.
* always of the same form.
* A value class with value semantics
* A view class to take partial views & slices, with view semantics.
How to implement ?
* X_impl<View>
--> X and X_view by derivation : reimplement only construction (PATTERN) and =.
* Rationale :
do NOT derive X from X_view !

View File

@ -1,42 +0,0 @@
How to write an iterator quickly and efficiently
======================================================
* Do not use too many iterators; usually, for loops are more efficient.
* Iterators are mainly useful in combination with STL.
* The best way to have an STL compliant iterator is to use the boost::iterator_facade documented
`here <http://www.boost.org/doc/libs/1_49_0/libs/iterator/doc/iterator_facade.html#tutorial-example>`_
Example (not self-contained : CuboidMap and IterationOrder come from the arrays library) ::
#include <boost/iterator/iterator_facade.hpp>
template <typename T>
class my_iterator : public boost::iterator_facade< my_iterator<T> , const std::ptrdiff_t , boost::forward_traversal_tag > {
public:
typedef CuboidMap indexmap_type;
typedef typename indexmap_type::domain_type domain_type;
typedef IterationOrder iteration_order;
typedef typename domain_type::index_value_type indices_type;
typedef const std::ptrdiff_t return_type;
my_iterator (): Parent(NULL), pos(0),atend(true) {}
my_iterator (const indexmap_type & P, bool atEnd=false): Parent(&P), pos(Parent->start_shift()),atend(atEnd) {}
indices_type const & indices() const { return indices_tuple; }
operator bool() const { return !atend;}
private:
friend class boost::iterator_core_access;
void increment(); // below
bool equal(my_iterator const & other) const {return ((other.Parent==Parent)&&(other.atend==atend)&&(other.pos==pos));}
return_type & dereference() const { assert (!atend); return pos; }
const indexmap_type * Parent;
indices_type indices_tuple;
std::ptrdiff_t pos;
bool atend;
};

View File

@ -1,91 +0,0 @@
#include <iostream>
#include <boost/shared_ptr.hpp>
class myObject{
int i;
public:
myObject(){}
myObject(int i_):i(i_){}
int getValue() const {return i;}
};
struct A{
};
struct B{
int data;
boost::shared_ptr<A> p;
};
struct C{
boost::shared_ptr<A> p;
};
int main(){
/* WHY POINTERS? */
myObject A(0); // object is allocated on the stack
/* stack VS heap */
// small variables on the stack
// dynamic memory on the heap == "new"
// when running, the OS has already allocated the stack...
// when allocating with new ==> HEAP : one asks the kernel for space ... takes time!
// RULE: in critical loops, do not allocate using new !
// stack: only for objects whose size is known at compile time
// array lib : minivector on the stack, to avoid allocation problems (ulimit...)
std::cout << A.getValue()<<std::endl;
/* USUAL POINTERS */
myObject * p1;
p1 = new myObject(1); //allocate on the heap
myObject * p2(new myObject(2)); // shorter way
// two equivalent ways of accessing public members
std::cout << p1->getValue() << std::endl;
std::cout << (*p1).getValue() << std::endl;
/* THE ISSUE */
delete(p1); // destroys the pointed-to object; p1 is now dangling (and p2's object is leaked)
/* SHARED POINTER */
boost::shared_ptr<myObject> p1_sh(new myObject(1));
std::cout << p1_sh->getValue()<<std::endl;
boost::shared_ptr<myObject> p2_sh = p1_sh;
std::cout << p2_sh->getValue()<<std::endl;
//never delete it!!
/* difference between pointer and reference : dynamic polymorphism */
return 0;
}

View File

@ -7,6 +7,7 @@ Utilities
:maxdepth: 1
:numbered:
exceptions
factory
tupletools
mini_vector

View File

@ -1,9 +1,11 @@
Utilities
##################
.. highlight:: c
.. _util_exceptions:
Exceptions
---------------
=============================
TRIQS defines special exceptions, with the following characteristics :