mirror of https://github.com/triqs/dft_tools synced 2024-12-25 22:03:43 +01:00

Iteration over the doc

This is an iteration over the doc, mainly thanks to Priyanka.
I fixed another couple of details along the way.
This commit is contained in:
Michel Ferrero 2013-12-31 14:22:00 +01:00
parent bdac3e159c
commit f7fad85fca
21 changed files with 121 additions and 118 deletions


@ -14,8 +14,8 @@ various TRIQS-based applications: impurity solvers, realistic DMFT tools, ...
This page describes the installation of the TRIQS toolkit itself. The installation of the applications
is described in their respective documentation.

Prerequisites
-------------

The TRIQS library relies on a certain number of standard libraries and tools described in
the :ref:`list of requirements <requirements>`. Pay particular attention to the :ref:`C++ compilers<require_cxx_compilers>`


@ -12,19 +12,22 @@ returning the element type of the array, e.g. int, double.
Indeed, if a is a two-dimensional array of int,
it is expected that a(i,j) returns an int or a reference to an int, for i,j integers in some domain.
We distinguish two separate notions based on whether this function is `pure`
or not, i.e. whether or not one can modify a(i,j).

* An `Immutable` array is simply a pure function on the domain of definition.
  a(i,j) returns an int, or an int const &, which cannot be modified (hence immutable).

* A `Mutable` array is an Immutable array that *can* be modified. The non-const
  object returns a reference, e.g. a(i,j) can return an int &. Typically this is
  a piece of memory, with an integer coordinate system on it.

The main point here is that an `Immutable` array is a much more general notion:
a formal expression consisting of arrays (e.g. A + 2*B) models this concept,
but not the `Mutable` one.
Most algorithms only use the `Immutable` array notion, where they are pure
(mathematical) functions that return something depending on the value of an
object, without side effects.

.. _ImmutableCuboidArray:
@ -39,7 +42,7 @@ ImmutableCuboidArray
* it has a cuboid domain (hence a rank).
* it can be evaluated on any value of the indices in the domain
* NB : It does not need to be stored in memory. For example, a formal expression models this concept.

* **Definition** ([...] denotes something optional).
@ -64,7 +67,7 @@ ImmutableCuboidArray
MutableCuboidArray
-------------------------

* **Purpose** : An array where the data can be modified.
* **Refines** : :ref:`ImmutableCuboidArray`.
* **Definition**
@ -85,7 +88,7 @@ ImmutableArray
* Refines :ref:`ImmutableCuboidArray`

* If A is the type:

  * ImmutableArray<A> == true_type
@ -101,7 +104,7 @@ ImmutableMatrix
* If A is the type :

  * ImmutableMatrix<A> == true_type
  * A::domain_type::rank == 2

NB : this trait marks the fact that A belongs to the MatrixVector algebra.
@ -115,7 +118,7 @@ ImmutableVector
* If A is the type :

  * ImmutableVector<A> == true_type
  * A::domain_type::rank == 1

NB : this trait marks the fact that A belongs to the MatrixVector algebra.
@ -168,17 +171,18 @@ NB : this traits marks the fact that X belongs to the MatrixVector algebra.
Why concepts ? [Advanced]
-----------------------------

Why is it useful to define these concepts ?

Simply because a lot of the library algorithms only use these concepts,
and such algorithms can be used for any array or custom class that models
the concept.

For example:

* Problem: we want to quickly assemble a small class to store a diagonal matrix.
  We want this class to operate with other matrices, e.g. be part of an
  expression, be printed, etc.
  However, we only want to store the diagonal elements.

* A simple solution :
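As an illustration of the underlying idea, here is a hypothetical Python sketch (not TRIQS code) of such a diagonal matrix: it stores only the diagonal, yet is evaluable as a pure function a(i,j) on its domain, so it models the `Immutable` concept and interoperates with any algorithm written against that notion.

```python
# Hypothetical sketch (not TRIQS code): a diagonal matrix that models the
# "Immutable" array concept -- a pure function on its (i, j) domain,
# while only the diagonal elements are actually stored.
class ImmutableDiagonalMatrix:
    def __init__(self, diagonal):
        self._diag = list(diagonal)  # the only stored data

    @property
    def shape(self):
        n = len(self._diag)
        return (n, n)

    def __call__(self, i, j):
        # Pure evaluation: no side effects, no way to modify the element.
        return self._diag[i] if i == j else 0

d = ImmutableDiagonalMatrix([1, 2, 3])
# Any generic algorithm that only evaluates the "function" works unchanged,
# e.g. a trace:
trace = sum(d(i, i) for i in range(d.shape[0]))
```

The class never exposes a mutable reference, which is exactly what distinguishes it from a `Mutable` array.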


@ -10,12 +10,13 @@ for numerical computations with the following characteristics/goals :
* **Simplicity of use** :

  Arrays must be as simple to use as in python (numpy) or fortran.
  This library is designed to be used by physicists, not by professional
  programmers. We do *a lot* of array manipulations, and we want to maintain
  *readable* code.

* **Genericity, abstraction and performance** :

  We want simple, readable code, with the same speed as (or better than) manually written low-level code.
  Most optimisations should be delegated to the library and the compiler.
  (Some) LAPACK and BLAS operations are interfaced.
@ -28,7 +29,7 @@ for numerical computations with the following characteristics/goals :
* create an array in C++, and return it as a numpy array.
* mix the various kinds of arrays transparently in C++ expressions and in cython code.

* **HDF5** : simple interface to the hdf5 library to ease storing/retrieving into/from HDF5 files.
* **MPI** : compatibility with the boost::mpi interface.


@ -18,8 +18,8 @@ Example ::
   placeholder <1> x_;
   placeholder <2> y_;

Note that the only thing of significance in a placeholder is its type (i.e.
a number). A placeholder is **empty** : it contains **no value** at runtime.

.. warning::
@ -87,11 +87,11 @@ at compile time::
Note that :

* As a user, one *never* has to write such a type.
  One always uses expressions "on the fly", or uses auto.

* Having the whole structure of the expression at compile time allows
  efficient evaluation (it is the principle of expression templates: add a ref here).

* Declaring an expression does not do any computation.
  It just stores the expression tree (its structure in the type, and the leaves of the tree).
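The mechanics can be sketched in a few lines of Python (hypothetical code, not the CLEF API): operator overloading builds the tree, and nothing is computed until values are supplied for the placeholders.

```python
# Minimal sketch of the idea behind CLEF expressions (hypothetical code,
# not the real CLEF API): declaring an expression only stores a tree;
# evaluation happens later, when the placeholders receive values.
class Expr:
    def __init__(self, eval_fn):
        self._eval = eval_fn  # how to evaluate this node, given an environment

    def __add__(self, other):
        return Expr(lambda env: self._eval(env) + _value(other, env))
    __radd__ = __add__

    def __mul__(self, other):
        return Expr(lambda env: self._eval(env) * _value(other, env))
    __rmul__ = __mul__

    def __call__(self, **env):
        return self._eval(env)

def _value(x, env):
    # Leaves of the tree are either sub-expressions or plain constants.
    return x._eval(env) if isinstance(x, Expr) else x

class Placeholder(Expr):
    def __init__(self, name):
        # A placeholder is "empty": it carries a name, but no value.
        super().__init__(lambda env: env[name])

x_ = Placeholder("x")
e = 2 * x_ + 1   # no computation here: only the expression tree is built
```

Calling `e(x=3)` walks the stored tree and evaluates it, mirroring the "declare now, evaluate later" behaviour described above.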


@ -4,7 +4,7 @@
Motivation : a little tour of CLEF
=====================================

As usual, it is best to start with a few examples, to show the library in action.

.. compileblock::
@ -43,7 +43,7 @@ A usual, the best is to start with a few examples, to show the library in action
   auto time_consuming_function=[](double x){std::cout<<"call time_consuming_function"<<std::endl;return 2*x;};
   triqs::clef::make_expr(V) [i_] << cos( time_consuming_function(10) * i_ );

   // If you insist on using more complex containers...
   std::vector<std::vector<double>> W(3, std::vector<double>(5));
   triqs::clef::make_expr(W)[i_] [j_] << i_ + cos( time_consuming_function(10) * j_ + i_);
@ -60,7 +60,7 @@ A usual, the best is to start with a few examples, to show the library in action
   std::cout<< "h(1)(2) = " << h(1)(2) << std::endl;

   // You can also use this to quickly write a lambda, as an alternative syntax to the C++ lambda
   // with e.g. STL algorithms (with the advantage that the function is polymorphic!).
   std::vector<int> v = {0,-1,2,-3,4,5,-6};
   // replace all negative elements (i.e. those for which i -> (i<0) returns true) by 0
   std::replace_if(begin(v), end(v), i_ >> (i_<0), 0);
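For readers more at home in Python, the std::replace_if line above does the equivalent of the following (illustration only):

```python
# Python analogue (for illustration) of the std::replace_if call above:
# replace all negative elements by 0, using a small predicate.
v = [0, -1, 2, -3, 4, 5, -6]
is_negative = lambda i: i < 0          # plays the role of i_ >> (i_ < 0)
v = [0 if is_negative(i) else i for i in v]
```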


@ -4,27 +4,28 @@ C++11/14 & notations
C++11/C++14
---------------

TRIQS is a C++11 library, and as such, it *requires* a recent C++ compiler (Cf :ref:`require_cxx_compilers`).
C++11 compliant compilers (amongst which gcc and clang) are now widely available.

Indeed, the development of C++ has been very dynamic recently.
The language and its usage are changing very profoundly with the introduction
of several notions (e.g. move semantics, type deduction, lambdas, variadic
templates, etc.).

A major consequence of this evolution is that writing libraries
has become much more accessible, at a *much* lower cost in development time,
with clearer, shorter, more readable and hence more maintainable code.
Efficient techniques which were previously considered complex and reserved for professional C++ experts
are now becoming simple to implement, such as expression templates.
The implementation of most of the TRIQS library (e.g. clef, arrays) would be either impossible or at least
much more complex and time consuming (with a lot of abstruse boost-like constructions)
in previous versions of C++.

Besides, this evolution is not over (in fact, it seems to be accelerating!).
The upcoming standard, C++14, expected to be adopted and implemented very soon,
will bring significant improvements. In particular, the concept support (template constraints)
will hopefully solve the most problematic issue with metaprogramming techniques, namely the lack of concept
checks at compile time, which results in long and obscure error messages from the compiler when *using* the library,
and can leave the non-C++-expert user quite clueless...
Hence, TRIQS will move to C++14 as soon as compilers are available.


@ -21,8 +21,8 @@ classes will act. We write this class in a file :file:`configuration.hpp`::
   // The configuration of the system
   struct configuration {
     // N is the length of the chain, M the total magnetization,
     // beta the inverse temperature, J the coupling,
     // field the magnetic field and energy the energy of the configuration
     int N, M;
     double beta, J, field, energy;
@ -67,7 +67,7 @@ The move class should have three methods: `attempt()`, `accept()` and `reject()`
     // pick a random site
     site = RNG(config->N);
     // find the neighbours with periodicity
     int left = (site==0 ? config->N-1 : site-1);
     int right = (site==config->N-1 ? 0 : site+1);
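The periodic-boundary arithmetic above can be checked with a small Python sketch (a hypothetical helper, not part of the tutorial code):

```python
# The same periodic-boundary neighbour lookup as in the C++ snippet above,
# sketched in Python: site 0 wraps to N-1 on the left, site N-1 wraps to 0
# on the right.
def neighbours(site, N):
    left = N - 1 if site == 0 else site - 1
    right = 0 if site == N - 1 else site + 1
    return left, right
```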


@ -228,7 +228,7 @@ In our example this ratio is
   T = \frac{e^{-\beta h \sigma}}{e^{\beta h \sigma}} = e^{ - 2 \beta h \sigma }

With this ratio, the Monte Carlo loop decides whether this proposed move should
be rejected, or accepted. If the move is accepted, the Monte Carlo calls the
``accept`` method of the move, otherwise it calls the ``reject`` method. The
``accept`` method should always return 1.0 unless you want to correct the sign
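The standard Metropolis rule built on this ratio can be sketched as follows (illustrative Python, assuming the single-spin-flip move in a field h discussed above):

```python
import math
import random

# Metropolis step for a single spin flip in a field h (illustrative sketch):
# the move sigma -> -sigma is accepted with probability min(1, T), where
# T = exp(-2 * beta * h * sigma) is the ratio above.
def metropolis_accepts(sigma, beta, h, rng=random.random):
    T = math.exp(-2.0 * beta * h * sigma)
    return rng() < min(1.0, T)
```

A spin anti-aligned with the field (sigma = -1, h > 0) gives T > 1 and is always accepted; an aligned spin at large beta is almost always rejected.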


@ -17,23 +17,23 @@ Let us for example fit the Green function :
Note that `x_window` appears in the `x_data_view` method to clip the data on a given window
and in the plot function, to clip the plot itself.

A more complex example
^^^^^^^^^^^^^^^^^^^^^^^^^^^

To illustrate the use of python in a more complex situation,
let us demonstrate a simple data analysis.
This does not use any TRIQS objects, it is just a little exercise in python.

Imagine that we have 10 Green's functions coming from a calculation in an hdf5 file.
For the needs of the demonstration, we will create them "manually" here, but it is just to make life easier.

STILL NEEDS TO BE WRITTEN

Reference
^^^^^^^^^^^^^^^

The Fit class is very simple and is provided for convenience, but the reader
is encouraged to have a look through it and adapt it (it is simply a call to
scipy.leastsq).

.. autoclass:: pytriqs.fit.Fit
   :members:


@ -12,7 +12,7 @@ The best picture of a hdf5 file is that of a **tree**, where :
* **Leaves** of the tree are basic types : scalars (int, long, double, string) and rectangular arrays of these scalars (any dimension : 1,2,3,4...).
* Subtrees (branches) are called **groups**
* Groups and leaves have a name, so an element of the tree naturally has a **path** :
  e.g. /group1/subgroup2/leaf1 and so on.
* Any path (groups, leaves) can be optionally tagged with an **attribute**, in addition to its name,
  typically a string (or any scalar)
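This tree/path picture can be mimicked with plain nested dictionaries (for illustration only; `tree` and `lookup` below are hypothetical names, not part of TRIQS or HDF5):

```python
# The tree picture of an HDF5 file, mimicked with nested dicts:
# groups are dicts, leaves are scalars or arrays, and a path like
# /group1/subgroup2/leaf1 walks down the tree.
tree = {"group1": {"subgroup2": {"leaf1": 3.14}}, "mu": 0.5}

def lookup(tree, path):
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]   # descend one group (or reach a leaf)
    return node
```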
@ -33,8 +33,8 @@ Using HDF5 format has several advantages :
* Most basic objects of TRIQS, like Green functions, are hdf-compliant.
* TRIQS provides a **simple and intuitive interface HDFArchive** to manipulate them.
* HDF5 is **standard**, well maintained and widely used.
* HDF5 is **portable** across various machines (32-bit, 64-bit, various OSs, etc.)
* HDF5 can be read and written in **many languages** (python, C/C++, F90, etc.), beyond TRIQS. One is not tied to a particular program.
* Simple operations to explore and manipulate the tree are provided by simple unix shell commands (e.g. h5ls, h5diff).
* It is a binary format, hence it is compact and has compression options.
* It is to a large extent **auto-documented** : the structure of the data speaks for itself.


@ -137,19 +137,18 @@ HDFArchiveInert
.. class:: HDFArchiveInert

   :class:`HDFArchive` and :class:`HDFArchiveGroup` do **NOT** handle parallelism.
   In general, it is good practice to write/read only on the master node. Reading from all nodes on a cluster may lead to communication problems.

   To simplify the writing of code, the simple HDFArchiveInert class may be useful.
   It is basically inert but does not fail.

   .. describe:: H[key]

      Return H and never raise an exception; e.g. H['a']['b'] never raises an exception.

   .. describe:: H[key] = value

      Does nothing.

Usage in an mpi code, e.g. ::
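A behavioural sketch of such an inert archive, in plain Python (illustration, not the actual TRIQS class):

```python
# Sketch of the HDFArchiveInert behaviour described above (illustration,
# not the actual TRIQS class): every read returns the object itself and
# every write is silently ignored, so non-master nodes can run the same
# code without touching the file.
class HDFArchiveInertSketch:
    def __getitem__(self, key):
        return self          # H['a']['b'] never raises

    def __setitem__(self, key, value):
        pass                 # does nothing

H = HDFArchiveInertSketch()
H["a"]["b"] = 42             # silently ignored, no exception
```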
@ -205,8 +204,8 @@ The function is
.. _HDF_Protocol_details:

How does a class become hdf-compliant ?
---------------------------------------

There are two ways in which a class can become hdf-compliant:


@ -24,9 +24,9 @@ This show the tree structure of the file. We see that :
* `mu` is stored at the root `/`
* `S` is a subgroup, containing `a` and `b`.
* For each leaf, the type (scalar or array) is given.

To dump the content of the file, use for example (see the HDF5 documentation for more information) ::

   MyComputer:~>h5dump myfile.h5
   HDF5 "myfile.h5" {


@ -5,7 +5,7 @@ import numpy
R = HDFArchive('myfile.h5', 'w')
for D in range(1,10,2) :
    g = GfReFreq(indices = [0], window = (-2.00,2.00), name = "D=%s"%D)
    g <<= SemiCircular(half_bandwidth = 0.1*D)
    R[g.name] = g


@ -5,11 +5,12 @@ from math import pi
R = HDFArchive('myfile.h5', 'r')
from pytriqs.plot.mpl_interface import oplot, plt

for name, g in R.items() :  # iterate on the elements of R, like a dict ...
    oplot( (- 1/pi * g).imag, "-o", name = name)

plt.xlim(-1,1)
plt.ylim(0,7)

plt.savefig("./tut_ex3b.png")


@ -19,22 +19,22 @@ A thin layer above matplotlib
TRIQS defines a function *oplot*, similar to the standard matplotlib pyplot.plot function,
but that can plot TRIQS objects (in fact *any* object, see below).

We can reproduce the first example of the Green function tutorial:

.. plot:: reference/python/green/example.py
   :include-source:
   :scale: 70

The *oplot* function takes:

* as arguments any object that implements the :ref:`plot protocol <plot_protocol>`,
  for example Green functions, density of states, and in fact any object where plotting is reasonable and has been defined ...

* string formats following objects, as in regular matplotlib, like in the example above.

* regular options of the matplotlib *pyplot.plot* function

* options specific to the object to be plotted: here the `x_window` tells the Green function to plot itself in a reduced window of :math:`\omega_n`.

Multiple-panel figures
=================================
@ -42,7 +42,7 @@ Multiple panels figures
`Only valid for matplotlib v>=1.0`.

While one can use the regular matplotlib subfigure to make multi-panel figures,
subplots makes it a bit more pythonic:

.. plot:: reference/python/data_analysis/plotting/example.py
   :include-source:
@ -64,7 +64,7 @@ See example below.
.. function:: _plot_( OptionsDict )

   * OptionsDict is a dictionary of options.

.. warning::

   * The method _plot_ must consume the options it uses (using e.g. the pop method of dict).
@ -75,9 +75,9 @@ See example below.
* *xdata* : A 1-dimensional numpy array describing the x-axis points
* *ydata* : A 1-dimensional numpy array describing the y-axis points
* *label* : Label of the curve for the legend of the graph
* *type* : a string: currently "XY" [optional]

and optionally:

* *xlabel* : a label for the x axis. The last object plotted will overrule the previous ones.
* *ylabel* : a label for the y axis. The last object plotted will overrule the previous ones.
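A minimal object honouring this protocol might look like the following (hypothetical example; note how it *consumes* the options it uses with `dict.pop`, as the warning above requires, and leaves the rest for matplotlib):

```python
# A minimal object honouring the _plot_ protocol described above
# (hypothetical example, not TRIQS code).
class Line:
    def __init__(self, xdata, ydata):
        self.xdata, self.ydata = xdata, ydata

    def _plot_(self, options):
        scale = options.pop("scale", 1.0)   # consume our own option
        return [{
            "xdata": self.xdata,
            "ydata": [scale * y for y in self.ydata],
            "label": options.pop("label", "line"),
            "type": "XY",
        }]

opts = {"scale": 2.0, "linewidth": 3}       # linewidth is left for matplotlib
curves = Line([0, 1, 2], [0, 1, 4])._plot_(opts)
```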


@ -7,18 +7,19 @@ Hence, like any other kind of calculations, according to the basic principles of
everyone should be able to reproduce them, reuse or modify them.
Therefore, the detailed instructions leading to results or figures
should be published along with them.

To achieve these goals, in practice we need to be able to do the following simple things:

* Store along with the data the version of the code used to produce them (or even the code itself!),
  and the configuration options of this code.

* Keep with the figures all the instructions (i.e. the script) that produced them.

* We want to do that **easily at no cost in human time**, and hence
  without adding a new layer of tools (which means new things to learn,
  which takes time, etc.).
  Indeed this task is important but admittedly extremely boring for physicists!

Fortunately, python helps solve these issues easily and efficiently.
TRIQS adds very little to the standard python tools here.
So this page should be viewed more as a wiki page of examples.
@ -56,7 +57,7 @@ simply by putting it in the HDFArchive, e.g. ::
   import sys, pytriqs.version as version
   Results.create_group("log")
   log = Results["log"]
   log["code_version"] = version.release
   log["script"] = open(sys.argv[0]).read()  # read myself !

The script that is currently being executed will be copied into the file `solution.h5`, under the subgroup `/log/script`.
@ -75,7 +76,7 @@ In such situation, one can simply use the `inspect` module of the python standar
   # Ok, I need to save common too !
   import inspect, sys, pytriqs.version as version
   log = Results.create_group("log")
   log["code_version"] = version.release
   log["script"] = open(sys.argv[0]).read()
   log["common"] = inspect.getsource(common)  # This retrieves the source of the module in a string
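`inspect.getsource` works on any imported module; here is a self-contained check using the stdlib `textwrap` module in place of the user's `common` module (a stand-in for the demonstration only):

```python
# inspect.getsource retrieves the source of a module as a single string;
# the stdlib "textwrap" module stands in here for the user's "common" module.
import inspect
import textwrap

source = inspect.getsource(textwrap)
# The string can then be stored in the archive exactly like the script itself.
```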


@ -124,7 +124,7 @@ can be evaluated, can compute the high-frequency expansion, and so on. For examp
shelve / pickle
---------------

Green's functions are `picklable`, i.e. they support the standard python serialization techniques.

* They can be used with the `shelve <http://docs.python.org/library/shelve.html>`_ and `pickle <http://docs.python.org/library/pickle.html>`_ modules::
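The same serialization machinery can be demonstrated with a stand-in object (a hypothetical `FakeGreen` class, not a real TRIQS Green's function):

```python
import pickle

# Stand-in for a Green's function (illustration only): any picklable object
# round-trips through the standard serialization machinery the same way.
class FakeGreen:
    def __init__(self, name, data):
        self.name, self.data = name, data

g = FakeGreen("g_up", [0.1, 0.2, 0.3])
blob = pickle.dumps(g)           # serialize to bytes
g2 = pickle.loads(blob)          # ... and back
```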
@@ -169,7 +169,6 @@ Data points can be accessed via the properties ``data`` and ``tail`` respectivel
 Be careful when manipulating data directly to keep consistency between
 the function and the tail.
 Basic operations do this automatically, so use them as much as possible.
-The little _ header is there to remind you that maybe you should consider another option.
 
 .. _greentails:
@@ -204,7 +203,7 @@ where :math:`M_i` are matrices with the same dimensions as :math:`g`.
 * Fortunately, in all basic operations on the blocks, these tails are computed automatically.
   For example, when adding two Green functions, the tails are added, and so on.
 
-* However, if you modify the ``data`` or the ``tail`` manually, you loose this guarantee.
+* However, if you modify the ``data`` or the ``tail`` manually, you lose this guarantee.
   So you have to set the tail properly yourself (or be sure that you will not need it later).
   For example::
@@ -214,8 +213,9 @@ where :math:`M_i` are matrices with the same dimensions as :math:`g`.
     g.tail[1] = numpy.array( [[3.0,0.0], [0.0,3.0]] )
 
 The third line sets all the :math:`M_i` to zero, while the second puts :math:`M_1 = diag(3)`. With
-the tails set correctly, this Green's function can be used safely.
+the tail set correctly, this Green's function can be used safely.
 
 .. warning::
-   The library will not be able detect, if tails are set wrong. Calculations may also be wrong in this case.
+   The library will not be able to detect tails that are incorrectly set.
+   Calculations *may* be wrong in this case.
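The tail encodes the high-frequency expansion :math:`g(z) \approx \sum_i M_i / z^i`. For the scalar toy function :math:`g(z) = 1/(z-\epsilon)` the exact moments are :math:`M_i = \epsilon^{i-1}`, which can be checked numerically. This is a generic illustration of what the moments mean, not the TRIQS tail API:

```python
# Toy check of a high-frequency ("tail") expansion:
# for g(z) = 1/(z - eps), the exact moments are M_i = eps**(i-1).
eps = 0.3
g = lambda z: 1.0 / (z - eps)

z = 1e4j  # a point far out on the imaginary axis
tail = sum(eps**(i - 1) / z**i for i in range(1, 4))  # M_1/z + M_2/z**2 + M_3/z**3

# The truncation error is O(1/z**4), so the match is very tight.
assert abs(g(z) - tail) < 1e-14
```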


@@ -14,7 +14,7 @@ the blocks it is made of.
 Most properties of this object can be remembered by the simple sentence:
 
-`A full Green's function is an ordered dictionary name -> block, or equivalently a list of tuples (name, block).`
+`A full Green's function is an ordered dictionary {name -> block}, or equivalently a list of tuples (name, block).`
 
 The blocks can be any of the matrix-valued Green's functions described :ref:`above<blockgreen>`.
 The role of this object is to gather them, and simplify the code writing
@@ -110,12 +110,12 @@ In the example above ::
 As a result ::
 
-    BlockGf( name_block_generator= G, copy=False)
+    BlockGf( name_block_generator= G, make_copies=False)
 
 generates a new Green's function `G`, viewing the same blocks.
 
 More interestingly ::
 
-    BlockGf( name_block_generator= [ (index,g) for (index,g) in G if Test(index), copy=False)]
+    BlockGf( name_block_generator= [ (index,g) for (index,g) in G if Test(index) ], make_copies=False)
 
 makes a partial view of some of the blocks selected by the `Test` condition.
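The dictionary analogy can be checked with plain Python: selecting (name, block) pairs with a comprehension keeps references to the original blocks, which is exactly what `make_copies=False` expresses. A list stands in for a Green's function block here:

```python
# Plain-dict analogue of a BlockGf: name -> block.
blocks = {"eg": [1.0, 2.0], "t2g": [3.0, 4.0]}

# Partial "view": the comprehension filters names but keeps references.
selected = [(name, b) for (name, b) in blocks.items() if name == "eg"]

# Modifying the selected block modifies the original one.
selected[0][1][0] = 9.0
assert blocks["eg"][0] == 9.0
```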
@@ -131,8 +131,8 @@ View or copies?
 The Green's function is to be thought like a dict, hence accessing the
 block returns references. When constructing the Green's function BlockGf,
-the parameter `make_copies` tells whether a copy of the block must be made before
-putting them in the Green function or not.
+the parameter `make_copies` determines if a copy of the blocks must be made
+before putting them in the Green's function.
 
 .. note::
    This is the standard behaviour in python for a list of a dict.
@@ -145,9 +145,9 @@ Example:
 .. note::
-   Copy is optional, False is the default value. We keep it here for clarity.
+   `make_copies` is optional; its default value is False. We keep it here for clarity.
 
-The ``Copy = False`` implies that the blocks of ``G`` are *references* ``g1`` and ``g2``.
+The ``make_copies = False`` implies that the blocks of ``G`` are *references* ``g1`` and ``g2``.
 So, if you modify ``g1``, say by putting it to zero with ``g1.zero()``, then the
 first block of G will also be put to zero. Similarly, imagine you define two
 Green's functions like this::
@@ -155,7 +155,7 @@ Example:
     G1 = BlockGf(name_list = ('eg','t2g'), block_list = (g1,g2), make_copies = False)
     G2 = BlockGf(name_list = ('eg','t2g'), block_list = (g1,g2), make_copies = False)
 
-Here G1 and G2 are exactly the same object, because they both have blocks
+Then, G1 and G2 are effectively the same, because they both have blocks
 which are views of ``g1`` and ``g2``.
 
 * Instead, if you write::
@@ -172,7 +172,7 @@ Example:
 Here ``G1`` and ``G2`` are different objects, both having made copies
 of ``g1`` and ``g2`` for their blocks.
 
-An equivalent writing is ::
+An equivalent definition would be ::
 
     G1 = BlockGf(name_list = ('eg','t2g'), block_list = (g1.copy(),g2.copy()))
     G2 = BlockGf(name_list = ('eg','t2g'), block_list = (g1.copy(),g2.copy()))
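The reference-versus-copy distinction can be demonstrated with standard Python objects, no TRIQS required. A dict plays the role of the BlockGf and `copy.deepcopy` the role of `make_copies=True`:

```python
import copy

g1 = [0.5, 0.5]                      # stand-in for a Green's function block

G_view = {"eg": g1}                  # analogue of make_copies=False: a reference
G_copy = {"eg": copy.deepcopy(g1)}   # analogue of make_copies=True: independent data

g1[0] = 0.0                          # analogue of modifying g1 in place

assert G_view["eg"][0] == 0.0        # the view sees the change
assert G_copy["eg"][0] == 0.5        # the copy does not
```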
@@ -180,7 +180,7 @@ Example:
 shelve / pickle
 ---------------------
 
-Green's functions are `pickable`, i.e. they support the standard python serialization techniques.
+Green's functions are `picklable`, i.e. they support the standard python serialization techniques.
 
 * It can be used with the `shelve <http://docs.python.org/library/shelve.html>`_ and `pickle <http://docs.python.org/library/pickle.html>`_ module::


@@ -21,7 +21,3 @@ Here is a complete program doing this plain-vanilla DMFT on a half-filled one-b
 .. literalinclude:: ./dmft.py
 
-A general introduction to DMFT calculations with TRIQS can be found :ref:`here <dmftloop>`.
-
-Chapter :ref:`Wien2TRIQS <Wien2k>` discusses the TRIQS implementation for DMFT calculations of real materials and the interface between TRIQS and the Wien2k band structure code.


@@ -29,7 +29,6 @@ class DOS :
     * Stores a density of state of fermions
 
     .. math::
-       :center:
 
        \rho (\epsilon) \equiv \sum'_k \delta( \epsilon - \epsilon_k)


@@ -61,12 +61,12 @@ class Function (Base):
     r"""
     Stores a python function and a tail.
 
-    If the Green's function is defined on an array of points:math:`x_i`, then it will be initialized to:math:`F(x_i)`.
+    If the Green's function is defined on an array of points :math:`x_i`, then it will be initialized to :math:`F(x_i)`.
     """
     def __init__ (self, function, tail=None):
         r"""
-        :param function: the function:math:`\omega \rightarrow function(\omega)`
-        :param tail: The tail. Use None if you don't use any tail (will be put to 0)
+        :param function: the function :math:`\omega \rightarrow function(\omega)`
+        :param tail: The tail. Use None if you do not wish to use a tail (will be put to 0)
         """
         Base.__init__(self, function=function, tail=tail)
@@ -193,17 +193,18 @@ def semi(x):
 ##################################################
 
 class SemiCircular (Base):
-    r"""Hilbert transform of a semi circular density of state, i.e.
+    r"""Hilbert transform of a semicircular density of states, i.e.
 
     .. math::
 
        g(z) = \int \frac{A(\omega)}{z-\omega} d\omega
 
-    where :math:`A(\omega) = \theta( D - |\omega|) 2 \sqrt{ D^2 - \omega^2}/(\pi D^2)`
-    (only works in combination with frequency Green's functions).
+    where :math:`A(\omega) = \theta( D - |\omega|) 2 \sqrt{ D^2 - \omega^2}/(\pi D^2)`.
+    (Only works in combination with frequency Green's functions.)
     """
     def __init__ (self, half_bandwidth):
-        """:param half_bandwidth: :math:`D`, the half bandwidth of the semicircular"""
+        """:param half_bandwidth: :math:`D`, the half bandwidth of the
+        semicircular density of states"""
         Base.__init__(self, half_bandwidth=half_bandwidth)
 
     def __str__(self): return "SemiCircular(%s)"%self.half_bandwidth
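For the semicircular density of states above, the Hilbert transform has the well-known closed form :math:`g(z) = (2/D^2)\,(z - \sqrt{z-D}\,\sqrt{z+D})` (product of principal square roots, valid for :math:`\mathrm{Im}\, z > 0`). This standard-library sketch checks it independently of TRIQS; the function name `g_semi` is ours, not the library's:

```python
import cmath
import math

D = 1.0  # half bandwidth

def g_semi(z, D=D):
    # Closed form of the Hilbert transform of the semicircular DOS;
    # the product of principal square roots selects the branch with
    # g(z) ~ 1/z at large |z| in the upper half plane.
    return 2.0 * (z - cmath.sqrt(z - D) * cmath.sqrt(z + D)) / D**2

# Just above the real axis, -Im g / pi recovers A(w) inside the band.
w, eta = 0.5, 1e-9
A = 2.0 * math.sqrt(D**2 - w**2) / (math.pi * D**2)
assert abs(-g_semi(w + 1j * eta).imag / math.pi - A) < 1e-6

# Far from the band, g(z) ~ 1/z.
z = 100.0 + 1.0j
assert abs(g_semi(z) - 1.0 / z) < 1e-3
```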
@@ -242,9 +243,9 @@ class Wilson (Base):
 
     .. math::
 
        g(z) = \int \frac{A(\omega)}{z-\omega} d\omega
 
-    where :math:`A(\omega) = \theta( D^2 - \omega^2)/(2D)`
-    (only works in combination with frequency Green's functions).
+    where :math:`A(\omega) = \theta( D^2 - \omega^2)/(2D)`.
+    (Only works in combination with frequency Green's functions.)
     """
     def __init__ (self, half_bandwidth):
         """:param half_bandwidth: :math:`D`, the half bandwidth """