diff --git a/doc/install.rst b/doc/install.rst index 14c5fbce..9bdf3347 100644 --- a/doc/install.rst +++ b/doc/install.rst @@ -14,8 +14,8 @@ various TRIQS-based applications: impurity solvers, realistic DMFT tools, ... This page describes the installation of the TRIQS toolkit itself. The installation of the applications is described in their respective documentation. -Prerequisite ------------- +Prerequisites +------------- The TRIQS library relies on a certain number of standard libraries and tools described in the :ref:`list of requirements `. Beware in particular to the :ref:`C++ compilers` diff --git a/doc/reference/c++/arrays/concepts.rst b/doc/reference/c++/arrays/concepts.rst index 56eb3a8c..307672dd 100644 --- a/doc/reference/c++/arrays/concepts.rst +++ b/doc/reference/c++/arrays/concepts.rst @@ -12,19 +12,22 @@ returning the element type of the array, e.g. int, double. Indeed, if a is an two dimensionnal array of int, it is expected that a(i,j) returns an int or a reference to an int, for i,j integers in some domain. -We distinguish two separate notions, whether this function is `pure` or not, -i.e. whether one can or not modify a(i,j). +We distinguish two separate notions based on whether this function is `pure` +or not, i.e. whether or not one can modify a(i,j). -* An `Immutable` array is just a pure function on the domain of definition. +* An `Immutable` array is simply a pure function on the domain of definition. a(i,j) returns a int, or a int const &, that can not be modified (hence immutable). -* A `Mutable` array is an Immutable array, which can also be modified. Non const object return a reference, - e.g. a(i,j) can return a int &. Typically this is a piece of memory, with a integer coordinate system on it. +* A `Mutable` array is an Immutable array that *can* be modified. The non-const +object returns a reference, e.g. a(i,j) can return an int &. Typically this is +a piece of memory, with an integer coordinate system on it.
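The Immutable/Mutable distinction above can be sketched in a few lines of Python (a loose analogy for illustration only; the class names are hypothetical and not part of TRIQS):

```python
# Hypothetical sketch (not TRIQS code): the Immutable/Mutable distinction.

class ImmutableArray:
    """Models a pure function on a domain: a(i, j) can be read, never written."""
    def __init__(self, f, shape):
        self._f, self.shape = f, shape

    def __call__(self, i, j):
        return self._f(i, j)            # pure evaluation, no side effects

class MutableArray(ImmutableArray):
    """Additionally backed by memory, so elements can be modified."""
    def __init__(self, shape):
        self._data = [[0] * shape[1] for _ in range(shape[0])]
        super().__init__(lambda i, j: self._data[i][j], shape)

    def __setitem__(self, ij, value):   # a[i, j] = v only exists here
        i, j = ij
        self._data[i][j] = value

a = MutableArray((2, 2))
a[0, 1] = 3
print(a(0, 1))   # prints 3: reading goes through the pure interface
```

A plain `ImmutableArray` has no `__setitem__`, mirroring the fact that only the Mutable refinement grants write access.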
-The main point here is that `Immutable` array is a much more general notion : -a formal expression made of array (E.g. A + 2*B) models this concept, but not the `Mutable` one. -Most algorithms only use the `Immutable` notion, when then are pure (mathematical) function -that returns something depending on the value of an object, without side effects. +The main point here is that an `Immutable` array is a much more general notion: +a formal expression consisting of arrays (e.g. A + 2*B) models this concept, +but not the `Mutable` one. +Most algorithms only use the `Immutable` array notion, where they are pure +(mathematical) functions that return something depending on the value of an +object, without side effects. .. _ImmutableCuboidArray: @@ -39,7 +42,7 @@ ImmutableCuboidArray * it has a cuboid domain (hence a rank). * it can be evaluated on any value of the indices in the domain - * NB : It does not need to be stored in memory. A formal expression, e.g. model this concept. + * NB : It does not need to be stored in memory. For example, a formal expression models this concept. * **Definition** ([...] denotes something optional). @@ -64,7 +67,7 @@ ImmutableCuboidArray MutableCuboidArray ------------------------- -* **Purpose** : An array where the data can be modified... +* **Purpose** : An array where the data can be modified. * **Refines** : :ref:`ImmutableCuboidArray`. * **Definition** @@ -85,7 +88,7 @@ ImmutableArray * Refines :ref:`ImmutableCuboidArray` -* If X is the type : +* If X is the type: * ImmutableArray == true_type @@ -101,7 +104,7 @@ ImmutableMatrix * If A is the type : * ImmutableMatrix == true_type - * A::domain_type::rank ==2 + * A::domain_type::rank == 2 NB : this traits marks the fact that X belongs to the MatrixVector algebra. 
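The point that a formal expression such as A + 2*B models the Immutable concept without owning any memory can also be sketched in Python (hypothetical names, not TRIQS code):

```python
# Hypothetical sketch: a formal expression models the Immutable concept
# without storing any data of its own.

class Stored:
    """A memory-backed array: models the Mutable (hence also Immutable) concept."""
    def __init__(self, data): self._data = data
    def __call__(self, i, j): return self._data[i][j]

class Expr:
    """A + 2*B evaluated lazily: supports only read access a(i, j)."""
    def __init__(self, op, left, right):
        self._op, self._left, self._right = op, left, right

    def __call__(self, i, j):
        return self._op(self._left(i, j), self._right(i, j))

A = Stored([[1, 2], [3, 4]])
B = Stored([[10, 0], [0, 10]])
C = Expr(lambda x, y: x + 2 * y, A, B)   # no computation, no storage yet
print(C(0, 0))   # evaluated on demand: 1 + 2*10 = 21
```

`C` can be evaluated anywhere on the domain, but there is nothing to assign to, so it models Immutable and not Mutable.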
@@ -115,7 +118,7 @@ ImmutableVector * If A is the type : * ImmutableMatrix == true_type - * A::domain_type::rank ==1 + * A::domain_type::rank == 1 NB : this traits marks the fact that X belongs to the MatrixVector algebra. @@ -168,17 +171,18 @@ NB : this traits marks the fact that X belongs to the MatrixVector algebra. Why concepts ? [Advanced] ----------------------------- -Why is it useful to define those concepts ? +Why is it useful to define these concepts ? -Simply because of lot of the library algorithms only use those concepts, and can be used -for an array, or any custom class that model the concept. +Simply because a lot of the library algorithms only use these concepts, +and such algorithms can be used for any array or custom class that models +the concept. -Example : +For example: * Problem: we want to quickly assemble a small class to store a diagonal matrix. - We want this class to operate with other matrices, e.g. be part of expression, be printed, - or whatever. - But we only want to store the diagonal element. + We want this class to operate with other matrices, e.g. be part of an + expression, be printed, etc. + However, we only want to store the diagonal elements. * A simple solution : diff --git a/doc/reference/c++/arrays/introduction.rst b/doc/reference/c++/arrays/introduction.rst index f202906a..0943b22a 100644 --- a/doc/reference/c++/arrays/introduction.rst +++ b/doc/reference/c++/arrays/introduction.rst @@ -10,12 +10,13 @@ for numerical computations with the following characteristics/goals : * **Simplicity of use** : Arrays must be as simple to use as in python (numpy) or fortran. - This library is designed to be used by physicists, not by professionnal programmers, - We do *a lot* of array manipulations, and we want to maintain *readable* codes. + This library is designed to be used by physicists, not by professional + programmers. We do *a lot* of array manipulations, and we want to maintain + *readable* codes.
* **Genericity, abstraction and performance** : - We want to have simple, readeable code, with the same (or better) speed than manually written low level code. + We want to have simple, readable code, with the same (or better) speed as manually written low-level code. Most optimisations should be delegated to the library and the compiler. (Some) LAPACK and BLAS operations are interfaced. @@ -28,7 +29,7 @@ for numerical computations with the following characteristics/goals : * create a array in C++, and return it as a numpy. * mix the various kind of arrays transparently in C++ expressions and in cython code. -* **HDF5** : simple interface to hdf5 library for an easy storing/retrieving into/from HDF5 files. +* **HDF5** : simple interface to the HDF5 library to ease storing/retrieving into/from HDF5 files. * **MPI** : compatibility with boost::mpi interface. diff --git a/doc/reference/c++/clef/expressions_form.rst b/doc/reference/c++/clef/expressions_form.rst index 1f81d5b1..289d1454 100644 --- a/doc/reference/c++/clef/expressions_form.rst +++ b/doc/reference/c++/clef/expressions_form.rst @@ -18,8 +18,8 @@ Example :: placeholder <1> x_; placeholder <2> y_; -Note that the only thing of significance in a placeholder is its type (i.e. Number). -A placeholder is **empty** : it contains **no value** at runtime. +Note that the only thing of significance in a placeholder is its type (i.e. +a number). A placeholder is **empty** : it contains **no value** at runtime. .. warning:: @@ -87,11 +87,11 @@ at compile time:: Note that : -* As a user, one *never* has to write such a type +* As a user, one *never* has to write such a type. One always use expression "on the fly", or use auto. * Having the whole structure of the expression at compile time allows - efficient evaluation (it is the principle of expression template : add a ref here). + efficient evaluation (it is the principle of expression templates: add a ref here). * Declaring an expression does not do any computation.
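The "declaring stores a tree, evaluation comes later" idea can be sketched with a toy Python placeholder (an analogy only; CLEF does this with types at compile time in C++):

```python
# Toy analogue (not CLEF itself): a placeholder builds an expression tree;
# nothing is computed until the tree is evaluated with actual values.

class Ph:
    """Placeholder identified only by its number, carrying no value."""
    def __init__(self, n): self.n = n
    def __add__(self, other): return Node('+', self, other)
    def __mul__(self, other): return Node('*', self, other)
    def ev(self, vals): return vals[self.n]

class Node:
    """One node of the expression tree: an operator and two children."""
    def __init__(self, op, l, r): self.op, self.l, self.r = op, l, r
    def __add__(self, other): return Node('+', self, other)
    def __mul__(self, other): return Node('*', self, other)
    def ev(self, vals):
        a = self.l.ev(vals) if hasattr(self.l, 'ev') else self.l
        b = self.r.ev(vals) if hasattr(self.r, 'ev') else self.r
        return a + b if self.op == '+' else a * b

x_, y_ = Ph(1), Ph(2)
expr = x_ + y_ * y_              # only builds the tree; no computation here
print(expr.ev({1: 2, 2: 3}))     # evaluated now: 2 + 3*3 = 11
```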
It just stores the expression tree (its structure in the type, and the leaves of the tree). diff --git a/doc/reference/c++/clef/introduction.rst b/doc/reference/c++/clef/introduction.rst index a8382d62..7de9a173 100644 --- a/doc/reference/c++/clef/introduction.rst +++ b/doc/reference/c++/clef/introduction.rst @@ -4,7 +4,7 @@ Motivation : a little tour of CLEF ===================================== -A usual, the best is to start with a few examples, to show the library in action. +As usual, the best is to start with a few examples, to show the library in action. .. compileblock:: @@ -43,7 +43,7 @@ A usual, the best is to start with a few examples, to show the library in action auto time_consuming_function=[](double x){std::cout<<"call time_consuming_function"<<std::endl; return 2*x;}; std::vector<std::vector<double>> W(3, std::vector<double>(5)); triqs::clef::make_expr(W)[i_] [j_] << i_ + cos( time_consuming_function(10) * j_ + i_); @@ -60,7 +60,7 @@ A usual, the best is to start with a few examples, to show the library in action std::cout<< "h(1)(2) = " << h(1)(2) << std::endl; // You can also use this to quickly write some lambda, as an alternative syntax to the C++ lambda - // with e.g. STL algorithms (with the advantage that the function is polymorphic !). + // with e.g. STL algorithms (with the advantage that the function is polymorphic!). std::vector<int> v = {0,-1,2,-3,4,5,-6}; // replace all negative elements (i.e. those for which i -> (i<0) return true), by 0 std::replace_if(begin(v), end(v), i_ >> (i_<0), 0); diff --git a/doc/reference/c++/conventions.rst b/doc/reference/c++/conventions.rst index 655608ff..e4c4c520 100644 --- a/doc/reference/c++/conventions.rst +++ b/doc/reference/c++/conventions.rst @@ -4,27 +4,28 @@ C++11/14 & notations C++11/C++14 --------------- -TRIQS is a C++11 library. It *requires* a last generation C++ compiler (Cf :ref:`require_cxx_compilers`). +TRIQS is a C++11 library, and as such, it *requires* a recent C++ compiler (Cf :ref:`require_cxx_compilers`).
+C++11 compliant compilers (amongst which gcc and clang) are now widely available. -Indeed, the development of C++ is very dynamic these years. -The language and its usage is changing very profoundly with the introduction of several -notions (e.g. move semantics, type deduction, lambda, variadic templates ...). -Moreover, C++11 compliant compilers are now widely available, with gcc and clang. +Indeed, the development of C++ has been very dynamic recently. +The language and its usage are changing very profoundly with the introduction +of several notions (e.g. move semantics, type deduction, lambda, variadic +templates, etc.). A major consequence of this evolution is that writing libraries has become much more accessible, at a *much* lower cost in development time, -with clearer, shorter and more readable code, hence maintainable. +with clearer, shorter and more readable code that is hence maintainable. Efficient techniques which were considered before as complex and reserved to professional C++ experts -are now becoming simple to implement, like e.g. expression templates. +are now becoming simple to implement, such as expression templates. The implementation of most of the TRIQS library (e.g. clef, arrays) would be either impossible or at least much more complex and time consuming (with a lot of abstruse boost-like constructions) in previous versions of C++. -Besides, this evolution is not finished (in fact it seems to accelerate !). +Besides, this evolution is not over (in fact it seems to accelerate !). The new coming standard, C++14, expected to be adopted and implemented very soon, -will still make it a lot better. In particular, the concept support (template constraints TS) -will hopefully solve the most problematic issue with metaprogramming techniques, i.e. the lack of concept +will bring significant improvements.
In particular, the concept support (template constraints) +will hopefully solve the most problematic issue with metaprogramming techniques, namely, the lack of concept check at compile time, resulting in long and obscur error messages from the compiler when *using* the library, which can leave the non-C++-expert user quite clueless... Hence, TRIQS will move to C++14 as soon as compilers are available. diff --git a/doc/reference/c++/mctools/ising.rst b/doc/reference/c++/mctools/ising.rst index df2a3401..a68cf1e7 100644 --- a/doc/reference/c++/mctools/ising.rst +++ b/doc/reference/c++/mctools/ising.rst @@ -21,8 +21,8 @@ classes will act. We write this class in a file :file:`configuration.hpp`:: // The configuration of the system struct configuration { - // N is the length of the chain, M the total magnetization - // beta the inverse temperature, J the coupling , field the magnetic field and energy the energy of the configuration + // N is the length of the chain, M the total magnetization, + // beta the inverse temperature, J the coupling, // field the magnetic field and energy the energy of the configuration int N, M; double beta, J, field, energy; @@ -67,7 +67,7 @@ The move class should have three methods: `attempt()`, `accept()` and `reject()` // pick a random site site = RNG(config->N); - // find the neighbors with periodicity + // find the neighbours with periodicity int left = (site==0 ? config->N-1 : site-1); int right = (site==config->N-1 ? 
0 : site+1); diff --git a/doc/reference/c++/mctools/overview.rst b/doc/reference/c++/mctools/overview.rst index c8ccb7a7..5f6bdcb2 100644 --- a/doc/reference/c++/mctools/overview.rst +++ b/doc/reference/c++/mctools/overview.rst @@ -228,7 +228,7 @@ In our example this ratio is T = \frac{e^{ - \beta h \sigma }}{e^{\beta h \sigma}} = e^{ - 2 \beta h \sigma } -With this ratio, the Monte Carlo loop decides wether this proposed move should +With this ratio, the Monte Carlo loop decides whether this proposed move should be rejected, or accepted. If the move is accepted, the Monte Carlo calls the ``accept`` method of the move, otherwise it calls the ``reject`` method. The ``accept`` method should always return 1.0 unless you want to correct the sign diff --git a/doc/reference/python/data_analysis/fit/fit.rst b/doc/reference/python/data_analysis/fit/fit.rst index 99097b90..56838c9a 100644 --- a/doc/reference/python/data_analysis/fit/fit.rst +++ b/doc/reference/python/data_analysis/fit/fit.rst @@ -17,23 +17,23 @@ Let us for example fit the Green function : Note that `x_window` appears in the `x_data_view` method to clip the data on a given window and in the plot function, to clip the plot itself. -A more complex example -^^^^^^^^^^^^^^^^^^^^^^^^^^^ +.. + A more complex example + ^^^^^^^^^^^^^^^^^^^^^^^^^^^ + To illustrate the use of python in a more complex situation, + let us demonstrate a simple data analysis. + This does not use any TRIQS objects, it is just a little exercise in python. + Imagine that we have 10 Green's functions coming from a calculation in an hdf5 file. + For the needs of the demonstration, we will create them "manually" here, but it is just to make life easier. + STILL NEEDS TO BE WRITTEN -To illustrate the use of python in a more complex situation, -let us demonstrate a simple data analysis. -This does not show any more TRIQS object, it is just a little exercise in python...
- -Imagine that we have 10 Green function coming from a calculation in a hdf5 file. -For the need of the demonstration, we will create them "manually" here, but it is just to make life easier. - - Reference ^^^^^^^^^^^^^^^ The Fit class is very simple and is provided for convenience, but the reader -is encouraged to read it and adapt it (it is simply a call to scipy.leastsq). +is encouraged to have a look through it and adapt it (it is simply a call to +scipy.leastsq). .. autoclass:: pytriqs.fit.Fit :members: diff --git a/doc/reference/python/data_analysis/hdf5/contents.rst b/doc/reference/python/data_analysis/hdf5/contents.rst index f2f56f11..a72ff669 100644 --- a/doc/reference/python/data_analysis/hdf5/contents.rst +++ b/doc/reference/python/data_analysis/hdf5/contents.rst @@ -12,7 +12,7 @@ The best picture of a hdf5 file is that of a **tree**, where : * **Leaves** of the tree are basic types : scalars (int, long, double, string) and rectangular arrays of these scalars (any dimension : 1,2,3,4...). * Subtrees (branches) are called **groups** * Groups and leaves have a name, so an element of the tree has naturally a **path** : - e.g. /group1/subgroup2/leave1 and so on. + e.g. /group1/subgroup2/leaf1 and so on. * Any path (groups, leaves) can be optionally tagged with an **attribute**, in addition to their name, typically a string (or any scalar) @@ -33,8 +33,8 @@ Using HDF5 format has several advantages : * Most basic objects of TRIQS, like Green function, are hdf-compliant. * TRIQS provides a **simple and intuitive interface HDFArchive** to manipulate them. * HDF5 is **standard**, well maintained and widely used. -* HDF5 is **portable** from various machines (32-bits, 64-bits, various OS, etc...) -* HDF5 can be read and written in **many langages** (python, C/C++, F90, etc...), beyond TRIQS. One is not tied to a particular program. 
+* HDF5 is **portable** across various machines (32-bit, 64-bit, various OSs, etc.) +* HDF5 can be read and written in **many languages** (python, C/C++, F90, etc.), beyond TRIQS. One is not tied to a particular program. * Simple operations to explore and manipulate the tree are provided by simple unix shell commands (e.g. h5ls, h5diff). * It is a binary format, hence it is compact and has compression options. * It is to a large extent **auto-documented** : the structure of the data speaks for itself. diff --git a/doc/reference/python/data_analysis/hdf5/ref.rst b/doc/reference/python/data_analysis/hdf5/ref.rst index 52652b13..078675c0 100644 --- a/doc/reference/python/data_analysis/hdf5/ref.rst +++ b/doc/reference/python/data_analysis/hdf5/ref.rst @@ -137,19 +137,18 @@ HDFArchiveInert .. class:: HDFArchiveInert :class:`HDFArchive` and :class:`HDFArchiveGroup` do **NOT** handle parallelism. - In general, one wish to write/read only on master node, which is a good practice - cluster : reading from all nodes may lead to communication problems. + In general, it is good practice to write/read only on the master node. Reading from all nodes on a cluster may lead to communication problems. To simplify the writing of code, the simple HDFArchiveInert class may be useful. It is basically inert but does not fail. .. describe:: H[key] - Return H and never raise exception so e.g. H['a']['b'] never raise exception... + Return H and never raise an exception. E.g. H['a']['b'] never raises an exception. .. describe:: H[key] = value - Does nothing + Does nothing. Usage in a mpi code, e.g. :: @@ -205,8 +204,8 @@ The function is .. _HDF_Protocol_details: -How to become hdf-compliant ? ------------------------------------ +How does a class become hdf-compliant ?
+--------------------------------------- There are two ways in which a class can become hdf-compliant: diff --git a/doc/reference/python/data_analysis/hdf5/tut_ex1.rst b/doc/reference/python/data_analysis/hdf5/tut_ex1.rst index 766bfdb9..ca02fb73 100644 --- a/doc/reference/python/data_analysis/hdf5/tut_ex1.rst +++ b/doc/reference/python/data_analysis/hdf5/tut_ex1.rst @@ -24,9 +24,9 @@ This show the tree structure of the file. We see that : * `mu` is stored at the root `/` * `S` is a subgroup, containing `a` and `b`. -* For each leave, the type (scalar or array) is given. +* For each leaf, the type (scalar or array) is given. -To dump the content of the file use e.g. (Cf HDF5 documentation for more informations) :: +To dump the content of the file, use for example the following (see the HDF5 documentation for more information):: MyComputer:~>h5dump myfile.h5 HDF5 "myfile.h5" { diff --git a/doc/reference/python/data_analysis/hdf5/tut_ex3.py b/doc/reference/python/data_analysis/hdf5/tut_ex3.py index 45229e32..83f47708 100644 --- a/doc/reference/python/data_analysis/hdf5/tut_ex3.py +++ b/doc/reference/python/data_analysis/hdf5/tut_ex3.py @@ -5,7 +5,7 @@ import numpy R = HDFArchive('myfile.h5', 'w') for D in range(1,10,2) : - g = GfReFreq(indices = [0], beta = 50, mesh_array = numpy.arange(-1.99,2.00,0.02) , name = "D=%s"%D) + g = GfReFreq(indices = [0], window = (-2.00,2.00), name = "D=%s"%D) g <<= SemiCircular(half_bandwidth = 0.1*D) R[g.name]= g diff --git a/doc/reference/python/data_analysis/hdf5/tut_ex3b.py b/doc/reference/python/data_analysis/hdf5/tut_ex3b.py index 878e7def..64d20508 100644 --- a/doc/reference/python/data_analysis/hdf5/tut_ex3b.py +++ b/doc/reference/python/data_analysis/hdf5/tut_ex3b.py @@ -5,11 +5,12 @@ from math import pi R = HDFArchive('myfile.h5', 'r') from pytriqs.plot.mpl_interface import oplot, plt -plt.xrange(-1,1) -plt.yrange(0,7) for name, g in R.items() : # iterate on the elements of R, like a dict ...
oplot( (- 1/pi * g).imag, "-o", name = name) +plt.xlim(-1,1) +plt.ylim(0,7) + plt.savefig("./tut_ex3b.png") diff --git a/doc/reference/python/data_analysis/plotting/plotting.rst b/doc/reference/python/data_analysis/plotting/plotting.rst index 39efc773..341275a9 100644 --- a/doc/reference/python/data_analysis/plotting/plotting.rst +++ b/doc/reference/python/data_analysis/plotting/plotting.rst @@ -19,22 +19,22 @@ A thin layer above matplotlib TRIQS defines a function *oplot*, similar to the standard matplotlib pyplot.plot function, but that can plot TRIQS objects (in fact *any* object, see below). -We can reproduce the first example of the Green function tutorial : +We can reproduce the first example of the Green function tutorial: .. plot:: reference/python/green/example.py :include-source: :scale: 70 -The *oplot* function takes : +The *oplot* function takes: * as arguments any object that implements the :ref:`plot protocol `, - for example Green function, Density of state : in fact any object where plotting is reasonable and has been defined ... + for example Green functions, density of states, and in fact, any object where plotting is reasonable and has been defined ... * string formats following objects, as in regular matplotlib, like in the example above. * regular options of the matplotlib *pyplot.plot* function -* options specific to the object to be plotted : here the `x_window` tells the Green function to plot itself in a reduced window of :math:`\omega_n`. +* options specific to the object to be plotted: here the `x_window` tells the Green function to plot itself in a reduced window of :math:`\omega_n`. Multiple panels figures ================================= `Only valid for matplotlib v>=1.0`. While one can use the regular matplotlib subfigure to make multi-panel figures, -subplots makes it a bit more pythonic : +subplots makes it a bit more pythonic: ..
plot:: reference/python/data_analysis/plotting/example.py :include-source: @@ -64,7 +64,7 @@ See example below. .. function:: _plot_( OptionsDict ) - * OptionDict is a dictionnary of options. + * OptionsDict is a dictionary of options. .. warning:: * The method _plot_ must consume the options it uses (using e.g. the pop method of dict). @@ -75,9 +75,9 @@ See example below. * *xdata* : A 1-dimensional numpy array describing the x-axis points * *ydata* : A 1-dimensional numpy array describing the y-axis points * *label* : Label of the curve for the legend of the graph - * *type* : a string : currently "XY" [ optional] + * *type* : a string: currently "XY" [optional] - and optionally : + and optionally: * *xlabel* : a label for the x axis. The last object plotted will overrule the previous ones. * *ylabel* : a label for the y axis. The last object plotted will overrule the previous ones. diff --git a/doc/reference/python/data_analysis/provenance.rst b/doc/reference/python/data_analysis/provenance.rst index 1d006f56..97fadf23 100644 --- a/doc/reference/python/data_analysis/provenance.rst +++ b/doc/reference/python/data_analysis/provenance.rst @@ -7,18 +7,19 @@ Hence, like any other kind of calculations, according to the basic principles of everyone should be able to reproduce them, reuse or modify them. Therefore, the detailed instructions leading to results or figures should be published along with them. -To achieve these goals, in practice we need to be able to do simply the following things : +To achieve these goals, in practice we need to be able to do simply the following things: -* Store along with the data the version of the code used to produced them (or even the code itself !), +* Store along with the data the version of the code used to produce them (or even the code itself!), and the configuration options of this code. * Keep with the figures all the instructions (i.e. the script) that have produced it.
-* We want to do that **easily, at no cost in human time**, hence - without adding a new layer of tools (which means new things to learn, which takes time, etc...). - Indeed this task is important but admittedly extremely boring for physicists... +* We want to do that **easily at no cost in human time**, and hence + without adding a new layer of tools (which means new things to learn, + which takes time, etc.). + Indeed this task is important but admittedly extremely boring for physicists! -Fortunately, python helps solving these issues easily and efficiently. +Fortunately, python helps solve these issues easily and efficiently. TRIQS adds very little to the standard python tools here. So this page should be viewed more as a wiki page of examples. @@ -56,7 +57,7 @@ simply by putting it in the HDFArchive, e.g. :: import sys, pytriqs.version as version Results.create_group("log") log = Results["log"] - log["code_version"] = version.revision + log["code_version"] = version.release log["script"] = open(sys.argv[0]).read() # read myself ! The script that is currently being executed will be copied into the file `solution.h5`, under the subgroup `/log/script`. @@ -75,7 +76,7 @@ In such situation, one can simply use the `inspect` module of the python standar # Ok, I need to save common too ! import inspect,sys, pytriqs.version as version log = Results.create_group("log") - log["code_version"] = version.revision() + log["code_version"] = version.release log["script"] = open(sys.argv[0]).read() log["common"] = inspect.getsource(common) # This retrieves the source of the module in a string diff --git a/doc/reference/python/green/block.rst b/doc/reference/python/green/block.rst index 00945426..9e3f9f61 100644 --- a/doc/reference/python/green/block.rst +++ b/doc/reference/python/green/block.rst @@ -124,7 +124,7 @@ can be evaluated, can compute the high-frequency expansion, and so on. For examp shelve / pickle --------------- -Green's functions are `pickable`, i.e. 
they support the standard python serialization techniques. +Green's functions are `picklable`, i.e. they support the standard python serialization techniques. * It can be used with the `shelve `_ and `pickle `_ module:: @@ -169,7 +169,6 @@ Data points can be accessed via the properties ``data`` and ``tail`` respectivel Be careful when manipulating data directly to keep consistency between the function and the tail. Basic operations do this automatically, so use them as much as possible. - The little _ header is there to remind you that maybe you should consider another option. .. _greentails: @@ -204,7 +203,7 @@ where :math:`M_i` are matrices with the same dimensions as :math:`g`. * Fortunately, in all basic operations on the blocks, these tails are computed automatically. For example, when adding two Green functions, the tails are added, and so on. -* However, if you modify the ``data`` or the ``tail`` manually, you loose this guarantee. +* However, if you modify the ``data`` or the ``tail`` manually, you lose this guarantee. So you have to set the tail properly yourself (or be sure that you will not need it later). For example:: g.tail.zero() g.tail[1] = numpy.array( [[3.0,0.0], [0.0,3.0]] ) The third line sets all the :math:`M_i` to zero, while the second puts :math:`M_1 = diag(3)`. With - the tails set correctly, this Green's function can be used safely. + the tail set correctly, this Green's function can be used safely. .. warning:: - The library will not be able detect, if tails are set wrong. Calculations may also be wrong in this case. + The library will not be able to detect tails that are incorrectly set. + Calculations *may* be wrong in this case.
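The shelve/pickle support mentioned above boils down to the standard Python serialization round-trip; a minimal sketch with a stand-in object (hypothetical, not an actual TRIQS Green's function):

```python
import pickle

# Stand-in object (not a TRIQS Green's function) to illustrate the
# serialization round-trip that picklable objects support.
class FakeGf:
    def __init__(self, name, data):
        self.name, self.data = name, data

g = FakeGf("g_up", [0.0, 0.5, 1.0])
blob = pickle.dumps(g)       # serialize the whole state to bytes
g2 = pickle.loads(blob)      # reconstruct an equal, independent object
print(g2.name, g2.data)      # prints: g_up [0.0, 0.5, 1.0]
```

The same bytes could equally be written to a `shelve` database or an HDF5 file and read back later.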
diff --git a/doc/reference/python/green/full.rst b/doc/reference/python/green/full.rst index 6e745e30..ac99d803 100644 --- a/doc/reference/python/green/full.rst +++ b/doc/reference/python/green/full.rst @@ -14,7 +14,7 @@ the blocks it is made of. Most properties of this object can be remembered by the simple sentence: -`A full Green's function is an ordered dictionary name -> block, or equivalently a list of tuples (name, block).` +`A full Green's function is an ordered dictionary {name -> block}, or equivalently a list of tuples (name, block).` The blocks can be any of the matrix-valued Green's functions described :ref:`above`. The role of this object is to gather them, and simplify the code writing @@ -110,12 +110,12 @@ In the example above :: As a result :: - BlockGf( name_block_generator= G, copy=False) + BlockGf( name_block_generator= G, make_copies=False) generates a new Green's function `G`, viewing the same blocks. More interestingly :: - BlockGf( name_block_generator= [ (index,g) for (index,g) in G if Test(index), copy=False)] + BlockGf( name_block_generator= [ (index,g) for (index,g) in G if Test(index)], make_copies=False) makes a partial view of some of the blocks selected by the `Test` condition. @@ -131,8 +131,8 @@ View or copies? The Green's function is to be thought like a dict, hence accessing the block returns references. When constructing the Green's function BlockGf, -the parameter `make_copies` tells whether a copy of the block must be made before -putting them in the Green function or not. +the parameter `make_copies` determines whether a copy of the blocks must be made +before putting them in the Green's function. .. note:: This is the standard behaviour in python for a list of a dict. Example: .. note:: - Copy is optional, False is the default value. We keep it here for clarity. + `make_copies` is optional; its default value is False. We keep it here for clarity.
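The `make_copies` semantics are exactly Python's usual reference-vs-copy behaviour; a plain-Python sketch (no TRIQS required):

```python
import copy

# Plain-Python illustration of make_copies=False (views) vs True (copies).
g1 = [1.0, 2.0]                        # stands in for a Green's function block
G_view = {"eg": g1}                    # like make_copies=False: stores a reference
G_copy = {"eg": copy.deepcopy(g1)}     # like make_copies=True: independent data

g1[0] = 0.0                            # modify the original block in place
print(G_view["eg"][0], G_copy["eg"][0])   # prints: 0.0 1.0 (only the view saw it)
```

This is why two `BlockGf` built with `make_copies = False` on the same blocks behave as one object, while `make_copies = True` decouples them.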
- The ``Copy = False`` implies that the blocks of ``G`` are *references* ``g1`` and ``g2``. + The ``make_copies = False`` implies that the blocks of ``G`` are *references* ``g1`` and ``g2``. So, if you modify ``g1``, say by putting it to zero with ``g1.zero()``, then the first block of G will also be put to zero. Similarly, imagine you define two Green's functions like this:: @@ -155,7 +155,7 @@ Example: G1 = BlockGf(name_list = ('eg','t2g'), block_list = (g1,g2), make_copies = False) G2 = BlockGf(name_list = ('eg','t2g'), block_list = (g1,g2), make_copies = False) - Here G1 and G2 are exactly the same object, because they both have blocks + Then, G1 and G2 are exactly the same object, because they both have blocks which are views of ``g1`` and ``g2``. * Instead, if you write:: @@ -172,7 +172,7 @@ Example: Here ``G1`` and ``G2`` are different objects, both having made copies of ``g1`` and ``g2`` for their blocks. - An equivalent writing is :: + An equivalent definition would be :: G1 = BlockGf(name_list = ('eg','t2g'), block_list = (g1.copy(),g2.copy())) G2 = BlockGf(name_list = ('eg','t2g'), block_list = (g1.copy(),g2.copy())) @@ -180,7 +180,7 @@ Example: shelve / pickle --------------------- -Green's functions are `pickable`, i.e. they support the standard python serialization techniques. +Green's functions are `picklable`, i.e. they support the standard python serialization techniques. * It can be used with the `shelve `_ and `pickle `_ module:: diff --git a/doc/tutorials/python/dmft.rst b/doc/tutorials/python/dmft.rst index 2ef09930..2405c6b9 100644 --- a/doc/tutorials/python/dmft.rst +++ b/doc/tutorials/python/dmft.rst @@ -21,7 +21,3 @@ Here is a complete program doing this plain-vanilla DMFT on a half-filled one-b .. literalinclude:: ./dmft.py -A general introduction to DMFT calculations with TRIQS can be found :ref:`here `. 
- -Chapter :ref:`Wien2TRIQS ` discusses the TRIQS implementation for DMFT calculations of real materials and the interface between TRIQS and the Wien2k band structure code. - diff --git a/pytriqs/dos/dos.py b/pytriqs/dos/dos.py index 71e26083..e8f51bc0 100644 --- a/pytriqs/dos/dos.py +++ b/pytriqs/dos/dos.py @@ -29,7 +29,6 @@ class DOS : * Stores a density of state of fermions .. math:: - :center: \rho (\epsilon) \equiv \sum'_k \delta( \epsilon - \epsilon_k) diff --git a/pytriqs/gf/local/descriptors.py b/pytriqs/gf/local/descriptors.py index ac463a60..7a02c5ae 100644 --- a/pytriqs/gf/local/descriptors.py +++ b/pytriqs/gf/local/descriptors.py @@ -61,12 +61,12 @@ class Function (Base): r""" Stores a python function and a tail. - If the Green's function is defined on an array of points:math:`x_i`, then it will be initialized to:math:`F(x_i)`. + If the Green's function is defined on an array of points :math:`x_i`, then it will be initialized to :math:`F(x_i)`. """ def __init__ (self, function, tail=None): r""" - :param function: the function:math:`\omega \rightarrow function(\omega)` - :param tail: The tail. Use None if you don't use any tail (will be put to 0) + :param function: the function :math:`\omega \rightarrow function(\omega)` + :param tail: The tail. Use None if you do not wish to use a tail (will be put to 0) """ Base.__init__(self, function=function, tail=tail) @@ -193,17 +193,18 @@ def semi(x): ################################################## class SemiCircular (Base): - r"""Hilbert transform of a semi circular density of state, i.e. + r"""Hilbert transform of a semicircular density of states, i.e. .. math:: g(z) = \int \frac{A(\omega)}{z-\omega} d\omega - where :math:`A(\omega) = \theta( D - |\omega|) 2 \sqrt{ D^2 - \omega^2}/(\pi D^2)` + where :math:`A(\omega) = \theta( D - |\omega|) 2 \sqrt{ D^2 - \omega^2}/(\pi D^2)`. - (only works in combination with frequency Green's functions). + (Only works in combination with frequency Green's functions.) 
""" def __init__ (self, half_bandwidth): - """:param half_bandwidth: :math:`D`, the half bandwidth of the semicircular""" + """:param half_bandwidth: :math:`D`, the half bandwidth of the +semicircular density of states""" Base.__init__(self, half_bandwidth=half_bandwidth) def __str__(self): return "SemiCircular(%s)"%self.half_bandwidth @@ -242,9 +243,9 @@ class Wilson (Base): .. math:: g(z) = \int \frac{A(\omega)}{z-\omega} d\omega - where :math:`A(\omega) = \theta( D^2 - \omega^2)/(2D)` + where :math:`A(\omega) = \theta( D^2 - \omega^2)/(2D)`. - (only works in combination with frequency Green's functions). + (Only works in combination with frequency Green's functions.) """ def __init__ (self, half_bandwidth): """:param half_bandwidth: :math:`D`, the half bandwidth """