From 8e08a58a81c9520281f389d21141fd8700ac58c3 Mon Sep 17 00:00:00 2001
From: Priyanka Seth
Date: Mon, 22 Sep 2014 19:21:10 +0200
Subject: [PATCH] Changes to doc

---
 doc/Ce-HI.rst       |  55 +++++++++++++++--------
 doc/Ce-gamma.py     |  12 ++---
 doc/LDADMFTmain.rst |  43 +++++++++---------
 doc/advanced.rst    |   6 +--
 doc/analysis.rst    | 104 ++++++++++++++++++++------------------------
 doc/install.rst     |   4 +-
 doc/interface.rst   |  23 +++++-----
 doc/selfcons.rst    |  34 +++++++--------
 8 files changed, 146 insertions(+), 135 deletions(-)

diff --git a/doc/Ce-HI.rst b/doc/Ce-HI.rst
index a8f8f8a9..18432c61 100644
--- a/doc/Ce-HI.rst
+++ b/doc/Ce-HI.rst
@@ -51,7 +51,7 @@ specify the energy window for Wannier functions' construction. For a more comple
 
 To prepaire input data for :program:`dmftproj` we execute :program:`lapw2` with the `-almd` option ::
 
-   lapw2 -almd
+   x lapw2 -almd
 
 Then :program:`dmftproj` is executed in its default mode (i.e. without spin-polarization or spin-orbit included) ::
 
@@ -94,18 +94,29 @@ where the solver is initialized with the value of `beta`, and the orbital quantu
 
 The Hubbard-I initialization `Solver` has also optional parameters one may use:
 
- * `n_msb`: is the number of Matsubara frequencies used (default is `n_msb=1025`)
- * `use_spin_orbit`: if set 'True' the solver is run with spin-orbit coupling included. To perform actual LDA+DMFT calculations with spin-orbit one should also run :program:`Wien2k` and :program:`dmftproj` in spin-polarized mode and with spin-orbit included. By default `use_spin_orbit=False`
+ * `n_msb`: the number of Matsubara frequencies used. The default is `n_msb=1025`.
+ * `use_spin_orbit`: if set to `True`, the solver is run with spin-orbit coupling
+included. To perform actual LDA+DMFT calculations with spin-orbit one should
+also run :program:`Wien2k` and :program:`dmftproj` in spin-polarized mode and
+with spin-orbit included. By default, `use_spin_orbit=False`.
 
-The `Solver.solve(U_int, J_hund)` statement has two necessary parameters, the `U` parameter (`U_int`) and Hund's rule coupling `J_hund`. Notice that the solver constructs the full 4-index `U`-matrix by default, and the `U` parameter is in fact the Slatter `F0` integral. Other optional parameters are:
+The `Solver.solve(U_int, J_hund)` statement has two necessary parameters, the
+Hubbard U parameter `U_int` and Hund's rule coupling `J_hund`. Notice that the
+solver constructs the full 4-index `U`-matrix by default, and the `U_int` parameter
+is in fact the Slater `F0` integral. Other optional parameters are:
 
- * `T`: A matrix that transforms the interaction matrix from complex spherical harmonics to a symmetry adapted basis. By default complex spherical harmonics basis is used and `T=None`
- * `verbosity` tunes output from the solver. If `verbosity=0` only basic information is printed, if `verbosity=1` the ground state atomic occupancy and its energy are printed, if `verbosity=2` additional information is printed for all occupancies that were diagonalized. By default `verbosity=0`
+ * `T`: the matrix that transforms the interaction matrix from complex spherical
+harmonics to a symmetry adapted basis. By default, the complex spherical harmonics
+basis is used and `T=None`.
+ * `verbosity`: tunes output from the solver. If `verbosity=0` only basic
+information is printed, if `verbosity=1` the ground state atomic occupancy and
+its energy are printed, if `verbosity=2` additional information is printed for
+all occupancies that were diagonalized. By default, `verbosity=0`.
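To make the calling sequence above concrete, here is a minimal sketch that strings the statements together. It is a hedged illustration only: `Solver` and `SK` are the objects set up in :ref:`Ce-gamma-script`, the value of `beta` is an assumed example, and `U_int`/`J_hund` are the values used later in that script::

   # Sketch only: beta is an illustrative value, not taken from this patch.
   S = Solver(beta = 40.0, l = 3)             # Hubbard-I solver for the Ce 4f shell (l = 3)
   eal = SK.eff_atomic_levels()[0]            # non-interacting atomic level positions
   S.set_atomic_levels(eal = eal)             # hand the levels to the solver; see the full loop in Ce-gamma.py for the exact call
   S.solve(U_int = 6.00, J_hund = 0.70, verbosity = 1)   # verbosity=1 prints the ground-state occupancy and energy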
-We need also to introduce some changes in the DMFT loop with respect to the ones used for CT-QMC calculations in :ref:`advanced`.
+We also need to introduce some changes in the DMFT loop with respect to that used for CT-QMC calculations in :ref:`advanced`.
 The hybridization function is neglected in the Hubbard-I approximation, and only non-interacting level positions (:math:`\hat{\epsilon}=-\mu+\langle H^{ff} \rangle - \Sigma_{DC}`) are required.
-Hence, instead of computing `S.G0` as in :ref:`advanced` we set the level positions ::
+Hence, instead of computing `S.G0` as in :ref:`advanced`, we set the level positions::
 
    # set atomic levels:
    eal = SK.eff_atomic_levels()[0]
@@ -123,28 +134,35 @@ Finally, we compute the modified charge density and save it as well as correlati
 
 Running LDA+DMFT calculations
 -----------------------------
 
-After having prepaired the script one may run one-shot DMFT calculations by executing :ref:`Ce-gamma-script` with :program:`pytriqs` in one-processor ::
+After having prepared the script, one may run one-shot DMFT calculations by
+executing :ref:`Ce-gamma-script` with :program:`pytriqs` on a single processor::
 
   pytriqs Ce-gamma.py
 
-or parallel mode ::
+or in parallel mode::
 
  mpirun pytriqs Ce-gamma.py
 
-where :program:`mpirun` launches these calculations in parallel mode and enables MPI. The exact form of this command will, of course, depend on mpi-launcher installed in your system.
+where :program:`mpirun` launches these calculations in parallel mode and
+enables MPI. The exact form of this command will, of course, depend on the
+MPI launcher installed on your system.
 
-Instead of doing one-shot run one may also perform fully self-consistent LDA+DMFT calculations, as we will do in this tutorial. We launch these calculations as follows ::
+Instead of doing a one-shot run, one may also perform fully self-consistent
+LDA+DMFT calculations, as we will do in this tutorial. We launch these
+calculations as follows::
 
  run_triqs -qdmft
 
-where `-qdmft` flag turns on LDA+DMFT calculations with :program:`Wien2k`. We use here the default convergence criterion in :program:`Wien2k` (convergence to 0.1 mRy in energy).
+where the `-qdmft` flag turns on LDA+DMFT calculations with :program:`Wien2k`. We
+use here the default convergence criterion in :program:`Wien2k` (convergence to
+0.1 mRy in energy).
 
-After calculations are done we may check the value of correlational ('Hubbard') energy correction to the total energy::
+After the calculations are done, we may check the value of the correlation ('Hubbard') energy correction to the total energy::
 
  >grep HUBBARD Ce-gamma.scf|tail -n 1
  HUBBARD ENERGY(included in SUM OF EIGENVALUES): -0.220502
 
-and the band("kinetic") energy with DMFT correction::
+and the band ("kinetic") energy with DMFT correction::
 
  >grep DMFT Ce-gamma.scf |tail -n 1
  KINETIC ENERGY with DMFT correction: -5.329087
@@ -162,8 +180,11 @@ as well as the convergence in total energy::
 
 Calculating DOS with Hubbard-I
 ------------------------------
 
-Within Hubbard-I one may also easily obtain the spectral function ("band structure") and integrated spectral function ("density of states, DOS").
-In difference with the CT-QMC approach one does not need to provide the real-frequency self-energy (see :ref:`analysis`) it can be calculated directly by the Hubbard-I solver.
+Within Hubbard-I one may also easily obtain the angle-resolved spectral function (band
+structure) and the integrated spectral function (density of states or DOS). In
+contrast to the CT-QMC approach, one does not need to provide the
+real-frequency self-energy (see :ref:`analysis`), as it can be calculated directly
+in the Hubbard-I solver.
 
 The corresponding script :ref:`Ce-gamma_DOS-script` contains several new parameters ::
diff --git a/doc/Ce-gamma.py b/doc/Ce-gamma.py
index 76313d2a..dbee5538 100644
--- a/doc/Ce-gamma.py
+++ b/doc/Ce-gamma.py
@@ -8,15 +8,15 @@
 U_int = 6.00
 J_hund = 0.70
 Loops =  2                       # Number of DMFT sc-loops
 Mix = 0.7                        # Mixing factor in QMC
+                                 # 1.0 ... all from imp; 0.0 ... all from Gloc
 DC_type = 0                      # 0...FLL, 1...Held, 2... AMF, 3...Lichtenstein
-DC_Mix = 1.0                     # 1.0 ... all from imp; 0.0 ... all from Gloc
 useBlocs = False                 # use bloc structure from LDA input
 useMatrix = True                 # use the U matrix calculated from Slater coefficients instead of (U+2J, U, U-J)
 Natomic = 1
 HDFfilename = lda_filename+'.h5'
 
-use_val= U_int * (Natomic - 0.5) - J_hund * (Natomic*0.5 - 0.5)
+use_val= U_int * (Natomic - 0.5) - J_hund * (Natomic * 0.5 - 0.5)
 
 # Convert DMFT input:
 # Can be commented after the first run
@@ -55,7 +55,7 @@ if (previous_present):
   mpi.report("Using stored data for initialisation")
   if (mpi.is_master_node()):
     ar = HDFArchive(HDFfilename,'a')
-    S.Sigma <<= ar['SigmaF']
+    S.Sigma <<= ar['SigmaImFreq']
   del ar
   S.Sigma = mpi.bcast(S.Sigma)
   SK.load()
@@ -103,8 +103,8 @@ for Iteration_Number in range(1,Loops+1):
     if ((itn>1)or(previous_present)):
       if (mpi.is_master_node()and (Mix<1.0)):
         mpi.report("Mixing Sigma and G with factor %s"%Mix)
-        if ('SigmaF' in ar):
-          S.Sigma <<= Mix * S.Sigma + (1.0-Mix) * ar['SigmaF']
+        if ('SigmaImFreq' in ar):
+          S.Sigma <<= Mix * S.Sigma + (1.0-Mix) * ar['SigmaImFreq']
         if ('GF' in ar):
           S.G <<= Mix * S.G + (1.0-Mix) * ar['GF']
 
@@ -114,7 +114,7 @@ for Iteration_Number in range(1,Loops+1):
 
   if (mpi.is_master_node()):
-    ar['SigmaF'] = S.Sigma
+    ar['SigmaImFreq'] = S.Sigma
     ar['GF'] = S.G
 
 # after the Solver has finished, set new double counting:
diff --git a/doc/LDADMFTmain.rst b/doc/LDADMFTmain.rst
index b540a5b8..b1185e9e 100644
--- a/doc/LDADMFTmain.rst
+++ b/doc/LDADMFTmain.rst
@@ -22,16 +22,16 @@ to get the local quantities used in DMFT. It is initialized by::
 The only necessary parameter is the filename of the hdf5 archive. In addition, there are some optional parameters:
 
- * `mu`: The chemical potential at initialization. This value is only used, if there is no other value found in the hdf5 arxive. Standard is 0.0
- * `h_field`: External magnetic field, standard is 0.0
+ * `mu`: The chemical potential at initialization. This value is only used if no other value is found in the hdf5 archive. The default value is 0.0.
+ * `h_field`: External magnetic field. The default value is 0.0.
  * `use_lda_blocks`: If true, the structure of the density matrix is analysed at initialisation, and non-zero matrix elements are identified. The DMFT calculation
- is then restricted to these matrix elements, yielding a more efficient solution of the
- local interaction problem. Also degeneracies in orbital and spin space are recognised, and stored for later use. Standard value is `False`.
+ is then restricted to these matrix elements, yielding a more efficient solution of the
+ local interaction problem. Degeneracies in orbital and spin space are also identified and stored for later use. The default value is `False`.
  * `lda_data`, `symm_corr_data`, `par_proj_data`, `symm_par_data`, `bands_data`: These string variables define the subgroups in the hdf5 arxive,
- where the corresponding information is stored. The standard values are consistent with the standard values in :ref:`interfacetowien`.
+ where the corresponding information is stored. The default values are consistent with those in :ref:`interfacetowien`. -At initialisation, the necessary data is read from the hdf5 file. If we restart a calculation from a previous one, also the information on -the degenerate shells, the block structure of the density matrix, the chemical potential, and double counting correction are read. +At initialisation, the necessary data is read from the hdf5 file. If a calculation is restarted based on a previous hdf5 file, information on +degenerate shells, the block structure of the density matrix, the chemical potential, and double counting correction is also read in. .. index:: Multiband solver @@ -44,37 +44,36 @@ There is a module that helps setting up the multiband CTQMC solver. It is loaded S = SolverMultiBand(beta, n_orb, gf_struct = SK.gf_struct_solver[0], map=SK.map[0]) The necessary parameters are the inverse temperature `beta`, the Coulomb interaction `U_interact`, the Hund's rule coupling `J_hund`, -and the number of orbitals `n_orb`. There are again several optional parameters that allow to modify the local Hamiltonian to +and the number of orbitals `n_orb`. There are again several optional parameters that allow the tailoring of the local Hamiltonian to specific needs. They are: - * `gf_struct`: Contains the block structure of the local density matrix. Has to be given in the format as calculated by :class:`SumkLDA`. - * `map`: If `gf_struct` is given as parameter, also `map` has to be given. This is the mapping from the block structure to a general + * `gf_struct`: The block structure of the local density matrix given in the format calculated by :class:`SumkLDA`. + * `map`: If `gf_struct` is given as parameter, `map` also must be given. This is the mapping from the block structure to a general up/down structure. The solver method is called later by this statement:: S.solve(U_interact,J_hund,use_spinflip=False,use_matrix=True, - l=2,T=None, dim_reps=None, irep=None, deg_orbs=[],n_cycles =10000, + l=2,T=None, dim_reps=None, irep=None, n_cycles =10000, length_cycle=200,n_warmup_cycles=1000) -The parameters for the Coulomb interaction `U_interact` and the Hunds coupling `J_hund` are necessary parameters. The rest are optional parameters, for which default values are set. -They denerally should be reset for a given problem. Their meaning is as follows: +The parameters for the Coulomb interaction `U_interact` and the Hund's coupling `J_hund` are necessary input parameters. The rest are optional +parameters for which default values are set. Generally, they should be reset for the problem at hand. Here is a description of the parameters: - * `use_matrix`: If `True`, the interaction matrix is calculated from Slater integrals, which are calculated from `U_interact` and + * `use_matrix`: If `True`, the interaction matrix is calculated from Slater integrals, which are computed from `U_interact` and `J_hund`. Otherwise, a Kanamori representation is used. Attention: We define the intraorbital interaction as `U_interact`, the interorbital interaction for opposite spins as `U_interact-2*J_hund`, and interorbital for equal spins as - `U_interact-3*J_hund`! - * `T`: A matrix that transforms the interaction matrix from spherical harmonics, to a symmetry adapted basis. Only effective, if - `use_matrix=True`. - * `l`: Orbital quantum number. Again, only effective for Slater parametrisation. - * `deg_orbs`: A list that gives the degeneracies of the orbitals. 
It is used to set up a global move of the CTQMC solver.
+ `U_interact-3*J_hund`.
+ * `T`: The matrix that transforms the interaction matrix from spherical harmonics to a symmetry-adapted basis. Only effective for Slater
+ parametrisation, i.e. `use_matrix=True`.
+ * `l`: The orbital quantum number. Again, only effective for Slater parametrisation, i.e. `use_matrix=True`.
  * `use_spinflip`: If `True`, the full rotationally-invariant interaction is used. Otherwise, only density-density terms are
  kept in the local Hamiltonian.
- * `dim_reps`: If only a subset of the full d-shell is used a correlated orbtials, one can specify here the dimensions of all the subspaces
+ * `dim_reps`: If only a subset of the full d-shell is used as correlated orbitals, one can specify here the dimensions of all the subspaces
  of the d-shell, i.e. t2g and eg. Only effective for Slater parametrisation.
  * `irep`: The index in the list `dim_reps` of the subset that is used. Only effective for Slater parametrisation.
- * `n_cycles`: Number of CTQMC cycles (a sequence of moves followed by a measurement) per core. The default value of 10000 is the minimum, and generally should be incresed
- * `length_cycle`: Number of CTQMC moves per one cycle
+ * `n_cycles`: Number of CTQMC cycles (a sequence of moves followed by a measurement) per core. The default value of 10000 is the minimum, and generally should be increased.
+ * `length_cycle`: Number of CTQMC moves per one cycle.
  * `n_warmup_cycles`: Number of initial CTQMC cycles before measurements start. Usually of order of 10000, sometimes needs to be increased significantly.
 
 Most of above parameters can be taken directly from the :class:`SumkLDA` class, without defining them by hand. We will see a specific example
@@ -98,7 +97,7 @@ set up the loop over DMFT iterations and the self-consistency condition::
 
     S.G0 <<= inverse(S.Sigma + inverse(S.G))          # finally get G0, the input for the Solver
 
    S.solve(U_interact,J_hund,use_spinflip=False,use_matrix=True,             # now solve the impurity problem
-           l=2,T=None, dim_reps=None, irep=None, deg_orbs=[],n_cycles =10000,
+           l=2,T=None, dim_reps=None, irep=None, n_cycles =10000,
            length_cycle=200,n_warmup_cycles=1000)
 
     dm = S.G.density()                              # density matrix of the impurity problem
diff --git a/doc/advanced.rst b/doc/advanced.rst
index 6a1e49d2..74754837 100644
--- a/doc/advanced.rst
+++ b/doc/advanced.rst
@@ -76,7 +76,7 @@ of the last iteration::
 
   if (previous_present):
     if (mpi.is_master_node()):
      ar = HDFArchive(lda_filename+'.h5','a')
-     S.Sigma <<= ar['SigmaF']
+     S.Sigma <<= ar['SigmaImFreq']
    del ar
   S.Sigma = mpi.bcast(S.Sigma)
 
@@ -103,7 +103,7 @@ previous section, with some additional refinement::
 
   # Solve the impurity problem:
   S.solve(U_interact=U,J_hund=J,use_spinflip=use_spinflip,use_matrix=use_matrix,
-         l=l,T=SK.T[0], dim_reps=SK.dim_reps[0], irep=2, deg_orbs=SK.deg_shells[0],n_cycles =qmc_cycles,
+         l=l,T=SK.T[0], dim_reps=SK.dim_reps[0], irep=2, n_cycles=qmc_cycles,
          length_cycle=length_cycle,n_warmup_cycles=warming_iterations)
 
   # solution done, do the post-processing:
@@ -112,7 +112,7 @@ previous section, with some additional refinement::
 
   S.Sigma <<=(inverse(S.G0)-inverse(S.G))
 
   # Solve the impurity problem:
   S.solve(U_interact=U,J_hund=J,use_spinflip=use_spinflip,use_matrix=use_matrix,
-         l=l,T=SK.T[0], dim_reps=SK.dim_reps[0], irep=2, deg_orbs=SK.deg_shells[0],n_cycles =qmc_cycles,
+         l=l,T=SK.T[0], dim_reps=SK.dim_reps[0], irep=2, n_cycles=qmc_cycles,
          length_cycle=length_cycle,n_warmup_cycles=warming_iterations)
 
   # solution done, do the post-processing:
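The calls above all use the Slater parametrisation. As a complement, here is a hedged sketch of a Kanamori-type call with `use_matrix=False`, in which the solver builds the intra-orbital interaction `U_interact` and the inter-orbital interactions `U_interact-2*J_hund` and `U_interact-3*J_hund` from just two numbers. All numerical values are illustrative assumptions; only the class and keyword names come from the signatures documented above::

   # Assumed example: a three-orbital (t2g-only) impurity treated with the
   # Kanamori interaction (density-density only, since use_spinflip=False).
   S = SolverMultiBand(beta = 40, n_orb = 3,
                       gf_struct = SK.gf_struct_solver[0], map = SK.map[0])
   S.solve(U_interact = 4.0, J_hund = 0.65,
           use_matrix = False, use_spinflip = False,
           n_cycles = 200000, length_cycle = 200, n_warmup_cycles = 10000)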
diff --git a/doc/analysis.rst b/doc/analysis.rst
index 7eefd0ca..03fdfdb0 100644
--- a/doc/analysis.rst
+++ b/doc/analysis.rst
@@ -1,39 +1,37 @@
 .. _analysis:
 
-Analysing tools
-===============
+Tools for analysis
+==================
 
 This section explains how to use some tools of the package in order to analyse the data.
 
 .. warning::
-  The package does NOT provide an explicit method to do an analytic continuation of the
+  The package does NOT provide an explicit method to do an **analytic continuation** of the
   self energies and Green functions from Matsubara frequencies to the real frequancy axis!
-  There are methods included e.g. in the ALPS package, which can be used for these purposes. But
-  be careful: All these methods have to be used very carefully!!
+  There are methods included e.g. in the :program:`ALPS` package, which can be used for these purposes. But
+  be careful: All these methods have to be used very carefully!
 
-The analysing tools can be found in an extension of the :class:`SumkLDA` class, they are
-loaded by::
+The tools for analysis can be found in an extension of the :class:`SumkLDA` class and are
+loaded by importing the module :class:`SumkLDATools`::
 
   from pytriqs.applications.dft.sumk_lda_tools import *
 
-This import the module ``SumkLDATools``. There are two practical tools, for which you don't
-need a self energy on the real axis:
+There are two practical tools for which you do not need a self energy on the real axis, namely the:
 
- * The density of states of the Wannier orbitals.
- * Partial charges according to the Wien2k definition.
+ * density of states of the Wannier orbitals,
+ * partial charges according to the Wien2k definition.
 
-Other routines need the self energy on the real frequency axis. If you managed to get them, you can
-calculate
+The self energy on the real frequency axis is necessary for computing the:
 
- * the momentum-integrated spectral function including self-energy effects.
- * the momentum-resolved spectral function (i.e. ARPES)
+ * momentum-integrated spectral function including self-energy effects,
+ * momentum-resolved spectral function (i.e. ARPES).
 
-The initialisation of the class is completely equivalent to the initialisation of the :class:`SumkLDA`
+The initialisation of the class is equivalent to that of the :class:`SumkLDA`
 class::
 
   SK = SumkLDATools(hdf_file = filename)
 
-By the way, all routines available in :class:`SumkLDA` are also available here.
+Note that all routines available in :class:`SumkLDA` are also available here.
 
 Routines without real-frequency self energy
 -------------------------------------------
@@ -43,22 +41,22 @@ density of states of the Wannier orbitals, you simply type::
 
   SK.check_input_dos(om_min, om_max, n_om)
 
-which produces plots between real frequencies `om_min` and `om_max`, using a mesh of `n_om` points. There
-is an optional parameter, `broadening`, which defines an additional Lorentzian broadening, and is set to `0.01`
-by default.
+which produces plots between the real frequencies `om_min` and `om_max`, using a mesh of `n_om` points. There
+is an optional parameter `broadening` which defines an additional Lorentzian broadening, with a default value of
+`0.01`.
 
-Since we can calculate the partial charges directly from the Matsubara Green's functions, we also don't need a
-real frequency self energy for this purpose.
The calculation is done by:: +Since we can calculate the partial charges directly from the Matsubara Green's functions, we also do not need a +real-frequency self energy for this purpose. The calculation is done by:: ar = HDFArchive(SK.hdf_file) - SK.put_Sigma([ ar['SigmaF'] ]) + SK.put_Sigma([ ar['SigmaImFreq'] ]) del ar dm = SK.partial_charges() -which calculates the partial charges using the input that is stored in the hdf5 file (self energy, double counting, -chemical potential). Here we assumed that the final self energy is stored as `SigmaF` in the archive. -On return, dm is a list, where the list items correspond to the density matrices of all shells -defined in the list ``SK.shells``. This list is constructed by the Wien2k converter routines and stored automatically +which calculates the partial charges using the data stored in the hdf5 file, namely the self energy, double counting, and +chemical potential. Here we assumed that the final self energy is stored as `SigmaImFreq` in the archive. +On return, `dm` is a list, where the list items correspond to the density matrices of all shells +defined in the list `SK.shells`. This list is constructed by the Wien2k converter routines and stored automatically in the hdf5 archive. For the detailed structure of `dm`, see the reference manual. @@ -69,11 +67,10 @@ In order to plot data including correlation effects on the real axis, one has to Most conveniently, it is stored as a real frequency :class:`BlockGf` object in the hdf5 file:: ar = HDFArchive(filename+'.h5','a') - ar['SigmaReFreq'] = Sigma_real + ar['SigmaReFreq'] = SigmaReFreq del ar -You may also store it in text files. If all blocks of your self energy are of dimension 1x1 you store them in `fname_(block)0.dat` files. Here `(block)` is a block name (`up`, `down`, or combined `ud`). In the case when you have matrix blocks, you store them in `(i)_(j).dat` files (where `(i)` and `(j)` are the orbital indices) in the `fname_(block)` directory - +You may also store it in text files. If all blocks of your self energy are of dimension 1x1, you store them in `fname_(block)0.dat` files. Here `(block)` is a block name (`up`, `down`, or combined `ud`). In the case when you have matrix blocks, you store them in `(i)_(j).dat` files, where `(i)` and `(j)` are the orbital indices, in the `fname_(block)` directory. This self energy is loaded and put into the :class:`SumkLDA` class by the function:: @@ -81,47 +78,42 @@ This self energy is loaded and put into the :class:`SumkLDA` class by the functi where: - * `filename` is the name of the hdf5 archive file or the `fname` pattern in text files names as described above. - * `hdf=True`: the real-axis self energy will be read from the hdf5 file, `hdf=False`: from the text files - * `hdf_dataset` the name of dataset where the self energy is stored in the hdf5 file - * `n_om` number of points in the real-axis mesh (used only if `hdf=False`) + * `filename`: the name of the hdf5 archive file or the `fname` pattern in text files names as described above, + * `hdf`: if `True`, the real-axis self energy will be read from the hdf5 file, otherwise from the text files, + * `hdf_dataset`: the name of dataset where the self energy is stored in the hdf5 file, + * `n_om`: the number of points in the real-axis mesh (used only if `hdf=False`). +The chemical potential as well as the double counting correction were already read in the initialisation process. 
-The chemical potential as well as the double
-counting correction were already read in the initialisation process.
-
-With this self energy, we can do now::
+With this self energy, we can now execute::
 
   SK.dos_partial(broadening=broadening)
 
-This produces the momentum-integrated spectral functions (density of states, DOS), also orbitally resolved.
-The variable `broadening` is an additional Lorentzian broadening that is added to the resulting spectra.
+This produces both the momentum-integrated (total density of states or DOS) and orbitally-resolved (partial/projected DOS) spectral functions.
+The variable `broadening` is an additional Lorentzian broadening applied to the resulting spectra.
 The output is printed into the files
 
- * `DOScorr(sp).dat`: The total DOS. `(sp)` stands for `up`, `down`, or combined `ud`. The latter case
+ * `DOScorr(sp).dat`: The total DOS, where `(sp)` stands for `up`, `down`, or combined `ud`. The latter case
   is relevant for calculations including spin-orbit interaction.
  * `DOScorr(sp)_proj(i).dat`: The DOS projected to an orbital with index `(i)`. The index `(i)` refers to
   the indices given in ``SK.shells``.
- * `DOScorr(sp)_proj(i)_(m)_(n).dat`: Sames as above, but printed as orbitally resolved matrix in indices
-  `(m)` and `(n)`. For `d` orbitals, it gives separately the DOS
-  for, e.g., :math:`d_{xy}`, :math:`d_{x^2-y^2}`, and so on.
+ * `DOScorr(sp)_proj(i)_(m)_(n).dat`: As above, but printed as an orbitally-resolved matrix in indices
+  `(m)` and `(n)`. For `d` orbitals, it gives the DOS separately for, e.g., :math:`d_{xy}`, :math:`d_{x^2-y^2}`, and so on.
 
 Another quantity of interest is the momentum-resolved spectral function, which can directly be compared to
 ARPES experiments. We assume here that we already converted the output of the :program:`dmftproj` program with the
-converter routines, see :ref:`interfacetowien`. The spectral function is calculated by::
+converter routines (see :ref:`interfacetowien`). The spectral function is calculated by::
 
   SK.spaghettis(broadening)
 
-The output is
-written as the 3-column files ``Akw(sp).dat``, where `(sp)` has the same meaning as above. The output format is
-`k`, :math:`\omega`, `value`. Optional parameters are
+Optional parameters are
 
- * `shift`: An additional shift, added as `(ik-1)*shift`, where `ik` is the index of the `k` point. Useful for plotting purposes,
-  standard value is 0.0.
- * `plotrange`: A python list with two entries, first being :math:`\omega_{min}`, the second :math:`\omega_{max}`, setting the plot
-  range for the output. Standard value is `None`, in this case the momentum range as given in the self energy is plotted.
- * `ishell`: If this is not `None` (standard value), but an integer, the spectral function projected to the orbital with index `ishell`
-  is plotted to the files. Attention: The spectra are not rotated to the local coordinate system as used in the :program:`Wien2k`
-  program (For experts).
-
+ * `shift`: An additional shift added as `(ik-1)*shift`, where `ik` is the index of the `k` point. This is useful for plotting purposes.
+  The default value is 0.0.
+ * `plotrange`: A list with two entries, :math:`\omega_{min}` and :math:`\omega_{max}`, which set the plot
+  range for the output. The default value is `None`, in which case the full energy range as given in the self energy is used.
+ * `ishell`: An integer denoting the orbital index `ishell` onto which the spectral function is projected. The resulting function is saved in
+  the files. The default value is `None`.
Note for experts: The spectra are not rotated to the local coordinate system used in :program:`Wien2k`. +The output is written as the 3-column files ``Akw(sp).dat``, where `(sp)` is defined as above. The output format is +`k`, :math:`\omega`, `value`. diff --git a/doc/install.rst b/doc/install.rst index 9ff47392..20e8aeab 100644 --- a/doc/install.rst +++ b/doc/install.rst @@ -57,7 +57,9 @@ Installation steps In addition, :file:`path_to_Wien2k/SRC_templates` also contains :program:`run_triqs` and :program:`runsp_triqs` scripts for running Wien2k+DMFT fully self-consistent calculations. These files should be copied to - :file:`path_to_Wien2k`. + :file:`path_to_Wien2k`, and set as executables by running:: + + $ chmod +x run*_triqs You will also need to insert manually a correct call of :file:`pytriqs` into these scripts using an appropriate for your system MPI wrapper (mpirun, diff --git a/doc/interface.rst b/doc/interface.rst index 21759dfb..256ad789 100644 --- a/doc/interface.rst +++ b/doc/interface.rst @@ -33,14 +33,14 @@ an hdf5 arxive, named :file:`material_of_interest.h5`, where all the data is sto There are three optional parameters to the Constructor: - * `lda_subgrp`: We store all data in sub groups of the hdf5 arxive. For the main data - that is needed for the DMFT loop, we use the sub group specified by this optional parameter. - If it is not given, the standard value `SumK_LDA` is used as sub group name. - * `symm_subgrp`: In this sub group we store all the data for applying the symmetry - operations in the DMFT loop. Standard value is `SymmCorr`. + * `lda_subgrp`: We store all data in subgroups of the hdf5 arxive. For the main data + that is needed for the DMFT loop, we use the subgroup specified by this optional parameter. + The default value `SumK_LDA` is used as the subgroup name. + * `symm_subgrp`: In this subgroup we store all the data for applying the symmetry + operations in the DMFT loop. The default value is `SymmCorr`. * `repacking`: If true, and the hdf5 file already exists, the system command :program:`h5repack` is invoked. This command ensures a minimal file size of the hdf5 - file. Standard value is `False`. If you want to use this, be sure + file. The default value is `False`. If you wish to use this, ensure that :program:`h5repack` is in your path variable! After initialising the interface module, we can now convert the input text files into the @@ -48,7 +48,7 @@ hdf5 arxive by:: Converter.convert_dmft_input() -This reads all the data, and stores it in the sub group `lda_subgrp`, as discussed above. +This reads all the data, and stores it in the subgroup `lda_subgrp`, as discussed above. In this step, the files :file:`material_of_interest.ctqmcout` and :file:`material_of_interest.symqmc` have to be present in the working directory. @@ -70,17 +70,16 @@ of :program:`Wien2k`, you have to use:: This reads the files :file:`material_of_interest.parproj` and :file:`material_of_interest.sympar`. Again, there are two optional parameters - * `par_proj_subgrp`: The sub group, where the data for the partial projectors is stored. Standard - is `SumK_LDA_ParProj`. - * `symm_par_subgrp`: Sub group for the symmetry operations, standard value is `SymmPar`. + * `par_proj_subgrp`: The subgroup for partial projectors data. The default value is `SumK_LDA_ParProj`. + * `symm_par_subgrp`: The subgroup for symmetry operations data. The default value is `SymmPar`. Another routine of the class allows to read the input for plotting the momentum-resolved spectral function. 
It is done by::
 
   Converter.convert_bands_input()
 
-The optional parameter, which tells the routine where to store the data is here `bands_subgrp`,
-and its standard value is `SumK_LDA_Bands`.
+The optional parameter that controls where the data is stored is `bands_subgrp`,
+with the default value `SumK_LDA_Bands`.
 
 After having converted this input, you can further proceed with the :ref:`analysis`.
diff --git a/doc/selfcons.rst b/doc/selfcons.rst
index ec7e5bbf..986df3a1 100644
--- a/doc/selfcons.rst
+++ b/doc/selfcons.rst
@@ -15,14 +15,13 @@ codes is also possible.
 
 We can use the DMFT script as introduced in sections :ref:`LDADMFTmain` and :ref:`advanced`,
 with a few simple modifications. First, in order to be compatible with the :program:`Wien2k` standards,
 the DMFT script has to be named ``case.py``, where `case` is the name of the :program:`Wien2k` calculation, see the section
-:ref:`interfacetowien` for details. Then we set the variable
-`lda_filename` dynamically::
+:ref:`interfacetowien` for details. We can then set the variable `lda_filename` dynamically::
 
   import os
   lda_filename = os.getcwd().rpartition('/')[2]
 
-This sets the `lda_filename` to the name of the current directory. The reminder of the scripts is completely the
-same as in one-shot calculations. Only at the very end we have to calculate the modified charge density,
+This sets the `lda_filename` to the name of the current directory. The remainder of the script is identical to
+that for one-shot calculations. Only at the very end do we have to calculate the modified charge density,
 and store it in a format such that :program:`Wien2k` can read it. Therefore, after the DMFT loop that we saw in the
 previous section, we symmetrise the self energy, and recalculate the impurity Green function::
 
@@ -30,28 +29,28 @@ previous section, we symmetrise the self energy, and recalculate the impurity Gr
 
   S.G <<= inverse(S.G0) - S.Sigma
   S.G.invert()
 
-These steps are not necessary, but can help to reduce fluctuation of the total energy.
+These steps are not necessary, but can help to reduce fluctuations in the total energy.
 Now we calculate the modified charge density::
 
   # find exact chemical potential
   SK.put_Sigma(Sigma_imp = [ S.Sigma ])
   chemical_potential = SK.find_mu( precision = 0.000001 )
-  dN,d = SK.calc_density_correction(filename = lda_filename+'.qdmft')
+  dN, d = SK.calc_density_correction(filename = lda_filename+'.qdmft')
 
   SK.save()
 
 First we find the chemical potential with high precision, and after that the routine
 ``SK.calc_density_correction(filename)`` calculates the density matrix including correlation effects. The result
-is stored in the file `Filename`, which is later read by the :program:`Wien2k` program. The last statement saves
+is stored in the file `lda_filename.qdmft`, which is later read by the :program:`Wien2k` program. The last statement saves
 the chemical potential into the hdf5 archive.
 
 We need also the correlation energy, which we evaluate by the Migdal formula::
 
   correnerg = 0.5 * (S.G * S.Sigma).total_density()
 
-From this value, we have to substract the double counting energy::
+From this value, we subtract the double counting energy::
 
   correnerg -= SK.dc_energ[0]
 
-and save this value into the file::
+and save this value too::
 
   if (mpi.is_master_node()):
     f=open(lda_filename+'.qdmft','a')
@@ -61,23 +60,23 @@ and save this value into the file::
 
 The above steps are valid for a calculation with only one correlated atom in the unit cell, the most
 likely case where you will apply this method.
That is the reason why we give the index `0` in the list `SK.dc_energ`. If you have more than one correlated atom in the unit cell, but all of them -are equivalent atoms, you have to multiply the `correnerg` by their multiplicity, before writing it to the file. +are equivalent atoms, you have to multiply the `correnerg` by their multiplicity before writing it to the file. The multiplicity is easily found in the main input file of the :program:`Wien2k` package, i.e. `case.struct`. In case of non-equivalent atoms, the correlation energy has to be calculated for all of them separately (FOR EXPERTS ONLY). As mentioned above, the calculation is controlled by the :program:`Wien2k` scripts and not by :program:`python` -routines. Therefore, you start your calculation for instance by:: +routines. Therefore, at the command line, you start your calculation for instance by:: me@home $ run -qdmft -i 10 -The flag `-qdmft` tells the script, that the density matrix including correlation effects is read from the `case.qdmft` -file, and 10 self-consitency iterations are done. If you run the code on a parallel machine, you can specify the number of -nodes that are used:: +The flag `-qdmft` tells the script that the density matrix including correlation effects is to be read in from the `case.qdmft` +file and that 10 self-consistency iterations are to be done. If you run the code on a parallel machine, you can specify the number of +nodes to be used with the `-np` flag:: me@home $ run -qdmft -np 64 -i 10 -with the `-np` flag. In that case, you have to give the proper `MPI` execution statement, e.g. `mpiexec`, in the `run_lapw` script, -see the corresponding :program:`Wien2k` documentation. In many cases it is advisable to start from a converged one-shot +In that case, you have to give the proper `MPI` execution statement, e.g. `mpiexec`, in the `run_lapw` script (see the +corresponding :program:`Wien2k` documentation). In many cases it is advisable to start from a converged one-shot calculation. For practical purposes, you keep the number of DMFT loops within one DFT cycle low, or even to `loops=1`. If you encouter @@ -85,5 +84,4 @@ unstable convergence, you have to adjust the parameters such as `loops`, `mix`, or `Delta_mix` to improve the convergence. In the next section, :ref:`LDADMFTtutorial`, we will see in a detailed -example, how such a self consistent calculation is performed. - +example how such a self consistent calculation is performed.
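For the multi-atom case discussed above, the following hedged sketch pulls the energy bookkeeping together. The multiplicity and the exact write format are assumptions for illustration; the rest follows the snippets shown in the selfcons.rst hunks above::

   correnerg = 0.5 * (S.G * S.Sigma).total_density()   # Migdal formula, as above
   correnerg -= SK.dc_energ[0]                         # remove the double-counting contribution
   mult = 2                                            # multiplicity from case.struct (assumed example)
   correnerg *= mult                                   # scale for equivalent correlated atoms
   if (mpi.is_master_node()):
     f = open(lda_filename+'.qdmft','a')               # append to the file read by Wien2k
     f.write("%.16f\n" % correnerg)                    # the write format here is an assumption
     f.close()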