mirror of https://github.com/triqs/dft_tools synced 2025-04-16 13:39:25 +02:00

Merge pull request #118 from malte-schueler/merge_vasp2dmft

Merge vasp2dmft functionalities into plovasp
Markus Aichhorn 2019-07-19 14:26:05 +02:00 committed by GitHub
commit 6111461b64
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
45 changed files with 1546 additions and 74 deletions


@ -18,6 +18,6 @@ add_custom_target(doc_sphinx ALL DEPENDS ${sphinx_top} ${CMAKE_CURRENT_BINARY_DI
# ---------------------------------
install(DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/html/ COMPONENT documentation DESTINATION share/doc/triqs_dft_tools
FILES_MATCHING
REGEX "\\.(html|pdf|png|gif|jpg|js|xsl|css|py|txt|inv|bib)$"
REGEX "\\.(html|pdf|png|gif|jpg|js|xsl|css|py|txt|inv|bib|cfg)$"
PATTERN "_*"
)


@ -5,9 +5,7 @@ Interface with VASP
===================
.. warning::
The VASP interface is in the alpha-version and the VASP part of it is not
yet publicly released. The documentation may, thus, be subject to changes
before the final release.
The VASP interface is in its alpha version. The documentation may thus be subject to changes before the final release.
*Limitations of the alpha-version:*
@ -26,7 +24,8 @@ in the :ref:`PLOVasp User's Guide <plovasp>`. Here, a quick-start guide is prese
The VASP interface relies on new options introduced since version
5.4.x. In particular, a new INCAR-option `LOCPROJ`
and new `LORBIT` modes 13 and 14 have been added.
and new `LORBIT` modes 13 and 14 have been added, as well as the new `ICHARG`
mode 5 for charge self-consistent calculations.
Option `LOCPROJ` selects a set of localized projectors that will
be written to file `LOCPROJ` after a successful VASP run.
@ -41,7 +40,7 @@ with the indices corresponding to the site position in the POSCAR file;
`<projector type>` chooses a particular type of the local basis function.
The recommended projector type is `Pr 2`. The formalism for this type
of projectors is presented in
`M. Schüler et al. 2018 J. Phys.: Condens. Matter 30 475901 <https://doi.org/10.1088/1361-648X/aae80a>`_.
`M. Schüler et al. 2018 J. Phys.: Condens. Matter 30 475901 <https://doi.org/10.1088/1361-648X/aae80a>`_. For details on `LOCPROJ`, also have a look at the `VASP wiki <https://cms.mpi.univie.ac.at/wiki/index.php/LOCPROJ>`_.
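As an illustration (the site indices and orbital label here are chosen arbitrarily), a line like ``LOCPROJ = 1 2 : d : Pr 2`` requests `Pr 2`-type d projectors on the first two sites listed in the POSCAR file.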
The allowed labels of the local states defined in terms of cubic
harmonics are:


@ -55,21 +55,25 @@ A PLOVasp input file can contain three types of sections:
Section [General]
"""""""""""""""""
The entire section is optional and it contains three parameters:
The entire section is optional and it contains four parameters:
* **BASENAME** (string): provides a base name for output files.
Default filenames are :file:`vasp.*`.
* **DOSMESH** ([float float] integer): if this parameter is given,
the projected density of states for each projected orbital will be
evaluated and stored to files :file:`pdos_<n>.dat`, where `n` is the
orbital index. The energy
mesh is defined by three numbers: `EMIN` `EMAX` `NPOINTS`. The first two
evaluated and stored to files :file:`pdos_<s>_<n>.dat`, where `s` is the
shell index and `n` the ion index. The energy mesh is defined by three
numbers: `EMIN` `EMAX` `NPOINTS`. The first two
can be omitted in which case they are taken to be equal to the projector
energy window. **Important note**: at the moment this option works
only if the tetrahedron integration method (`ISMEAR = -4` or `-5`)
is used in VASP to produce `LOCPROJ`.
* **EFERMI** (float): provides the Fermi level. This value overrides
the one extracted from VASP output files.
* **HK** (True/False): If True, the projectors are applied to the Kohn-Sham
eigenvalues, which results in a Hamiltonian H(k) in orbital basis. The H(k)
is written for each group to a file :file:`Basename.hk<Ng>` (see the sketch
after this list). It is recommended to also set `COMPLEMENT = True` (see below).
Default is False.
There are no required parameters in this section.
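A minimal usage sketch of the H(k) output (the basename ``vasp`` is the default and the group index ``1`` is assumed here for illustration): with `HK = True` PLOVasp writes :file:`vasp.hk1`, which the VASP converter can read instead of the projectors by setting ``proj_or_hk='hk'``::

    # sketch only: convert a PLOVasp run produced with HK = True
    # (and ideally COMPLEMENT = True) using the H(k) scheme
    from triqs_dft_tools.converters.vasp_converter import VaspConverter

    Converter = VaspConverter(filename='vasp', proj_or_hk='hk')
    Converter.convert_dft_input()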
@ -94,6 +98,8 @@ given by `LSHELL` must be present in the LOCPROJ file.
There are additional optional parameters that allow one to transform
the local states:
* **CORR** (True/False): Determines whether the shell is treated as correlated. At least one
shell has to be correlated. Default is True.
* **TRANSFORM** (matrix): local transformation matrix applied to all states
in the projector shell. The matrix is defined by a (multiline) block
of floats, with each line corresponding to a row. The number of columns
@ -131,6 +137,13 @@ Optional group parameters:
condition will be enforced on each site separately but the Wannier functions
on different sites will not be orthogonal. If `NORMION = False`, the Wannier functions
on different sites included in the group will be orthogonal to each other.
* **BANDS** (int int): the band window specified by two integers: the index of the
lowest band and the index of the highest band to include. Setting this overrides
the selection by `EWINDOW` (see the example below).
* **COMPLEMENT** (True/False): If True, the orthogonal complement is calculated,
resulting in unitary (square) projectors, i.e., the same number of orbitals
as bands. This requires an equal number of bands in the energy window
at each k-point. Default is False.
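For example, ``BANDS = 2 10`` restricts the group to bands 2 through 10 (this is the choice used in the NiO tutorial's :file:`plo.cfg`); combining it with ``COMPLEMENT = True`` (which the tutorial file does not set) would additionally generate the orthogonal complement within that band window.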
.. _transformation_file:


@ -22,6 +22,12 @@ Wannier90 Converter
:members:
:special-members:
VASP Converter
-------------------
.. autoclass:: triqs_dft_tools.converters.vasp_converter.VaspConverter
:members:
:special-members:
Converter Tools
---------------
.. autoclass:: triqs_dft_tools.converters.converter_tools.ConverterTools


@ -23,3 +23,12 @@ Full charge self consistency with Wien2k: :math:`\gamma`-Ce
tutorials/ce-gamma-fscs_wien2k
A full example with VASP: NiO
-----------------------------------------------------------
.. toctree::
:maxdepth: 2
tutorials/nio_csc


@ -0,0 +1,18 @@
System = NiO
ISMEAR = -5
# the energy window to optimize projector channels
EMIN = -3
EMAX = 7
NBANDS = 16
LMAXMIX = 6
# switch off all symmetries
ISYM = -1
# project to Ni d and O p states
LORBIT = 14
LOCPROJ = 1 : d : Pr 1
LOCPROJ = 2 : p : Pr 1


@ -0,0 +1,7 @@
.. _INCAR:
INCAR
-----------
.. literalinclude:: INCAR
:language: bash


@ -0,0 +1,4 @@
Automatically generated mesh
0
Gamma
6 6 6


@ -0,0 +1,7 @@
.. _KPOINTS:
KPOINTS
-----------
.. literalinclude:: KPOINTS
:language: bash


@ -0,0 +1,90 @@
from itertools import *
import numpy as np
import pytriqs.utility.mpi as mpi
from pytriqs.archive import *
from pytriqs.gf import *
from triqs_dft_tools.sumk_dft import *
from triqs_dft_tools.sumk_dft_tools import *
from pytriqs.operators.util.hamiltonians import *
from pytriqs.operators.util.U_matrix import *
from triqs_cthyb import *
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
filename = 'nio'
SK = SumkDFT(hdf_file = filename+'.h5', use_dft_blocks = False)
beta = 5.0
# We analyze the block structure of the Hamiltonian
Sigma = SK.block_structure.create_gf(beta=beta)
SK.put_Sigma([Sigma])
G = SK.extract_G_loc()
SK.analyse_block_structure_from_gf(G, threshold = 1e-3)
# Setup CTQMC Solver
n_orb = SK.corr_shells[0]['dim']
spin_names = ['up','down']
orb_names = [i for i in range(0,n_orb)]
# Print some information on the master node
iteration_offset = 0
Sigma_iw = 0
if mpi.is_master_node():
ar = HDFArchive(filename+'.h5','a')
if not 'DMFT_results' in ar: ar.create_group('DMFT_results')
if not 'Iterations' in ar['DMFT_results']: ar['DMFT_results'].create_group('Iterations')
if 'iteration_count' in ar['DMFT_results']:
iteration_offset = ar['DMFT_results']['iteration_count']+1
print('offset',iteration_offset)
Sigma_iw = ar['DMFT_results']['Iterations']['Sigma_it'+str(iteration_offset-1)]
SK.dc_imp = ar['DMFT_results']['Iterations']['dc_imp'+str(iteration_offset-1)]
SK.dc_energ = ar['DMFT_results']['Iterations']['dc_energ'+str(iteration_offset-1)]
SK.chemical_potential = ar['DMFT_results']['Iterations']['chemical_potential'+str(iteration_offset-1)].real
iteration_offset = mpi.bcast(iteration_offset)
Sigma_iw = mpi.bcast(Sigma_iw)
SK.dc_imp = mpi.bcast(SK.dc_imp)
SK.dc_energ = mpi.bcast(SK.dc_energ)
SK.chemical_potential = mpi.bcast(SK.chemical_potential)
SK.put_Sigma(Sigma_imp = [Sigma_iw])
ikarray = numpy.array(range(SK.n_k))
# set up the orbitally resolved local lattice greens function:
n_orbs = SK.proj_mat_csc.shape[2]
spn = SK.spin_block_names[SK.SO]
mesh = Sigma_iw.mesh
block_structure = [range(n_orbs) for sp in spn]
gf_struct = [(spn[isp], block_structure[isp])
for isp in range(SK.n_spin_blocks[SK.SO])]
block_ind_list = [block for block, inner in gf_struct]
glist = lambda: [GfImFreq(indices=inner, mesh=mesh)
for block, inner in gf_struct]
G_latt_orb = BlockGf(name_list=block_ind_list,
block_list=glist(), make_copies=False)
G_latt_orb.zero()
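# Loop over the k-points assigned to this MPI rank: build the lattice Green's
# function in the Kohn-Sham basis, weight it with the k-point weight, downfold
# it onto the full set of projected orbitals (shells='csc') and accumulate the
# k-sum; the partial sums are combined over all ranks by the all_reduce below.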
for ik in mpi.slice_array(ikarray):
G_latt_KS = SK.lattice_gf(ik=ik, beta=beta)
G_latt_KS *= SK.bz_weights[ik]
for bname, gf in G_latt_orb:
add_g_ik = gf.copy()
add_g_ik.zero()
add_g_ik << SK.downfold(ik, 0, bname, G_latt_KS[bname], gf, shells='csc', ir=None)
gf << gf + add_g_ik
G_latt_orb << mpi.all_reduce(
mpi.world, G_latt_orb, lambda x, y: x + y)
mpi.barrier()
if mpi.is_master_node():
ar['DMFT_results']['Iterations']['G_latt_orb_it'+str(iteration_offset-1)] = G_latt_orb
if mpi.is_master_node(): del ar


@ -0,0 +1,9 @@
.. _NiO_local_lattice_GF.py:
NiO_local_lattice_GF.py
------------------------------
Download :download:`NiO_local_lattice_GF.py <./NiO_local_lattice_GF.py>`.
.. literalinclude:: NiO_local_lattice_GF.py
:language: python


@ -0,0 +1,10 @@
NiO
4.17 #taken from 9x9x9 with sigma=0.2 ismear=2
+0.0000000000 +0.5000000000 +0.5000000000
+0.5000000000 +0.0000000000 +0.5000000000
+0.5000000000 +0.5000000000 +0.0000000000
Ni O
1 1
Direct
+0.0000000000 +0.0000000000 +0.0000000000
+0.5000000000 +0.5000000000 +0.5000000000


@ -0,0 +1,7 @@
.. _POSCAR:
POSCAR
-----------
.. literalinclude:: POSCAR
:language: bash


@ -0,0 +1,3 @@
from triqs_dft_tools.converters.vasp_converter import *
Converter = VaspConverter(filename = 'nio')
Converter.convert_dft_input()


@ -0,0 +1,10 @@
.. _converter.py:
converter.py
-------------------
Download :download:`converter.py <./converter.py>`.
.. literalinclude:: converter.py
:language: python


@ -0,0 +1,51 @@
from pytriqs.gf import *
from pytriqs.archive import *
from triqs_maxent import *
filename = 'nio'
ar = HDFArchive(filename+'.h5','a')
if 'iteration_count' in ar['DMFT_results']:
iteration_offset = ar['DMFT_results']['iteration_count']+1
G_latt = ar['DMFT_results']['Iterations']['G_latt_orb_it'+str(iteration_offset-1)]
tm = TauMaxEnt(cost_function='bryan', probability='normal')
print(G_latt['up'][0,0])
t2g_orbs = [0,1,3]
eg_orbs = [2,4]
op_orbs = [5,6,7]
orbs = [t2g_orbs, eg_orbs, op_orbs]
#orbs = [t2g_orbs]
for orb in orbs:
print '\n'+str(orb[0])+'\n'
gf = 0*G_latt['up'][0,0]
for iO in orb:
gf = gf + G_latt['up'][iO,iO]
tm.set_G_iw(gf)
tm.omega =LinearOmegaMesh(omega_min=-20, omega_max=20, n_points=201)
tm.alpha_mesh = LogAlphaMesh(alpha_min=0.01, alpha_max=20000, n_points=60)
tm.set_error(1.e-3)
result=tm.run()
result.get_A_out('LineFitAnalyzer')
if 'iteration_count' in ar['DMFT_results']:
iteration_offset = ar['DMFT_results']['iteration_count']+1
for oo in orb:
ar['DMFT_results']['Iterations']['G_latt_orb_w_o'+str(oo)+'_it'+str(iteration_offset-1)] = result.analyzer_results['LineFitAnalyzer']['A_out']
ar['DMFT_results']['Iterations']['w_it'+str(iteration_offset-1)] = result.omega
# you may be interested in the details of the line analyzer:
# from pytriqs.plot.mpl_interface import oplot
#plt.figure(2)
#result.analyzer_results['LineFitAnalyzer'].plot_linefit()
#plt.savefig('ana'+str(orb[0])+'.pdf',fmt='pdf')
del ar


@ -0,0 +1,9 @@
.. _maxent.py:
maxent.py
-----------
Download :download:`maxent.py <./maxent.py>`.
.. literalinclude:: maxent.py
:language: python


@ -0,0 +1,167 @@
from itertools import *
import numpy as np
import pytriqs.utility.mpi as mpi
from pytriqs.archive import *
from pytriqs.gf import *
import sys, pytriqs.version as triqs_version
from triqs_dft_tools.sumk_dft import *
from triqs_dft_tools.sumk_dft_tools import *
from pytriqs.operators.util.hamiltonians import *
from pytriqs.operators.util.U_matrix import *
from triqs_cthyb import *
import triqs_cthyb.version as cthyb_version
import triqs_dft_tools.version as dft_tools_version
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
filename = 'nio'
SK = SumkDFT(hdf_file = filename+'.h5', use_dft_blocks = False)
beta = 5.0
Sigma = SK.block_structure.create_gf(beta=beta)
SK.put_Sigma([Sigma])
G = SK.extract_G_loc()
SK.analyse_block_structure_from_gf(G, threshold = 1e-3)
for i_sh in range(len(SK.deg_shells)):
num_block_deg_orbs = len(SK.deg_shells[i_sh])
mpi.report('found {0:d} blocks of degenerate orbitals in shell {1:d}'.format(num_block_deg_orbs, i_sh))
for iblock in range(num_block_deg_orbs):
mpi.report('block {0:d} consists of orbitals:'.format(iblock))
for keys in SK.deg_shells[i_sh][iblock].keys():
mpi.report(' '+keys)
# Setup CTQMC Solver
n_orb = SK.corr_shells[0]['dim']
spin_names = ['up','down']
orb_names = [i for i in range(0,n_orb)]
#gf_struct = set_operator_structure(spin_names, orb_names, orb_hyb)
gf_struct = SK.gf_struct_solver[0]
mpi.report('Sumk to Solver: %s'%SK.sumk_to_solver)
mpi.report('GF struct sumk: %s'%SK.gf_struct_sumk)
mpi.report('GF struct solver: %s'%SK.gf_struct_solver)
S = Solver(beta=beta, gf_struct=gf_struct)
# Construct the Hamiltonian and save it in Hamiltonian_store.txt
H = Operator()
U = 8.0
J = 1.0
U_sph = U_matrix(l=2, U_int=U, J_hund=J)
U_cubic = transform_U_matrix(U_sph, spherical_to_cubic(l=2, convention=''))
Umat, Upmat = reduce_4index_to_2index(U_cubic)
H = h_int_density(spin_names, orb_names, map_operator_structure=SK.sumk_to_solver[0], U=Umat, Uprime=Upmat)
# Print some information on the master node
mpi.report('Greens function structure is: %s '%gf_struct)
mpi.report('U Matrix set to:\n%s'%Umat)
mpi.report('Up Matrix set to:\n%s'%Upmat)
# Parameters for the CTQMC Solver
p = {}
p["max_time"] = -1
p["random_name"] = ""
p["random_seed"] = 123 * mpi.rank + 567
p["length_cycle"] = 100
p["n_warmup_cycles"] = 20000
p["n_cycles"] = 200000
p["fit_max_moment"] = 4
p["fit_min_n"] = 30
p["fit_max_n"] = 50
p["perform_tail_fit"] = True
# Double Counting: 0 FLL, 1 Held, 2 AMF
DC_type = 0
DC_value = 59.0
# Prepare hdf file and and check for previous iterations
n_iterations = 10
iteration_offset = 0
if mpi.is_master_node():
ar = HDFArchive(filename+'.h5','a')
if not 'DMFT_results' in ar: ar.create_group('DMFT_results')
if not 'Iterations' in ar['DMFT_results']: ar['DMFT_results'].create_group('Iterations')
if not 'DMFT_input' in ar: ar.create_group('DMFT_input')
if not 'Iterations' in ar['DMFT_input']: ar['DMFT_input'].create_group('Iterations')
if not 'code_versions' in ar['DMFT_input']: ar['DMFT_input'].create_group('code_versions')
ar['DMFT_input']['code_versions']["triqs_version"] = triqs_version.version
ar['DMFT_input']['code_versions']["triqs_git"] = triqs_version.git_hash
ar['DMFT_input']['code_versions']["cthyb_version"] = cthyb_version.version
ar['DMFT_input']['code_versions']["cthyb_git"] = cthyb_version.cthyb_hash
ar['DMFT_input']['code_versions']["dft_tools_version"] = dft_tools_version.version
ar['DMFT_input']['code_versions']["dft_tools_git"] = dft_tools_version.dft_tools_hash
if 'iteration_count' in ar['DMFT_results']:
iteration_offset = ar['DMFT_results']['iteration_count']+1
S.Sigma_iw = ar['DMFT_results']['Iterations']['Sigma_it'+str(iteration_offset-1)]
SK.dc_imp = ar['DMFT_results']['Iterations']['dc_imp'+str(iteration_offset-1)]
SK.dc_energ = ar['DMFT_results']['Iterations']['dc_energ'+str(iteration_offset-1)]
SK.chemical_potential = ar['DMFT_results']['Iterations']['chemical_potential'+str(iteration_offset-1)].real
ar['DMFT_input']["dmft_script_it"+str(iteration_offset)] = open(sys.argv[0]).read()
iteration_offset = mpi.bcast(iteration_offset)
S.Sigma_iw = mpi.bcast(S.Sigma_iw)
SK.dc_imp = mpi.bcast(SK.dc_imp)
SK.dc_energ = mpi.bcast(SK.dc_energ)
SK.chemical_potential = mpi.bcast(SK.chemical_potential)
# Calc the first G0
SK.symm_deg_gf(S.Sigma_iw,orb=0)
SK.put_Sigma(Sigma_imp = [S.Sigma_iw])
SK.calc_mu(precision=0.01)
S.G_iw << SK.extract_G_loc()[0]
SK.symm_deg_gf(S.G_iw, orb=0)
#Init the DC term and the self-energy if no previous iteration was found
if iteration_offset == 0:
dm = S.G_iw.density()
SK.calc_dc(dm, U_interact=U, J_hund=J, orb=0, use_dc_formula=DC_type,use_dc_value=DC_value)
S.Sigma_iw << SK.dc_imp[0]['up'][0,0]
mpi.report('%s DMFT cycles requested. Starting with iteration %s.'%(n_iterations,iteration_offset))
# The infamous DMFT self consistency cycle
for it in range(iteration_offset, iteration_offset + n_iterations):
mpi.report('Doing iteration: %s'%it)
# Get G0
S.G0_iw << inverse(S.Sigma_iw + inverse(S.G_iw))
# Solve the impurity problem
S.solve(h_int = H, **p)
if mpi.is_master_node():
ar['DMFT_input']['Iterations']['solver_dict_it'+str(it)] = p
ar['DMFT_results']['Iterations']['Gimp_it'+str(it)] = S.G_iw
ar['DMFT_results']['Iterations']['Gtau_it'+str(it)] = S.G_tau
ar['DMFT_results']['Iterations']['Sigma_uns_it'+str(it)] = S.Sigma_iw
# Calculate double counting
dm = S.G_iw.density()
SK.calc_dc(dm, U_interact=U, J_hund=J, orb=0, use_dc_formula=DC_type,use_dc_value=DC_value)
# Get new G
SK.symm_deg_gf(S.Sigma_iw,orb=0)
SK.put_Sigma(Sigma_imp=[S.Sigma_iw])
SK.calc_mu(precision=0.01)
S.G_iw << SK.extract_G_loc()[0]
# print densities
for sig,gf in S.G_iw:
mpi.report("Orbital %s density: %.6f"%(sig,dm[sig][0,0]))
mpi.report('Total charge of Gloc : %.6f'%S.G_iw.total_density())
if mpi.is_master_node():
ar['DMFT_results']['iteration_count'] = it
ar['DMFT_results']['Iterations']['Sigma_it'+str(it)] = S.Sigma_iw
ar['DMFT_results']['Iterations']['Gloc_it'+str(it)] = S.G_iw
ar['DMFT_results']['Iterations']['G0loc_it'+str(it)] = S.G0_iw
ar['DMFT_results']['Iterations']['dc_imp'+str(it)] = SK.dc_imp
ar['DMFT_results']['Iterations']['dc_energ'+str(it)] = SK.dc_energ
ar['DMFT_results']['Iterations']['chemical_potential'+str(it)] = SK.chemical_potential
if mpi.is_master_node(): del ar


@ -0,0 +1,9 @@
.. _nio.py:
nio.py
-----------
Download :download:`nio.py <./nio.py>`.
.. literalinclude:: nio.py
:language: python

Binary file not shown.



@ -0,0 +1,201 @@
from itertools import *
import numpy as np
import pytriqs.utility.mpi as mpi
from pytriqs.archive import *
from pytriqs.gf import *
import sys, pytriqs.version as triqs_version
from triqs_dft_tools.sumk_dft import *
from triqs_dft_tools.sumk_dft_tools import *
from pytriqs.operators.util.hamiltonians import *
from pytriqs.operators.util.U_matrix import *
from triqs_cthyb import *
import triqs_cthyb.version as cthyb_version
import triqs_dft_tools.version as dft_tools_version
from triqs_dft_tools.converters.vasp_converter import *
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
def dmft_cycle():
filename = 'nio'
Converter = VaspConverter(filename=filename)
Converter.convert_dft_input()
SK = SumkDFT(hdf_file = filename+'.h5', use_dft_blocks = False)
beta = 5.0
Sigma = SK.block_structure.create_gf(beta=beta)
SK.put_Sigma([Sigma])
G = SK.extract_G_loc()
SK.analyse_block_structure_from_gf(G, threshold = 1e-2)
for i_sh in range(len(SK.deg_shells)):
num_block_deg_orbs = len(SK.deg_shells[i_sh])
mpi.report('found {0:d} blocks of degenerate orbitals in shell {1:d}'.format(num_block_deg_orbs, i_sh))
for iblock in range(num_block_deg_orbs):
mpi.report('block {0:d} consists of orbitals:'.format(iblock))
for keys in SK.deg_shells[i_sh][iblock].keys():
mpi.report(' '+keys)
# Setup CTQMC Solver
n_orb = SK.corr_shells[0]['dim']
spin_names = ['up','down']
orb_names = [i for i in range(0,n_orb)]
#gf_struct = set_operator_structure(spin_names, orb_names, orb_hyb)
gf_struct = SK.gf_struct_solver[0]
mpi.report('Sumk to Solver: %s'%SK.sumk_to_solver)
mpi.report('GF struct sumk: %s'%SK.gf_struct_sumk)
mpi.report('GF struct solver: %s'%SK.gf_struct_solver)
S = Solver(beta=beta, gf_struct=gf_struct)
# Construct the Hamiltonian and save it in Hamiltonian_store.txt
H = Operator()
U = 8.0
J = 1.0
U_sph = U_matrix(l=2, U_int=U, J_hund=J)
U_cubic = transform_U_matrix(U_sph, spherical_to_cubic(l=2, convention=''))
Umat, Upmat = reduce_4index_to_2index(U_cubic)
H = h_int_density(spin_names, orb_names, map_operator_structure=SK.sumk_to_solver[0], U=Umat, Uprime=Upmat)
# Print some information on the master node
mpi.report('Greens function structure is: %s '%gf_struct)
mpi.report('U Matrix set to:\n%s'%Umat)
mpi.report('Up Matrix set to:\n%s'%Upmat)
# Parameters for the CTQMC Solver
p = {}
p["max_time"] = -1
p["random_name"] = ""
p["random_seed"] = 123 * mpi.rank + 567
p["length_cycle"] = 100
p["n_warmup_cycles"] = 2000
p["n_cycles"] = 20000
p["fit_max_moment"] = 4
p["fit_min_n"] = 30
p["fit_max_n"] = 50
p["perform_tail_fit"] = True
# Double Counting: 0 FLL, 1 Held, 2 AMF
DC_type = 0
DC_value = 59.0
# Prepare hdf file and and check for previous iterations
n_iterations = 1
iteration_offset = 0
if mpi.is_master_node():
ar = HDFArchive(filename+'.h5','a')
if not 'DMFT_results' in ar: ar.create_group('DMFT_results')
if not 'Iterations' in ar['DMFT_results']: ar['DMFT_results'].create_group('Iterations')
if not 'DMFT_input' in ar: ar.create_group('DMFT_input')
if not 'Iterations' in ar['DMFT_input']: ar['DMFT_input'].create_group('Iterations')
if not 'code_versions' in ar['DMFT_input']: ar['DMFT_input'].create_group('code_versions')
ar['DMFT_input']['code_versions']["triqs_version"] = triqs_version.version
ar['DMFT_input']['code_versions']["triqs_git"] = triqs_version.git_hash
ar['DMFT_input']['code_versions']["cthyb_version"] = cthyb_version.version
ar['DMFT_input']['code_versions']["cthyb_git"] = cthyb_version.cthyb_hash
ar['DMFT_input']['code_versions']["dft_tools_version"] = dft_tools_version.version
ar['DMFT_input']['code_versions']["dft_tools_git"] = dft_tools_version.dft_tools_hash
if 'iteration_count' in ar['DMFT_results']:
iteration_offset = ar['DMFT_results']['iteration_count']+1
S.Sigma_iw = ar['DMFT_results']['Iterations']['Sigma_it'+str(iteration_offset-1)]
SK.dc_imp = ar['DMFT_results']['Iterations']['dc_imp'+str(iteration_offset-1)]
SK.dc_energ = ar['DMFT_results']['Iterations']['dc_energ'+str(iteration_offset-1)]
SK.chemical_potential = ar['DMFT_results']['Iterations']['chemical_potential'+str(iteration_offset-1)].real
ar['DMFT_input']["dmft_script_it"+str(iteration_offset)] = open(sys.argv[0]).read()
iteration_offset = mpi.bcast(iteration_offset)
S.Sigma_iw = mpi.bcast(S.Sigma_iw)
SK.dc_imp = mpi.bcast(SK.dc_imp)
SK.dc_energ = mpi.bcast(SK.dc_energ)
SK.chemical_potential = mpi.bcast(SK.chemical_potential)
# Calc the first G0
SK.symm_deg_gf(S.Sigma_iw,orb=0)
SK.put_Sigma(Sigma_imp = [S.Sigma_iw])
SK.calc_mu(precision=0.01)
S.G_iw << SK.extract_G_loc()[0]
SK.symm_deg_gf(S.G_iw, orb=0)
#Init the DC term and the self-energy if no previous iteration was found
if iteration_offset == 0:
dm = S.G_iw.density()
SK.calc_dc(dm, U_interact=U, J_hund=J, orb=0, use_dc_formula=DC_type,use_dc_value=DC_value)
S.Sigma_iw << SK.dc_imp[0]['up'][0,0]
mpi.report('%s DMFT cycles requested. Starting with iteration %s.'%(n_iterations,iteration_offset))
# The infamous DMFT self consistency cycle
for it in range(iteration_offset, iteration_offset + n_iterations):
mpi.report('Doing iteration: %s'%it)
# Get G0
S.G0_iw << inverse(S.Sigma_iw + inverse(S.G_iw))
# Solve the impurity problem
S.solve(h_int = H, **p)
if mpi.is_master_node():
ar['DMFT_input']['Iterations']['solver_dict_it'+str(it)] = p
ar['DMFT_results']['Iterations']['Gimp_it'+str(it)] = S.G_iw
ar['DMFT_results']['Iterations']['Gtau_it'+str(it)] = S.G_tau
ar['DMFT_results']['Iterations']['Sigma_uns_it'+str(it)] = S.Sigma_iw
# Calculate double counting
dm = S.G_iw.density()
SK.calc_dc(dm, U_interact=U, J_hund=J, orb=0, use_dc_formula=DC_type,use_dc_value=DC_value)
# Get new G
SK.symm_deg_gf(S.Sigma_iw,orb=0)
SK.put_Sigma(Sigma_imp=[S.Sigma_iw])
SK.calc_mu(precision=0.01)
S.G_iw << SK.extract_G_loc()[0]
# print densities
for sig,gf in S.G_iw:
mpi.report("Orbital %s density: %.6f"%(sig,dm[sig][0,0]))
mpi.report('Total charge of Gloc : %.6f'%S.G_iw.total_density())
if mpi.is_master_node():
ar['DMFT_results']['iteration_count'] = it
ar['DMFT_results']['Iterations']['Sigma_it'+str(it)] = S.Sigma_iw
ar['DMFT_results']['Iterations']['Gloc_it'+str(it)] = S.G_iw
ar['DMFT_results']['Iterations']['G0loc_it'+str(it)] = S.G0_iw
ar['DMFT_results']['Iterations']['dc_imp'+str(it)] = SK.dc_imp
ar['DMFT_results']['Iterations']['dc_energ'+str(it)] = SK.dc_energ
ar['DMFT_results']['Iterations']['chemical_potential'+str(it)] = SK.chemical_potential
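# --- Charge self-consistency specific part: refine the chemical potential,
# write the charge-density correction (file GAMMA) for VASP, and evaluate the
# correlation energy (Migdal formula) and the double-counting energy.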
if mpi.is_master_node():
print 'calculating mu...'
SK.chemical_potential = SK.calc_mu( precision = 0.000001 )
if mpi.is_master_node():
print 'calculating GAMMA'
SK.calc_density_correction(dm_type='vasp')
if mpi.is_master_node():
print 'calculating energy corrections'
correnerg = 0.5 * (S.G_iw * S.Sigma_iw).total_density()
dm = S.G_iw.density() # compute the density matrix of the impurity problem
SK.calc_dc(dm, U_interact=U, J_hund=J, orb=0, use_dc_formula=DC_type,use_dc_value=DC_value)
dc_energ = SK.dc_energ[0]
if mpi.is_master_node():
ar['DMFT_results']['Iterations']['corr_energy_it'+str(it)] = correnerg
ar['DMFT_results']['Iterations']['dc_energy_it'+str(it)] = dc_energ
if mpi.is_master_node(): del ar
return correnerg, dc_energ


@ -0,0 +1,9 @@
.. _nio_csc.py:
nio_csc.py
-------------
Download :download:`nio_csc.py <./nio_csc.py>`.
.. literalinclude:: nio_csc.py
:language: python


@ -0,0 +1,28 @@
[General]
BASENAME = nio
DOSMESH = -21 55 400
[Group 1]
SHELLS = 1 2
EWINDOW = -9 2
NORMION = False
NORMALIZE = True
BANDS = 2 10
[Shell 1] # Ni d shell
LSHELL = 2
IONS = 1
CORR = True
TRANSFORM = 1.0 0.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0 0.0
0.0 0.0 1.0 0.0 0.0
0.0 0.0 0.0 1.0 0.0
0.0 0.0 0.0 0.0 1.0
[Shell 2] # O p shell
LSHELL = 1
IONS = 2
CORR = False
TRANSFORM = 1.0 0.0 0.0
0.0 1.0 0.0
0.0 0.0 1.0


@ -0,0 +1,9 @@
.. _plo.cfg:
plo.cfg
-----------
Download :download:`plo.cfg<./plo.cfg>`.
.. literalinclude:: plo.cfg
:language: bash

doc/tutorials/nio_csc.rst Normal file

@ -0,0 +1,99 @@
.. _nio_csc:
DFT and projections
==================================================
We will perform DFT+DMFT calculations for the charge-transfer insulator NiO. We start from scratch and provide all necessary input files: first for a single-shot calculation.
.. (and then for charge-selfconsistency).
VASP setup
-------------------------------
We start by running a simple VASP calculation to converge the charge density initially.
Use the :ref:`INCAR`, :ref:`POSCAR`, and :ref:`KPOINTS` provided and use your
own :file:`POTCAR` file.
Let us take a look at the :file:`INCAR`, where we have to specify the local orbitals that serve as the basis
for our many-body calculation.
.. literalinclude:: images_scripts/INCAR
`LORBIT = 14` takes care of optimizing the projectors in the energy window defined
by `EMIN` and `EMAX`. We switch off all symmetries with `ISYM=-1` since symmetries
are not implemented in the later DMFT scripts. Finally, we select the relevant orbitals
for atom 1 (Ni, d-orbitals) and 2 (O, p-orbitals) by the two `LOCPROJ` lines.
For details refer to the VASP wiki on the `LOCPROJ <https://cms.mpi.univie.ac.at/wiki/index.php/LOCPROJ>`_ flag. The projectors are stored in the file `LOCPROJ`.
plovasp
------------------------------
Next, we postprocess the projectors, which VASP stored in the file `LOCPROJ`.
We do this by invoking :program:`plovasp plo.cfg`, which is configured by an input file, here named :ref:`plo.cfg`.
.. literalinclude:: images_scripts/plo.cfg
Here, in `[General]` we set the basename and the grid for calculating the density of
states. In `[Group 1]` we define a group of two shells which are orthonormalized with
respect to states in an energy window from `-9` to `2` for all ions simultaneously
(`NORMION = False`). We define the two shells, which correspond to the Ni d states
and the O p states. Only the Ni shell is treated as correlated (`CORR = True`), i.e.,
is supplemented with a Coulomb interaction later in the DMFT calculation.
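After a successful run, :program:`plovasp` writes the processed projector data to plain-text files named after the basename set in `[General]` (here typically :file:`nio.ctrl` and :file:`nio.pg1`), which are read by the converter in the next step.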
Converting to hdf5 file
-------------------------------
We gather the output generated by :program:`plovasp` into an hdf5 archive that :program:`dft_tools` is able to read. We do this by running the script :ref:`converter.py` with :program:`python converter.py`:
.. literalinclude:: images_scripts/converter.py
Now we are all set to perform a DMFT calculation.
DMFT
==================================================
dmft script
------------------------------
Since the Python script for performing the DMFT loop closely resembles the one presented in the tutorial on :ref:`srvo3`, we will not go into detail here but simply provide the script :ref:`nio.py`. Following Kunes et al. in `PRB 75 165115 (2007) <https://journals.aps.org/prb/abstract/10.1103/PhysRevB.75.165115>`_ we use :math:`U=8` and :math:`J=1`. We select :math:`\beta=5` instead of :math:`\beta=10` to ease the problem slightly. For simplicity we fix the double-counting potential to :math:`\mu_{DC}=59` eV by::
DC_value = 59.0
SK.calc_dc(dm, U_interact=U, J_hund=J, orb=0, use_dc_value=DC_value)
For sensible results, run this script in parallel on at least 20 cores. As a quick check of the results, we can compare the orbital occupations from the paper cited above (:math:`n_{eg} = 0.54` and :math:`n_{t2g}=1.0`) with those from the cthyb output (check the lines `Orbital up_0 density:` for a t2g orbital and `Orbital up_2 density:` for an eg orbital). They should coincide well.
Local lattice Green's function for all projected orbitals
----------------------------------------------------------
We calculate the local lattice Green's function - now also for the uncorrelated orbitals, i.e., the O p states - for which we use the script :ref:`NiO_local_lattice_GF.py`. The result is saved in the h5 file as `G_latt_orb_it<n_it>`, where `<n_it>` is the number of the last DMFT iteration.
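The stored Green's function can be read back from the archive afterwards, for example as follows (file name and key names as used by the script above; shown only as an illustration)::

    from pytriqs.archive import HDFArchive

    # read the orbitally resolved lattice GF of the last stored iteration
    with HDFArchive('nio.h5', 'r') as ar:
        last_it = ar['DMFT_results']['iteration_count']
        G_latt_orb = ar['DMFT_results']['Iterations']['G_latt_orb_it%i' % last_it]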
Spectral function on real axis: MaxEnt
---------------------------------------
To compare with results from the literature we make use of the `maxent triqs application <https://triqs.github.io/maxent/master/>`_ and calculate the spectral function on the real axis. Use the script :ref:`maxent.py` to perform a crude but quick calculation, using a linear real-frequency mesh and the line-fit analyzer to determine the optimal :math:`\alpha`. The output is saved in the h5 file in `DMFT_results/Iterations/G_latt_orb_w_o<n_o>_it<n_it>`, where `<n_o>` is the number of the orbital and `<n_it>` is again the number of the last iteration. The real-axis mesh is stored in `DMFT_results/Iterations/w_it<n_it>`.
.. image:: images_scripts/nio_Aw.png
:width: 400
:align: center
.. Charge self-consistent DMFT
.. ==================================================
.. In this part we will perform charge self-consistent DMFT calculations. To do so we have to adapt the VASP `INCAR` such that :program:`VASP` reads the updated charge density after each step. We add the lines
.. ICHARG = 5
.. NELM = 1000
.. NELMIN = 1000
.. which makes VASP wait after each step of its iterative diagonalization until the file vasp.lock is created. It then reads the update of the charge density from the file `GAMMA`. VASP is terminated by an external script after the desired number of steps, so we deactivate all automatic stopping criteria by setting the number of steps to a very high number.
.. We take the respective converged DFT and DMFT calculations from before as a starting point, i.e., we copy the `CHGCAR` and `nio.h5` together with the other :program:`VASP` input files and :file:`plo.cfg` into a new directory. We use a script called :program:`vasp_dmft` to invoke :program:`VASP` in the background and to start the DMFT calculation together with :program:`plovasp` and the converter. This script assumes that the DMFT script contains a function `dmft_cycle()` and also performs the conversion from text files to the h5 file. Additionally, we have to add a few lines to calculate the density correction and the correlation energy. We adapt the script straightforwardly and call it :ref:`nio_csc.py`. The most important new lines are
.. SK.chemical_potential = SK.calc_mu( precision = 0.000001 )
.. SK.calc_density_correction(dm_type='vasp')
.. correnerg = 0.5 * (S.G_iw * S.Sigma_iw).total_density()
.. where the chemical potential is determined to a greater precision than before, and the correction to the DFT density matrix is calculated and written to the file :file:`GAMMA`. The correlation energy is calculated via the Migdal-Galitskii formula. We also slightly increase the tolerance for the detection of blocks since the DFT calculation now includes some QMC noise.
.. We can start the whole machinery by executing
.. vasp_dmft -n <n_procs> -i <n_iters> -p <vasp_exec> nio_csc.py


@ -84,7 +84,8 @@ class ConfigParameters:
self.sh_optional = {
'transform': ('tmatrix', lambda s: self.parse_string_tmatrix(s, real=True)),
'transfile': ('tmatrices', self.parse_file_tmatrix)}
'transfile': ('tmatrices', self.parse_file_tmatrix),
'corr': ('corr', self.parse_string_logical, True)}
self.gr_required = {
'shells': ('shells', lambda s: map(int, s.split())),
@ -92,12 +93,16 @@ class ConfigParameters:
self.gr_optional = {
'normalize' : ('normalize', self.parse_string_logical, True),
'normion' : ('normion', self.parse_string_logical, True)}
'normion' : ('normion', self.parse_string_logical, True),
'complement' : ('complement', self.parse_string_logical, False),
'bands': ('bands', self.parse_band_window)}
self.gen_optional = {
'basename' : ('basename', str, 'vasp'),
'efermi' : ('efermi', float),
'dosmesh': ('dosmesh', self.parse_string_dosmesh)}
'dosmesh': ('dosmesh', self.parse_string_dosmesh),
'hk': ('hk', self.parse_string_logical, False)}
#
# Special parsers
@ -205,6 +210,21 @@ class ConfigParameters:
assert ftmp[0] < ftmp[1], "The first float in EWINDOW must be smaller than the second one"
return tuple(ftmp)
################################################################################
#
# parse_band_window()
#
################################################################################
def parse_band_window(self, par_str):
"""
Band window is given by two ints, with the first one being smaller
than the second one.
"""
ftmp = map(int, par_str.split())
assert len(ftmp) == 2, "BANDS must be specified by exactly two ints"
assert ftmp[0] < ftmp[1], "The first int in BANDS must be smaller than the second one"
return tuple(ftmp)
################################################################################
#
# parse_string_tmatrix()


@ -38,6 +38,7 @@ import itertools as it
import numpy as np
from proj_group import ProjectorGroup
from proj_shell import ProjectorShell
from proj_shell import ComplementShell
np.set_printoptions(suppress=True)
@ -80,6 +81,7 @@ def check_data_consistency(pars, el_struct):
else:
errmsg = "Projector for isite = %s, l = %s does not match PROJCAR"%(ion + 1, lshell)
raise Exception(errmsg)
################################################################################
#
@ -105,7 +107,8 @@ def generate_plo(conf_pars, el_struct):
# eigvals(nktot, nband, ispin) are defined with respect to the Fermi level
eigvals = el_struct.eigvals - efermi
# check if at least one shell is correlated
assert np.any([shell['corr'] for shell in conf_pars.shells]), 'at least one shell has to be CORR = True'
nshell = len(conf_pars.shells)
print
print " Generating %i shell%s..."%(nshell, '' if nshell == 1 else 's')
@ -117,39 +120,58 @@ def generate_plo(conf_pars, el_struct):
print " Orbital l : %i"%(pshell.lorb)
print " Number of ions: %i"%(pshell.nion)
print " Dimension : %i"%(pshell.ndim)
print " Correlated : %r"%(pshell.corr)
pshells.append(pshell)
pgroups = []
for gr_par in conf_pars.groups:
pgroup = ProjectorGroup(gr_par, pshells, eigvals)
pgroup.orthogonalize()
if pgroup.complement:
pgroup.calc_complement(eigvals)
if conf_pars.general['hk']:
pgroup.calc_hk(eigvals)
#testout = 'hk.out.h5'
#from pytriqs.archive import HDFArchive
#with HDFArchive(testout, 'w') as h5test:
# h5test['hk'] = pgroup.hk
# DEBUG output
print "Density matrix:"
dm_all, ov_all = pshells[pgroup.ishells[0]].density_matrix(el_struct)
nimp = 0.0
spin_fac = 2 if dm_all.shape[0] == 1 else 1
for io in xrange(dm_all.shape[1]):
print " Site %i"%(io + 1)
dm = spin_fac * dm_all[:, io, : ,:].sum(0)
for row in dm:
print ''.join(map("{0:12.7f}".format, row))
ndm = dm.trace()
nimp += ndm
print " trace: ", ndm
ov_all = []
for ish in pgroup.ishells:
if not isinstance(pshells[pgroup.ishells[ish]],ComplementShell):
print " Shell %i"%(ish + 1)
dm_all, ov_all_ = pshells[ish].density_matrix(el_struct)
ov_all.append(ov_all_[0])
spin_fac = 2 if dm_all.shape[0] == 1 else 1
for io in xrange(dm_all.shape[1]):
print " Site %i"%(io + 1)
dm = spin_fac * dm_all[:, io, : ,:].sum(0)
for row in dm:
print ''.join(map("{0:14.7f}".format, row))
ndm = dm.trace()
if pshells[ish].corr:
nimp += ndm
print " trace: ", ndm
print
print " Impurity density:", nimp
print
print "Overlap:"
for io, ov in enumerate(ov_all[0]):
for io, ov in enumerate(ov_all):
print " Site %i"%(io + 1)
print ov
print ov[0,...]
print
print "Local Hamiltonian:"
loc_ham = pshells[pgroup.ishells[0]].local_hamiltonian(el_struct)
for io in xrange(loc_ham.shape[1]):
print " Site %i"%(io + 1)
for row in loc_ham[:, io, :, :].sum(0):
print ''.join(map("{0:12.7f}".format, row))
for ish in pgroup.ishells:
if not isinstance(pshells[pgroup.ishells[ish]],ComplementShell):
print " Shell %i"%(ish + 1)
loc_ham = pshells[pgroup.ishells[ish]].local_hamiltonian(el_struct)
for io in xrange(loc_ham.shape[1]):
print " Site %i"%(io + 1)
for row in loc_ham[:, io, :, :].sum(0):
print ''.join(map("{0:14.7f}".format, row))
# END DEBUG output
if 'dosmesh' in conf_pars.general:
print
@ -164,12 +186,15 @@ def generate_plo(conf_pars, el_struct):
n_points = mesh_pars['n_points']
emesh = np.linspace(dos_emin, dos_emax, n_points)
dos = pshells[pgroup.ishells[0]].density_of_states(el_struct, emesh)
de = emesh[1] - emesh[0]
ntot = (dos[1:,...] + dos[:-1,...]).sum(0) / 2 * de
print " Total number of states:", ntot
for io in xrange(dos.shape[2]):
np.savetxt('pdos_%i.dat'%(io), np.vstack((emesh.T, dos[:, 0, io, :].T)).T)
for ish in pgroup.ishells:
if not isinstance(pshells[pgroup.ishells[ish]],ComplementShell) or True:
print " Shell %i"%(ish + 1)
dos = pshells[pgroup.ishells[ish]].density_of_states(el_struct, emesh)
de = emesh[1] - emesh[0]
ntot = (dos[1:,...] + dos[:-1,...]).sum(0) / 2 * de
print " Total number of states:", ntot
for io in xrange(dos.shape[2]):
np.savetxt('pdos_%i_%i.dat'%(ish,io), np.vstack((emesh.T, dos[:, 0, io, :].T)).T)
pgroups.append(pgroup)
@ -186,6 +211,8 @@ def output_as_text(pars, el_struct, pshells, pgroups):
"""
ctrl_output(pars, el_struct, len(pgroups))
plo_output(pars, el_struct, pshells, pgroups)
if pars.general['hk']:
hk_output(pars, el_struct, pgroups)
# TODO: k-points with weights should be stored once and for all
@ -308,8 +335,13 @@ def plo_output(conf_pars, el_struct, pshells, pgroups):
print " Storing PLO-group file '%s'..."%(plo_fname)
head_dict = {}
head_dict['ewindow'] = (pgroup.emin, pgroup.emax)
head_dict['nb_max'] = pgroup.nb_max
if 'bands' in conf_pars.groups[ig]:
head_dict['bandwindow'] = (pgroup.ib_min, pgroup.ib_max)
else:
head_dict['ewindow'] = (pgroup.emin, pgroup.emax)
# Number of electrons within the window
head_dict['nelect'] = pgroup.nelect_window(el_struct)
@ -322,6 +354,7 @@ def plo_output(conf_pars, el_struct, pshells, pgroups):
sh_dict['shell_index'] = ish
sh_dict['lorb'] = shell.lorb
sh_dict['ndim'] = shell.ndim
sh_dict['corr'] = shell.corr
# Convert ion indices from the internal representation (starting from 0)
# to conventional VASP representation (starting from 1)
ion_output = [io + 1 for io in shell.ion_list]
@ -342,7 +375,11 @@ def plo_output(conf_pars, el_struct, pshells, pgroups):
f.write("#END OF HEADER\n")
# Eigenvalues within the window
f.write("# Eigenvalues within the energy window: %s, %s\n"%(pgroup.emin, pgroup.emax))
if 'bands' in conf_pars.groups[ig]:
f.write("# Eigenvalues within the band window: %s, %s\n"%(pgroup.ib_min+1, pgroup.ib_max+1))
else:
f.write("# Eigenvalues within the energy window: %s, %s\n"%(pgroup.emin, pgroup.emax))
nk, nband, ns_band = el_struct.eigvals.shape
for isp in xrange(ns_band):
f.write("# is = %i\n"%(isp + 1))
@ -376,4 +413,79 @@ def plo_output(conf_pars, el_struct, pshells, pgroups):
f.write("{0:16.10f}{1:16.10f}\n".format(p.real, p.imag))
f.write("\n")
################################################################################
#
# hk_output
#
################################################################################
def hk_output(conf_pars, el_struct, pgroups):
"""
Outputs HK into text file.
Filename is defined by <basename> that is passed from config-file.
The H(k) for each group is stored in a '<basename>.hk<Ng>' file. The format is
similar to the Hk dft_tools format, but does not store info
about correlated shells and irreps:
nk # number of k-points
n_el # electron density
n_sh # number of total atomic shells
at sort l dim # atom, sort, l, dim
at sort l dim # atom, sort, l, dim
After these header lines, the file has to contain the Hamiltonian matrix
in orbital space. The standard convention is that you give for each k-point
first the matrix of the real part, then the matrix of the imaginary part,
and then move on to the next k-point.
"""
for ig, pgroup in enumerate(pgroups):
hk_fname = conf_pars.general['basename'] + '.hk%i'%(ig + 1)
print " Storing HK-group file '%s'..."%(hk_fname)
head_shells = []
for ish in pgroup.ishells:
shell = pgroup.shells[ish]
ion_output = [io + 1 for io in shell.ion_list]
for iion in ion_output:
sh_dict = {}
sh_dict['shell_index'] = ish
sh_dict['lorb'] = shell.lorb
sh_dict['ndim'] = shell.ndim
# Convert ion indices from the internal representation (starting from 0)
# to conventional VASP representation (starting from 1)
# Derive sorts from equivalence classes
sh_dict['ion_list'] = ion_output
sh_dict['ion_sort'] = shell.ion_sort
head_shells.append(sh_dict)
with open(hk_fname, 'wt') as f:
# Eigenvalues within the window
nk, nband, ns_band = el_struct.eigvals.shape
f.write('%i # number of kpoints\n'%nk)
f.write('{0:0.4f} # electron density\n'.format(pgroup.nelect_window(el_struct)))
f.write('%i # number of shells\n'%len(head_shells))
for head in head_shells:
f.write('%i %i %i %i # atom sort l dim\n'%(head['ion_list'][0],head['ion_sort'][0],head['lorb'],head['ndim']))
norbs = pgroup.hk.shape[2]
for isp in xrange(ns_band):
for ik in xrange(nk):
for io in xrange(norbs):
for iop in xrange(norbs):
f.write(" {0:14.10f}".format(pgroup.hk[isp,ik,io,iop].real))
f.write("\n")
for io in xrange(norbs):
for iop in xrange(norbs):
f.write(" {0:14.10f}".format(pgroup.hk[isp,ik,io,iop].imag))
f.write("\n")


@ -30,7 +30,7 @@ r"""
Storage and manipulation of projector groups.
"""
import numpy as np
from proj_shell import ComplementShell
np.set_printoptions(suppress=True)
################################################################################
@ -62,15 +62,37 @@ class ProjectorGroup:
self.ishells = gr_pars['shells']
self.ortho = gr_pars['normalize']
self.normion = gr_pars['normion']
self.complement = gr_pars['complement']
self.shells = shells
# Determine the minimum and maximum band numbers
ib_win, ib_min, ib_max = self.select_bands(eigvals)
if 'bands' in gr_pars:
nk, nband, ns_band = eigvals.shape
ib_win = np.zeros((nk, ns_band, 2), dtype=np.int32)
ib_win[:,:,0] = gr_pars['bands'][0]-1
ib_win[:,:,1] = gr_pars['bands'][1]-1
ib_min = gr_pars['bands'][0] - 1
ib_max = gr_pars['bands'][1] - 1
else:
ib_win, ib_min, ib_max = self.select_bands(eigvals)
self.ib_win = ib_win
self.ib_min = ib_min
self.ib_max = ib_max
self.nb_max = ib_max - ib_min + 1
if self.complement:
n_bands = self.ib_win[:,:,1] - self.ib_win[:,:,0]+1
n_orbs = sum([x.ndim for x in self.shells])
assert np.all( n_bands == n_bands[0,0] ), "At each k-point the same number of bands has to be selected for calculating the complement (to end up with an equal number of orbitals at each k-point)."
if n_orbs == n_bands[0,0]:
self.complement = False
print "\nWARNING: The total number of orbitals in this group is "
print "equal to the number of bands. Setting COMPLEMENT to FALSE!\n"
# Select projectors within the energy window
for ish in self.ishells:
@ -155,7 +177,131 @@ class ProjectorGroup:
nlm = i2 - i1 + 1
shell = self.shells[ish]
shell.proj_win[ion, isp, ik, :nlm, :nb] = p_orth[i1:i2, :nb]
################################################################################
#
# calc_hk
#
################################################################################
def calc_hk(self, eigvals):
"""
Calculate H(k) for a group by applying the projectors P
to the eigenvalues eps.
H_ij(k) = sum_l P*_il eps_l P_lj
"""
# here we abuse the get_block_matrix_map(), however, it only works
# if self.normion is false
temp = self.normion
self.normion = False
block_maps, ndim = self.get_block_matrix_map()
self.normion = temp
_, ns, nk, _, _ = self.shells[0].proj_win.shape
self.hk = np.zeros((ns,nk,ndim,ndim), dtype=np.complex128)
# Note that 'ns' and 'nk' are the same for all shells
for isp in xrange(ns):
for ik in xrange(nk):
bmin = self.ib_win[ik, isp, 0]
bmax = self.ib_win[ik, isp, 1]+1
nb = bmax - bmin
p_mat = np.zeros((ndim, nb), dtype=np.complex128)
#print(bmin,bmax,nb)
# Combine all projectors of the group to one block projector
for bl_map in block_maps:
p_mat[:, :] = 0.0j # !!! Clean-up from the last k-point and block!
for ibl, block in enumerate(bl_map):
i1, i2 = block['bmat_range']
ish, ion = block['shell_ion']
nlm = i2 - i1 + 1
shell = self.shells[ish]
p_mat[i1:i2, :nb] = shell.proj_win[ion, isp, ik, :nlm, :nb]
self.hk[isp,ik,:,:] = np.dot(p_mat*eigvals[ik,bmin:bmax,isp],
p_mat.transpose().conjugate())
################################################################################
#
# complement
#
################################################################################
def calc_complement(self,eigvals):
"""
Calculate the complement for a group of projectors.
This leads to quadratic (square) projectors P = <l|n> by using a Gram-Schmidt orthogonalization.
The projector on the orthogonal complement of the existing projectors
{|l>} is P^u = 1 - sum_l |l><l|
We get candidates for complement projectors by applying P^u to a Bloch
state |n>: |l*> = P^u |n>. For numerical stability we select that Bloch
state which leads to the |l*> with the largest norm (that corresponds to
that Bloch state with the smallest overlap with the space spanned by {|l>})
We normalize |l*> and add it to {|l>}. We do so until we have as many
|l> states as we have {|n>} states.
"""
print '\nCalculating complement\n'
block_maps, ndim = self.get_block_matrix_map()
_, ns, nk, _, _ = self.shells[0].proj_win.shape
p_mat = np.zeros((ndim, self.nb_max), dtype=np.complex128)
p_full = np.zeros((1,ns,nk,self.nb_max, self.nb_max), dtype=np.complex128)
# Note that 'ns' and 'nk' are the same for all shells
for isp in xrange(ns):
for ik in xrange(nk):
bmin = self.ib_win[ik, isp, 0]
bmax = self.ib_win[ik, isp, 1]+1
nb = bmax - bmin
# Combine all projectors of the group to one block projector
for bl_map in block_maps:
p_mat[:, :] = 0.0j # !!! Clean-up from the last k-point and block!
for ibl, block in enumerate(bl_map):
i1, i2 = block['bmat_range']
ish, ion = block['shell_ion']
nlm = i2 - i1 + 1
shell = self.shells[ish]
p_mat[i1:i2, :nb] = shell.proj_win[ion, isp, ik, :nlm, :nb]
orbs_done = 1*ndim
p_full[0,isp,ik,:ndim,:] = p_mat
while orbs_done < self.nb_max:
#We calculate the overlap of all bloch states: sum_l <n|l><l|m>
overlap = np.dot(p_full[0,isp,ik,:orbs_done,:].transpose().conjugate(),p_full[0,isp,ik,:orbs_done,:])
# work is the projector onto the orthogonal complement <n| ( 1 - sum_l |l><l| ) |m>
work = np.eye(self.nb_max) - overlap
# calculate the norm of the projected bloch function
norm = np.sqrt(np.sum(work*work.transpose(),axis=1))
# select the bloch function leading to the largest norm
max_ind = np.argmax(norm)
# normalize and put it to the projectors
p_full[0,isp,ik,orbs_done,:] = work[:,max_ind].conjugate()/norm[max_ind]
orbs_done += 1
sh_pars = {}
sh_pars['lshell'] = -1
sh_pars['ions'] = {'nion':1,'ion_list':[[1]]}
sh_pars['user_index'] = 'complement'
sh_pars['corr'] = False
sh_pars['ib_min'] = bmin
sh_pars['ib_max'] = bmax
sh_pars['ib_win'] = self.ib_win
self.shells.append(ComplementShell(sh_pars,p_full[:,:,:,ndim:,:],False))
self.ishells.append(self.ishells[-1]+1)
################################################################################
#
# gen_block_matrix_map


@ -72,7 +72,8 @@ class ProjectorShell:
self.lorb = sh_pars['lshell']
self.ions = sh_pars['ions']
self.user_index = sh_pars['user_index']
self.nc_flag = nc_flag
self.corr = sh_pars['corr']
self.nc_flag = nc_flag
# try:
# self.tmatrix = sh_pars['tmatrix']
# except KeyError:
@ -449,5 +450,61 @@ class ProjectorShell:
return dos
################################################################################
################################################################################
#
# class ProjectorShell
#
################################################################################
################################################################################
class ComplementShell(ProjectorShell):
"""
Container of projectors related to a complement shell.
Parameters:
- sh_pars (dict) : shell parameters from the config-file
- proj_compl (numpy.array) : array of complement projectors
"""
def __init__(self, sh_pars, proj_compl, nc_flag):
self.lorb = sh_pars['lshell']
self.ions = sh_pars['ions']
self.user_index = sh_pars['user_index']
self.corr = sh_pars['corr']
self.nc_flag = nc_flag
self.ib_min = sh_pars['ib_min']
self.ib_max = sh_pars['ib_max']
self.ib_win = sh_pars['ib_win']
#self.lm1 = self.lorb**2
#self.lm2 = (self.lorb+1)**2
self.nion = self.ions['nion']
# Extract ion list and equivalence classes (ion sorts)
self.ion_list = sorted(it.chain(*self.ions['ion_list']))
self.ion_sort = []
for ion in self.ion_list:
for icl, eq_cl in enumerate(self.ions['ion_list']):
if ion in eq_cl:
self.ion_sort.append(icl + 1) # Enumerate classes starting from 1
break
self.ndim = proj_compl.shape[3]
self.proj_win = proj_compl
def extract_tmatrices(self, sh_pars):
raise Exception('not implemented')
def local_hamiltonian(self, el_struct, site_diag=True, spin_diag=True):
raise Exception('not implemented')
def density_matrix(self, el_struct, site_diag=True, spin_diag=True):
raise Exception('not implemented')
#def density_of_states(self, el_struct, emesh):
# raise Exception('not implemented')


@ -350,7 +350,7 @@ class Poscar:
"""
# Convenience local function
def readline_remove_comments():
return f.next().split('!')[0].strip()
return f.next().split('!')[0].split('#')[0].strip()
# Add a slash to the path name if necessary
if vasp_dir[-1] != '/':


@ -43,9 +43,36 @@ class VaspConverter(ConverterTools):
dft_subgrp = 'dft_input', symmcorr_subgrp = 'dft_symmcorr_input',
parproj_subgrp='dft_parproj_input', symmpar_subgrp='dft_symmpar_input',
bands_subgrp = 'dft_bands_input', misc_subgrp = 'dft_misc_input',
transp_subgrp = 'dft_transp_input', repacking = False):
transp_subgrp = 'dft_transp_input', repacking = False,
proj_or_hk='proj'):
"""
Init of the class. Variable filename gives the root of all filenames, e.g. case.ctqmcout, case.h5, and so on.
Parameters
----------
filename : string
Base name of DFT files.
hdf_filename : string, optional
Name of hdf5 archive to be created.
dft_subgrp : string, optional
Name of subgroup storing necessary DFT data.
symmcorr_subgrp : string, optional
Name of subgroup storing correlated-shell symmetry data.
parproj_subgrp : string, optional
Name of subgroup storing partial projector data.
symmpar_subgrp : string, optional
Name of subgroup storing partial-projector symmetry data.
bands_subgrp : string, optional
Name of subgroup storing band data.
misc_subgrp : string, optional
Name of subgroup storing miscellaneous DFT data.
transp_subgrp : string, optional
Name of subgroup storing transport data.
repacking : boolean, optional
Does the hdf5 archive need to be repacked to save space?
proj_or_hk : string, optional
Select scheme to convert between KS bands and localized orbitals.
"""
assert type(filename)==StringType, "Please provide the DFT files' base name as a string."
@ -61,15 +88,22 @@ class VaspConverter(ConverterTools):
self.bands_subgrp = bands_subgrp
self.misc_subgrp = misc_subgrp
self.transp_subgrp = transp_subgrp
assert (proj_or_hk == 'proj') or (proj_or_hk == 'hk'), "proj_or_hk has to be 'proj' or 'hk'"
self.proj_or_hk = proj_or_hk
# Checks if h5 file is there and repacks it if wanted:
if (os.path.exists(self.hdf_file) and repacking):
ConverterTools.repack(self)
# this is to test pull request
def read_data(self, fh):
"""
Generator for reading plain data.
Parameters
----------
fh : file object
file object which is read in.
"""
for line in fh:
line_ = line.strip()
@ -82,6 +116,11 @@ class VaspConverter(ConverterTools):
def read_header_and_data(self, filename):
"""
Opens a file and returns a JSON-header and the generator for the plain data.
Parameters
----------
filename : string
file name of the file to read.
"""
fh = open(filename, 'rt')
header = ""
@ -147,18 +186,24 @@ class VaspConverter(ConverterTools):
gr_file = self.basename + '.pg%i'%(ig + 1)
jheader, rf = self.read_header_and_data(gr_file)
gr_head = json.loads(jheader)
e_win = gr_head['ewindow']
nb_max = gr_head['nb_max']
p_shells = gr_head['shells']
density_required = gr_head['nelect']
charge_below = 0.0 # This is not defined in VASP interface
# Note that in the DftTools convention each site gives a separate correlated shell!
n_shells = sum([len(sh['ion_list']) for sh in p_shells])
n_corr_shells = sum([len(sh['ion_list']) for sh in p_shells])
shells = []
corr_shells = []
shion_to_corr_shell = [[] for ish in xrange(len(p_shells))]
shion_to_shell = [[] for ish in xrange(len(p_shells))]
cr_shion_to_shell = [[] for ish in xrange(len(p_shells))]
shorbs_to_globalorbs = [[] for ish in xrange(len(p_shells))]
last_dimension = 0
crshorbs_to_globalorbs = []
icsh = 0
for ish, sh in enumerate(p_shells):
ion_list = sh['ion_list']
@ -168,16 +213,25 @@ class VaspConverter(ConverterTools):
# We set all sites inequivalent
pars['sort'] = sh['ion_sort'][i]
pars['l'] = sh['lorb']
#pars['corr'] = sh['corr']
pars['dim'] = sh['ndim']
#pars['ion_list'] = sh['ion_list']
pars['SO'] = SO
# TODO: check what 'irep' entry does (it seems to be very specific to dmftproj)
pars['irep'] = 0
corr_shells.append(pars)
shion_to_corr_shell[ish].append(i)
shells.append(pars)
shion_to_shell[ish].append(i)
shorbs_to_globalorbs[ish].append([last_dimension,
last_dimension+sh['ndim']])
last_dimension = last_dimension+sh['ndim']
if sh['corr']:
corr_shells.append(pars)
# TODO: generalize this to the case of multiple shell groups
n_shells = n_corr_shells # No non-correlated shells at the moment
shells = corr_shells
n_corr_shells = len(corr_shells)
n_orbs = sum([sh['dim'] for sh in shells])
# FIXME: atomic sorts in Wien2K are not the same as in VASP.
# A symmetry analysis from OUTCAR or symmetry file should be used
@ -217,6 +271,7 @@ class VaspConverter(ConverterTools):
band_window = [numpy.zeros((n_k, 2), dtype=int) for isp in xrange(n_spin_blocs)]
n_orbitals = numpy.zeros([n_k, n_spin_blocs], numpy.int)
for isp in xrange(n_spin_blocs):
for ik in xrange(n_k):
ib1, ib2 = int(rf.next()), int(rf.next())
@ -226,11 +281,34 @@ class VaspConverter(ConverterTools):
for ib in xrange(nb):
hopping[ik, isp, ib, ib] = rf.next()
f_weights[ik, isp, ib] = rf.next()
if self.proj_or_hk == 'hk':
hopping = numpy.zeros([n_k, n_spin_blocs, n_orbs, n_orbs], numpy.complex_)
# skip header lines
hk_file = self.basename + '.hk%i'%(ig + 1)
f_hk = open(hk_file, 'rt')
# skip the header (1 line for n_kpoints, n_electrons, n_shells)
# and one line per shell
count = 0
while count < 3 + n_shells:
f_hk.readline()
count += 1
rf_hk = self.read_data(f_hk)
for isp in xrange(n_spin_blocs):
for ik in xrange(n_k):
n_orbitals[ik, isp] = n_orbs
for ib in xrange(n_orbs):
for jb in xrange(n_orbs):
hopping[ik, isp, ib, jb] = rf_hk.next()
for ib in xrange(n_orbs):
for jb in xrange(n_orbs):
hopping[ik, isp, ib, jb] += 1j*rf_hk.next()
rf_hk.close()
# Projectors
# print n_orbitals
# print [crsh['dim'] for crsh in corr_shells]
proj_mat = numpy.zeros([n_k, n_spin_blocs, n_corr_shells, max([crsh['dim'] for crsh in corr_shells]), numpy.max(n_orbitals)], numpy.complex_)
proj_mat_csc = numpy.zeros([n_k, n_spin_blocs, sum([sh['dim'] for sh in shells]), numpy.max(n_orbitals)], numpy.complex_)
# TODO: implement reading from more than one projector group
# In 'dmftproj' each ion represents a separate correlated shell.
@ -244,19 +322,43 @@ class VaspConverter(ConverterTools):
#
# At the moment I choose i.2 for its simplicity. But one should consider possible
# use cases and decide which solution is to be made permanent.
#
#
for ish, sh in enumerate(p_shells):
for isp in xrange(n_spin_blocs):
for ik in xrange(n_k):
for ion in xrange(len(sh['ion_list'])):
icsh = shion_to_corr_shell[ish][ion]
for ilm in xrange(sh['ndim']):
for ilm in xrange(shorbs_to_globalorbs[ish][ion][0],shorbs_to_globalorbs[ish][ion][1]):
for ib in xrange(n_orbitals[ik, isp]):
# This is to avoid confusion with the order of arguments
pr = rf.next()
pi = rf.next()
proj_mat[ik, isp, icsh, ilm, ib] = complex(pr, pi)
proj_mat_csc[ik, isp, ilm, ib] = complex(pr, pi)
# now save only projectors with flag 'corr' to proj_mat
proj_mat = numpy.zeros([n_k, n_spin_blocs, n_corr_shells, max([crsh['dim'] for crsh in corr_shells]), numpy.max(n_orbitals)], numpy.complex_)
if self.proj_or_hk == 'proj':
for ish, sh in enumerate(p_shells):
if sh['corr']:
for isp in xrange(n_spin_blocs):
for ik in xrange(n_k):
for ion in xrange(len(sh['ion_list'])):
icsh = shion_to_shell[ish][ion]
for iclm,ilm in enumerate(xrange(shorbs_to_globalorbs[ish][ion][0],shorbs_to_globalorbs[ish][ion][1])):
for ib in xrange(n_orbitals[ik, isp]):
proj_mat[ik,isp,icsh,iclm,ib] = proj_mat_csc[ik,isp,ilm,ib]
elif self.proj_or_hk == 'hk':
for ish, sh in enumerate(p_shells):
if sh['corr']:
for ion in xrange(len(sh['ion_list'])):
icsh = shion_to_shell[ish][ion]
for isp in xrange(n_spin_blocs):
for ik in xrange(n_k):
for iclm,ilm in enumerate(xrange(shorbs_to_globalorbs[ish][ion][0],shorbs_to_globalorbs[ish][ion][1])):
proj_mat[ik,isp,icsh,iclm,ilm] = 1.0
#corr_shell.pop('ion_list')
things_to_set = ['n_shells','shells','n_corr_shells','corr_shells','n_spin_blocs','n_orbitals','n_k','SO','SP','energy_unit']
for it in things_to_set:
# print "%s:"%(it), locals()[it]
@ -267,6 +369,8 @@ class VaspConverter(ConverterTools):
rf.close()
proj_or_hk = self.proj_or_hk
# Save it to the HDF:
with HDFArchive(self.hdf_file,'a') as ar:
@ -275,7 +379,9 @@ class VaspConverter(ConverterTools):
things_to_save = ['energy_unit','n_k','k_dep_projection','SP','SO','charge_below','density_required',
'symm_op','n_shells','shells','n_corr_shells','corr_shells','use_rotations','rot_mat',
'rot_mat_time_inv','n_reps','dim_reps','T','n_orbitals','proj_mat','bz_weights','hopping',
'n_inequiv_shells', 'corr_to_inequiv', 'inequiv_to_corr']
'n_inequiv_shells', 'corr_to_inequiv', 'inequiv_to_corr','proj_or_hk']
if self.proj_or_hk == 'hk' or True:
things_to_save.append('proj_mat_csc')
for it in things_to_save: ar[self.dft_subgrp][it] = locals()[it]
# Store Fermi weights to 'dft_misc_input'
@ -296,6 +402,24 @@ class VaspConverter(ConverterTools):
Reads input for the band window from bandwin_file, which is case.oubwin,
structure from struct_file, which is case.struct,
symmetries from outputs_file, which is case.outputs.
Parameters
----------
bandwin_file : string
filename of .oubwin/up/dn file.
struct_file : string
filename of .struct file.
outputs_file : string
filename of .outputs file.
misc_subgrp : string
name of the subgroup in which to save
SO : boolean
spin-orbit switch
SP : int
spin
n_k : int
number of k-points
"""
if not (mpi.is_master_node()): return
@ -390,6 +514,15 @@ class VaspConverter(ConverterTools):
def convert_symmetry_input(self, ctrl_head, orbits, symm_subgrp):
"""
Reads input for the symmetrisations from symm_file, which is case.sympar or case.symqmc.
Parameters
----------
ctrl_head : dict
dictionary of header of .ctrl file
orbits : list of shells
contains all shells
symm_subgrp : name of symmetry group in h5 archive
"""
# In VASP interface the symmetries are read directly from *.ctrl file

View File

@ -97,10 +97,13 @@ class SumkDFT(object):
# Read input from HDF:
things_to_read = ['energy_unit', 'n_k', 'k_dep_projection', 'SP', 'SO', 'charge_below', 'density_required',
'symm_op', 'n_shells', 'shells', 'n_corr_shells', 'corr_shells', 'use_rotations', 'rot_mat',
'rot_mat_time_inv', 'n_reps', 'dim_reps', 'T', 'n_orbitals', 'proj_mat', 'bz_weights', 'hopping',
'n_inequiv_shells', 'corr_to_inequiv', 'inequiv_to_corr']
'rot_mat_time_inv', 'n_reps', 'dim_reps', 'T', 'n_orbitals', 'proj_mat', 'proj_mat_csc', 'bz_weights', 'hopping',
'n_inequiv_shells', 'corr_to_inequiv', 'inequiv_to_corr','proj_or_hk']
self.subgroup_present, self.value_read = self.read_input_from_hdf(
subgrp=self.dft_data, things_to_read=things_to_read)
# if self.proj_or_hk == 'hk':
# self.subgroup_present, self.value_read = self.read_input_from_hdf(
# subgrp=self.dft_data, things_to_read=['proj_mat_csc'])
if self.symm_op:
self.symmcorr = Symmetry(hdf_file, subgroup=self.symmcorr_data)
@ -294,6 +297,7 @@ class SumkDFT(object):
- if shells='corr': orthonormalized projectors for correlated shells are used for the downfolding.
- if shells='all': non-normalized projectors for all included shells are used for the downfolding.
- if shells='csc': orthonormalized projectors for all shells are used for the downfolding. Used for H(k).
ir : integer, optional
Index of equivalent site in the non-correlated shell 'ish', only used if shells='all'.
@ -316,6 +320,8 @@ class SumkDFT(object):
raise ValueError, "downfold: provide ir if treating all shells."
dim = self.shells[ish]['dim']
projmat = self.proj_mat_all[ik, isp, ish, ir, 0:dim, 0:n_orb]
elif shells == 'csc':
projmat = self.proj_mat_csc[ik, isp, 0:n_orb, 0:n_orb]
gf_downfolded.from_L_G_R(
projmat, gf_to_downfold, projmat.conjugate().transpose())
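A hedged usage sketch of the new 'csc' branch (the SumkDFT instance `SK` and the Green's functions `G_latt`, `G_orb` are illustrative assumptions; the argument order follows the call signature used here):

# project a lattice GF block at k-point ik onto the orthonormalized orbital space
G_orb['up'] << SK.downfold(ik, 0, 'up', G_latt['up'], G_orb['up'], shells='csc')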
@ -346,6 +352,7 @@ class SumkDFT(object):
- if shells='corr': orthonormalized projectors for correlated shells are used for the upfolding.
- if shells='all': non-normalized projectors for all included shells are used for the upfolding.
- if shells='csc': orthonormalized projectors for all shells are used for the upfolding. Used for H(k).
ir : integer, optional
Index of equivalent site in the non-correlated shell 'ish', only used if shells='all'.
@ -368,6 +375,8 @@ class SumkDFT(object):
raise ValueError, "upfold: provide ir if treating all shells."
dim = self.shells[ish]['dim']
projmat = self.proj_mat_all[ik, isp, ish, ir, 0:dim, 0:n_orb]
elif shells == 'csc':
projmat = self.proj_mat_csc[ik, isp, 0:n_orb, 0:n_orb]
gf_upfolded.from_L_G_R(
projmat.conjugate().transpose(), gf_to_upfold, projmat)
@ -553,7 +562,9 @@ class SumkDFT(object):
idmat = [numpy.identity(
self.n_orbitals[ik, ntoi[sp]], numpy.complex_) for sp in spn]
M = copy.deepcopy(idmat)
for ibl in range(self.n_spin_blocks[self.SO]):
ind = ntoi[spn[ibl]]
n_orb = self.n_orbitals[ik, ind]
M[ibl] = self.hopping[ik, ind, 0:n_orb, 0:n_orb] - \
@ -1840,6 +1851,15 @@ class SumkDFT(object):
for ik in mpi.slice_array(ikarray):
G_latt_iw = self.lattice_gf(
ik=ik, mu=self.chemical_potential, iw_or_w="iw")
if dm_type == 'vasp' and self.proj_or_hk == 'hk':
# rotate the Green function into the DFT band basis
for bname, gf in G_latt_iw:
G_latt_rot_iw = gf.copy()
G_latt_rot_iw << self.upfold(
ik, 0, bname, G_latt_iw[bname], gf,shells='csc')
G_latt_iw[bname] = G_latt_rot_iw.copy()
for bname, gf in G_latt_iw:
deltaN[bname][ik] = G_latt_iw[bname].density()

Binary file not shown.

View File

@ -0,0 +1,13 @@
[General]
[Group 1]
SHELLS = 1 2
[Shell 1]
LSHELL = 2
IONS = 5..8
[Shell 2]
LSHELL = 1
IONS = 1..4
CORR = False
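Assuming the file above is saved as parse_shells_5.cfg (the name used by the new test scenario further down), a minimal sketch of parsing it; only Shell 1 should come out flagged as correlated:

from triqs_dft_tools.converters.plovasp.inpconf import ConfigParameters
pars = ConfigParameters('parse_shells_5.cfg')
pars.parse_shells()
print [sh['corr'] for sh in pars.shells]   # expected: [True, False]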

View File

@ -0,0 +1,18 @@
[General]
[Group 1]
SHELLS = 1 2
[Shell 1]
LSHELL = 2
IONS = 5..8
[Shell 2]
LSHELL = 1
IONS = 1..4
TRANSFORM = 0.0 1.0 0.0
1.0 0.0 0.0
0.0 0.0 1.0
CORR = False

View File

@ -30,7 +30,8 @@ class TestParseGeneral(arraytest.ArrayTestCase):
conf_pars.parse_general()
res = conf_pars.general
expected = {'basename': 'test_base', 'efermi': 0.1,
'dosmesh': {'n_points': 101, 'emin': -8.0, 'emax': 4.0}}
'dosmesh': {'n_points': 101, 'emin': -8.0, 'emax': 4.0},
'hk' : False}
self.assertDictEqual(res, expected)

View File

@ -39,9 +39,11 @@ class TestParseGroups(arraytest.ArrayTestCase):
conf_pars.parse_groups()
res = conf_pars.groups
expected = [{'index': 1, 'shells': [1, 2], 'ewindow': (-7.6, 3.0),
'normalize': True, 'normion': True},
'normalize': True, 'normion': True,'complement': False},
{'index': 2, 'shells': [3], 'ewindow': (-1.6, 2.0),
'normalize': True, 'normion': True}]
'normalize': True, 'normion': True,'complement': False}]
print res
print expected
self.assertListEqual(res, expected)
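The two updated expectations above reflect defaults for two new switches: 'hk' in the [General] parameters and 'complement' per projector group. A hedged config sketch of enabling them (keyword names are inferred from the test scenarios in this commit and may differ in the released interface):

[General]
HK = True

[Group 1]
SHELLS = 1 2
COMPLEMENT = True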

View File

@ -78,12 +78,12 @@ class TestParseInput(arraytest.ArrayTestCase):
res = res.replace(" ","") # Remove spaces for comparison
expected = r"""Shells:
[{'ions':{'nion':4,'ion_list':[[4],[5],[6],[7]]},'user_index':1,'lshell':2},{'tmatrix':array([[0.,1.,0.],
[{'ions':{'nion':4,'ion_list':[[4],[5],[6],[7]]},'user_index':1,'lshell':2,'corr':True},{'tmatrix':array([[0.,1.,0.],
[1.,0.,0.],
[0.,0.,1.]]),'ions':{'nion':4,'ion_list':[[0],[1],[2],[3]]},'user_index':2,'lshell':1},{'ions':{'nion':4,'ion_list':[[0],[1],[2],[3]]},'user_index':3,'lshell':3}]
[0.,0.,1.]]),'ions':{'nion':4,'ion_list':[[0],[1],[2],[3]]},'user_index':2,'lshell':1,'corr':True},{'ions':{'nion':4,'ion_list':[[0],[1],[2],[3]]},'user_index':3,'lshell':3,'corr':True}]
Groups:
[{'normalize':True,'index':1,'ewindow':(-7.6,3.0),'normion':True,'shells':[0,1]},{'normalize':True,'index':2,'ewindow':(-1.6,2.0),'normion':True,'shells':[2]}]"""
[{'normalize':True,'index':1,'ewindow':(-7.6,3.0),'shells':[0,1],'complement':False,'normion':True},{'normalize':True,'index':2,'ewindow':(-1.6,2.0),'shells':[2],'complement':False,'normion':True}]"""
self.assertEqual(res, expected)
@ -103,10 +103,10 @@ Groups:
res = res.replace(" ","") # Remove spaces for comparison
expected = r"""Shells:
[{'ions':{'nion':4,'ion_list':[[4],[5],[6],[7]]},'user_index':1,'lshell':2}]
[{'ions':{'nion':4,'ion_list':[[4],[5],[6],[7]]},'user_index':1,'lshell':2,'corr':True}]
Groups:
[{'normalize':True,'index':'1','ewindow':(-7.6,3.0),'shells':[0],'normion':True}]"""
[{'normalize':True,'index':'1','ewindow':(-7.6,3.0),'normion':True,'complement':False,'shells':[0]}]"""
self.assertEqual(res, expected)

View File

@ -30,6 +30,8 @@ class TestParseShells(arraytest.ArrayTestCase):
**raise** Exception
- **if** two correct [Shell] sections are defined
**return** a dictionary of shell parameters
- **if** two correct [Shell] sections (one with CORR = False) are defined
**return** a dictionary of shell parameters
"""
# Scenario 1
def test_no_shell(self):
@ -57,9 +59,9 @@ class TestParseShells(arraytest.ArrayTestCase):
conf_pars = ConfigParameters(_rpath + 'parse_shells_4.cfg')
conf_pars.parse_shells()
res = conf_pars.shells
expected = [{'user_index': 1, 'lshell': 2, 'ions': {'nion': 4, 'ion_list': [[4],[5],[6],[7]]}},
expected = [{'user_index': 1, 'lshell': 2, 'ions': {'nion': 4, 'ion_list': [[4],[5],[6],[7]]},'corr': True},
{'user_index': 2, 'lshell': 1, 'ions': {'nion': 4, 'ion_list': [[0],[1],[2],[3]]},
'tmatrix': np.array([[ 0., 1., 0.], [ 1., 0., 0.], [ 0., 0., 1.]])}]
'tmatrix': np.array([[ 0., 1., 0.], [ 1., 0., 0.], [ 0., 0., 1.]]),'corr': True}]
# ...lousy way to test equality of two dictionaries containing numpy arrays
self.assertEqual(len(res), len(expected))
@ -77,6 +79,24 @@ class TestParseShells(arraytest.ArrayTestCase):
self.assertListEqual(res, expected)
# Scenario 5
def test_two_shells_with_corr_false(self):
conf_pars = ConfigParameters(_rpath + 'parse_shells_5.cfg')
conf_pars.parse_shells()
res = conf_pars.shells
expected = [{'user_index': 1, 'lshell': 2, 'ions': {'nion': 4, 'ion_list': [[4],[5],[6],[7]]},'corr': True},
{'user_index': 2, 'lshell': 1, 'ions': {'nion': 4, 'ion_list': [[0],[1],[2],[3]]},'corr': False}]
self.assertEqual(len(res), len(expected))
arr = res[0].pop('ions')
arr_exp = expected[0].pop('ions')
self.assertDictEqual(arr, arr_exp)
arr = res[1].pop('ions')
arr_exp = expected[1].pop('ions')
self.assertDictEqual(arr, arr_exp)
self.assertListEqual(res, expected)
if __name__ == '__main__':
import unittest

View File

@ -212,6 +212,59 @@ class TestParseEnergyWindow(arraytest.ArrayTestCase):
with self.assertRaisesRegexp(AssertionError, err_mess):
self.cpars.parse_energy_window('1.5 3.0 2.0')
################################################################################
#
# TestParseBandWindow
#
################################################################################
class TestParseBandWindow(arraytest.ArrayTestCase):
"""
Function:
def parse_band_window(self, par_str)
Scenarios:
- **if** par_str == '1 10' **return** (1, 10)
- **if** par_str == '10 1' **raise** AssertionError
- **if** par_str == '1' **raise** AssertionError
- **if** par_str == 'aaa' **raise** ValueError
- **if** par_str == '1 2 3' **raise** AssertionError
"""
def setUp(self):
"""
"""
# Dummy ConfigParameters object
self.cpars = ConfigParameters(_rpath + 'test1.cfg')
# Scenario 1
def test_correct_range(self):
expected = (1, 10)
res = self.cpars.parse_band_window('1 10')
self.assertEqual(res, expected)
# Scenario 2
def test_wrong_range(self):
err_mess = "The first int in BANDS"
with self.assertRaisesRegexp(AssertionError, err_mess):
self.cpars.parse_band_window('10 1')
# Scenario 3
def test_one_float(self):
err_mess = "BANDS must be specified"
with self.assertRaisesRegexp(AssertionError, err_mess):
self.cpars.parse_band_window('1')
# Scenario 4
def test_wrong_string(self):
with self.assertRaises(ValueError):
self.cpars.parse_band_window('aaa')
# Scenario 5
def test_three_ints(self):
err_mess = "BANDS must be specified"
with self.assertRaisesRegexp(AssertionError, err_mess):
self.cpars.parse_band_window('1 2 3')
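A minimal usage sketch of the parser exercised above, reusing the dummy configuration object from setUp:

cpars = ConfigParameters(_rpath + 'test1.cfg')
print cpars.parse_band_window('1 10')   # -> (1, 10)
# '10 1', '1' and '1 2 3' raise AssertionError; 'aaa' raises ValueError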
################################################################################
#

Binary file not shown.

View File

@ -25,7 +25,8 @@ class TestProjectorGroup(mytest.MyTestCase):
Scenarios:
- **test** that orthogonalization is correct
- **test** that NORMION = True gives the same result
- **test** that NORMION = True gives the same results
- **test** that HK = TRUE gives the correct H(k)
"""
def setUp(self):
conf_file = _rpath + 'example.cfg'
@ -88,5 +89,13 @@ class TestProjectorGroup(mytest.MyTestCase):
# self.assertFileEqual(testout, expected_file)
expected_file = _rpath + 'projortho.out.h5'
self.assertH5FileEqual(testout, expected_file)
def test_hk(self):
self.proj_gr.orthogonalize()
self.proj_gr.calc_hk(self.eigvals)
testout = _rpath + 'hk.test.h5'
with HDFArchive(testout, 'w') as h5test:
h5test['hk'] = self.proj_gr.hk
expected_file = _rpath + 'hk.out.h5'
self.assertH5FileEqual(testout, expected_file)
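For reference, a hedged sketch of the quantity this test stores: calc_hk is expected to downfold the Kohn-Sham eigenvalues with the orthogonalized projectors, H(k) = P(k) diag(eps(k)) P(k)^+ (assuming numpy as np, a projector block p_mat of shape (n_orb, n_bands) and the windowed eigenvalues eps_ik at one k-point and spin block):

hk_ik = np.dot(p_mat, np.dot(np.diag(eps_ik), p_mat.conjugate().transpose()))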

View File

@ -0,0 +1,94 @@
import os
import rpath
_rpath = os.path.dirname(rpath.__file__) + '/'
import numpy as np
from triqs_dft_tools.converters.plovasp.vaspio import VaspData
from triqs_dft_tools.converters.plovasp.elstruct import ElectronicStructure
from triqs_dft_tools.converters.plovasp.inpconf import ConfigParameters
from triqs_dft_tools.converters.plovasp.proj_shell import ProjectorShell
from triqs_dft_tools.converters.plovasp.proj_group import ProjectorGroup
from pytriqs.archive import HDFArchive
import mytest
################################################################################
#
# TestProjectorGroup
#
################################################################################
class TestProjectorGroupCompl(mytest.MyTestCase):
"""
Class:
ProjectorGroup(gr_pars, shells, eigvals)
Scenarios:
- **test** that an unequal number of bands at different k-points raises an error
- **test** that COMPLEMENT = TRUE gives orthonormal projectors
"""
def setUp(self):
conf_file = _rpath + 'example.cfg'
self.pars = ConfigParameters(conf_file)
self.pars.parse_input()
vasp_data = VaspData(_rpath + 'one_site/')
self.el_struct = ElectronicStructure(vasp_data)
efermi = self.el_struct.efermi
self.eigvals = self.el_struct.eigvals - efermi
struct = self.el_struct.structure
kmesh = self.el_struct.kmesh
self.proj_sh = ProjectorShell(self.pars.shells[0], vasp_data.plocar.plo, vasp_data.plocar.proj_params, kmesh, struct, 0)
def test_num_bands(self):
self.pars.groups[0]['complement'] = True
err_mess = "At each band the same number"
with self.assertRaisesRegexp(AssertionError, err_mess):
self.proj_gr = ProjectorGroup(self.pars.groups[0], [self.proj_sh], self.eigvals)
def test_compl(self):
self.pars.groups[0]['complement'] = True
self.pars.groups[0]['bands'] = [10, 25]
self.proj_gr = ProjectorGroup(self.pars.groups[0], [self.proj_sh], self.eigvals)
self.proj_gr.orthogonalize()
self.proj_gr.calc_complement(self.eigvals)
temp = self.proj_gr.normion
self.proj_gr.normion = False
block_maps, ndim = self.proj_gr.get_block_matrix_map()
self.proj_gr.normion = temp
_, ns, nk, _, _ = self.proj_gr.shells[0].proj_win.shape
# Note that 'ns' and 'nk' are the same for all shells
for isp in xrange(ns):
for ik in xrange(nk):
print('ik',ik)
bmin = self.proj_gr.ib_win[ik, isp, 0]
bmax = self.proj_gr.ib_win[ik, isp, 1]+1
nb = bmax - bmin
p_mat = np.zeros((ndim, nb), dtype=np.complex128)
#print(bmin,bmax,nb)
# Combine all projectors of the group to one block projector
for bl_map in block_maps:
p_mat[:, :] = 0.0j # !!! Clean-up from the last k-point and block!
for ibl, block in enumerate(bl_map):
i1, i2 = block['bmat_range']
ish, ion = block['shell_ion']
nlm = i2 - i1 + 1
shell = self.proj_gr.shells[ish]
p_mat[i1:i2, :nb] = shell.proj_win[ion, isp, ik, :nlm, :nb]
overlap_L = np.dot(p_mat.conjugate().transpose(),p_mat)
overlap_N = np.dot(p_mat,p_mat.conjugate().transpose())
assert np.all(np.abs(np.eye(overlap_N.shape[0]) - overlap_N) < 1e-13)
assert np.all(np.abs(np.eye(overlap_L.shape[0]) - overlap_L) < 1e-13)

View File

@ -1,4 +1,4 @@
pars: {'ions': {'nion': 1, 'ion_list': [[1]]}, 'user_index': 1, 'lshell': 2}
pars: {'ions': {'nion': 1, 'ion_list': [[1]]}, 'user_index': 1, 'lshell': 2, 'corr': True}
10 25
1 0.000000 -0.000000
2 0.000000 0.000000