Compare commits
105 Commits
Author | SHA1 | Date |
---|---|---|
Markus Aichhorn | e382b0a357 | |
Alexander Hampel | a56872c277 | |
Nils Wentzell | 7d1b16136f | |
Nils Wentzell | 750651283f | |
Nils Wentzell | 4a2ee146aa | |
Nils Wentzell | 762435cf0b | |
Manuel Zingl | ef74ea30b8 | |
Nils Wentzell | 9bca12d1ea | |
Manuel | 2d491050cc | |
Manuel | 9839dcdf9e | |
Nils Wentzell | 1ececb7a4b | |
Nils Wentzell | 3807534ef8 | |
Manuel | f0f998616e | |
Manuel | 7946e548a2 | |
Manuel | 5bb1d34459 | |
aichhorn | 10e0143413 | |
aichhorn | ba47c6a206 | |
aichhorn | d70b74831a | |
aichhorn | 06273aff93 | |
Nils Wentzell | fdc34c4d06 | |
Manuel | 64d9215dd0 | |
Manuel Zingl | 2fea75133b | |
egcpvanloon | ac33d05faf | |
Dylan Simon | 876f1a27f2 | |
Nils Wentzell | ff6ff34be8 | |
Dylan Simon | 883111d3a9 | |
Nils Wentzell | 31d6199c90 | |
Nils Wentzell | a1ecfd883f | |
Nils Wentzell | 816bdcb02d | |
Nils Wentzell | 4f5fb4670b | |
Manuel | 2fb39e7bf2 | |
Manuel | b516978606 | |
Nils Wentzell | 800ef7c0b4 | |
Dylan Simon | 203783ceba | |
Nils Wentzell | 96c085a69f | |
Nils Wentzell | 29e650de34 | |
Manuel | 4bbbcf93f1 | |
Manuel Zingl | 5a8996731d | |
Oleg E. Peil | 072011133b | |
Oleg E. Peil | c4028dcbd9 | |
Manuel | a8c7569830 | |
Manuel | 85c8c2b58c | |
Manuel | e8303aea2a | |
Manuel | d8488efd98 | |
aichhorn | b14e802859 | |
aichhorn | 10018c6aa6 | |
aichhorn | 9cf4521fec | |
Oleg E. Peil | ef9919c0f6 | |
Oleg E. Peil | 9081ce634c | |
Oleg E. Peil | c2dfc3fb6c | |
Oleg Peil | a304e0ed36 | |
Oleg Peil | d029aa8e3e | |
Oleg Peil | 04e1e86a4b | |
Oleg Peil | 14f8b1d9e1 | |
Oleg Peil | 64605e3267 | |
Oleg Peil | 19ce8a83e8 | |
Oleg Peil | 0fa24a28ef | |
Oleg Peil | 7471691219 | |
Dylan Simon | 3539ffd336 | |
Nils Wentzell | e61c3a7851 | |
Nils Wentzell | bd0f4f64ec | |
Manuel Zingl | f4ad91f8b4 | |
Manuel | dfa10dffda | |
Gernot J. Kraberger | e1d54ffcc5 | |
Gernot J. Kraberger | 6ed84c078f | |
Gernot J. Kraberger | 8f1011e389 | |
Manuel | 3f569a810e | |
Manuel | ba0cfa9013 | |
Manuel | 2af2bac8d6 | |
Manuel | 8a53a80e1e | |
Manuel | ad3a23196a | |
Manuel | e187958774 | |
Manuel | 9782be2c93 | |
Dylan Simon | 96fea4389b | |
Dylan Simon | c300fea4ea | |
aichhorn | 59e176e64a | |
Dylan Simon | 6d55aa7070 | |
Nils Wentzell | 006702252c | |
Nils Wentzell | 2632f2c87f | |
Dylan Simon | 67957c3c06 | |
Nils Wentzell | 8a4f23e340 | |
Dylan Simon | dfb185477a | |
Manuel | 8e883f6118 | |
Nils Wentzell | 45031b3bfe | |
Nils Wentzell | 4a75e3fdc8 | |
Nils Wentzell | 66b4b1129f | |
Nils Wentzell | 22a4b4357b | |
Nils Wentzell | 2468331dc1 | |
Nils Wentzell | 1bab92c721 | |
Nils Wentzell | cd7d01c4a9 | |
Nils Wentzell | cd918159d1 | |
Manuel | 60482613a1 | |
Manuel | 641dff8d01 | |
Manuel Zingl | 3a5848efb4 | |
Manuel | 88bf0fd435 | |
Manuel | 3cfca94b1f | |
leonid@cpht.polytechnique.fr | 5e17d333ee | |
Nils Wentzell | 78000328f1 | |
Nils Wentzell | 9731668cae | |
Gernot J. Kraberger | d00575632c | |
Manuel Zingl | 4649b2142c | |
Manuel Zingl | 3f7b9f6843 | |
Manuel Zingl | fff9e36354 | |
Oleg E. Peil | 8f28fcf41f | |
Oleg E. Peil | 974aa08e14 | |
@@ -1,3 +1,2 @@
```
.git
Dockerfile
Jenkinsfile
```
@@ -0,0 +1,45 @@
````markdown
---
name: Bug report
about: Create a report to help us improve
title: Bug report
labels: bug

---

### Prerequisites

* Please check that a similar issue isn't already filed: https://github.com/issues?q=is%3Aissue+user%3Atriqs

### Description

[Description of the issue]

### Steps to Reproduce

1. [First Step]
2. [Second Step]
3. [and so on...]

or paste a minimal code example to reproduce the issue.

**Expected behavior:** [What you expect to happen]

**Actual behavior:** [What actually happens]

### Versions

Please provide the application version that you used.

You can get this information from copy and pasting the output of
```bash
python -c "from app4triqs.version import *; show_version(); show_git_hash();"
```
from the command line. Also, please include the OS you are running and its version.

### Formatting

Please use markdown in your issue message. A useful summary of commands can be found [here](https://guides.github.com/pdfs/markdown-cheatsheet-online.pdf).

### Additional Information

Any additional information, configuration or data that might be necessary to reproduce the issue.
````
@@ -0,0 +1,23 @@
```markdown
---
name: Feature request
about: Suggest an idea for this project
title: Feature request
labels: feature

---

### Summary

One paragraph explanation of the feature.

### Motivation

Why is this feature of general interest?

### Implementation

What user interface do you suggest?

### Formatting

Please use markdown in your issue message. A useful summary of commands can be found [here](https://guides.github.com/pdfs/markdown-cheatsheet-online.pdf).
```
```diff
@@ -30,9 +30,8 @@ script:
 - cd $TRAVIS_BUILD_DIR
 - source root_install/share/cpp2pyvars.sh
 # ===== Set up TRIQS
-- git clone https://github.com/TRIQS/triqs --branch unstable
+- git clone https://github.com/TRIQS/triqs --branch $TRAVIS_BRANCH
 - mkdir triqs/build && cd triqs/build
-- git checkout unstable
 - cmake .. -DCMAKE_CXX_COMPILER=/usr/bin/${CXX} -DBuild_Tests=OFF -DCMAKE_INSTALL_PREFIX=$TRAVIS_BUILD_DIR/root_install -DCMAKE_BUILD_TYPE=Debug
 - make -j8 install
 - cd $TRAVIS_BUILD_DIR
```
@@ -0,0 +1,9 @@
```
Markus Aichhorn
Michel Ferrero
Gernot Kraberger
Olivier Parcollet
Oleg Peil
Leonid Poyurovskiy
Dylan Simon
Nils Wentzell
Manuel Zingl
```
@@ -1,23 +1,22 @@
```
# Version number of the application
set (DFT_TOOLS_VERSION "1.5")
set (DFT_TOOLS_RELEASE "1.5.0")
# Start configuration
cmake_minimum_required(VERSION 3.0.2 FATAL_ERROR)
project(triqs_dft_tools C CXX Fortran)
if(POLICY CMP0074)
  cmake_policy(SET CMP0074 NEW)
endif()

# Default to Release build type
if(NOT CMAKE_BUILD_TYPE)
  set(CMAKE_BUILD_TYPE Release CACHE STRING "Type of build" FORCE)
endif()
message( STATUS "-------- BUILD-TYPE: ${CMAKE_BUILD_TYPE} -------------")

# start configuration
cmake_minimum_required(VERSION 2.8)
project(dft_tools C CXX Fortran)
message( STATUS "-------- BUILD-TYPE: ${CMAKE_BUILD_TYPE} --------")

# Use shared libraries
set(BUILD_SHARED_LIBS ON)

# Load TRIQS and Cpp2Py
find_package(TRIQS 1.5 EXACT REQUIRED)
find_package(Cpp2Py REQUIRED)
find_package(TRIQS 2.2 REQUIRED)
find_package(Cpp2Py 1.6 REQUIRED)

if (NOT ${TRIQS_WITH_PYTHON_SUPPORT})
  MESSAGE(FATAL_ERROR "dft_tools require Python support in TRIQS")
```
@@ -30,8 +29,13 @@ if(CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT OR (NOT IS_ABSOLUTE ${CMAKE_INSTA
```
endif()
message(STATUS "-------- CMAKE_INSTALL_PREFIX: ${CMAKE_INSTALL_PREFIX} -------------")

# Macro defined in TRIQS which picks the hash of repo.
# Define the dft_tools version numbers and get the git hash
set(DFT_TOOLS_VERSION_MAJOR 2)
set(DFT_TOOLS_VERSION_MINOR 2)
set(DFT_TOOLS_VERSION_PATCH 1)
set(DFT_TOOLS_VERSION ${DFT_TOOLS_VERSION_MAJOR}.${DFT_TOOLS_VERSION_MINOR}.${DFT_TOOLS_VERSION_PATCH})
triqs_get_git_hash_of_source_dir(DFT_TOOLS_GIT_HASH)
message(STATUS "Dft_tools version : ${DFT_TOOLS_VERSION}")
message(STATUS "Git hash: ${DFT_TOOLS_GIT_HASH}")

add_subdirectory(fortran/dmftproj)
```
@@ -40,15 +44,30 @@ add_subdirectory(fortran/dmftproj)
```
message(STATUS "TRIQS : Adding compilation flags detected by the library (C++11/14, libc++, etc...) ")

add_subdirectory(c++)
add_subdirectory(python)
add_subdirectory(python python/triqs_dft_tools)
add_subdirectory(shells)

#------------------------
# tests
#------------------------

option(TEST_COVERAGE "Analyze the coverage of tests" OFF)

# perform tests with coverage info
if (${TEST_COVERAGE})
  # we try to locate the coverage program
  find_program(PYTHON_COVERAGE python-coverage)
  find_program(PYTHON_COVERAGE coverage)
  if(NOT PYTHON_COVERAGE)
    message(FATAL_ERROR "Program coverage (or python-coverage) not found.\nEither set PYTHON_COVERAGE explicitly or disable TEST_COVERAGE!\nYou need to install the python package coverage, e.g. with\n  pip install coverage\nor with\n  apt install python-coverage")
  endif()

  message(STATUS "Setting up test coverage")
  add_custom_target(coverage ${PYTHON_COVERAGE} combine --append .coverage plovasp/.coverage || true COMMAND ${PYTHON_COVERAGE} html COMMAND echo "Open ${CMAKE_BINARY_DIR}/test/htmlcov/index.html in browser!" WORKING_DIRECTORY ${CMAKE_BINARY_DIR}/test)
endif()

enable_testing()

option(Build_Tests "Build the tests of the library " ON)
if (Build_Tests)
  message(STATUS "-------- Preparing tests -------------")
```
@@ -78,6 +97,9 @@ if(BUILD_DEBIAN_PACKAGE)
```
SET(CPACK_PACKAGE_VERSION ${DFT_TOOLS_VERSION})
SET(CPACK_PACKAGE_CONTACT "https://github.com/TRIQS/dft_tools")
EXECUTE_PROCESS(COMMAND dpkg --print-architecture OUTPUT_VARIABLE CMAKE_DEBIAN_PACKAGE_ARCHITECTURE OUTPUT_STRIP_TRAILING_WHITESPACE)
SET(CPACK_DEBIAN_PACKAGE_DEPENDS "libc6 (>= 2.23), libgcc1 (>= 1:6), libstdc++6, python, libpython2.7, libopenmpi1.10, libhdf5-10, libgmp10, libfftw3-double3, libibverbs1, libgfortran3, zlib1g, libsz2, libhwloc5, libquadmath0, libaec0, libnuma1, libltdl7, libblas3, liblapack3, python-numpy, python-h5py, python-jinja2, python-mako, python-mpi4py, python-matplotlib, python-scipy, cpp2py (= ${DFT_TOOLS_VERSION}), triqs (= ${DFT_TOOLS_VERSION})")
SET(CPACK_DEBIAN_PACKAGE_DEPENDS "triqs (>= 2.2)")
SET(CPACK_DEBIAN_PACKAGE_CONFLICTS "dft_tools")
SET(CPACK_DEBIAN_PACKAGE_SHLIBDEPS ON)
SET(CPACK_DEBIAN_PACKAGE_GENERATE_SHLIBS ON)
INCLUDE(CPack)
endif()
```
@@ -0,0 +1,674 @@
```
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
```
|
||||
modify it is void, and will automatically terminate your rights under
|
||||
this License (including any patent licenses granted under the third
|
||||
paragraph of section 11).
|
||||
|
||||
However, if you cease all violation of this License, then your
|
||||
license from a particular copyright holder is reinstated (a)
|
||||
provisionally, unless and until the copyright holder explicitly and
|
||||
finally terminates your license, and (b) permanently, if the copyright
|
||||
holder fails to notify you of the violation by some reasonable means
|
||||
prior to 60 days after the cessation.
|
||||
|
||||
Moreover, your license from a particular copyright holder is
|
||||
reinstated permanently if the copyright holder notifies you of the
|
||||
violation by some reasonable means, this is the first time you have
|
||||
received notice of violation of this License (for any work) from that
|
||||
copyright holder, and you cure the violation prior to 30 days after
|
||||
your receipt of the notice.
|
||||
|
||||
Termination of your rights under this section does not terminate the
|
||||
licenses of parties who have received copies or rights from you under
|
||||
this License. If your rights have been terminated and not permanently
|
||||
reinstated, you do not qualify to receive new licenses for the same
|
||||
material under section 10.
|
||||
|
||||
9. Acceptance Not Required for Having Copies.
|
||||
|
||||
You are not required to accept this License in order to receive or
|
||||
run a copy of the Program. Ancillary propagation of a covered work
|
||||
occurring solely as a consequence of using peer-to-peer transmission
|
||||
to receive a copy likewise does not require acceptance. However,
|
||||
nothing other than this License grants you permission to propagate or
|
||||
modify any covered work. These actions infringe copyright if you do
|
||||
not accept this License. Therefore, by modifying or propagating a
|
||||
covered work, you indicate your acceptance of this License to do so.
|
||||
|
||||
10. Automatic Licensing of Downstream Recipients.
|
||||
|
||||
Each time you convey a covered work, the recipient automatically
|
||||
receives a license from the original licensors, to run, modify and
|
||||
propagate that work, subject to this License. You are not responsible
|
||||
for enforcing compliance by third parties with this License.
|
||||
|
||||
An "entity transaction" is a transaction transferring control of an
|
||||
organization, or substantially all assets of one, or subdividing an
|
||||
organization, or merging organizations. If propagation of a covered
|
||||
work results from an entity transaction, each party to that
|
||||
transaction who receives a copy of the work also receives whatever
|
||||
licenses to the work the party's predecessor in interest had or could
|
||||
give under the previous paragraph, plus a right to possession of the
|
||||
Corresponding Source of the work from the predecessor in interest, if
|
||||
the predecessor has it or can get it with reasonable efforts.
|
||||
|
||||
You may not impose any further restrictions on the exercise of the
|
||||
rights granted or affirmed under this License. For example, you may
|
||||
not impose a license fee, royalty, or other charge for exercise of
|
||||
rights granted under this License, and you may not initiate litigation
|
||||
(including a cross-claim or counterclaim in a lawsuit) alleging that
|
||||
any patent claim is infringed by making, using, selling, offering for
|
||||
sale, or importing the Program or any portion of it.
|
||||
|
||||
11. Patents.
|
||||
|
||||
A "contributor" is a copyright holder who authorizes use under this
|
||||
License of the Program or a work on which the Program is based. The
|
||||
work thus licensed is called the contributor's "contributor version".
|
||||
|
||||
A contributor's "essential patent claims" are all patent claims
|
||||
owned or controlled by the contributor, whether already acquired or
|
||||
hereafter acquired, that would be infringed by some manner, permitted
|
||||
by this License, of making, using, or selling its contributor version,
|
||||
but do not include claims that would be infringed only as a
|
||||
consequence of further modification of the contributor version. For
|
||||
purposes of this definition, "control" includes the right to grant
|
||||
patent sublicenses in a manner consistent with the requirements of
|
||||
this License.
|
||||
|
||||
Each contributor grants you a non-exclusive, worldwide, royalty-free
|
||||
patent license under the contributor's essential patent claims, to
|
||||
make, use, sell, offer for sale, import and otherwise run, modify and
|
||||
propagate the contents of its contributor version.
|
||||
|
||||
In the following three paragraphs, a "patent license" is any express
|
||||
agreement or commitment, however denominated, not to enforce a patent
|
||||
(such as an express permission to practice a patent or covenant not to
|
||||
sue for patent infringement). To "grant" such a patent license to a
|
||||
party means to make such an agreement or commitment not to enforce a
|
||||
patent against the party.
|
||||
|
||||
If you convey a covered work, knowingly relying on a patent license,
|
||||
and the Corresponding Source of the work is not available for anyone
|
||||
to copy, free of charge and under the terms of this License, through a
|
||||
publicly available network server or other readily accessible means,
|
||||
then you must either (1) cause the Corresponding Source to be so
|
||||
available, or (2) arrange to deprive yourself of the benefit of the
|
||||
patent license for this particular work, or (3) arrange, in a manner
|
||||
consistent with the requirements of this License, to extend the patent
|
||||
license to downstream recipients. "Knowingly relying" means you have
|
||||
actual knowledge that, but for the patent license, your conveying the
|
||||
covered work in a country, or your recipient's use of the covered work
|
||||
in a country, would infringe one or more identifiable patents in that
|
||||
country that you have reason to believe are valid.
|
||||
|
||||
If, pursuant to or in connection with a single transaction or
|
||||
arrangement, you convey, or propagate by procuring conveyance of, a
|
||||
covered work, and grant a patent license to some of the parties
|
||||
receiving the covered work authorizing them to use, propagate, modify
|
||||
or convey a specific copy of the covered work, then the patent license
|
||||
you grant is automatically extended to all recipients of the covered
|
||||
work and works based on it.
|
||||
|
||||
A patent license is "discriminatory" if it does not include within
|
||||
the scope of its coverage, prohibits the exercise of, or is
|
||||
conditioned on the non-exercise of one or more of the rights that are
|
||||
specifically granted under this License. You may not convey a covered
|
||||
work if you are a party to an arrangement with a third party that is
|
||||
in the business of distributing software, under which you make payment
|
||||
to the third party based on the extent of your activity of conveying
|
||||
the work, and under which the third party grants, to any of the
|
||||
parties who would receive the covered work from you, a discriminatory
|
||||
patent license (a) in connection with copies of the covered work
|
||||
conveyed by you (or copies made from those copies), or (b) primarily
|
||||
for and in connection with specific products or compilations that
|
||||
contain the covered work, unless you entered into that arrangement,
|
||||
or that patent license was granted, prior to 28 March 2007.
|
||||
|
||||
Nothing in this License shall be construed as excluding or limiting
|
||||
any implied license or other defenses to infringement that may
|
||||
otherwise be available to you under applicable patent law.
|
||||
|
||||
12. No Surrender of Others' Freedom.
|
||||
|
||||
If conditions are imposed on you (whether by court order, agreement or
|
||||
otherwise) that contradict the conditions of this License, they do not
|
||||
excuse you from the conditions of this License. If you cannot convey a
|
||||
covered work so as to satisfy simultaneously your obligations under this
|
||||
License and any other pertinent obligations, then as a consequence you may
|
||||
not convey it at all. For example, if you agree to terms that obligate you
|
||||
to collect a royalty for further conveying from those to whom you convey
|
||||
the Program, the only way you could satisfy both those terms and this
|
||||
License would be to refrain entirely from conveying the Program.
|
||||
|
||||
13. Use with the GNU Affero General Public License.
|
||||
|
||||
Notwithstanding any other provision of this License, you have
|
||||
permission to link or combine any covered work with a work licensed
|
||||
under version 3 of the GNU Affero General Public License into a single
|
||||
combined work, and to convey the resulting work. The terms of this
|
||||
License will continue to apply to the part which is the covered work,
|
||||
but the special requirements of the GNU Affero General Public License,
|
||||
section 13, concerning interaction through a network will apply to the
|
||||
combination as such.
|
||||
|
||||
14. Revised Versions of this License.
|
||||
|
||||
The Free Software Foundation may publish revised and/or new versions of
|
||||
the GNU General Public License from time to time. Such new versions will
|
||||
be similar in spirit to the present version, but may differ in detail to
|
||||
address new problems or concerns.
|
||||
|
||||
Each version is given a distinguishing version number. If the
|
||||
Program specifies that a certain numbered version of the GNU General
|
||||
Public License "or any later version" applies to it, you have the
|
||||
option of following the terms and conditions either of that numbered
|
||||
version or of any later version published by the Free Software
|
||||
Foundation. If the Program does not specify a version number of the
|
||||
GNU General Public License, you may choose any version ever published
|
||||
by the Free Software Foundation.
|
||||
|
||||
If the Program specifies that a proxy can decide which future
|
||||
versions of the GNU General Public License can be used, that proxy's
|
||||
public statement of acceptance of a version permanently authorizes you
|
||||
to choose that version for the Program.
|
||||
|
||||
Later license versions may give you additional or different
|
||||
permissions. However, no additional obligations are imposed on any
|
||||
author or copyright holder as a result of your choosing to follow a
|
||||
later version.
|
||||
|
||||
15. Disclaimer of Warranty.
|
||||
|
||||
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
||||
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
||||
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
|
||||
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
|
||||
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
|
||||
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
|
||||
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
|
||||
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
||||
|
||||
16. Limitation of Liability.
|
||||
|
||||
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
|
||||
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
|
||||
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
|
||||
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
|
||||
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
|
||||
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
||||
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
||||
SUCH DAMAGES.
|
||||
|
||||
17. Interpretation of Sections 15 and 16.
|
||||
|
||||
If the disclaimer of warranty and limitation of liability provided
|
||||
above cannot be given local legal effect according to their terms,
|
||||
reviewing courts shall apply local law that most closely approximates
|
||||
an absolute waiver of all civil liability in connection with the
|
||||
Program, unless a warranty or assumption of liability accompanies a
|
||||
copy of the Program in return for a fee.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
How to Apply These Terms to Your New Programs
|
||||
|
||||
If you develop a new program, and you want it to be of the greatest
|
||||
possible use to the public, the best way to achieve this is to make it
|
||||
free software which everyone can redistribute and change under these terms.
|
||||
|
||||
To do so, attach the following notices to the program. It is safest
|
||||
to attach them to the start of each source file to most effectively
|
||||
state the exclusion of warranty; and each file should have at least
|
||||
the "copyright" line and a pointer to where the full notice is found.
|
||||
|
||||
<one line to give the program's name and a brief idea of what it does.>
|
||||
Copyright (C) <year> <name of author>
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU General Public License as published by
|
||||
the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU General Public License
|
||||
along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
Also add information on how to contact you by electronic and paper mail.
|
||||
|
||||
If the program does terminal interaction, make it output a short
|
||||
notice like this when it starts in an interactive mode:
|
||||
|
||||
<program> Copyright (C) <year> <name of author>
|
||||
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
|
||||
This is free software, and you are welcome to redistribute it
|
||||
under certain conditions; type `show c' for details.
|
||||
|
||||
The hypothetical commands `show w' and `show c' should show the appropriate
|
||||
parts of the General Public License. Of course, your program's commands
|
||||
might be different; for a GUI interface, you would use an "about box".
|
||||
|
||||
You should also get your employer (if you work as a programmer) or school,
|
||||
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
||||
For more information on this, and how to apply and follow the GNU GPL, see
|
||||
<http://www.gnu.org/licenses/>.
|
||||
|
||||
The GNU General Public License does not permit incorporating your program
|
||||
into proprietary programs. If your program is a subroutine library, you
|
||||
may consider it more useful to permit linking proprietary applications with
|
||||
the library. If this is what you want to do, use the GNU Lesser General
|
||||
Public License instead of this License. But first, please read
|
||||
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
|
@@ -1,12 +1,12 @@
 # See ../triqs/packaging for other options
 FROM flatironinstitute/triqs:master-ubuntu-clang

-ARG APPNAME=dft_tools
+ARG APPNAME
 COPY . $SRC/$APPNAME
 WORKDIR $BUILD/$APPNAME
 RUN chown build .
 USER build
 ARG BUILD_DOC=0
-RUN cmake $SRC/$APPNAME -DTRIQS_ROOT=${INSTALL} -DBuild_Documentation=${BUILD_DOC} && make -j2 && make test
+RUN cmake $SRC/$APPNAME -DTRIQS_ROOT=${INSTALL} -DBuild_Documentation=${BUILD_DOC} && make -j2 && make test CTEST_OUTPUT_ON_FAILURE=1
 USER root
 RUN make install
@@ -1,23 +1,29 @@
-def projectName = "dft_tools"
+def projectName = "dft_tools" /* set to app/repo name */

+/* which platform to build documentation on */
 def documentationPlatform = "ubuntu-clang"
+/* depend on triqs upstream branch/project */
 def triqsBranch = env.CHANGE_TARGET ?: env.BRANCH_NAME
 def triqsProject = '/TRIQS/triqs/' + triqsBranch.replaceAll('/', '%2F')
-def publish = !env.BRANCH_NAME.startsWith("PR-")
+/* whether to keep and publish the results */
+def keepInstall = !env.BRANCH_NAME.startsWith("PR-")

 properties([
   disableConcurrentBuilds(),
   buildDiscarder(logRotator(numToKeepStr: '10', daysToKeepStr: '30')),
-  pipelineTriggers([
+  pipelineTriggers(keepInstall ? [
     upstream(
       threshold: 'SUCCESS',
       upstreamProjects: triqsProject
     )
-  ])
+  ] : [])
 ])

+/* map of all builds to run, populated below */
 def platforms = [:]

 /****************** linux builds (in docker) */
+/* Each platform must have a cooresponding Dockerfile.PLATFORM in triqs/packaging */
 def dockerPlatforms = ["ubuntu-clang", "ubuntu-gcc", "centos-gcc"]
 /* .each is currently broken in jenkins */
 for (int i = 0; i < dockerPlatforms.size(); i++) {

@@ -27,22 +33,22 @@ for (int i = 0; i < dockerPlatforms.size(); i++) {
       checkout scm
       /* construct a Dockerfile for this base */
       sh """
        ( echo "FROM flatironinstitute/triqs:${triqsBranch}-${env.STAGE_NAME}" ; sed '0,/^FROM /d' Dockerfile ) > Dockerfile.jenkins
        mv -f Dockerfile.jenkins Dockerfile
       """
       /* build and tag */
-      def img = docker.build("flatironinstitute/${projectName}:${env.BRANCH_NAME}-${env.STAGE_NAME}", "--build-arg BUILD_DOC=${platform==documentationPlatform} .")
-      if (!publish || platform != documentationPlatform) {
-        /* but we don't need the tag so clean it up (except for documentation) */
+      def img = docker.build("flatironinstitute/${projectName}:${env.BRANCH_NAME}-${env.STAGE_NAME}", "--build-arg APPNAME=${projectName} --build-arg BUILD_DOC=${platform==documentationPlatform} .")
+      if (!keepInstall) {
        sh "docker rmi --no-prune ${img.imageName()}"
       }
   } }
 } }
 }

 /****************** osx builds (on host) */
 def osxPlatforms = [
   ["gcc", ['CC=gcc-7', 'CXX=g++-7']],
-  ["clang", ['CC=/usr/local/opt/llvm/bin/clang', 'CXX=/usr/local/opt/llvm/bin/clang++', 'CXXFLAGS=-I/usr/local/opt/llvm/include', 'LDFLAGS=-L/usr/local/opt/llvm/lib']]
+  ["clang", ['CC=$BREW/opt/llvm/bin/clang', 'CXX=$BREW/opt/llvm/bin/clang++', 'CXXFLAGS=-I$BREW/opt/llvm/include', 'LDFLAGS=-L$BREW/opt/llvm/lib']]
 ]
 for (int i = 0; i < osxPlatforms.size(); i++) {
   def platformEnv = osxPlatforms[i]

@@ -52,23 +58,25 @@ for (int i = 0; i < osxPlatforms.size(); i++) {
     def srcDir = pwd()
     def tmpDir = pwd(tmp:true)
     def buildDir = "$tmpDir/build"
-    def installDir = "$tmpDir/install"
+    def installDir = keepInstall ? "${env.HOME}/install/${projectName}/${env.BRANCH_NAME}/${platform}" : "$tmpDir/install"
     def triqsDir = "${env.HOME}/install/triqs/${triqsBranch}/${platform}"
+    dir(installDir) {
+      deleteDir()
+    }

     checkout scm
-    dir(buildDir) { withEnv(platformEnv[1]+[
-        "PATH=$triqsDir/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin",
-        "CPATH=$triqsDir/include",
-        "LIBRARY_PATH=$triqsDir/lib",
-        "CMAKE_PREFIX_PATH=$triqsDir/share/cmake"]) {
+    dir(buildDir) { withEnv(platformEnv[1].collect { it.replace('\$BREW', env.BREW) } + [
+        "PATH=$triqsDir/bin:${env.BREW}/bin:/usr/bin:/bin:/usr/sbin",
+        "CPLUS_INCLUDE_PATH=$triqsDir/include:${env.BREW}/include",
+        "LIBRARY_PATH=$triqsDir/lib:${env.BREW}/lib",
+        "CMAKE_PREFIX_PATH=$triqsDir/lib/cmake/triqs"]) {
       deleteDir()
+      /* note: this is installing into the parent (triqs) venv (install dir), which is thus shared among apps and so not be completely safe */
       sh "pip install -r $srcDir/requirements.txt"
       sh "cmake $srcDir -DCMAKE_INSTALL_PREFIX=$installDir -DTRIQS_ROOT=$triqsDir"
       sh "make -j3"
       try {
-        sh "make test"
+        sh "make test CTEST_OUTPUT_ON_FAILURE=1"
       } catch (exc) {
         archiveArtifacts(artifacts: 'Testing/Temporary/LastTest.log')
         throw exc

@@ -79,40 +87,51 @@ for (int i = 0; i < osxPlatforms.size(); i++) {
   } }
 }

 /****************** wrap-up */
 try {
   parallel platforms
-  if (publish) { node("docker") {
-    stage("publish") { timeout(time: 1, unit: 'HOURS') {
+  if (keepInstall) { node("docker") {
+    /* Publish results */
+    stage("publish") { timeout(time: 5, unit: 'MINUTES') {
       def commit = sh(returnStdout: true, script: "git rev-parse HEAD").trim()
+      def release = env.BRANCH_NAME == "master" || env.BRANCH_NAME == "unstable" || sh(returnStdout: true, script: "git describe --exact-match HEAD || true").trim()
       def workDir = pwd()
+      lock('triqs_publish') {
       /* Update documention on gh-pages branch */
       dir("$workDir/gh-pages") {
-        def subdir = env.BRANCH_NAME
-        git(url: "ssh://git@github.com/TRIQS/${projectName}.git", branch: "gh-pages", credentialsId: "ssh", changelog: false)
+        def subdir = "${projectName}/${env.BRANCH_NAME}"
+        git(url: "ssh://git@github.com/TRIQS/TRIQS.github.io.git", branch: "master", credentialsId: "ssh", changelog: false)
         sh "rm -rf ${subdir}"
         docker.image("flatironinstitute/${projectName}:${env.BRANCH_NAME}-${documentationPlatform}").inside() {
-          sh "cp -rp \$INSTALL/share/doc/${projectName} ${subdir}"
+          sh "cp -rp \$INSTALL/share/doc/triqs_${projectName} ${subdir}"
         }
         sh "git add -A ${subdir}"
         sh """
-          git commit --author='Flatiron Jenkins <jenkins@flatironinstitute.org>' --allow-empty -m 'Generated documentation for ${env.BRANCH_NAME}' -m '${env.BUILD_TAG} ${commit}'
+          git commit --author='Flatiron Jenkins <jenkins@flatironinstitute.org>' --allow-empty -m 'Generated documentation for ${subdir}' -m '${env.BUILD_TAG} ${commit}'
         """
         // note: credentials used above don't work (need JENKINS-28335)
-        sh "git push origin gh-pages"
+        sh "git push origin master"
       }
-      dir("$workDir/docker") { try {
-        git(url: "ssh://git@github.com/TRIQS/docker.git", branch: env.BRANCH_NAME, credentialsId: "ssh", changelog: false)
-        sh "echo '160000 commit ${commit}\t${projectName}' | git update-index --index-info"
-        sh """
-          git commit --author='Flatiron Jenkins <jenkins@flatironinstitute.org>' --allow-empty -m 'Autoupdate ${projectName}' -m '${env.BUILD_TAG}'
-        """
+      /* Update packaging repo submodule */
+      if (release) { dir("$workDir/packaging") { try {
+        git(url: "ssh://git@github.com/TRIQS/packaging.git", branch: env.BRANCH_NAME, credentialsId: "ssh", changelog: false)
         // note: credentials used above don't work (need JENKINS-28335)
-        sh "git push origin ${env.BRANCH_NAME}"
+        sh """#!/bin/bash -ex
+          dir="${projectName}"
+          [[ -d triqs_\$dir ]] && dir=triqs_\$dir || [[ -d \$dir ]]
+          echo "160000 commit ${commit}\t\$dir" | git update-index --index-info
+          git commit --author='Flatiron Jenkins <jenkins@flatironinstitute.org>' -m 'Autoupdate ${projectName}' -m '${env.BUILD_TAG}'
+          git push origin ${env.BRANCH_NAME}
+        """
       } catch (err) {
-        echo "Failed to update docker repo"
-      } }
+        /* Ignore, non-critical -- might not exist on this branch */
+        echo "Failed to update packaging repo"
+      } } }
+      }
     } }
   } }
 } catch (err) {
   /* send email on build failure (declarative pipeline's post section would work better) */
   if (env.BRANCH_NAME != "jenkins") emailext(
     subject: "\$PROJECT_NAME - Build # \$BUILD_NUMBER - FAILED",
     body: """\$PROJECT_NAME - Build # \$BUILD_NUMBER - FAILED

@@ -124,13 +143,13 @@ Check console output at \$BUILD_URL to view full results.
 Building \$BRANCH_NAME for \$CAUSE
 \$JOB_DESCRIPTION

-Chages:
+Changes:
 \$CHANGES

 End of build log:
 \${BUILD_LOG,maxLines=60}
 """,
-    to: 'mzingl@flatironinstitute.org, hstrand@flatironinstitute.org, nils.wentzell@gmail.com, dsimon@flatironinstitute.org',
+    to: 'mzingl@flatironinstitute.org, hstrand@flatironinstitute.org, nwentzell@flatironinstitute.org, dsimon@flatironinstitute.org',
     recipientProviders: [
       [$class: 'DevelopersRecipientProvider'],
     ],
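The `triqsProject` line in the Jenkinsfile above percent-encodes slashes because Jenkins multibranch job paths represent `/` in branch names as `%2F`. A minimal standalone Python sketch of that same transformation (a mirror of the Groovy one-liner, not the pipeline code itself):

```python
def jenkins_project_path(branch):
    """Mirror of the Groovy `triqsBranch.replaceAll('/', '%2F')` step:
    Jenkins multibranch job paths encode '/' in branch names as %2F."""
    return '/TRIQS/triqs/' + branch.replace('/', '%2F')

print(jenkins_project_path('unstable'))          # /TRIQS/triqs/unstable
print(jenkins_project_path('feature/new-dos'))   # /TRIQS/triqs/feature%2Fnew-dos
```

Without this encoding, an upstream trigger on a branch such as `feature/new-dos` would point at a nonexistent job path.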
@@ -0,0 +1,17 @@
+TRIQS: a Toolbox for Research in Interacting Quantum Systems
+
+Copyright (C) 2013-2020 The Authors (c.f. AUTHORS.txt)
+Copyright (C) 2018-2020 The Simons Foundation
+
+TRIQS is free software: you can redistribute it and/or modify it under the
+terms of the GNU General Public License as published by the Free Software
+Foundation, either version 3 of the License, or (at your option) any later
+version.
+
+TRIQS is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
+PARTICULAR PURPOSE. See the GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License along with
+TRIQS (in the file COPYING.txt in this directory). If not, see
+<http://www.gnu.org/licenses/>.
@@ -4,12 +4,12 @@ Copyright (C) 2011-2013, M. Aichhorn, L. Pourovskii, V. Vildosola and C. Martins
 1. Documentation

 You will find the documentation of this application under
-<http://ipht.cea.fr/triqs/applications/dft_tools/>.
+<https://triqs.github.io/dft_tools/>.

 2. Installation

 The installation steps are described in
-<http://ipht.cea.fr/triqs/applications/dft_tools/install.html>
+<https://triqs.github.io/dft_tools/2.1.x/install.html>

 3. Version

@@ -71,7 +71,7 @@ static const double small = 2.5e-2, tol = 1e-8;
 Returns corner contributions to the DOS of a band
 */
 #ifdef __TETRA_ARRAY_VIEW
-array_view<double, 2> dos_tetra_weights_3d(array_view<double, 1> eigk, double en, array_view<long, 2> itt)
+array<double, 2> dos_tetra_weights_3d(array_view<double, 1> eigk, double en, array_view<long, 2> itt)
 #else
 array<double, 2> dos_tetra_weights_3d(array<double, 1> eigk, double en, array<long, 2> itt)
 #endif
@@ -28,7 +28,7 @@ using triqs::arrays::array_view;
 /// DOS of a band by analytical tetrahedron method
 ///
 /// Returns corner weights for all tetrahedra for a given band and real energy.
-array_view<double, 2>
+array<double, 2>
 dos_tetra_weights_3d(array_view<double, 1> eigk, /// Band energies for each k-point
                      double en, /// Energy at which DOS weights are to be calculated
                      array_view<long, 2> itt /// Tetrahedra defined by k-point indices
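The two C++ hunks above change `dos_tetra_weights_3d` to return an owning `array` instead of an `array_view`. The hazard being avoided is the classic one of handing back a view into storage that is freed when the function returns; a numpy analogue of the view-vs-copy distinction (illustrative only, not the TRIQS containers, and numpy keeps the base alive where a C++ view would dangle):

```python
import numpy as np

def weights_view(eigk):
    tmp = eigk * 2.0      # local temporary result
    return tmp[:3]        # a view; numpy keeps `tmp` alive via .base, but an
                          # array_view into a dead C++ local would dangle

def weights_copy(eigk):
    tmp = eigk * 2.0
    return tmp[:3].copy() # an owning result, safe in both languages

e = np.array([0.1, 0.2, 0.3, 0.4])
assert weights_copy(e).base is None      # owns its data
assert weights_view(e).base is not None  # still a view on the temporary
```

Returning by value is cheap here because array types typically move their heap storage rather than deep-copy it.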
@@ -1,147 +0,0 @@
-import pytriqs.utility.mpi as mpi
-from pytriqs.operators.util import *
-from pytriqs.archive import HDFArchive
-from triqs_cthyb import *
-from pytriqs.gf import *
-from triqs_dft_tools.sumk_dft import *
-from triqs_dft_tools.converters.wien2k_converter import *
-
-dft_filename='Gd_fcc'
-U = 9.6
-J = 0.8
-beta = 40
-loops = 10        # Number of DMFT sc-loops
-sigma_mix = 1.0   # Mixing factor of Sigma after solution of the AIM
-delta_mix = 1.0   # Mixing factor of Delta as input for the AIM
-dc_type = 0       # DC type: 0 FLL, 1 Held, 2 AMF
-use_blocks = True # use bloc structure from DFT input
-prec_mu = 0.0001
-h_field = 0.0
-
-# Solver parameters
-p = {}
-p["max_time"] = -1
-p["length_cycle"] = 50
-p["n_warmup_cycles"] = 50
-p["n_cycles"] = 5000
-
-Converter = Wien2kConverter(filename=dft_filename, repacking=True)
-Converter.convert_dft_input()
-mpi.barrier()
-
-previous_runs = 0
-previous_present = False
-if mpi.is_master_node():
-    f = HDFArchive(dft_filename+'.h5','a')
-    if 'dmft_output' in f:
-        ar = f['dmft_output']
-        if 'iterations' in ar:
-            previous_present = True
-            previous_runs = ar['iterations']
-    else:
-        f.create_group('dmft_output')
-    del f
-previous_runs = mpi.bcast(previous_runs)
-previous_present = mpi.bcast(previous_present)
-
-SK=SumkDFT(hdf_file=dft_filename+'.h5',use_dft_blocks=use_blocks,h_field=h_field)
-
-n_orb = SK.corr_shells[0]['dim']
-l = SK.corr_shells[0]['l']
-spin_names = ["up","down"]
-orb_names = [i for i in range(n_orb)]
-
-# Use GF structure determined by DFT blocks
-gf_struct = [(block, indices) for block, indices in SK.gf_struct_solver[0].iteritems()]
-# Construct U matrix for density-density calculations
-Umat, Upmat = U_matrix_kanamori(n_orb=n_orb, U_int=U, J_hund=J)
-# Construct Hamiltonian and solver
-h_int = h_int_density(spin_names, orb_names, map_operator_structure=SK.sumk_to_solver[0], U=Umat, Uprime=Upmat, H_dump="H.txt")
-S = Solver(beta=beta, gf_struct=gf_struct)
-
-if previous_present:
-    chemical_potential = 0
-    dc_imp = 0
-    dc_energ = 0
-    if mpi.is_master_node():
-        S.Sigma_iw << HDFArchive(dft_filename+'.h5','a')['dmft_output']['Sigma_iw']
-        chemical_potential,dc_imp,dc_energ = SK.load(['chemical_potential','dc_imp','dc_energ'])
-    S.Sigma_iw << mpi.bcast(S.Sigma_iw)
-    chemical_potential = mpi.bcast(chemical_potential)
-    dc_imp = mpi.bcast(dc_imp)
-    dc_energ = mpi.bcast(dc_energ)
-    SK.set_mu(chemical_potential)
-    SK.set_dc(dc_imp,dc_energ)
-
-for iteration_number in range(1,loops+1):
-    if mpi.is_master_node(): print "Iteration = ", iteration_number
-
-    SK.symm_deg_gf(S.Sigma_iw,orb=0)                       # symmetrise Sigma
-    SK.set_Sigma([ S.Sigma_iw ])                           # set Sigma into the SumK class
-    chemical_potential = SK.calc_mu( precision = prec_mu ) # find the chemical potential for given density
-    S.G_iw << SK.extract_G_loc()[0]                        # calc the local Green function
-    mpi.report("Total charge of Gloc : %.6f"%S.G_iw.total_density())
-
-    # Init the DC term and the real part of Sigma, if no previous runs found:
-    if (iteration_number==1 and previous_present==False):
-        dm = S.G_iw.density()
-        SK.calc_dc(dm, U_interact = U, J_hund = J, orb = 0, use_dc_formula = dc_type)
-        S.Sigma_iw << SK.dc_imp[0]['up'][0,0]
-
-    # Calculate new G0_iw to input into the solver:
-    if mpi.is_master_node():
-        # We can do a mixing of Delta in order to stabilize the DMFT iterations:
-        S.G0_iw << S.Sigma_iw + inverse(S.G_iw)
-        ar = HDFArchive(dft_filename+'.h5','a')['dmft_output']
-        if (iteration_number>1 or previous_present):
-            mpi.report("Mixing input Delta with factor %s"%delta_mix)
-            Delta = (delta_mix * delta(S.G0_iw)) + (1.0-delta_mix) * ar['Delta_iw']
-            S.G0_iw << S.G0_iw + delta(S.G0_iw) - Delta
-        ar['Delta_iw'] = delta(S.G0_iw)
-        S.G0_iw << inverse(S.G0_iw)
-        del ar
-
-    S.G0_iw << mpi.bcast(S.G0_iw)
-
-    # Solve the impurity problem:
|
||||
S.solve(h_int=h_int, **p)
|
||||
|
||||
# Solved. Now do post-processing:
|
||||
mpi.report("Total charge of impurity problem : %.6f"%S.G_iw.total_density())
|
||||
|
||||
# Now mix Sigma and G with factor sigma_mix, if wanted:
|
||||
if (iteration_number>1 or previous_present):
|
||||
if mpi.is_master_node():
|
||||
ar = HDFArchive(dft_filename+'.h5','a')['dmft_output']
|
||||
mpi.report("Mixing Sigma and G with factor %s"%sigma_mix)
|
||||
S.Sigma_iw << sigma_mix * S.Sigma_iw + (1.0-sigma_mix) * ar['Sigma_iw']
|
||||
S.G_iw << sigma_mix * S.G_iw + (1.0-sigma_mix) * ar['G_iw']
|
||||
del ar
|
||||
S.G_iw << mpi.bcast(S.G_iw)
|
||||
S.Sigma_iw << mpi.bcast(S.Sigma_iw)
|
||||
|
||||
# Write the final Sigma and G to the hdf5 archive:
|
||||
if mpi.is_master_node():
|
||||
ar = HDFArchive(dft_filename+'.h5','a')['dmft_output']
|
||||
if previous_runs: iteration_number += previous_runs
|
||||
ar['iterations'] = iteration_number
|
||||
ar['G_tau'] = S.G_tau
|
||||
ar['G_iw'] = S.G_iw
|
||||
ar['Sigma_iw'] = S.Sigma_iw
|
||||
ar['G0-%s'%(iteration_number)] = S.G0_iw
|
||||
ar['G-%s'%(iteration_number)] = S.G_iw
|
||||
ar['Sigma-%s'%(iteration_number)] = S.Sigma_iw
|
||||
del ar
|
||||
|
||||
# Set the new double counting:
|
||||
dm = S.G_iw.density() # compute the density matrix of the impurity problem
|
||||
SK.calc_dc(dm, U_interact = U, J_hund = J, orb = 0, use_dc_formula = dc_type)
|
||||
|
||||
# Save stuff into the dft_output group of hdf5 archive in case of rerun:
|
||||
SK.save(['chemical_potential','dc_imp','dc_energ'])
|
||||
|
||||
if mpi.is_master_node():
|
||||
ar = HDFArchive("dftdmft.h5",'w')
|
||||
ar["G_tau"] = S.G_tau
|
||||
ar["G_iw"] = S.G_iw
|
||||
ar["Sigma_iw"] = S.Sigma_iw
|
|
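The `delta_mix` and `sigma_mix` steps in the script above are plain linear mixing, `alpha * new + (1 - alpha) * old`, applied via TRIQS lazy `<<` expressions. A minimal self-contained sketch with numpy arrays standing in for the Green-function blocks (the names here are illustrative, not TRIQS API):

```python
import numpy as np

def linear_mix(new, old, alpha):
    """Linear mixing: alpha * new + (1 - alpha) * old.

    alpha = 1.0 keeps the new quantity unchanged (no mixing);
    smaller alpha damps the update and stabilizes the sc-loop."""
    return alpha * np.asarray(new) + (1.0 - alpha) * np.asarray(old)

# toy "self energies" from two successive DMFT iterations
sigma_old = np.array([1.0 + 0.5j, 2.0 - 0.1j])
sigma_new = np.array([0.0 + 0.3j, 1.0 + 0.1j])

mixed = linear_mix(sigma_new, sigma_old, alpha=0.7)
```

With `alpha = 1.0` (the script's default for both mixing factors) the update passes through unchanged, which is why the mixing blocks are no-ops unless the factors are lowered.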
@ -16,7 +16,7 @@ add_custom_target(doc_sphinx ALL DEPENDS ${sphinx_top} ${CMAKE_CURRENT_BINARY_DI
# ---------------------------------
# Install
# ---------------------------------
install(DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/html/ COMPONENT documentation DESTINATION share/doc/triqs_dft_tools
  FILES_MATCHING
  REGEX "\\.(html|pdf|png|gif|jpg|js|xsl|css|py|txt|inv|bib)$"
  PATTERN "_*"
@ -0,0 +1,45 @@
Version 2.2.1
-------------

DFTTools Version 2.2.1 makes the application available
through the Anaconda package manager. We adjust
the install pages of the documentation accordingly.
We provide a more detailed description of the changes below.

* Add a section on the Anaconda package to the install page
* Add a LICENSE and AUTHORS file to the repository


Version 2.2.0
-------------

* Ensure that the chemical potential calculation results in a real number
* Fix a bug in reading Wien2k optics files in SO/SP cases
* Some clarifications in the documentation
* Packaging/Jenkins/TRIQS/Installation adaptations

This is to a large extent a compatibility release against TRIQS version 2.2.0

Thanks to all commit-contributors (in alphabetical order):
Markus Aichhorn, Dylan Simon, Erik van Loon, Nils Wentzell, Manuel Zingl


Version 2.1.x (changes since 1.4)
---------------------------------

* Added Debian packaging
* Compatibility changes for TRIQS 2.1.x
* Jenkins adjustments
* Add option to measure python test coverage
* VASP interface (and documentation)
* Added thermal conductivity in transport code (and documentation)
* BlockStructure class and new methods to analyze the block structure and degshells
* Multiple fixes of issues and bugs
* Major updates and restructuring of documentation

Thanks to all commit-contributors (in alphabetical order):
Markus Aichhorn, Gernot J. Kraberger, Olivier Parcollet, Oleg Peil, Hiroshi Shinaoka, Dylan Simon, Hugo U. R. Strand, Nils Wentzell, Manuel Zingl

Thanks to all users for reporting issues and suggesting improvements.
@ -1,7 +1,14 @@
<p style="background-color:white;">
<a href="http://ipht.cea.fr"> <img style="width: 80px; margin: 10px 5px 0 0" src='_static/logo_cea.png' alt="CEA"/> </a>
<a href="http://www.cpht.polytechnique.fr"> <img style="width: 80px; margin: 10px 5px 0 5px" src='_static/logo_x.png' alt="Ecole Polytechnique"/> </a>
<br>
<a href="http://www.cnrs.fr"> <img style="width: 80px; margin: 10px 0 0 5px" src='_static/logo_cnrs.png' alt="CNRS"/> </a>
<img style="width: 80px; margin: 10px 0 0 5px" src='_static/logo_erc.jpg' alt="ERC"/>
<a href="https://www.simonsfoundation.org/flatiron"> <img style="width: 200px; margin: 10px 0 0 5px" src='http://itensor.org/flatiron_logo.png' alt="Flatiron Institute"/> </a>
<br>
<a href="https://www.simonsfoundation.org"> <img style="width: 200px; margin: 10px 0 0 5px" src='http://itensor.org/simons_found_logo.jpg' alt="Simons Foundation"/> </a>
</p>
<hr>
<p>
<a href="https://github.com/triqs/dft_tools"> <img style="width: 200px; margin: 10px 0 0 5px" src='_static/logo_github.png' alt="Visit the project on GitHub"/> </a>
</p>
@ -126,7 +126,7 @@ model. The DMFT self-consistency cycle can now be formulated as
follows:

#. Take :math:`G^0_{mn}(i\omega)` and the interaction Hamiltonian and
   solve the impurity problem, to get the interacting Green function
   :math:`G_{mn}(i\omega)` and the self energy
   :math:`\Sigma_{mn}(i\omega)`. For the details of how to do
   this in practice, we refer to the documentation of one of the

@ -147,7 +147,7 @@ follows:

      G^{latt}_{\nu\nu'}(\mathbf{k},i\omega) = \frac{1}{i\omega+\mu
      -\varepsilon_{\nu\mathbf{k}}-\Sigma_{\nu\nu'}(\mathbf{k},i\omega)}

#. Calculate from that the local downfolded Green function in orbital space:

   .. math::
      G^{loc}_{mn}(i\omega) = \sum_{\mathbf{k}}\sum_{\nu\nu'}P_{m\nu}(\mathbf{k})G^{latt}_{\nu\nu'}(\mathbf{k},i\omega)P^*_{\nu'n}(\mathbf{k})
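The downfolding step, projecting the lattice Green function onto the correlated orbitals with the projectors :math:`P`, can be sketched with plain numpy (random matrices stand in for the projectors and the lattice Green function; uniform k-point weights :math:`1/N_k` are assumed; purely illustrative, not the :class:`SumkDFT` implementation):

```python
import numpy as np

def downfold(P, G_latt):
    """G^loc = (1/N_k) * sum_k P(k) G^latt(k) P(k)^dagger.

    P:      (nk, norb, nband) projectors from Bloch bands to orbitals
    G_latt: (nk, nband, nband) lattice Green function at one frequency
    """
    nk = P.shape[0]
    return sum(P[ik] @ G_latt[ik] @ P[ik].conj().T for ik in range(nk)) / nk

rng = np.random.default_rng(0)
nk, nband, norb = 4, 5, 3
P = rng.standard_normal((nk, norb, nband)) + 1j * rng.standard_normal((nk, norb, nband))
G = rng.standard_normal((nk, nband, nband)) + 1j * rng.standard_normal((nk, nband, nband))

Gloc = downfold(P, G)   # (norb, norb) local Green function
```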
@ -166,7 +166,7 @@ follows:
This is the basic scheme for one-shot DFT+DMFT calculations. Of
course, one has to make sure that the chemical potential :math:`\mu`
is set such that the electron density is correct. This can be achieved
by adjusting it for the lattice Green function such that the electron
count is fulfilled.

Full charge self-consistency
@ -59,14 +59,19 @@ hybridization-expansion solver. In general, those tutorials will take at least a

Afterwards you can continue with the :ref:`DFTTools user guide <documentation>`.

.. _ac:

Analytic Continuation
---------------------

Often impurity solvers working on the Matsubara axis are used within the
DFT+DMFT framework. However, many :ref:`post-processing tools <analysis>`
require a self energy on the real-frequency axis, e.g. to calculate the spectral
function :math:`A(k,\omega)` or to perform :ref:`transport calculations <Transport>`.
The ill-posed nature of the analytic continuation has led to a plethora of methods
and, correspondingly, of computer codes. :program:`DFTTools` itself does not provide functions to perform analytic
continuations. Within the TRIQS environment the following options are available:

* Pade: Implemented in the :ref:`TRIQS <triqslibs:welcome>` library
* Stochastic Optimization Method (Mishchenko): `SOM <http://krivenko.github.io/som/>`_ package by Igor Krivenko
* Maximum Entropy Method: `TRIQS/maxent <https://triqs.github.io/maxent/master>`_ package
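Of these options, Pade is the simplest to sketch in a few lines. The following toy implementation uses the Thiele continued-fraction recursion (a common way to build Pade approximants; this is an illustration, not the TRIQS implementation) to continue an exactly rational test function from Matsubara points to the real axis. For noisy QMC data this procedure is notoriously unstable, which is why MaxEnt or SOM are usually preferred:

```python
import numpy as np

def thiele_coefficients(z, u):
    """Coefficients a_p of the Thiele continued-fraction interpolation
    through the points (z_i, u_i)."""
    n = len(z)
    g = np.zeros((n, n), dtype=complex)
    g[0, :] = u
    for p in range(1, n):
        g[p, p:] = (g[p-1, p-1] - g[p-1, p:]) / ((z[p:] - z[p-1]) * g[p-1, p:])
    return np.diag(g).copy()

def pade_eval(a, z_pts, z):
    """Evaluate the continued fraction at (complex) frequency z."""
    A_prev, A_cur = 0.0, a[0]
    B_prev, B_cur = 1.0, 1.0
    for n in range(1, len(a)):
        A_prev, A_cur = A_cur, A_cur + (z - z_pts[n-1]) * a[n] * A_prev
        B_prev, B_cur = B_cur, B_cur + (z - z_pts[n-1]) * a[n] * B_prev
    return A_cur / B_cur

# toy Green function with poles at 0.5 and -1.0, sampled on Matsubara frequencies
beta = 10.0
f = lambda z: 1.0 / (z - 0.5) + 2.0 / (z + 1.0)
iw = 1j * (2 * np.arange(4) + 1) * np.pi / beta
a = thiele_coefficients(iw, f(iw))

# since f is rational of matching degree, the continuation is numerically exact
val = pade_eval(a, iw, 2.0 + 0.0j)   # f(2.0) = 1/1.5 + 2/3
```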
@ -66,7 +66,7 @@ perform the multi-band DMFT calculation in the context of real
materials. The major part is contained in the module
:class:`SumkDFT`. It contains routines to

* calculate local Green functions
* do the upfolding and downfolding from Bloch bands to Wannier
  orbitals
* calculate the double-counting correction

@ -91,12 +91,12 @@ self-consistent one is only a couple of additional lines in the code!
Post-processing
---------------

The main result of a DMFT calculation is the interacting Green function
and the self energy. However, one is normally interested in
quantities like band structure, density of states, or transport
properties. In order to calculate these, :program:`DFTTools`
provides the post-processing module :class:`SumkDFTTools`.
It contains routines to calculate

* (projected) density of states
* partial charges
@ -104,9 +104,28 @@ contains routines to calculate
* transport properties such as optical conductivity, resistivity,
  or thermopower.

Note that most of these post-processing tools need a real-frequency
self energy, and should you be using a CT-QMC impurity solver this
comes with the necessity of performing an :ref:`analytic continuation<ac>`.

.. _runpy:

Executing your python scripts
-----------------------------

After having prepared your own python script you may run
it on one core with

`python MyScript.py`

or in parallel mode

`mpirun -np 64 python MyScript.py`

where :program:`mpirun` launches the calculation in parallel mode on 64 cores.
The exact form of this command will, of course, depend on the
mpi-launcher installed, but the form above works on most systems.

How to run full charge self-consistent DFT+DMFT calculations (in combination with Wien2k)
is described in the :ref:`full charge self-consistency tutorial<full_charge_selfcons>` and
the :ref:`Ce tutorial<DFTDMFTtutorial>`, as such calculations need to be launched in a different way.
@ -0,0 +1,8 @@
.. _changelog:

Changelog
=========

This document describes the main changes in DFTTools.

.. include:: ChangeLog.md
@ -4,6 +4,7 @@

import sys
sys.path.insert(0, "@TRIQS_SPHINXEXT_PATH@/numpydoc")
sys.path.insert(0, "@CMAKE_BINARY_DIR@/python")

extensions = ['sphinx.ext.autodoc',
              'sphinx.ext.mathjax',

@ -18,9 +19,8 @@ extensions = ['sphinx.ext.autodoc',
source_suffix = '.rst'

project = u'TRIQS DFTTools'
copyright = u'2011-2019'
version = '@DFT_TOOLS_VERSION@'
release = '@DFT_TOOLS_RELEASE@'

mathjax_path = "@TRIQS_MATHJAX_PATH@/MathJax.js?config=default"
templates_path = ['@CMAKE_SOURCE_DIR@/doc/_templates']

@ -29,14 +29,15 @@ html_theme = 'triqs'
html_theme_path = ['@TRIQS_THEMES_PATH@']
html_show_sphinx = False
html_context = {'header_title': 'dft tools',
                'header_subtitle': 'connecting <a class="triqs" style="font-size: 12px" href="http://triqs.github.io/triqs">TRIQS</a> to DFT packages',
                'header_links': [['Install', 'install'],
                                 ['Documentation', 'documentation'],
                                 ['Tutorials', 'tutorials'],
                                 ['Issues', 'issues'],
                                 ['About DFTTools', 'about']]}
html_static_path = ['@CMAKE_SOURCE_DIR@/doc/_static']
html_sidebars = {'index': ['sideb.html', 'searchbox.html']}

htmlhelp_basename = 'TRIQSDFTToolsdoc'

intersphinx_mapping = {'python': ('http://docs.python.org/2.7', None), 'triqslibs': ('http://triqs.github.io/triqs/master', None), 'triqscthyb': ('https://triqs.github.io/cthyb/master', None)}
@ -8,4 +8,5 @@ Table of contents
   install
   documentation
   issues
   changelog
   about
@ -16,18 +16,31 @@ Basic notions
   basicnotions/structure


Construction of local orbitals from DFT
---------------------------------------

.. toctree::
   :maxdepth: 2

   guide/conversion


DFT+DMFT
--------

.. toctree::
   :maxdepth: 2

   guide/dftdmft_singleshot
   guide/SrVO3
   guide/dftdmft_selfcons

Postprocessing
--------------

.. toctree::
   :maxdepth: 2

   guide/analysis
   guide/full_tutorial
   guide/transport
@ -28,14 +28,9 @@ Make sure that you set line 6 to "ON" and put a "1" to the following line.
The "1" is undocumented in Wien2k, but needed to have `case.pmat` written.
However, we are working on reading directly the `case.mommat2` file.

How do I get real-frequency quantities?
---------------------------------------

:program:`DFTTools` does not provide functions to perform analytic
continuations. However, within the TRIQS environment there are
different :ref:`tools<ac>` available.
@ -8,17 +8,17 @@ This section explains how to use some tools of the package in order to analyse t
There are two practical tools for which a self energy on the real axis is not needed, namely:

* :meth:`dos_wannier_basis <dft.sumk_dft_tools.SumkDFTTools.dos_wannier_basis>` for the density of states of the Wannier orbitals and
* :meth:`partial_charges <dft.sumk_dft_tools.SumkDFTTools.partial_charges>` for the partial charges according to the Wien2k definition.

However, a real-frequency self energy has to be provided by the user for the methods:

* :meth:`dos_parproj_basis <dft.sumk_dft_tools.SumkDFTTools.dos_parproj_basis>` for the momentum-integrated spectral function including self energy effects and
* :meth:`spaghettis <dft.sumk_dft_tools.SumkDFTTools.spaghettis>` for the momentum-resolved spectral function (i.e. ARPES)

.. note::
   This package does NOT provide an explicit method to do an **analytic continuation** of
   self energies and Green functions from Matsubara frequencies to the real-frequency axis,
   but a list of options available within the TRIQS framework is given :ref:`here <ac>`.
   Keep in mind that all these methods have to be used very carefully!

Initialisation

@ -36,16 +36,16 @@ class::

Note that all routines available in :class:`SumkDFT <dft.sumk_dft.SumkDFT>` are also available here.

If required, we have to load and initialise the real-frequency self energy. Most conveniently,
you have your self energy already stored as a real-frequency :class:`BlockGf <pytriqs.gf.BlockGf>` object
in a hdf5 file::

    with HDFArchive('case.h5', 'r') as ar:
        SigmaReFreq = ar['dmft_output']['Sigma_w']

You may also have your self energy stored in text files. For this case the :ref:`TRIQS <triqslibs:welcome>` library offers
the function :meth:`read_gf_from_txt`, which is able to load the data from text files of one Green function block
into a real-frequency :class:`ReFreqGf <pytriqs.gf.ReFreqGf>` object. Loading each block separately and
building up a :class:`BlockGf <pytriqs.gf.BlockGf>` is done with::

    from pytriqs.gf.tools import *

@ -61,7 +61,7 @@ where:
* `block_txtfiles` is a rank 2 square np.array(str) or list[list[str]] holding the file names of one block and
* `block_name` is the name of the block.

It is important that each data file contains three columns: the real-frequency mesh, the real part and the imaginary part
of the self energy - exactly in this order! The mesh should be the same for all files read in; non-uniform meshes are not supported.
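The expected three-column layout can be sketched with plain numpy (a stand-in for what :meth:`read_gf_from_txt` consumes, not its actual implementation; the numbers are made up):

```python
import numpy as np

# three columns: real-frequency mesh, Re(Sigma), Im(Sigma) - in this order
rows = """\
-2.0  0.10 -0.30
 0.0  0.00 -0.50
 2.0 -0.10 -0.30
"""

data = np.loadtxt(rows.splitlines())   # loadtxt also accepts an iterable of lines
mesh = data[:, 0]                      # uniform real-frequency mesh
sigma = data[:, 1] + 1j * data[:, 2]   # complex self energy on that mesh
```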
Finally, we set the self energy into the `SK` object::

@ -73,7 +73,6 @@ and additionally set the chemical potential and the double counting correction f

    chemical_potential, dc_imp, dc_energ = SK.load(['chemical_potential','dc_imp','dc_energ'])
    SK.set_mu(chemical_potential)
    SK.set_dc(dc_imp,dc_energ)

.. _dos_wannier:

@ -101,18 +100,18 @@ otherwise, the output is returned by the function for a further usage in :progra
Partial charges
---------------

Since we can calculate the partial charges directly from the Matsubara Green functions, we also do not need a
real-frequency self energy for this purpose. The calculation is done by::

    SK.set_Sigma(SigmaImFreq)
    dm = SK.partial_charges(beta=40.0, with_Sigma=True, with_dc=True)

which calculates the partial charges using the self energy, double counting, and chemical potential as set in the
`SK` object. On return, `dm` is a list, where the list items correspond to the density matrices of all shells
defined in the list `SK.shells`. This list is constructed by the Wien2k converter routines and stored automatically
in the hdf5 archive. For the structure of `dm`, see also the :meth:`reference manual <dft.sumk_dft_tools.SumkDFTTools.partial_charges>`.

Correlated spectral function (with real-frequency self energy)
--------------------------------------------------------------

To produce both the momentum-integrated (total density of states or DOS) and orbitally-resolved (partial/projected DOS) spectral functions

@ -124,7 +123,7 @@ The variable `broadening` is an additional Lorentzian broadening (default: `0.01
The output is written in the same way as described above for the :ref:`Wannier density of states <dos_wannier>`, but with filenames
`DOS_parproj_*` instead.

Momentum-resolved spectral function (with real-frequency self energy)
---------------------------------------------------------------------

Another quantity of interest is the momentum-resolved spectral function, which can directly be compared to ARPES

@ -141,7 +140,7 @@ Here, optional parameters are
* `plotrange`: A list with two entries, :math:`\omega_{min}` and :math:`\omega_{max}`, which set the plot
  range for the output. The default value is `None`, in which case the full energy range as given in the self energy is used.
* `ishell`: An integer denoting the orbital index `ishell` onto which the spectral function is projected. The resulting function is saved in
  the files. The default value is `None`. Note for experts: The spectra are not rotated to the local coordinate system used in Wien2k.

The output is written as the 3-column files ``Akw(sp).dat``, where `(sp)` is defined as above. The output format is
`k`, :math:`\omega`, `value`.
@ -0,0 +1,117 @@
.. _convW90:

Wannier90 Converter
===================

Using this converter it is possible to convert the output of
`wannier90 <http://wannier.org>`_
Maximally Localized Wannier Functions (MLWF) and create a HDF5 archive
suitable for one-shot DMFT calculations with the
:class:`SumkDFT <dft.sumk_dft.SumkDFT>` class.

The user must supply two files in order to run the Wannier90 Converter:

#. The file :file:`seedname_hr.dat`, which contains the DFT Hamiltonian
   in the MLWF basis calculated through :program:`wannier90` with ``hr_plot = true``
   (please refer to the :program:`wannier90` documentation).
#. A file named :file:`seedname.inp`, which contains the required
   information about the :math:`\mathbf{k}`-point mesh, the electron density,
   the correlated shell structure, ... (see below).

Here and in the following, the keyword ``seedname`` should always be intended
as a placeholder for the actual prefix chosen by the user when creating the
input for :program:`wannier90`.
Once these two files are available, one can use the converter as follows::

    from triqs_dft_tools.converters import Wannier90Converter
    Converter = Wannier90Converter(seedname='seedname')
    Converter.convert_dft_input()

The converter input :file:`seedname.inp` is a simple text file with
the following format (do not use the text/comments in your input file):

.. literalinclude:: images_scripts/LaVO3_w90.inp

The example shows the input for the perovskite crystal of LaVO\ :sub:`3`
in the room-temperature `Pnma` symmetry. The unit cell contains four
symmetry-equivalent correlated sites (the V atoms) and the total number
of electrons per unit cell is 8 (see second line).
The first line specifies how to generate the :math:`\mathbf{k}`-point
mesh that will be used to obtain :math:`H(\mathbf{k})`
by Fourier transforming :math:`H(\mathbf{R})`.
Currently implemented options are:

* :math:`\Gamma`-centered uniform grid with dimensions
  :math:`n_{k_x} \times n_{k_y} \times n_{k_z}`;
  specify ``0`` followed by the three grid dimensions,
  like in the example above
* :math:`\Gamma`-centered uniform grid with dimensions
  automatically determined by the converter (from the number of
  :math:`\mathbf{R}` vectors found in :file:`seedname_hr.dat`);
  just specify ``-1``
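The Fourier transform from :math:`H(\mathbf{R})` to :math:`H(\mathbf{k})` on a :math:`\Gamma`-centered grid can be illustrated with a one-dimensional toy model (illustrative helper, not the converter's actual code):

```python
import numpy as np

def hk_from_hr(h_of_r, nk):
    """H(k_m) = sum_R exp(i k_m R) H(R) on a Gamma-centered grid
    k_m = 2*pi*m/nk, m = 0..nk-1.  h_of_r maps integer R -> hopping."""
    k = 2.0 * np.pi * np.arange(nk) / nk
    return np.array([sum(np.exp(1j * km * R) * t for R, t in h_of_r.items())
                     for km in k])

# nearest-neighbour chain: H(R=+-1) = -1  =>  H(k) = -2 cos(k)
h_of_r = {-1: -1.0, 0: 0.0, 1: -1.0}
hk = hk_from_hr(h_of_r, nk=8)
```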
Inside :file:`seedname.inp`, it is crucial to correctly specify the
correlated shell structure, which depends on the contents of the
:program:`wannier90` output :file:`seedname_hr.dat` and on the order
of the MLWFs contained in it. In this example we have four lines for the
four V atoms. The MLWFs were constructed for the t\ :sub:`2g` subspace, and thus
we set ``l`` to 2 and ``dim`` to 3 for all V atoms. Further, the spin-orbit coupling (``SO``)
is set to 0 and ``irep`` to 0.
As all 4 V atoms in this example are equivalent, we set ``sort`` to 0. We note
that, e.g., for a magnetic DMFT calculation the correlated atoms can be made
inequivalent at this point by using different values for ``sort``.

The number of MLWFs must be equal to, or greater than, the total number
of correlated orbitals (i.e., the sum of all ``dim`` in :file:`seedname.inp`).
If the converter finds fewer MLWFs inside :file:`seedname_hr.dat`, then it
stops with an error; if it finds more MLWFs, then it assumes that the
additional MLWFs correspond to uncorrelated orbitals (e.g., the O-\ `2p` shells).
When reading the hoppings :math:`\langle w_i | H(\mathbf{R}) | w_j \rangle`
(where :math:`w_i` is the :math:`i`-th MLWF), the converter also assumes that
the first indices correspond to the correlated shells (in our example,
the V-t\ :sub:`2g` shells). Therefore, the MLWFs corresponding to the
uncorrelated shells (if present) must be listed **after** those of the
correlated shells.
With the :program:`wannier90` code, this can be achieved by listing the
projections for the uncorrelated shells after those for the correlated shells.
In our `Pnma`-LaVO\ :sub:`3` example, for instance, we could use::

    Begin Projections
    V:l=2,mr=2,3,5:z=0,0,1:x=-1,1,0
    O:l=1:mr=1,2,3:z=0,0,1:x=-1,1,0
    End Projections

where the ``x=-1,1,0`` option indicates that the V--O bonds in the octahedra are
rotated by (approximately) 45 degrees with respect to the axes of the `Pbnm` cell.

The converter will analyse the matrix elements of the local Hamiltonian
to find the symmetry matrices `rot_mat` needed for the global-to-local
transformation of the basis set for correlated orbitals
(see section :ref:`hdfstructure`).
The matrices are obtained by finding the unitary transformations that diagonalize
:math:`\langle w_i | H_I(\mathbf{R}=0,0,0) | w_j \rangle`, where :math:`I` runs
over the correlated shells and `i,j` belong to the same shell (more details elsewhere...).
If two correlated shells are defined as equivalent in :file:`seedname.inp`,
then the corresponding eigenvalues have to match within a threshold of 10\ :sup:`-5`,
otherwise the converter will produce an error/warning.
If this happens, please carefully check your data in :file:`seedname_hr.dat`.
This method might fail in non-trivial cases (i.e., when more than one correlated
shell is present) if there are degenerate eigenvalues:
so far tests have not shown any issue, but one must be careful in those cases
(the converter will print a warning message).

The current implementation of the Wannier90 Converter has some limitations:

* Since :program:`wannier90` does not make use of symmetries (symmetry-reduction
  of the :math:`\mathbf{k}`-point grid is not possible), the converter always
  sets ``symm_op=0`` (see the :ref:`hdfstructure` section).
* No charge self-consistency is possible at the moment.
* Calculations with spin-orbit coupling (``SO=1``) are not supported.
* The spin-polarized case (``SP=1``) is not yet tested.
* The post-processing routines in the module
  :class:`SumkDFTTools <dft.sumk_dft_tools.SumkDFTTools>`
  were not tested with this converter.
* ``proj_mat_all`` are not used, so there are no projectors onto the
  uncorrelated orbitals for now.
@ -0,0 +1,100 @@
|
|||
.. _convgeneralhk:

A general H(k)
==============

In addition to the more extensive Wien2k, VASP, and W90 converters,
:program:`DFTTools` also contains a light converter. It takes only
one input file and creates the hdf output file needed for
the DMFT calculation. The header of this input file has a fixed
format; an example is the following (do not put the text/comments into your
input file):

.. literalinclude:: images_scripts/case.hk

The lines of this header define

1. the number of :math:`\mathbf{k}`-points used in the calculation,
2. the electron density used to set the chemical potential,
3. the number of atomic shells in the Hamiltonian matrix. In short,
   this gives the number of shell lines described next. In the
   example file given above this number is 2.
4. The next line(s) contain four numbers each: the index of the atom, the index
   of the equivalent shell, the :math:`l` quantum number, and the dimension
   of this shell. Repeat this line for each atomic shell; the number
   of shells is given in the previous line.

In the example input file given above, we have two inequivalent
atomic shells: one on atom number 1 with a full d shell (dimension 5),
and one on atom number 2 with one p shell (dimension 3).

Other examples for these shell lines are:

1. A full d shell in a material with only one correlated atom in the
   unit cell (e.g. SrVO3): one line is sufficient, and the numbers
   are `1 1 2 5`.
2. A full d shell in a material with two equivalent atoms in the unit
   cell (e.g. FeSe): you need two lines, one for each equivalent
   atom. The first line is `1 1 2 5`, the second line is
   `2 1 2 5`. The only difference is the first number, which tells on
   which atom the shell is located. The second number is the
   same in both lines, meaning that both atoms are equivalent.
3. t2g orbitals on two non-equivalent atoms in the unit cell: again two
   lines. The first line is `1 1 2 3`, the second line `2 2 2 3`. The
   difference from the case above is that now the second number also
   differs. Therefore, the two shells are treated independently in
   the calculation.
4. A d-p Hamiltonian in a system with two equivalent atoms each in
   the unit cell (e.g. FeSe has two Fe and two Se in the unit
   cell): you need four lines. The first line is `1 1 2 5`, the second line
   `2 1 2 5`. These two lines specify Fe as in the case above. For the p
   orbitals you need line three as `3 2 1 3` and line four
   as `4 2 1 3`. We have 4 atoms, since the first number runs from 1 to 4,
   but only two inequivalent atoms, since the second number runs
   only from 1 to 2.

Note that the total dimension of the Hamiltonian matrices that are
read in is the sum of all shell dimensions that you specified. For
example number 4 above this dimension is 5+5+3+3=16. It is important
that the order of the shells given here is the same as
the order of the orbitals in the Hamiltonian matrix. In the last
example above, the code assumes that matrix indices 1 to 5
belong to the first d shell, 6 to 10 to the second, 11 to 13 to
the first p shell, and 14 to 16 to the second p shell.

The header then continues with

5. the number of correlated shells in the Hamiltonian matrix, in the same
   spirit as line 3.
6. The next line(s) contain six numbers: the index of the atom, the index
   of the equivalent shell, the :math:`l` quantum number, the dimension
   of the correlated shell, a spin-orbit parameter, and another
   parameter defining interactions. Note that the latter two
   parameters are not used by the code at the moment and are kept only
   for compatibility reasons. In our example file only the
   d shell is treated as correlated, which is why there is just one line here.
7. The last line contains several numbers: the number of irreducible
   representations, followed by the dimensions of the irreps. One
   possibility is as in the example above; another would be `2 2 3`,
   meaning 2 irreps (eg and t2g) of dimensions 2 and 3,
   respectively.
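The index bookkeeping described above can be made concrete with a short plain-Python sketch, using the shell lines of example 4:

```python
# Shell lines of example 4 above (d-p Hamiltonian, FeSe-like):
# (atom index, equivalent-shell index, l, dimension)
shells = [(1, 1, 2, 5), (2, 1, 2, 5), (3, 2, 1, 3), (4, 2, 1, 3)]

# The total matrix dimension is the sum of the shell dimensions.
total_dim = sum(dim for *_, dim in shells)

# Matrix index ranges (1-based, inclusive) occupied by each shell, in the
# order the shells are listed in the header.
ranges, start = [], 1
for *_, dim in shells:
    ranges.append((start, start + dim - 1))
    start += dim

print(total_dim)  # 16
print(ranges)     # [(1, 5), (6, 10), (11, 13), (14, 16)]
```

This reproduces the assignment stated above: indices 1-5 and 6-10 for the two d shells, 11-13 and 14-16 for the two p shells.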
After these header lines, the file has to contain the Hamiltonian
matrix in orbital space. The standard convention is that for
each :math:`\mathbf{k}`-point you give first the matrix of the real part, then the
matrix of the imaginary part, before moving on to the next :math:`\mathbf{k}`-point.
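This layout can be sketched as follows. The parser and the sample numbers are illustrative only (the actual parsing is done by the converter, and real files may format the rows differently):

```python
def read_hk_matrices(lines, n_k, dim):
    """Parse n_k matrices of size dim x dim laid out as described above:
    for each k-point, dim rows of real parts, then dim rows of imaginary parts."""
    hk, pos = [], 0
    for _ in range(n_k):
        re = [[float(x) for x in lines[pos + i].split()] for i in range(dim)]
        im = [[float(x) for x in lines[pos + dim + i].split()] for i in range(dim)]
        pos += 2 * dim
        hk.append([[complex(re[i][j], im[i][j]) for j in range(dim)]
                   for i in range(dim)])
    return hk

# A single hypothetical k-point for a 2x2 Hamiltonian:
lines = ["1.0  0.2",
         "0.2  0.5",   # real part
         "0.0  0.1",
         "-0.1 0.0"]   # imaginary part
hk = read_hk_matrices(lines, n_k=1, dim=2)
print(hk[0][0][1])  # (0.2+0.1j)
```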

The converter itself is used as::

    from triqs_dft_tools.converters.hk_converter import *
    Converter = HkConverter(filename = hkinputfile)
    Converter.convert_dft_input()

where :file:`hkinputfile` is the name of the input file described
above. This produces the hdf file that you need for a DMFT calculation.

For more options of this converter, have a look at the
:ref:`refconverters` section of the reference manual.

.. _convVASP:

Interface with VASP
===================

.. warning::
    The VASP interface is in its alpha version, and the VASP part of it has
    not yet been publicly released. The documentation may, thus, be subject
    to changes before the final release.

*Limitations of the alpha version:*

* The interface works correctly only if the k-point symmetries
  are turned off during the VASP run (ISYM=-1).

* Generation of projectors for k-point lines (option `Lines` in KPOINTS)
  needed for Bloch spectral function calculations is not possible at the moment.

* The interface currently supports only collinear-magnetism calculations
  (this implies no spin-orbit coupling), and
  spin-polarized projectors have not been tested.

A detailed description of the VASP converter tool PLOVasp can be found
in the :ref:`PLOVasp User's Guide <plovasp>`. Here, a quick-start guide is presented.

The VASP interface relies on new options introduced since version
5.4.x. In particular, a new INCAR option `LOCPROJ`
and new `LORBIT` modes 13 and 14 have been added.

Option `LOCPROJ` selects a set of localized projectors that will
be written to the file `LOCPROJ` after a successful VASP run.
A projector set is specified by site indices,
labels of the target local states, and the projector type:

| `LOCPROJ = <sites> : <shells> : <projector type>`

where `<sites>` represents a list of site indices separated by spaces,
with the indices corresponding to the site position in the POSCAR file;
`<shells>` specifies the local states (see below); and
`<projector type>` chooses a particular type of local basis function.
The recommended projector type is `Pr 2`. The formalism for this type
of projectors is presented in
`M. Schüler et al. 2018 J. Phys.: Condens. Matter 30 475901 <https://doi.org/10.1088/1361-648X/aae80a>`_.

The allowed labels of the local states defined in terms of cubic
harmonics are:

* Entire shells: `s`, `p`, `d`, `f`

* `p`-states: `py`, `pz`, `px`

* `d`-states: `dxy`, `dyz`, `dz2`, `dxz`, `dx2-y2`

* `f`-states: `fy(3x2-y2)`, `fxyz`, `fyz2`, `fz3`,
  `fxz2`, `fz(x2-y2)`, `fx(x2-3y2)`.

For projector type `Pr 2`, one should also set `LORBIT = 14` in the INCAR file
and provide the parameters `EMIN` and `EMAX`, which define, in this case, an
energy range (energy window) corresponding to the valence states.
Note that, as in the case
of a DOS calculation, the position of the valence states depends on the
Fermi level, which can usually be found at the end of the OUTCAR file.

For example, in the case of SrVO3 one may first want to perform a self-consistent
calculation, then set `ICHARG = 1` and add the following
lines to INCAR (provided that V is the second ion in POSCAR):

| `EMIN = 3.0`
| `EMAX = 8.0`
| `LORBIT = 14`
| `LOCPROJ = 2 : d : Pr 2`

The energy range does not have to be precise. What matters is that it has a large
overlap with the valence bands and no overlap with semi-core or high-lying unoccupied states.

Conversion for the DMFT self-consistency cycle
----------------------------------------------

The projectors generated by VASP require certain post-processing before
they can be used for DMFT calculations. The most important step is to normalize
them within an energy window that selects the band states relevant for the impurity
problem. Note that this energy window is different from the one described above,
and it must be chosen independently of the energy
range given by `EMIN, EMAX` in INCAR.

Post-processing of the `LOCPROJ` data is generally done as follows:

#. Prepare an input file `<name>.cfg` (e.g., `plo.cfg`) that describes the definition
   of your impurity problem (more details below).

#. Extract the value of the Fermi level from OUTCAR and paste it at the end of
   the first line of LOCPROJ.

#. Run :program:`plovasp` with the input file as an argument, e.g.:

   | `plovasp plo.cfg`

   This requires that the TRIQS paths are set correctly (see the
   installation instructions for TRIQS).
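Step 2 can be automated with a few lines of Python. This is a sketch: the `E-fermi` pattern below is the usual form of the line printed in OUTCAR, and the sample file contents are hypothetical, so check against your own files:

```python
import re

def fermi_from_outcar(outcar_text):
    """Return the last Fermi level (in eV) printed by VASP in OUTCAR."""
    matches = re.findall(r"E-fermi\s*:\s*(-?[\d.]+)", outcar_text)
    return float(matches[-1])

def patch_locproj(locproj_text, efermi):
    """Append the Fermi level to the end of the first LOCPROJ line."""
    lines = locproj_text.splitlines()
    lines[0] += "  " + str(efermi)
    return "\n".join(lines)

# Hypothetical file contents for illustration:
outcar = "...\n E-fermi :   5.9628     XC(G=0): -9.1331\n"
locproj = "  1   432   21   5\nrest of the file\n"
print(patch_locproj(locproj, fermi_from_outcar(outcar)).splitlines()[0])
```

In practice you would read the real OUTCAR and LOCPROJ from disk and write the patched LOCPROJ back before running :program:`plovasp`.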

If everything goes right, one gets the files `<name>.ctrl` and `<name>.pg1`.
These files are needed by the converter that will be invoked in your
DMFT script.

The format of the input file `<name>.cfg` is described in detail in
the :ref:`User's Guide <plovasp>`. Here we just consider a simple example for the case
of SrVO3:

.. literalinclude:: images_scripts/srvo3.cfg

A projector shell is defined by a section `[Shell 1]`, where the number
can be arbitrary and is used only for the user's convenience. Several
parameters are required:

- **IONS**: list of site indices, which must be a subset of the indices
  given earlier in `LOCPROJ`.
- **LSHELL**: :math:`l` quantum number of the projector shell; the corresponding
  orbitals must be present in `LOCPROJ`.
- **EWINDOW**: energy window in which the projectors are normalized;
  note that the energies are defined with respect to the Fermi level.

The option **TRANSFORM** is optional, but here it is specified to extract
only the three :math:`t_{2g}` orbitals out of the five `d` orbitals given by
:math:`l = 2`.

The conversion to an h5 file is performed in the same way as for Wien2TRIQS::

    from triqs_dft_tools.converters.vasp_converter import *
    Converter = VaspConverter(filename = filename)
    Converter.convert_dft_input()

As usual, the resulting h5 file can then be used with the SumkDFT class.

Note that the automatic detection of the correct block structure might
fail for VASP inputs.
This can be circumvented by setting a larger threshold in
:class:`SumkDFT <dft.sumk_dft.SumkDFT>`, e.g.::

    SK.analyse_block_structure(threshold = 1e-4)

However, do this only after a careful study of the density matrix and
the projected DOS in the localized basis.

.. _convWien2k:

Interface with Wien2k
=====================

We assume that the user has obtained a self-consistent solution of the
Kohn-Sham equations. We also require that the user is
familiar with the main input/output files of Wien2k and with running
the DFT code.

Conversion for the DMFT self-consistency cycle
----------------------------------------------

First, we have to write the necessary
quantities into a file that can be processed further, by invoking in a
shell the command

`x lapw2 -almd`

Note that any other flag for lapw2, such as -c or -so (for
spin-orbit coupling), has to be added to this line as well. This creates
some files that we need for the Wannier orbital construction.

The orbital construction itself is done by the Fortran program
:program:`dmftproj`. For an extensive manual to this program see
:download:`TutorialDmftproj.pdf <images_scripts/TutorialDmftproj.pdf>`.
Here we will only describe the basic steps.

In the following, we use SrVO3 as an example to explain the
input file :file:`case.indmftpr` for :program:`dmftproj`.
A full tutorial on SrVO3 is available in the :ref:`SrVO3 tutorial <SrVO3>`.

.. literalinclude:: ../tutorials/images_scripts/SrVO3.indmftpr

The first three lines give the number of inequivalent sites, their
multiplicity (in accordance with the Wien2k *struct* file), and
the maximum orbital quantum number :math:`l_{max}`. In our case the
struct file contains the atoms in the order Sr, V, O.

Next we have to specify, for each of the inequivalent sites, whether
we want to treat their orbitals as correlated or not. This information
is given by the following 3 to 5 lines:

#. We specify which basis set is used (complex or cubic
   harmonics).
#. The four numbers refer to *s*, *p*, *d*, and *f* electrons,
   respectively. Putting 0 means doing nothing; putting 1 will calculate
   **unnormalized** projectors in compliance with the Wien2k
   definition. The important flag is 2, which means to include these
   electrons as correlated electrons and calculate normalized Wannier
   functions for them. In the example above, you see that we set the
   flag to 2 only for the vanadium *d* electrons. If you simply want to do
   a DMFT calculation, set everything to 0 except one flag 2 for the
   correlated electrons.
#. In case the correlated shell splits into irreps, you can
   specify here how many irreps there are. We put 2, since the
   eg and t2g symmetries are irreps in this cubic case. If you don't
   want to use this splitting, just put 0.
#. (optional) If you specified a number different from 0 in the line above,
   you now have to tell which of the irreps should be treated as
   correlated. We want the t2g and not the eg, so we set 0 for eg and
   1 for t2g. Note that the example above is what you need in 99% of
   the cases when you want to treat only t2g electrons. For eg's only
   (e.g. nickelates), you set 10 and 01 in this line.
#. (optional) If you have specified a correlated shell for this atom,
   you have to tell whether spin-orbit coupling should be taken into
   account: 0 means no, 1 means yes.

These lines have to be repeated for each inequivalent atom.

The last line gives the energy window, relative to the Fermi energy,
that is used for the projective Wannier functions. Note that, in
accordance with Wien2k, we give energies in Rydberg units!
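Since the window is expected in Rydberg while band energies are often quoted in eV, a quick conversion can save mistakes. The window values below are hypothetical; only the Ry-to-eV constant is physical:

```python
RY_IN_EV = 13.605693  # 1 Rydberg in eV

def ev_to_ry(energy_ev):
    """Convert an energy given in eV to Rydberg."""
    return energy_ev / RY_IN_EV

# Hypothetical projective window of [-2 eV, +3 eV] around the Fermi level:
window_ry = tuple(round(ev_to_ry(e), 3) for e in (-2.0, 3.0))
print(window_ry)  # (-0.147, 0.22)
```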

After setting up the :file:`case.indmftpr` input file, you run:

`dmftproj`

again adding possible flags like -so for spin-orbit coupling. This
program produces the following files (in the following, take *case* as
the standard Wien2k placeholder, to be replaced by the actual working
directory name):

* :file:`case.ctqmcout` and :file:`case.symqmc` containing projector
  operators and symmetry operations for orthonormalized Wannier
  orbitals, respectively.
* :file:`case.parproj` and :file:`case.sympar` containing projector
  operators and symmetry operations for uncorrelated states,
  respectively. These files are needed only for projected
  density-of-states or spectral-function calculations in
  post-processing.
* :file:`case.oubwin`, needed for the charge-density recalculation in
  the case of a fully self-consistent DFT+DMFT run (see below).

Now we convert these files into an hdf5 file that can be used for the
DMFT calculations. For this purpose we
use the python module :class:`Wien2kConverter <dft.converters.wien2k_converter.Wien2kConverter>`. It is initialized as::

    from triqs_dft_tools.converters.wien2k_converter import *
    Converter = Wien2kConverter(filename = case)

The only necessary parameter to this constructor is `filename`.
It has to be the root of the files produced by dmftproj. For our
example, the :program:`Wien2k` naming convention is that all files have the
same name but different extensions, :file:`case.*`. The constructor opens
an hdf5 archive, named :file:`case.h5`, where all relevant data will be
stored. For other parameters of the constructor please visit the
:ref:`refconverters` section of the reference manual.

After initializing the interface module, we can now convert the input
text files to the hdf5 archive by::

    Converter.convert_dft_input()

This reads all the data and stores it in the file :file:`case.h5`.
In this step, the files :file:`case.ctqmcout` and
:file:`case.symqmc` have to be present in the working directory.

After this step, all the necessary information for the DMFT loop is
stored in the hdf5 archive, and the string variable
`Converter.hdf_filename` gives the file name of the archive.

At this point you should use the method :meth:`dos_wannier_basis <dft.sumk_dft_tools.SumkDFTTools.dos_wannier_basis>`
contained in the module :class:`SumkDFTTools <dft.sumk_dft_tools.SumkDFTTools>` to check the density of
states of the Wannier orbitals (see :ref:`analysis`).

You now have everything needed to perform a DMFT calculation, and you can
proceed with the section on :ref:`single-shot DFT+DMFT calculations <singleshot>`.

Data for post-processing
------------------------

In case you want to post-process your data using the module
:class:`SumkDFTTools <dft.sumk_dft_tools.SumkDFTTools>`, some more files
have to be converted to the hdf5 archive. For instance, for
calculating the partial density of states or partial charges
consistent with the definition of :program:`Wien2k`, you have to invoke::

    Converter.convert_parproj_input()

This reads and converts the files :file:`case.parproj` and
:file:`case.sympar`.

If you want to plot band structures, proceed as
follows. First, do the Wien2k calculation on the given
:math:`\mathbf{k}`-path, and run :program:`dmftproj` on that path:

| `x lapw1 -band`
| `x lapw2 -band -almd`
| `dmftproj -band`

again possibly with additional flags according to
Wien2k. Now we use a routine of the converter module that allows one to read
and convert the input for :class:`SumkDFTTools <dft.sumk_dft_tools.SumkDFTTools>`::

    Converter.convert_bands_input()

After having converted this input, you can proceed with the
:ref:`analysis`. For more options of the converter module, please have
a look at the :ref:`refconverters` section of the reference manual.

Data for transport calculations
-------------------------------

For transport calculations the situation is a bit more involved,
since we also need the :program:`optics` package of Wien2k. Please
look at the section on :ref:`Transport` to see how to do the necessary
steps, including the conversion.

.. _conversion:

Supported interfaces
====================

The first step for a DMFT calculation is to provide the necessary
input based on a DFT calculation. We will not review how to do the DFT
calculation here in this documentation, but refer the user to the
documentation and tutorials that come with the actual DFT
package. At the moment, there are two fully charge self-consistent interfaces,
for the Wien2k and VASP DFT packages, respectively. In addition, there is an
interface to Wannier90, as well as a light-weight general-purpose interface.
In the following, we describe the usage of these conversion tools.
|
||||
Interface with Wien2k
|
||||
---------------------
|
||||
|
||||
We assume that the user has obtained a self-consistent solution of the
|
||||
Kohn-Sham equations. We further have to require that the user is
|
||||
familiar with the main in/output files of Wien2k, and how to run
|
||||
the DFT code.
|
||||
|
||||
Conversion for the DMFT self-consistency cycle
|
||||
""""""""""""""""""""""""""""""""""""""""""""""
|
||||
|
||||
First, we have to write the necessary
|
||||
quantities into a file that can be processed further by invoking in a
|
||||
shell the command
|
||||
|
||||
`x lapw2 -almd`
|
||||
|
||||
We note that any other flag for lapw2, such as -c or -so (for
|
||||
spin-orbit coupling) has to be added also to this line. This creates
|
||||
some files that we need for the Wannier orbital construction.
|
||||
|
||||
The orbital construction itself is done by the Fortran program
|
||||
:program:`dmftproj`. For an extensive manual to this program see
|
||||
:download:`TutorialDmftproj.pdf <images_scripts/TutorialDmftproj.pdf>`.
|
||||
Here we will only describe the basic steps.
|
||||
|
||||
Let us take the compound SrVO3, a commonly used
|
||||
example for DFT+DMFT calculations. The input file for
|
||||
:program:`dmftproj` looks like
|
||||
|
||||
.. literalinclude:: images_scripts/SrVO3.indmftpr
|
||||
|
||||
The first three lines give the number of inequivalent sites, their
|
||||
multiplicity (to be in accordance with the Wien2k *struct* file) and
|
||||
the maximum orbital quantum number :math:`l_{max}`. In our case our
|
||||
struct file contains the atoms in the order Sr, V, O.
|
||||
|
||||
Next we have to
|
||||
specify for each of the inequivalent sites, whether we want to treat
|
||||
their orbitals as correlated or not. This information is given by the
|
||||
following 3 to 5 lines:
|
||||
|
||||
#. We specify which basis set is used (complex or cubic
|
||||
harmonics).
|
||||
#. The four numbers refer to *s*, *p*, *d*, and *f* electrons,
|
||||
resp. Putting 0 means doing nothing, putting 1 will calculate
|
||||
**unnormalized** projectors in compliance with the Wien2k
|
||||
definition. The important flag is 2, this means to include these
|
||||
electrons as correlated electrons, and calculate normalized Wannier
|
||||
functions for them. In the example above, you see that only for the
|
||||
vanadium *d* we set the flag to 2. If you want to do simply a DMFT
|
||||
calculation, then set everything to 0, except one flag 2 for the
|
||||
correlated electrons.
|
||||
#. In case you have a irrep splitting of the correlated shell, you can
|
||||
specify here how many irreps you have. You see that we put 2, since
|
||||
eg and t2g symmetries are irreps in this cubic case. If you don't
|
||||
want to use this splitting, just put 0.
|
||||
#. (optional) If you specifies a number different from 0 in above line, you have
|
||||
to tell now, which of the irreps you want to be treated
|
||||
correlated. We want to t2g, and not the eg, so we set 0 for eg and
|
||||
1 for t2g. Note that the example above is what you need in 99% of
|
||||
the cases when you want to treat only t2g electrons. For eg's only
|
||||
(e.g. nickelates), you set 10 and 01 in this line.
|
||||
#. (optional) If you have specified a correlated shell for this atom,
|
||||
you have to tell if spin-orbit coupling should be taken into
|
||||
account. 0 means no, 1 is yes.
|
||||
|
||||
These lines have to be repeated for each inequivalent atom.
|
||||
|
||||
The last line gives the energy window, relative to the Fermi energy,
|
||||
that is used for the projective Wannier functions. Note that, in
|
||||
accordance with Wien2k, we give energies in Rydberg units!
|
||||
|
||||
After setting up this input file, you run:
|
||||
|
||||
`dmftproj`
|
||||
|
||||
Again, adding possible flags like -so for spin-orbit coupling. This
|
||||
program produces the following files (in the following, take *case* as
|
||||
the standard Wien2k place holder, to be replaced by the actual working
|
||||
directory name):
|
||||
|
||||
* :file:`case.ctqmcout` and :file:`case.symqmc` containing projector
|
||||
operators and symmetry operations for orthonormalized Wannier
|
||||
orbitals, respectively.
|
||||
* :file:`case.parproj` and :file:`case.sympar` containing projector
|
||||
operators and symmetry operations for uncorrelated states,
|
||||
respectively. These files are needed for projected
|
||||
density-of-states or spectral-function calculations in
|
||||
post-processing only.
|
||||
* :file:`case.oubwin` needed for the charge density recalculation in
|
||||
the case of fully self-consistent DFT+DMFT run (see below).
|
||||
|
||||
Now we convert these files into an hdf5 file that can be used for the
|
||||
DMFT calculations. For this purpose we
|
||||
use the python module :class:`Wien2kConverter <dft.converters.wien2k_converter.Wien2kConverter>`. It is initialized as::
|
||||
|
||||
from triqs_dft_tools.converters.wien2k_converter import *
|
||||
Converter = Wien2kConverter(filename = case)
|
||||
|
||||
The only necessary parameter to this construction is the parameter `filename`.
|
||||
It has to be the root of the files produces by dmftproj. For our
|
||||
example, the :program:`Wien2k` naming convention is that all files are
|
||||
called the same, for instance
|
||||
:file:`SrVO3.*`, so you would give `filename = "SrVO3"`. The constructor opens
|
||||
an hdf5 archive, named :file:`case.h5`, where all the data is
|
||||
stored. For other parameters of the constructor please visit the
|
||||
:ref:`refconverters` section of the reference manual.
|
||||
|
||||
After initializing the interface module, we can now convert the input
|
||||
text files to the hdf5 archive by::
|
||||
|
||||
Converter.convert_dft_input()
|
||||
|
||||
This reads all the data, and stores it in the file :file:`case.h5`.
|
||||
In this step, the files :file:`case.ctqmcout` and
|
||||
:file:`case.symqmc`
|
||||
have to be present in the working directory.
|
||||
|
||||
After this step, all the necessary information for the DMFT loop is
|
||||
stored in the hdf5 archive, where the string variable
|
||||
`Converter.hdf_filename` gives the file name of the archive.
|
||||
|
||||
At this point you should use the method :meth:`dos_wannier_basis <dft.sumk_dft_tools.SumkDFTTools.dos_wannier_basis>`
|
||||
contained in the module :class:`SumkDFTTools <dft.sumk_dft_tools.SumkDFTTools>` to check the density of
|
||||
states of the Wannier orbitals (see :ref:`analysis`).
|
||||
|
||||
You have now everything for performing a DMFT calculation, and you can
|
||||
proceed with the section on :ref:`single-shot DFT+DMFT calculations <singleshot>`.
|
||||
|
||||
Data for post-processing
|
||||
""""""""""""""""""""""""
|
||||
|
||||
In case you want to do post-processing of your data using the module
|
||||
:class:`SumkDFTTools <dft.sumk_dft_tools.SumkDFTTools>`, some more files
|
||||
have to be converted to the hdf5 archive. For instance, for
|
||||
calculating the partial density of states or partial charges
|
||||
consistent with the definition of :program:`Wien2k`, you have to invoke::
|
||||
|
||||
Converter.convert_parproj_input()
|
||||
|
||||
This reads and converts the files :file:`case.parproj` and
|
||||
:file:`case.sympar`.
|
||||
|
||||
If you want to plot band structures, one has to do the
|
||||
following. First, one has to do the Wien2k calculation on the given
|
||||
:math:`\mathbf{k}`-path, and run :program:`dmftproj` on that path:
|
||||
|
||||
| `x lapw1 -band`
|
||||
| `x lapw2 -band -almd`
|
||||
| `dmftproj -band`
|
||||
|
||||
|
||||
Again, maybe with the optional additional extra flags according to
|
||||
Wien2k. Now we use a routine of the converter module allows to read
|
||||
and convert the input for :class:`SumkDFTTools <dft.sumk_dft_tools.SumkDFTTools>`::
|
||||
|
||||
Converter.convert_bands_input()
|
||||
|
||||
After having converted this input, you can further proceed with the
|
||||
:ref:`analysis`. For more options on the converter module, please have
|
||||
a look at the :ref:`refconverters` section of the reference manual.
|
||||
|
||||
Data for transport calculations
|
||||
"""""""""""""""""""""""""""""""
|
||||
|
||||
For the transport calculations, the situation is a bit more involved,
|
||||
since we need also the :program:`optics` package of Wien2k. Please
|
||||
look at the section on :ref:`Transport` to see how to do the necessary
|
||||
steps, including the conversion.
|
||||
|
||||
Interface with VASP
|
||||
---------------------
|
||||
|
||||
.. warning::
|
||||
The VASP interface is in the alpha-version and the VASP part of it is not
|
||||
yet publicly released. The documentation may, thus, be subject to changes
|
||||
before the final release.
|
||||
|
||||
The interface with VASP relies on new options introduced since
|
||||
version 5.4.x. The output of raw (non-normalized) projectors is
|
||||
controlled by an INCAR option LOCPROJ whose complete syntax is described in
|
||||
VASP documentaion.
|
||||
|
||||
The definition of a projector set starts with specifying which sites
|
||||
and which local states we are going to project onto.
|
||||
This information is provided by option LOCPROJ
|
||||
|
||||
| `LOCPROJ = <sites> : <shells> : <projector type>`
|
||||
|
||||
where `<sites>` represents a list of site indices separated by spaces,
|
||||
with the indices corresponding to the site position in the POSCAR file;
|
||||
`<shells>` specifies local states (e.g. :math:`s`, :math:`p`, :math:`d`,
|
||||
:math:`d_{x^2-y^2}`, etc.);
|
||||
`<projector type>` chooses a particular type of the local basis function.
|
||||
|
||||
Some projector types also require parameters `EMIN`, `EMAX` in INCAR to
|
||||
be set to define an (approximate) energy window corresponding to the
|
||||
valence states.
|
||||
|
||||
When either a self-consistent (`ICHARG < 10`) or a non-self-consistent
|
||||
(`ICHARG >= 10`) calculation is done VASP produces file `LOCPROJ` which
|
||||
will serve as the main input for the conversion routine.

Conversion for the DMFT self-consistency cycle
""""""""""""""""""""""""""""""""""""""""""""""

In order to use the projectors generated by VASP for defining an
impurity problem, they must be processed, i.e. normalized, possibly
transformed, and then converted to a format suitable for the DFTTools scripts.

The processing of projectors is performed by the program :program:`plovasp`,
invoked as

| `plovasp <plo.cfg>`

where `<plo.cfg>` is an input file controlling the conversion of the projectors.

The format of the input file `<plo.cfg>` is described in detail in
:ref:`plovasp`. Here we just give a simple example for the case
of SrVO3:

.. literalinclude:: images_scripts/srvo3.cfg

A projector shell is defined by a section `[Shell 1]`, where the number
can be arbitrary and is used only for the user's convenience. Several
parameters are required:

- **IONS**: list of site indices, which must be a subset of the indices
  given earlier in `LOCPROJ`.
- **LSHELL**: :math:`l` quantum number of the projector shell; the corresponding
  orbitals must be present in `LOCPROJ`.
- **EWINDOW**: energy window in which the projectors are normalized;
  note that the energies are defined with respect to the Fermi level.

The option **TRANSFORM** is optional, but here it is specified to extract
only the three :math:`t_{2g}` orbitals out of the five `d` orbitals given by
:math:`l = 2`.

A general H(k)
--------------

In addition to the more complicated Wien2k converter,
:program:`DFTTools` also contains a light converter. It takes only
one input file and creates the necessary hdf output file for
the DMFT calculation. The header of this input file has a defined
format, an example of which is the following:

.. literalinclude:: images_scripts/case.hk

The lines of this header define

#. Number of :math:`\mathbf{k}`-points used in the calculation.

#. Electron density for setting the chemical potential.

#. Number of total atomic shells in the hamiltonian matrix. In short,
   this gives the number of lines described in the following. In the
   example file given above this number is 2.

#. The next line(s) contain four numbers each: index of the atom, index
   of the equivalent shell, :math:`l` quantum number, and dimension
   of this shell. Repeat this line for each atomic shell; the number
   of shells is given in the previous line.

In the example input file given above, we have two inequivalent
atomic shells, one on atom number 1 with a full d-shell (dimension 5),
and one on atom number 2 with one p-shell (dimension 3).

Other examples for these lines are:

#. Full d-shell in a material with only one correlated atom in the
   unit cell (e.g. SrVO3). One line is sufficient, and the numbers
   are `1 1 2 5`.

#. Full d-shell in a material with two equivalent atoms in the unit
   cell (e.g. FeSe): You need two lines, one for each equivalent
   atom. The first line is `1 1 2 5`, and the second line is
   `2 1 2 5`. The only difference is the first number, which tells on
   which atom the shell is located. The second number is the
   same in both lines, meaning that both atoms are equivalent.

#. t2g orbitals on two non-equivalent atoms in the unit cell: Two
   lines again. The first line is `1 1 2 3`, the second line `2 2 2 3`. The
   difference to the case above is that now also the second number
   differs. Therefore, the two shells are treated independently in
   the calculation.

#. d-p Hamiltonian in a system with two equivalent atoms each in
   the unit cell (e.g. FeSe has two Fe and two Se in the unit
   cell). You need four lines. The first line is `1 1 2 5`, the second
   line `2 1 2 5`. These two lines specify Fe as in the case above. For the p
   orbitals you need line three as `3 2 1 3` and line four
   as `4 2 1 3`. We have 4 atoms, since the first number runs from 1 to 4,
   but only two inequivalent atoms, since the second number runs
   only from 1 to 2.

Note that the total dimension of the hamiltonian matrices that are
read in is the sum of all shell dimensions that you specified. For
example number 4 given above, we have a dimension of 5+5+3+3=16. It is
important that the order of the shells that you give here is the same as
the order of the orbitals in the hamiltonian matrix. In the last
example case above, the code assumes that matrix indices 1 to 5
belong to the first d-shell, 6 to 10 to the second, 11 to 13 to
the first p-shell, and 14 to 16 to the second p-shell.

#. Number of correlated shells in the hamiltonian matrix, in the same
   spirit as line 3.

#. The next line(s) contain six numbers: index of the atom, index
   of the equivalent shell, :math:`l` quantum number, dimension
   of the correlated shell, a spin-orbit parameter, and another
   parameter defining interactions. Note that the latter two
   parameters are not used at the moment by the code and are kept
   only for compatibility reasons. In our example file we use only the
   d-shell as correlated; that is why we have only one line here.

#. The last line contains several numbers: the number of irreducible
   representations, followed by the dimensions of the irreps. One
   possibility is as in the example above; another one would be `2 2 3`,
   meaning 2 irreps (eg and t2g) of dimensions 2 and 3,
   respectively.

After these header lines, the file has to contain the hamiltonian
matrix in orbital space. The standard convention is that for
each :math:`\mathbf{k}`-point you first give the matrix of the real part, then the
matrix of the imaginary part, and then move on to the next :math:`\mathbf{k}`-point.

The converter itself is used as::

   from triqs_dft_tools.converters.hk_converter import *
   Converter = HkConverter(filename = hkinputfile)
   Converter.convert_dft_input()

where :file:`hkinputfile` is the name of the input file described
above. This produces the hdf file that you need for a DMFT calculation.
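
To make the file layout concrete, here is a small self-contained Python sketch
(not part of :program:`DFTTools`; the one-band model and all values are made up
for illustration) that writes such a file for a 1D tight-binding chain,
following the header items described above::

   import math

   def write_hk(filename, nk=8, t=1.0):
       # Toy one-band model: eps(k) = -2 t cos(k) on a 1D chain.
       with open(filename, "w") as f:
           f.write("%d\n" % nk)          # 1: number of k-points
           f.write("1.0\n")              # 2: electron density
           f.write("1\n")                # 3: number of atomic shells
           f.write("1 1 0 1\n")          # 4: atom 1, eq. shell 1, l=0, dim 1
           f.write("1\n")                # 5: number of correlated shells
           f.write("1 1 0 1 0 0\n")      # 6: same shell, two unused flags
           f.write("1 1\n")              # 7: one irrep of dimension 1
           for ik in range(nk):
               eps = -2.0 * t * math.cos(2.0 * math.pi * ik / nk)
               f.write("%.10f\n" % eps)  # real part of the 1x1 matrix
               f.write("0.0\n")          # imaginary part of the 1x1 matrix

A file produced this way could then be passed to `HkConverter` as the
`hkinputfile`; the point of the sketch is only to illustrate the ordering of
the header lines and of the real/imaginary matrix blocks.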

For more options of this converter, have a look at the
:ref:`refconverters` section of the reference manual.

Wannier90 Converter
-------------------

Using this converter it is possible to convert the output of
`wannier90 <http://wannier.org>`_
Maximally Localized Wannier Functions (MLWF) and create a HDF5 archive
suitable for one-shot DMFT calculations with the
:class:`SumkDFT <dft.sumk_dft.SumkDFT>` class.

The user must supply two files in order to run the Wannier90 Converter:

#. The file :file:`seedname_hr.dat`, which contains the DFT Hamiltonian
   in the MLWF basis calculated through :program:`wannier90` with ``hr_plot = true``
   (please refer to the :program:`wannier90` documentation).

#. A file named :file:`seedname.inp`, which contains the required
   information about the :math:`\mathbf{k}`-point mesh, the electron density,
   the correlated shell structure, ... (see below).

Here and in the following, the keyword ``seedname`` should always be understood
as a placeholder for the actual prefix chosen by the user when creating the
input for :program:`wannier90`.
Once these two files are available, one can use the converter as follows::

   from triqs_dft_tools.converters import Wannier90Converter
   Converter = Wannier90Converter(seedname='seedname')
   Converter.convert_dft_input()

The converter input :file:`seedname.inp` is a simple text file with
the following format:

.. literalinclude:: images_scripts/LaVO3_w90.inp

The example shows the input for the perovskite crystal of LaVO\ :sub:`3`
in the room-temperature `Pnma` symmetry. The unit cell contains four
symmetry-equivalent correlated sites (the V atoms) and the total number
of electrons per unit cell is 8 (see second line).
The first line specifies how to generate the :math:`\mathbf{k}`-point
mesh that will be used to obtain :math:`H(\mathbf{k})`
by Fourier transforming :math:`H(\mathbf{R})`.
Currently implemented options are:

* :math:`\Gamma`-centered uniform grid with dimensions
  :math:`n_{k_x} \times n_{k_y} \times n_{k_z}`;
  specify ``0`` followed by the three grid dimensions,
  as in the example above
* :math:`\Gamma`-centered uniform grid with dimensions
  automatically determined by the converter (from the number of
  :math:`\mathbf{R}` vectors found in :file:`seedname_hr.dat`);
  just specify ``-1``

Inside :file:`seedname.inp`, it is crucial to correctly specify the
correlated shell structure, which depends on the contents of the
:program:`wannier90` output :file:`seedname_hr.dat` and on the order
of the MLWFs contained in it.

The number of MLWFs must be equal to or greater than the total number
of correlated orbitals (i.e., the sum of all ``dim`` in :file:`seedname.inp`).
If the converter finds fewer MLWFs inside :file:`seedname_hr.dat`, then it
stops with an error; if it finds more MLWFs, then it assumes that the
additional MLWFs correspond to uncorrelated orbitals (e.g., the O-\ `2p` shells).
When reading the hoppings :math:`\langle w_i | H(\mathbf{R}) | w_j \rangle`
(where :math:`w_i` is the :math:`i`-th MLWF), the converter also assumes that
the first indices correspond to the correlated shells (in our example,
the V-t\ :sub:`2g` shells). Therefore, the MLWFs corresponding to the
uncorrelated shells (if present) must be listed **after** those of the
correlated shells.
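
One can sanity-check this consistency between :file:`seedname.inp` and
:file:`seedname_hr.dat` before running the converter. The following standalone
sketch (illustrative, not part of the converter) reads the number of MLWFs from
the second line of a :file:`seedname_hr.dat` file, which is where
:program:`wannier90` stores it, and compares it with the expected total number
of correlated orbitals::

   def check_num_wann(hr_filename, total_corr_dim):
       # Line 1 of seedname_hr.dat is a comment/date stamp;
       # line 2 holds the number of Wannier functions.
       with open(hr_filename) as f:
           f.readline()                       # skip the comment line
           num_wann = int(f.readline().split()[0])
       if num_wann < total_corr_dim:
           raise ValueError("fewer MLWFs (%d) than correlated orbitals (%d)"
                            % (num_wann, total_corr_dim))
       # num_wann > total_corr_dim is fine: the extra MLWFs are taken
       # to be uncorrelated orbitals listed after the correlated ones.
       return num_wann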

With the :program:`wannier90` code, this can be achieved by listing the
projections for the uncorrelated shells after those for the correlated shells.
In our `Pnma`-LaVO\ :sub:`3` example, for instance, we could use::

   Begin Projections
   V:l=2,mr=2,3,5:z=0,0,1:x=-1,1,0
   O:l=1:mr=1,2,3:z=0,0,1:x=-1,1,0
   End Projections

where the ``x=-1,1,0`` option indicates that the V--O bonds in the octahedra are
rotated by (approximately) 45 degrees with respect to the axes of the `Pbnm` cell.

The converter will analyze the matrix elements of the local Hamiltonian
to find the symmetry matrices `rot_mat` needed for the global-to-local
transformation of the basis set for correlated orbitals
(see section :ref:`hdfstructure`).
The matrices are obtained by finding the unitary transformations that diagonalize
:math:`\langle w_i | H_I(\mathbf{R}=0,0,0) | w_j \rangle`, where :math:`I` runs
over the correlated shells and `i,j` belong to the same shell (more details elsewhere...).
If two correlated shells are defined as equivalent in :file:`seedname.inp`,
then the corresponding eigenvalues have to match within a threshold of 10\ :sup:`-5`,
otherwise the converter will produce an error/warning.
If this happens, please carefully check your data in :file:`seedname_hr.dat`.
This method might fail in non-trivial cases (i.e., when more than one correlated
shell is present) if there are degenerate eigenvalues:
so far tests have not shown any issue, but one must be careful in those cases
(the converter will print a warning message).
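
The equivalence criterion itself can be illustrated in a few lines of
standalone Python (a sketch of the idea, not the converter's actual code):
diagonalize the local Hamiltonian blocks of two shells and compare the sorted
eigenvalues within the 10\ :sup:`-5` threshold::

   import numpy as np

   def shells_equivalent(h1, h2, threshold=1e-5):
       # h1, h2: local Hamiltonian blocks <w_i|H(R=0)|w_j> of two shells.
       ev1 = np.sort(np.linalg.eigvalsh(h1))
       ev2 = np.sort(np.linalg.eigvalsh(h2))
       return bool(np.all(np.abs(ev1 - ev2) < threshold))

Two shells declared equivalent in :file:`seedname.inp` should pass this test;
degenerate eigenvalues are precisely the situation where determining the full
`rot_mat` can become ambiguous.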

The current implementation of the Wannier90 Converter has some limitations:

* Since :program:`wannier90` does not make use of symmetries (symmetry reduction
  of the :math:`\mathbf{k}`-point grid is not possible), the converter always
  sets ``symm_op=0`` (see the :ref:`hdfstructure` section).
* No charge self-consistency is possible at the moment.
* Calculations with spin-orbit coupling (``SO=1``) are not supported.
* The spin-polarized case (``SP=1``) is not yet tested.
* The post-processing routines in the module
  :class:`SumkDFTTools <dft.sumk_dft_tools.SumkDFTTools>`
  have not been tested with this converter.
* ``proj_mat_all`` is not used, so there are no projectors onto the
  uncorrelated orbitals for now.

MPI issues
----------

The interface packages are written such that all the file operations
are done only on the master node. In general, the philosophy of the
package is that whenever you read something from the archive
yourself, you have to *manually* broadcast it to the nodes. An
exception to this rule is when you use routines from :class:`SumkDFT <dft.sumk_dft.SumkDFT>`
or :class:`SumkDFTTools <dft.sumk_dft_tools.SumkDFTTools>`, where the broadcasting is done for you.


Interfaces to other packages
----------------------------

Because of the modular structure, it is straightforward to extend the :ref:`TRIQS <triqslibs:welcome>` package
in order to work with other band-structure codes. The only necessary requirement is that

Wien2k + dmftproj
-----------------

.. warning::
   Before using this tool, you should be familiar with the band-structure package Wien2k, since
   the calculation is controlled by the Wien2k scripts! Be
   sure that you also understand how :program:`dmftproj` is used to
   construct the Wannier functions. For this step, see either the section
   :ref:`conversion` or the extensive :download:`dmftproj manual<images_scripts/TutorialDmftproj.pdf>`.

In the following, we discuss how to use the
:ref:`TRIQS <triqslibs:installation>` tools in combination with the Wien2k program.

We can use the DMFT script as introduced in section :ref:`singleshot`,
with just a few simple modifications. First, in order to be compatible with the Wien2k standards,
the DMFT script has to be named :file:`case.py`, where `case` is the placeholder name of the Wien2k
calculation; see the section :ref:`conversion` for details. We can then set the variable `dft_filename` dynamically::

   import os
   dft_filename = os.getcwd().rpartition('/')[2]

This sets `dft_filename` to the name of the current directory. The
remaining part of the script is identical to
that for one-shot calculations. Only at the very end do we have to calculate the modified charge density
and store it in a format that Wien2k can read. Therefore, after the DMFT loop that we saw in the
previous section, we symmetrise the self energy and recalculate the impurity Green function::

   SK.symm_deg_gf(S.Sigma,orb=0)
   S.G_iw << inverse(S.G0_iw) - S.Sigma_iw
   S.G_iw.invert()

These steps are not necessary, but can help to reduce fluctuations in the total energy.
Now we calculate the modified charge density::

   # find exact chemical potential
   dN, d = SK.calc_density_correction(filename = dft_filename+'.qdmft')
   SK.save(['chemical_potential','dc_imp','dc_energ'])

First we find the chemical potential with high precision, and after that the routine
``SK.calc_density_correction(filename)`` calculates the density matrix including correlation effects. The result
is stored in the file `dft_filename.qdmft`, which is later read by the Wien2k program. The last statement saves
the chemical potential into the hdf5 archive.

We also need the correlation energy, which we evaluate by the Migdal formula.
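
In the Migdal formula the correlation energy is
:math:`E_{\rm corr} = \frac{1}{2} \mathrm{Tr} \left( \Sigma G \right)`.
With the objects of the script above this can be evaluated along the following
lines (a sketch; check the actual script for the exact statement)::

   correnerg = 0.5 * (S.G_iw * S.Sigma_iw).total_density()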

The above steps are valid for a calculation with only one correlated atom in the
unit cell, where you will apply this method. That is the reason why we give the index `0` in the list `SK.dc_energ`.
If you have more than one correlated atom in the unit cell, but all of them
are equivalent atoms, you have to multiply the `correnerg` by their multiplicity before writing it to the file.
The multiplicity is easily found in the main input file of the Wien2k package, i.e. `case.struct`. In the case of
non-equivalent atoms, the correlation energy has to be calculated for
all of them separately and summed up.

As mentioned above, the calculation is controlled by the Wien2k scripts and not by :program:`python`
routines. You should think of replacing the lapw2 part of the Wien2k self-consistency cycle by

| `lapw2 -almd`
| `dmftproj`
| `python case.py`
| `lapw2 -qdmft`

In other words, for the calculation of the density matrix in lapw2, we
Therefore, at the command line, you start your calculation for instance by:

`me@home $ run -qdmft 1 -i 10`

The flag `-qdmft` tells the Wien2k script that the density
matrix including correlation effects is to be read in from the
`case.qdmft` file, and that you want the code to run on one computing
core only. Moreover, we ask for 10 self-consistency iterations to be
performed.

For a parallel run, you pass the
number of nodes to be used:

`me@home $ run -qdmft 64 -i 10`

In that case, you will run on 64 computing cores. As standard setting,
we use `mpirun` as the proper MPI execution statement. If you happen
to have a different, non-standard MPI setup, you have to give the
proper MPI execution statement in the `run_lapw` script (see the
corresponding Wien2k documentation).

In many cases it is advisable to start from a converged one-shot
calculation. For practical purposes, you keep the number of DMFT loops
within one DFT cycle low, or even at `loops=1`. If you encounter
unstable convergence, you have to adjust parameters such as
the number of DMFT loops, or some mixing of the self energy, to improve
the convergence.

In the section :ref:`DFTDMFTtutorial` we will see in a detailed
example how such a self-consistent calculation is performed from scratch.

The next step is to set up an impurity solver. There are different
solvers available within the :ref:`TRIQS <triqslibs:welcome>` framework.
E.g. for :ref:`SrVO3 <SrVO3>`, we will use the hybridization
expansion :ref:`CTHYB solver <triqscthyb:welcome>`. Later on, we will
also see the example of the `Hubbard-I solver <https://triqs.github.io/triqs/1.4/applications/hubbardI/>`_.
They all have in common that they are called by a uniform command::

   S.solve(params)

The possible values
for :emphasis:`use_dc_formula` are:

* `1`: DC formula as given in K. Held, Adv. Phys. 56, 829 (2007).
* `2`: Around-mean-field (AMF)

At the end of the calculation, we can save the Green function and self energy into a file::

   from pytriqs.archive import HDFArchive
   import pytriqs.utility.mpi as mpi

First we check whether previous runs
are present, or if the calculation should start from scratch::

   previous_runs = 0
   previous_present = False
   if mpi.is_master_node():
       with HDFArchive(dft_filename+'.h5','a') as f:
           if 'dmft_output' in f:
               ar = f['dmft_output']
               if 'iterations' in ar:
                   previous_present = True
                   previous_runs = ar['iterations']
           else:
               f.create_group('dmft_output')

   previous_runs = mpi.bcast(previous_runs)
   previous_present = mpi.bcast(previous_present)

If a previous run is present, we read in the self energy and the
double counting values of the last iteration::

   if previous_present:
       if mpi.is_master_node():
           with HDFArchive(dft_filename+'.h5','r') as ar:
               S.Sigma_iw << ar['dmft_output']['Sigma_iw']

       S.Sigma_iw << mpi.bcast(S.Sigma_iw)
       chemical_potential,dc_imp,dc_energ = SK.load(['chemical_potential','dc_imp','dc_energ'])

Be careful when storing the :emphasis:`iteration_number`, as we also have to add the previous
iteration count::

   ar['dmft_output']['iterations'] = iteration_number + previous_runs


.. _mixing:

Some mixing of the self energy (and of the Green
functions) can be necessary in order to ensure convergence::

   mix = 0.8 # mixing factor
   if (iteration_number>1 or previous_present):
       if mpi.is_master_node():
           with HDFArchive(dft_filename+'.h5','r') as ar:
               mpi.report("Mixing Sigma and G with factor %s"%mix)
               S.Sigma_iw << mix * S.Sigma_iw + (1.0-mix) * ar['dmft_output']['Sigma_iw']
               S.G_iw << mix * S.G_iw + (1.0-mix) * ar['dmft_output']['G_iw']
       S.G_iw << mpi.bcast(S.G_iw)
       S.Sigma_iw << mpi.bcast(S.Sigma_iw)

.. _DFTDMFTtutorial:

DFT+DMFT tutorial: Ce with Hubbard-I approximation
==================================================

In this tutorial we will perform DFT+DMFT :program:`Wien2k`
calculations from scratch, including all steps described in the
previous sections. As an example, we take the high-temperature
:math:`\gamma`-phase of Ce, employing the Hubbard-I approximation for
its localized *4f* shell.

Wien2k setup
------------

First we create the Wien2k :file:`Ce-gamma.struct` file as described in the `Wien2k manual <http://www.wien2k.at/reg_user/textbooks/usersguide.pdf>`_
for the :math:`\gamma`-Ce fcc structure with a lattice parameter of 9.75 a.u.

.. literalinclude:: images_scripts/Ce-gamma.struct

We initialize non-magnetic :program:`Wien2k` calculations using the :program:`init` script as described in the same manual.
For this example we specify 3000 :math:`\mathbf{k}`-points in the full Brillouin zone
and the LDA exchange-correlation potential (*vxc=5*); other parameters are defaults.
The Ce *4f* electrons are treated as valence states.
Hence, the initialization script is executed as follows ::

   init -b -vxc 5 -numk 3000

and then LDA calculations of non-magnetic :math:`\gamma`-Ce are performed by launching the :program:`Wien2k` :program:`run` script.
These self-consistent LDA calculations will typically take a couple of minutes.

Wannier orbitals: dmftproj
--------------------------

Then we create the :file:`Ce-gamma.indmftpr` file specifying the parameters for the construction of the Wannier orbitals representing the *4f* states:

.. literalinclude:: images_scripts/Ce-gamma.indmftpr

As we learned in the section :ref:`conversion`, the first three lines
give the number of inequivalent sites, their multiplicity (to be in
accordance with the *struct* file) and the maximum orbital quantum
number :math:`l_{max}`.
The following four lines describe the treatment of the Ce *spdf* orbitals by the :program:`dmftproj` program::

   complex
   1 1 1 2 ! l included for each sort
   0 0 0 0 ! l included for each sort
   0

where `complex` is the choice for the angular basis to be used (spherical complex harmonics). In the next line we specify, for each orbital
quantum number, whether it is treated as correlated ('2'), in which case the corresponding Wannier orbitals will be generated, or uncorrelated ('1').
In the latter case the :program:`dmftproj` program will generate projectors to be used in calculations of the corresponding partial densities of states (see below).
In the present case we choose the fourth (i.e. *f*) orbitals as correlated.
The next line specifies the number of irreducible representations into which a given correlated shell should be split (or
'0' if no splitting is desired, as in the present case). The fourth line specifies whether the spin-orbit interaction should be switched on ('1') or off ('0', as in the present case).

Finally, the last line of the file ::

   -.40 0.40 ! Energy window relative to E_f

specifies the energy window for the construction of the Wannier functions. For a
more complete description of the :program:`dmftproj` options, see its
manual.

To prepare the input data for :program:`dmftproj` we execute lapw2 with the `-almd` option ::

   x lapw2 -almd

Then :program:`dmftproj` is executed in its default mode (i.e. without spin-polarization or spin-orbit included) ::

   dmftproj

This program produces the following files:

* :file:`Ce-gamma.ctqmcout` and :file:`Ce-gamma.symqmc` containing projector operators and symmetry operations for the orthonormalized Wannier orbitals, respectively.
* :file:`Ce-gamma.parproj` and :file:`Ce-gamma.sympar` containing projector operators and symmetry operations for the uncorrelated states, respectively. These files are needed for projected density-of-states or spectral-function calculations.
* :file:`Ce-gamma.oubwin` needed for the charge density recalculation in the case of a fully self-consistent DFT+DMFT run (see below).

Now we have all the necessary input from :program:`Wien2k` for running DMFT calculations.

DMFT setup: Hubbard-I calculations in TRIQS
-------------------------------------------

In order to run DFT+DMFT calculations within Hubbard-I we need the corresponding python script, :ref:`Ce-gamma_script`.
It is generally similar to the script for the case of DMFT calculations with the CT-QMC solver (see :ref:`singleshot`),
however there are also some differences. The first difference is that we import the Hubbard-I solver by::

   from pytriqs.applications.impurity_solvers.hubbard_I.hubbard_solver import Solver

The Hubbard-I solver is very fast, and we do not need to take into account the DFT block structure or use any approximation for the *U*-matrix.
We load and convert the :program:`dmftproj` output and initialize the
:class:`SumkDFT <dft.sumk_dft.SumkDFT>` class as described in :ref:`conversion` and
:ref:`singleshot`, and then set up the Hubbard-I solver ::

   S = Solver(beta = beta, l = l)

where the solver is initialized with the value of `beta` and the orbital quantum number `l` (equal to 3 in our case).

The Hubbard-I `Solver` initialization also has optional parameters one may use:

* `n_msb`: the number of Matsubara frequencies used. The default is `n_msb=1025`.
* `use_spin_orbit`: if set to 'True' the solver is run with spin-orbit coupling included. To perform actual DFT+DMFT calculations with spin-orbit, one should also run :program:`Wien2k` and :program:`dmftproj` in spin-polarized mode and with spin-orbit included. By default, `use_spin_orbit=False`.
* `Nmoments`: the number of moments used to describe the high-frequency tails of the Hubbard-I Green's function and self-energy. By default, `Nmoments = 5`.

The `Solver.solve(U_int, J_hund)` statement has two necessary parameters, the Hubbard U parameter `U_int` and Hund's rule coupling `J_hund`. Notice that the solver constructs the full 4-index `U`-matrix by default, and the `U_int` parameter is in fact the Slater `F0` integral. Other optional parameters are:

* `T`: matrix that transforms the interaction matrix from complex spherical harmonics to a symmetry-adapted basis. By default, the complex spherical harmonics basis is used and `T=None`.
* `verbosity`: tunes the output from the solver. If `verbosity=0` only basic information is printed; if `verbosity=1` the ground-state atomic occupancy and its energy are printed; if `verbosity=2` additional information is printed for all occupancies that were diagonalized. By default, `verbosity=0`.
* `Iteration_Number`: the iteration number of the DMFT loop. Used only for printing. By default, `Iteration_Number=1`.
* `Test_Convergence`: convergence criterion. Once the self-energy is converged below `Test_Convergence`, the Hubbard-I solver is not called anymore. By default, `Test_Convergence=0.0001`.
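
A call to the solver inside the DMFT loop then takes, for instance, the form
(the numerical values of `U_int` and `J_hund` below are purely illustrative;
use the values appropriate for your material)::

   S.solve(U_int = 6.6, J_hund = 0.7, verbosity = 1)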
|
||||
We also need to introduce some changes in the DMFT loop with respect to the one used for CT-QMC calculations in :ref:`singleshot`.
The hybridization function is neglected in the Hubbard-I approximation, and only the non-interacting level
positions (:math:`\hat{\epsilon}=-\mu+\langle H^{ff} \rangle - \Sigma_{DC}`) are required.
Hence, instead of computing `S.G0` as in :ref:`singleshot` we set the level positions::

    # set atomic levels:
    eal = SK.eff_atomic_levels()[0]
    S.set_atomic_levels( eal = eal )
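The level-position formula above can be checked with scalar toy numbers (the real `eff_atomic_levels()` returns matrices per correlated shell; all values below are hypothetical):

```python
# Hedged sketch of eps = -mu + <H^ff> - Sigma_DC with scalar toy values.
mu = 0.2        # chemical potential (hypothetical)
h_ff = -1.5     # local level from <H^ff> (hypothetical)
sigma_dc = 0.3  # double-counting correction (hypothetical)

eps = -mu + h_ff - sigma_dc  # effective non-interacting level position
```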
The part after the solution of the impurity problem remains essentially the same: we mix the self-energy and local
Green's function and then save them in the hdf5 file.
Then the double counting is recalculated, and the correlation energy is computed with the Migdal formula and stored in hdf5.

Finally, we compute the modified charge density and save it, as well as the correlation correction to the total energy, in the
:file:`Ce-gamma.qdmft` file, which is then read by lapw2 in the case of self-consistent DFT+DMFT calculations.
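The Migdal (Galitskii-Migdal) formula mentioned above is, schematically, :math:`E_{corr}=\frac{1}{2}\mathrm{Tr}[\Sigma G]`; a toy sketch for a diagonal two-orbital case at a single frequency point (all numbers hypothetical, not the actual TRIQS evaluation):

```python
# Hedged sketch of the Migdal formula E_corr = (1/2) Tr(Sigma * G),
# here for a diagonal 2x2 toy problem at a single frequency point.
sigma = [0.4, 0.6]   # diagonal of Sigma (hypothetical)
g     = [0.5, 0.25]  # diagonal of G (hypothetical)

e_corr = 0.5 * sum(s * gi for s, gi in zip(sigma, g))
```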
Running single-shot DFT+DMFT calculations
------------------------------------------
After having prepared the script one may run one-shot DMFT calculations by
executing :ref:`Ce-gamma_script` with :program:`pytriqs` on a single processor:

`pytriqs Ce-gamma.py`

or in parallel mode:

`mpirun -np 64 pytriqs Ce-gamma.py`
where :program:`mpirun` launches the calculation in parallel mode and
enables MPI. The exact form of this command will, of course, depend on
the MPI launcher installed on your system, but the form above applies to
most setups.
Running self-consistent DFT+DMFT calculations
---------------------------------------------
Instead of doing a one-shot run one may also perform fully self-consistent
DFT+DMFT calculations, as we will do now. We launch these
calculations as follows:

`run -qdmft 1`

where the `-qdmft` flag turns on DFT+DMFT calculations with
:program:`Wien2k`, using one computing core. We
use here the default convergence criterion in :program:`Wien2k` (convergence to
0.1 mRy in energy).
After the calculation is done we may check the value of the correlation ('Hubbard') energy correction to the total energy::

    > grep HUBBARD Ce-gamma.scf | tail -n 1
    HUBBARD ENERGY(included in SUM OF EIGENVALUES): -0.012866
In the case of Ce, with the correlated-shell occupancy close to 1, the Hubbard energy is close to 0, while the DC correction to the energy is about J/4 in accordance with the fully-localized-limit formula, hence giving the total correction :math:`\Delta E_{HUB}=E_{HUB}-E_{DC} \approx -J/4`, which in our case is equal to -0.175 eV :math:`\approx` -0.013 Ry.
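The quoted numbers can be verified with a two-line conversion (assuming `J_hund = 0.7` eV, as implied by the -0.175 eV value above):

```python
# Arithmetic check of the estimate Delta_E = -J/4 for Ce (n_f close to 1):
J_hund = 0.7               # Hund's coupling in eV (implied by -0.175 eV)
delta_e_ev = -J_hund / 4   # -0.175 eV
RY_IN_EV = 13.605693       # 1 Ry in eV
delta_e_ry = delta_e_ev / RY_IN_EV  # approx -0.0129 Ry, cf. -0.012866 above
```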
The band ("kinetic") energy with the DMFT correction is::

    > grep DMFT Ce-gamma.scf | tail -n 1
    KINETIC ENERGY with DMFT correction: -5.370632
One may also check the convergence of the total energy::

    > grep :ENE Ce-gamma.scf | tail -n 5
    :ENE  : ********** TOTAL ENERGY IN Ry = -17717.56318334
    :ENE  : ********** TOTAL ENERGY IN Ry = -17717.56342250
    :ENE  : ********** TOTAL ENERGY IN Ry = -17717.56271503
    :ENE  : ********** TOTAL ENERGY IN Ry = -17717.56285812
    :ENE  : ********** TOTAL ENERGY IN Ry = -17717.56287381
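The 0.1 mRy criterion on these `:ENE` lines can be checked with a short sketch (fed here from an inline two-line sample rather than the actual file):

```python
# Hedged sketch: extract total energies from ":ENE" lines of a Wien2k
# .scf file and compare their spread against the 0.1 mRy criterion.
scf_text = """\
:ENE  : ********** TOTAL ENERGY IN Ry = -17717.56285812
:ENE  : ********** TOTAL ENERGY IN Ry = -17717.56287381
"""

energies = [float(line.rsplit('=', 1)[1]) for line in scf_text.splitlines()
            if line.startswith(':ENE')]
spread_mry = (max(energies) - min(energies)) * 1000.0  # Ry -> mRy
converged = spread_mry < 0.1
```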
Post-processing and data analysis
---------------------------------
Within Hubbard-I one may also easily obtain the angle-resolved spectral function (band
structure) and the integrated spectral function (density of states, or DOS). In
contrast to the CT-QMC approach, one does not need to perform an
analytic continuation to obtain the
real-frequency self-energy, as it can be calculated directly
in the Hubbard-I solver.
The corresponding script :ref:`Ce-gamma_DOS_script` contains several new parameters::

    ommin = -4.0       # bottom of the energy range for DOS calculations
    ommax = 6.0        # top of the energy range for DOS calculations
    N_om = 2001        # number of points on the real-energy axis mesh
    broadening = 0.02  # broadening (the imaginary shift of the real-energy mesh)
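These parameters define a uniform mesh evaluated just above the real axis; a sketch of the implied mesh (plain Python, mirroring but not reproducing the solver's internal mesh object):

```python
# Hedged sketch: the real-frequency mesh implied by ommin, ommax, N_om,
# shifted into the complex plane by the broadening.
ommin, ommax, n_om = -4.0, 6.0, 2001
broadening = 0.02

step = (ommax - ommin) / (n_om - 1)  # mesh spacing: 10.0 / 2000 = 0.005 eV
mesh = [ommin + i * step + 1j * broadening for i in range(n_om)]
```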
Then one needs to load the projectors needed for the calculation of the
corresponding projected densities of states, as well as the corresponding
symmetries::

    Converter.convert_parproj_input()
To get access to the analysing tools we initialize the
:class:`SumkDFTTools <dft.sumk_dft_tools.SumkDFTTools>` class::

    SK = SumkDFTTools(hdf_file=dft_filename+'.h5', use_dft_blocks=False)
After the solver initialization, we load the previously calculated
chemical potential and double-counting correction. Having set up the
atomic levels we then compute the atomic Green's function and
self-energy on the real axis::

    S.set_atomic_levels( eal = eal )
    S.GF_realomega(ommin=ommin, ommax=ommax, N_om=N_om, U_int=U_int, J_hund=J_hund)
put it into the SK class, and then calculate the actual DOS::

    SK.dos_parproj_basis(broadening=broadening)
We may first increase the number of **k**-points in the BZ to 10000 by executing the :program:`Wien2k` program :program:`kgen`::

    x kgen

and then run :ref:`Ce-gamma_DOS_script` with :program:`pytriqs`::

    pytriqs Ce-gamma_DOS.py
As a result, we get the total DOS for spins `up` and `down` (identical in our paramagnetic case) in the :file:`DOScorrup.dat` and :file:`DOScorrdown.dat` files, respectively, as well as the projected DOSs written to the corresponding files as described in :ref:`analysis`.
In our case, for example, the files :file:`DOScorrup.dat` and :file:`DOScorrup_proj3.dat` contain the total DOS for spin *up* and the corresponding projected DOS for the Ce *4f* orbital, respectively. They are plotted below.
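These output files are plain two-column (energy, DOS) text files; a minimal reading sketch (fed from an inline sample with hypothetical values rather than an actual `DOScorrup.dat`):

```python
# Hedged sketch: read a two-column (omega, DOS) file such as DOScorrup.dat;
# here an in-memory string stands in for the file.
from io import StringIO

sample = StringIO("-4.0 0.01\n0.0 1.25\n6.0 0.03\n")
data = [tuple(map(float, line.split())) for line in sample if line.strip()]
omega = [row[0] for row in data]
dos = [row[1] for row in data]
```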
.. image:: images_scripts/Ce_DOS.png
   :width: 700
   :align: center
As one may clearly see, the Ce *4f* band is split by the local Coulomb interaction into a filled lower Hubbard band and an empty upper Hubbard band (the latter additionally split into several peaks due to the Hund's rule coupling and multiplet effects).
0 6 4 6      # specification of the k-mesh
8.0          # electron density
4            # number of atoms
0 0 2 3 0 0  # atom, sort, l, dim, SO, irep
1 0 2 3 0 0  # atom, sort, l, dim, SO, irep
2 0 2 3 0 0  # atom, sort, l, dim, SO, irep
3 0 2 3 0 0  # atom, sort, l, dim, SO, irep
64           # number of k-points
1.0          # electron density
2            # number of total atomic shells
1 1 2 5      # atom, sort, l, dim
2 2 1 3      # atom, sort, l, dim
1            # number of correlated shells
1 1 2 5 0 0  # atom, sort, l, dim, SO, irep
1 5          # number of ireps, dim of irep
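A reader-side sketch of how such commented input lines can be parsed (assuming, as the examples above suggest, that everything after `#` is ignored; this is not the actual converter parser):

```python
# Hedged sketch: strip '#' comments from input lines like those above
# and recover the bare numeric fields.
lines = [
    "64           # number of k-points",
    "1.0          # electron density",
    "1 1 2 5 0 0  # atom, sort, l, dim, SO, irep",
]

fields = [line.split('#', 1)[0].split() for line in lines]
n_kpoints = int(fields[0][0])
```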
import pytriqs.utility.mpi as mpi
from pytriqs.operators.util import *
from pytriqs.archive import HDFArchive
from triqs_cthyb import *
from pytriqs.gf import *
from triqs_dft_tools.sumk_dft import *
from triqs_dft_tools.converters.wien2k_converter import *

dft_filename = 'SrVO3'
U = 9.6
J = 0.8
beta = 40
loops = 10             # Number of DMFT sc-loops
sigma_mix = 1.0        # Mixing factor of Sigma after solution of the AIM
delta_mix = 1.0        # Mixing factor of Delta as input for the AIM
dc_type = 0            # DC type: 0 FLL, 1 Held, 2 AMF
use_blocks = True      # use block structure from DFT input
prec_mu = 0.0001
h_field = 0.0

# Solver parameters
p = {}
p["max_time"] = -1
p["random_seed"] = 123 * mpi.rank + 567
p["length_cycle"] = 200
p["n_warmup_cycles"] = 100000
p["n_cycles"] = 1000000
p["perform_tail_fit"] = True
p["fit_max_moments"] = 4
p["fit_min_n"] = 30
p["fit_max_n"] = 60
# If the conversion step was not done, we could do it here. Uncomment the lines if you want to do this.
#from triqs_dft_tools.converters.wien2k_converter import *
#Converter = Wien2kConverter(filename=dft_filename, repacking=True)
#Converter.convert_dft_input()
#mpi.barrier()

previous_runs = 0
previous_present = False
if mpi.is_master_node():
    f = HDFArchive(dft_filename+'.h5','a')
    if 'dmft_output' in f:
        ar = f['dmft_output']
        if 'iterations' in ar:
            previous_present = True
            previous_runs = ar['iterations']
    else:
        f.create_group('dmft_output')
    del f
previous_runs = mpi.bcast(previous_runs)
previous_present = mpi.bcast(previous_present)
SK = SumkDFT(hdf_file=dft_filename+'.h5', use_dft_blocks=use_blocks, h_field=h_field)

n_orb = SK.corr_shells[0]['dim']
l = SK.corr_shells[0]['l']
spin_names = ["up","down"]
orb_names = [i for i in range(n_orb)]

# Use GF structure determined by DFT blocks
gf_struct = [(block, indices) for block, indices in SK.gf_struct_solver[0].iteritems()]

# Construct Slater U matrix
Umat = U_matrix(n_orb=n_orb, U_int=U, J_hund=J, basis='cubic')

# Construct Hamiltonian and solver
h_int = h_int_slater(spin_names, orb_names, map_operator_structure=SK.sumk_to_solver[0], U_matrix=Umat)
S = Solver(beta=beta, gf_struct=gf_struct)
if previous_present:
    chemical_potential = 0
    dc_imp = 0
    dc_energ = 0
    if mpi.is_master_node():
        ar = HDFArchive(dft_filename+'.h5','a')
        S.Sigma_iw << ar['dmft_output']['Sigma_iw']
        del ar
        chemical_potential,dc_imp,dc_energ = SK.load(['chemical_potential','dc_imp','dc_energ'])
    S.Sigma_iw << mpi.bcast(S.Sigma_iw)
    chemical_potential = mpi.bcast(chemical_potential)
    dc_imp = mpi.bcast(dc_imp)
    dc_energ = mpi.bcast(dc_energ)
    SK.set_mu(chemical_potential)
    SK.set_dc(dc_imp,dc_energ)
for iteration_number in range(1,loops+1):
    if mpi.is_master_node(): print "Iteration = ", iteration_number

    SK.symm_deg_gf(S.Sigma_iw,orb=0)                        # symmetrise Sigma
    SK.set_Sigma([ S.Sigma_iw ])                            # set Sigma into the SumK class
    chemical_potential = SK.calc_mu( precision = prec_mu )  # find the chemical potential for given density
    S.G_iw << SK.extract_G_loc()[0]                         # calc the local Green function
    mpi.report("Total charge of Gloc : %.6f"%S.G_iw.total_density())

    # Init the DC term and the real part of Sigma, if no previous runs found:
    if (iteration_number==1 and previous_present==False):
        dm = S.G_iw.density()
        SK.calc_dc(dm, U_interact = U, J_hund = J, orb = 0, use_dc_formula = dc_type)
        S.Sigma_iw << SK.dc_imp[0]['up'][0,0]

    # Calculate new G0_iw to input into the solver:
    if mpi.is_master_node():
        # We can do a mixing of Delta in order to stabilize the DMFT iterations:
        S.G0_iw << S.Sigma_iw + inverse(S.G_iw)
        ar = HDFArchive(dft_filename+'.h5','a')
        if (iteration_number>1 or previous_present):
            mpi.report("Mixing input Delta with factor %s"%delta_mix)
            Delta = (delta_mix * delta(S.G0_iw)) + (1.0-delta_mix) * ar['dmft_output']['Delta_iw']
            S.G0_iw << S.G0_iw + delta(S.G0_iw) - Delta
        ar['dmft_output']['Delta_iw'] = delta(S.G0_iw)
        S.G0_iw << inverse(S.G0_iw)
        del ar

    S.G0_iw << mpi.bcast(S.G0_iw)

    # Solve the impurity problem:
    S.solve(h_int=h_int, **p)

    # Solved. Now do post-processing:
    mpi.report("Total charge of impurity problem : %.6f"%S.G_iw.total_density())

    # Now mix Sigma and G with factor sigma_mix, if wanted:
    if (iteration_number>1 or previous_present):
        if mpi.is_master_node():
            ar = HDFArchive(dft_filename+'.h5','a')
            mpi.report("Mixing Sigma and G with factor %s"%sigma_mix)
            S.Sigma_iw << sigma_mix * S.Sigma_iw + (1.0-sigma_mix) * ar['dmft_output']['Sigma_iw']
            S.G_iw << sigma_mix * S.G_iw + (1.0-sigma_mix) * ar['dmft_output']['G_iw']
            del ar
        S.G_iw << mpi.bcast(S.G_iw)
        S.Sigma_iw << mpi.bcast(S.Sigma_iw)

    # Write the final Sigma and G to the hdf5 archive:
    if mpi.is_master_node():
        ar = HDFArchive(dft_filename+'.h5','a')
        ar['dmft_output']['iterations'] = iteration_number + previous_runs
        ar['dmft_output']['G_tau'] = S.G_tau
        ar['dmft_output']['G_iw'] = S.G_iw
        ar['dmft_output']['Sigma_iw'] = S.Sigma_iw
        ar['dmft_output']['G0-%s'%(iteration_number)] = S.G0_iw
        ar['dmft_output']['G-%s'%(iteration_number)] = S.G_iw
        ar['dmft_output']['Sigma-%s'%(iteration_number)] = S.Sigma_iw
        del ar

    # Set the new double counting:
    dm = S.G_iw.density() # compute the density matrix of the impurity problem
    SK.calc_dc(dm, U_interact = U, J_hund = J, orb = 0, use_dc_formula = dc_type)

    # Save stuff into the dft_output group of hdf5 archive in case of rerun:
    SK.save(['chemical_potential','dc_imp','dc_energ'])
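The Sigma/G mixing step in the loop above can be illustrated in isolation (plain complex numbers stand in for the TRIQS Green's-function objects; the mixing factor and values are hypothetical):

```python
# Hedged sketch of the linear mixing used for Sigma in the DMFT loop above.
sigma_mix = 0.7          # mixing factor (hypothetical value)
sigma_new = 1.0 + 0.5j   # Sigma from the current iteration (hypothetical)
sigma_old = 0.0 + 0.1j   # Sigma from the previous iteration (hypothetical)

# New input Sigma is a weighted average of the two iterations:
sigma = sigma_mix * sigma_new + (1.0 - sigma_mix) * sigma_old
```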
.. _plovasp:

PLOVasp
=======

The general purpose of the PLOVasp tool is to transform raw, non-normalized
projectors generated by VASP into normalized projectors corresponding to
user-defined projected localized orbitals (PLO). The PLOs can then be used for
DFT+DMFT calculations with or without charge self-consistency. PLOVasp also
provides some utilities for basic analysis of the generated projectors, such as
outputting density matrices, local Hamiltonians, and projected density of
states.

PLOs are determined by the energy window in which the raw projectors are
normalized. This allows one to define either atomic-like, strongly localized Wannier
functions (large energy window) or extended Wannier functions focusing on
selected low-energy states (small energy window).

In PLOVasp, all projectors sharing the same energy window are combined into a
`projector group`. Technically, this allows one to define several groups with
different energy windows for the same set of raw projectors. Note, however,
that DFTtools does not support projector groups at the moment, but this feature
might appear in future releases.

A set of projectors defined on sites related to each other either by symmetry
or by an atomic sort, along with a set of :math:`l`, :math:`m` quantum numbers,
forms a `projector shell`. There can be several projector shells in a
projector group, implying that they will be normalized within the same energy
window.

Projector shells and groups are specified by a user-defined input file whose
format is described below.

Input file format
-----------------
A PLOVasp input file can contain three types of sections:

#. **[General]**: includes parameters that are independent
   of a particular projector set, such as the Fermi level, additional
   output (e.g. the density of states), etc.
#. **[Group <Ng>]**: describes projector groups, i.e. a set of
   projectors sharing the same energy window and normalization type.
   In the current implementation of DFTtools,
   there should be no more than one projector group.
#. **[Shell <Ns>]**: contains parameters of a projector shell labelled
   with `<Ns>`. If there is only one group section and one shell section,
   the group section can be omitted; in this case, the group's required
   parameters must be provided inside the shell section.

Section [General]
"""""""""""""""""
The entire section is optional and it contains three parameters:

* **BASENAME** (string): provides a base name for output files.
  Default filenames are :file:`vasp.*`.
* **DOSMESH** ([float float] integer): if this parameter is given,
  the projected density of states for each projected orbital will be
  evaluated and stored to files :file:`pdos_<n>.dat`, where `n` is the
  orbital index. The energy
  mesh is defined by three numbers: `EMIN` `EMAX` `NPOINTS`. The first two
  can be omitted, in which case they are taken to be equal to the projector
  energy window. **Important note**: at the moment this option works
  only if the tetrahedron integration method (`ISMEAR = -4` or `-5`)
  is used in VASP to produce `LOCPROJ`.
* **EFERMI** (float): provides the Fermi level. This value overrides
  the one extracted from VASP output files.

There are no required parameters in this section.

Section [Shell]
"""""""""""""""

This section specifies a projector shell. Each `[Shell]` section must be
labeled by an index, e.g. `[Shell 1]`. These indices can then be referenced
in a `[Group]` section.
In each `[Shell]` section two parameters are required:

* **IONS** (list of integer): indices of sites included in the shell.
  The sites can be given either by a list of integers `IONS = 5 6 7 8`
  or by a range `IONS = 5..8`. The site indices must be compatible with
  the POSCAR file.
* **LSHELL** (integer): :math:`l` quantum number of the desired local states.

It is important that a given combination of site indices and local states
given by `LSHELL` must be present in the LOCPROJ file.

There are additional optional parameters that allow one to transform
the local states:

* **TRANSFORM** (matrix): local transformation matrix applied to all states
  in the projector shell. The matrix is defined by a (multiline) block
  of floats, with each line corresponding to a row. The number of columns
  must be equal to :math:`2 l + 1`, with :math:`l` given by `LSHELL`. Only real matrices
  are allowed. This parameter can be useful to select a certain subset of
  states.
* **TRANSFILE** (string): name of the file containing transformation
  matrices for each site. This option allows for full-fledged functionality
  when it comes to local state transformations. The format of this file
  is described :ref:`below <transformation_file>`.

Section [Group]
"""""""""""""""

Each defined projector shell must be part of a projector group. In the current
implementation of DFTtools only a single group (labelled by any integer, e.g. `[Group 1]`)
is supported. This implies that all projector shells
must be included in this group.

Required parameters for any group are the following:
* **SHELLS** (list of integers): indices of the projector shells included in the group.
  All defined shells must be grouped.
* **EWINDOW** (float float): the energy window specified by two floats: bottom
  and top. All projectors in the current group are going to be normalized within
  this window. *Note*: This option must be specified inside the `[Shell]` section
  if only one shell is defined and the `[Group]` section is omitted.

Optional group parameters:

* **NORMALIZE** (True/False): specifies whether projectors in the group are
  to be normalized. The default value is **True**.
* **NORMION** (True/False): specifies whether projectors are normalized on
  a per-site (per-ion) basis. That is, if `NORMION = True`, the orthogonality
  condition will be enforced on each site separately but the Wannier functions
  on different sites will not be orthogonal. If `NORMION = False`, the Wannier functions
  on different sites included in the group will be orthogonal to each other.


.. _transformation_file:

File of transformation matrices
"""""""""""""""""""""""""""""""

.. warning::
   The description below applies only to collinear cases (i.e., without spin-orbit
   coupling). In this case, the matrices are spin-independent.

The file specified by option `TRANSFILE` contains transformation matrices
for each ion. Each line must contain a series of floats whose number is either equal to
the number of orbitals :math:`N_{orb}` (in this case the transformation matrices
are assumed to be real) or to :math:`2 N_{orb}` (for complex transformation matrices).
The total number of lines :math:`N` must be a multiple of the number of ions :math:`N_{ion}`,
and the ratio :math:`N / N_{ion}` then gives the dimension of the transformed
orbital space. The lines with floats can be separated by any number of empty or
comment lines (starting with `#`), which are ignored.

A very simple example is a transformation matrix that selects the :math:`t_{2g}` manifold.
For two correlated sites, one can define the file as follows::

    # Site 1
    1.0 0.0 0.0 0.0 0.0
    0.0 1.0 0.0 0.0 0.0
    0.0 0.0 0.0 1.0 0.0

    # Site 2
    1.0 0.0 0.0 0.0 0.0
    0.0 1.0 0.0 0.0 0.0
    0.0 0.0 0.0 1.0 0.0
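The effect of such a matrix can be sketched in plain Python: each 3x5 block selects three of the five d components, :math:`P' = T P` (projector values below are hypothetical):

```python
# Hedged sketch: applying the 3x5 t2g-selection matrix from the example
# above to a 5-component d-orbital projector.
T = [
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, 0.0],
]
proj = [0.1, 0.2, 0.3, 0.4, 0.5]  # raw projector components (hypothetical)

# Transformed projector P' = T P keeps only the selected components:
proj_t = [sum(t * p for t, p in zip(row, proj)) for row in T]
```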
Transport calculations
======================

Formalism
---------
The conductivity, the Seebeck coefficient and the electronic contribution to the thermal conductivity in direction :math:`\alpha\beta` are defined as [#transp1]_ [#transp2]_:

.. math::

   \sigma_{\alpha\beta} = \beta e^{2} A_{0,\alpha\beta},

.. math::

   S_{\alpha\beta} = -\frac{k_B}{|e|}\frac{A_{1,\alpha\beta}}{A_{0,\alpha\beta}},

.. math::

   \kappa^{\text{el}}_{\alpha\beta} = k_B \left(A_{2,\alpha\beta} - \frac{A_{1,\alpha\beta}^2}{A_{0,\alpha\beta}}\right),

in which the kinetic coefficients :math:`A_{n,\alpha\beta}` are given by
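With :math:`e` and :math:`k_B` set to 1, the three formulas above reduce to simple combinations of the kinetic coefficients; a toy numerical check (all :math:`A_n` values hypothetical, units suppressed):

```python
# Hedged sketch: evaluating the transport formulas above from toy
# kinetic coefficients A_n, with e = k_B = 1 for simplicity.
beta = 40.0
a0, a1, a2 = 2.0, 0.5, 1.0

sigma_dc = beta * a0        # sigma = beta e^2 A_0
seebeck = -a1 / a0          # S = -(k_B/|e|) A_1/A_0
kappa_el = a2 - a1**2 / a0  # kappa_el = k_B (A_2 - A_1^2/A_0)
```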
Prerequisites
-------------
First perform a standard :ref:`DFT+DMFT calculation <full_charge_selfcons>` for your desired material and obtain the
real-frequency self energy.

.. note::
   If you use a CT-QMC impurity solver you need to perform an **analytic continuation** of
   self energies and Green functions from Matsubara frequencies to the real-frequency axis!
   This package does NOT provide methods to do this, but a list of options available within the TRIQS framework
   is given :ref:`here <ac>`. Keep in mind that all these methods have to be used very carefully. Especially for optics calculations
   it is crucial to perform the analytic continuation in such a way that the real-frequency self energy
   is accurate around the Fermi energy, as low-energy features strongly influence the final results.

Besides the self energy, the Wien2k files read by the transport converter (:meth:`convert_transport_input <dft.converters.wien2k_converter.Wien2kConverter.convert_transport_input>`) are:

* :file:`.struct`: The lattice constants specified in the struct file are used to calculate the unit cell volume.
The converter :meth:`convert_transport_input <dft.converters.wien2k_converter.Wien2kConverter.convert_transport_input>`
reads the required data of the Wien2k output and stores it in the `dft_transp_input` subgroup of your hdf file.
Additionally we need to read and set the self energy, the chemical potential and the double counting::

    with HDFArchive('case.h5', 'r') as ar:
        SK.set_Sigma([ar['dmft_output']['Sigma_w']])
    chemical_potential, dc_imp, dc_energ = SK.load(['chemical_potential','dc_imp','dc_energ'])
    SK.set_mu(chemical_potential)
    SK.set_dc(dc_imp, dc_energ)

As the next step we can calculate the transport distribution :math:`\Gamma_{\alpha\beta}(\omega)`:
Here the transport distribution is calculated in the :math:`xx` direction for the frequencies :math:`\Omega=0.0` and :math:`0.1`.
To use the previously obtained self energy we set `with_Sigma` to **True** and the broadening to :math:`0.0`.
As we also want to calculate the Seebeck coefficient and the thermal conductivity we have to include :math:`\Omega=0.0` in the mesh.
Note that the current version of the code pins the :math:`\Omega` values to the closest values on the self-energy mesh.
For a complete description of the input parameters see the :meth:`transport_distribution reference <dft.sumk_dft_tools.SumkDFTTools.transport_distribution>`.
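The pinning of the requested :math:`\Omega` values to the self-energy mesh can be sketched as follows (mesh and requested values hypothetical; the actual logic lives inside `transport_distribution`):

```python
# Hedged sketch: snap requested Omega values to the closest points of
# an available self-energy mesh.
mesh = [0.0, 0.04, 0.09, 0.13]  # available mesh points (hypothetical)
requested = [0.0, 0.1]          # requested Omega values (hypothetical)

snapped = [min(mesh, key=lambda w: abs(w - om)) for om in requested]
```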
You can retrieve it from the archive by::

    SK.Gamma_w, SK.Om_meshr, SK.omega, SK.directions = SK.load(['Gamma_w','Om_meshr','omega','directions'])

Finally the optical conductivity :math:`\sigma(\Omega)`, the Seebeck coefficient :math:`S` and the thermal conductivity :math:`\kappa^{\text{el}}` can be obtained with::

    SK.conductivity_and_seebeck(beta=40)
    SK.save(['seebeck','optic_cond','kappa'])

It is strongly advised to check convergence in the number of k-points!
References
----------

.. [#transp1] `V. S. Oudovenko, G. Palsson, K. Haule, G. Kotliar, S. Y. Savrasov, Phys. Rev. B 73, 035120 (2006) <http://link.aps.org/doi/10.1103/PhysRevB.73.035120>`_
.. [#transp2] `J. M. Tomczak, K. Haule, T. Miyake, A. Georges, G. Kotliar, Phys. Rev. B 82, 085104 (2010) <https://link.aps.org/doi/10.1103/PhysRevB.82.085104>`_
.. [#userguide] `P. Blaha, K. Schwarz, G. K. H. Madsen, D. Kvasnicka, J. Luitz, ISBN 3-9501031-1-2 <http://www.wien2k.at/reg_user/textbooks/usersguide.pdf>`_
@ -7,6 +7,11 @@
|
|||
DFTTools
|
||||
========
|
||||
|
||||
.. sidebar:: DFTTools 2.2
|
||||
|
||||
This is the homepage of DFTTools Version 2.2.
|
||||
For the changes in DFTTools, see the :ref:`changelog page <changelog>`.
|
||||
|
||||
This :ref:`TRIQS <triqslibs:welcome>`-based application is aimed
|
||||
at ab-initio calculations for
|
||||
correlated materials, combining realistic DFT band-structure
|
||||
|
@ -19,7 +24,7 @@ provides a generic interface for one-shot DFT+DMFT calculations, where
|
|||
only the single-particle Hamiltonian in orbital space has to be
|
||||
provided.
|
||||
|
||||
Learn how to use this package in the :ref:`documentation`.
|
||||
Learn how to use this package in the :ref:`documentation` and the :ref:`tutorials`.
|
||||
|
||||
|
||||
.. toctree::
|
||||
|
|
|
@ -1,31 +1,64 @@
|
|||
|
||||
.. highlight:: bash
|
||||
|
||||
Installation
|
||||
============
|
||||
.. _install:
|
||||
|
||||
|
||||
Packaged Versions of DFTTools
|
||||
=============================
|
||||
|
||||
.. _ubuntu_debian:
|
||||
Ubuntu Debian packages
|
||||
----------------------
|
||||
|
||||
We provide a Debian package for the Ubuntu LTS Versions 16.04 (xenial) and 18.04 (bionic), which can be installed by following the steps outlined :ref:`here <triqslibs:triqs_debian>`, and the subsequent command::
|
||||
|
||||
sudo apt-get install -y triqs_dft_tools
|
||||
|
||||
.. _anaconda:
|
||||
Anaconda (experimental)
|
||||
-----------------------
|
||||
|
||||
We provide Linux and OSX packages for the `Anaconda <https://www.anaconda.com/>`_ distribution. The packages are provided through the `conda-forge <https://conda-forge.org/>`_ repositories. After `installing conda <https://docs.conda.io/en/latest/miniconda.html>`_ you can install DFTTools with::
|
||||
|
||||
conda install -c conda-forge triqs_dft_tools
|
||||
|
||||
See also `github.com/conda-forge/triqs_dft_tools-feedstock <https://github.com/conda-forge/triqs_dft_tools-feedstock/>`_.
|
||||
|
||||
.. _docker:
|
||||
Docker
|
||||
------
|
||||
|
||||
A Docker image including the latest version of DFTTools is available `here <https://hub.docker.com/r/flatironinstitute/triqs>`_. For more information, please see the page on :ref:`TRIQS Docker <triqslibs:triqs_docker>`.
|
||||
|
||||
|
||||
Compiling DFTTools from source
|
||||
==============================
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
#. The :ref:`TRIQS <triqslibs:welcome>` toolbox. In the following, we will suppose that it is installed in the ``path_to_triqs`` directory.
|
||||
#. The :ref:`TRIQS <triqslibs:welcome>` toolbox.
|
||||
|
||||
#. Likely, you will also need at least one impurity solver, e.g. the :ref:`CTHYB solver <triqscthyb:welcome>`.
|
||||
|
||||
Installation steps
|
||||
------------------
|
||||
|
||||
#. Download the sources from github::
|
||||
#. Download the source code by cloning the ``TRIQS/dft_tools`` repository from GitHub::
|
||||
|
||||
$ git clone https://github.com/TRIQS/dft_tools.git src
|
||||
$ git clone https://github.com/TRIQS/dft_tools.git dft_tools.src
|
||||
|
||||
#. Create an empty build directory where you will compile the code::
|
||||
#. Create and move to a new directory where you will compile the code::
|
||||
|
||||
$ mkdir build && cd build
|
||||
$ mkdir dft_tools.build && cd dft_tools.build
|
||||
|
||||
#. In the build directory call cmake specifying where the TRIQS library is installed::
|
||||
|
||||
$ cmake -DTRIQS_PATH=path_to_triqs ../src
|
||||
#. Ensure that your shell contains the TRIQS environment variables by sourcing the ``triqsvars.sh`` file from your TRIQS installation::
|
||||
|
||||
$ source path_to_triqs/share/triqsvars.sh
|
||||
|
||||
#. In the build directory call cmake, including any additional custom CMake options, see below::
|
||||
|
||||
$ cmake ../dft_tools.src
|
||||
|
||||
#. Compile the code, run the tests and install the application::
|
||||
|
||||
|
@ -73,28 +106,27 @@ fully self-consistent calculations. These files should be copied to
|
|||
|
||||
$ chmod +x run*_triqs
|
||||
|
||||
You will also need to insert manually a correct call of :file:`pytriqs` into
|
||||
You will also need to insert manually a correct call of :file:`python` into
|
||||
these scripts using an MPI wrapper appropriate for your system (mpirun,
|
||||
mpprun, etc.), if needed. Search for *pytriqs* within the scripts to locate the
|
||||
appropriate place for inserting the :file:`pytriqs` call.
|
||||
mpprun, etc.), if needed.
|
||||
|
||||
Finally, you will have to change the calls to :program:`python_with_DMFT` to
|
||||
:program:`pytriqs` in the Wien2k :file:`path_to_Wien2k/run*` files.
|
||||
your :program:`python` installation in the Wien2k :file:`path_to_Wien2k/run*` files.
|
||||
|
||||
|
||||
Version compatibility
|
||||
---------------------
|
||||
|
||||
Be careful that the version of the TRIQS library and of the dft tools must be
|
||||
Be careful that the versions of the TRIQS library and of :program:`DFTTools` must be
|
||||
compatible (more information on the :ref:`TRIQS website <triqslibs:welcome>`).
|
||||
If you want to use a version of the dft tools that is not the latest one, go
|
||||
If you want to use a version of :program:`DFTTools` that is not the latest one, go
|
||||
into the directory with the sources and look at all available versions::
|
||||
|
||||
$ cd src && git tag
|
||||
|
||||
Checkout the version of the code that you want, for instance::
|
||||
|
||||
$ git co 1.2
|
||||
$ git co 2.1
|
||||
|
||||
Then follow the steps 2 to 5 described above to compile the code.
|
||||
|
||||
|
@ -103,7 +135,7 @@ Custom CMake options
|
|||
|
||||
Functionality of ``dft_tools`` can be tweaked using extra compile-time options passed to CMake::
|
||||
|
||||
cmake -DOPTION1=value1 -DOPTION2=value2 ... ../cthyb.src
|
||||
cmake -DOPTION1=value1 -DOPTION2=value2 ... ../dft_tools.src
|
||||
|
||||
+---------------------------------------------------------------+-----------------------------------------------+
|
||||
| Options | Syntax |
|
||||
|
@ -112,3 +144,7 @@ Functionality of ``dft_tools`` can be tweaked using extra compile-time options p
|
|||
+---------------------------------------------------------------+-----------------------------------------------+
|
||||
| Build the documentation locally | -DBuild_Documentation=ON |
|
||||
+---------------------------------------------------------------+-----------------------------------------------+
|
||||
| Check test coverage when testing | -DTEST_COVERAGE=ON |
|
||||
| (run ``make coverage`` to show the results; requires the | |
|
||||
| python ``coverage`` package) | |
|
||||
+---------------------------------------------------------------+-----------------------------------------------+
|
||||
|
|
|
@ -2,7 +2,7 @@ Block Structure
|
|||
===============
|
||||
|
||||
The `BlockStructure` class allows one to change and manipulate
|
||||
Green's functions structures and mappings from sumk to solver.
|
||||
Green functions structures and mappings from sumk to solver.
|
||||
|
||||
The block structure can also be written to and read from HDF files.
|
||||
|
||||
|
@ -16,7 +16,7 @@ The block structure can also be written to and read from HDF files.
|
|||
Writing the sumk_to_solver and solver_to_sumk elements
|
||||
individually is not implemented.
|
||||
|
||||
.. autoclass:: dft.block_structure.BlockStructure
|
||||
.. autoclass:: triqs_dft_tools.block_structure.BlockStructure
|
||||
:members:
|
||||
:show-inheritance:
|
||||
|
||||
|
|
|
@ -5,25 +5,25 @@ Converters
|
|||
|
||||
Wien2k Converter
|
||||
----------------
|
||||
.. autoclass:: dft.converters.wien2k_converter.Wien2kConverter
|
||||
.. autoclass:: triqs_dft_tools.converters.wien2k_converter.Wien2kConverter
|
||||
:members:
|
||||
:special-members:
|
||||
:show-inheritance:
|
||||
|
||||
H(k) Converter
|
||||
--------------
|
||||
.. autoclass:: dft.converters.hk_converter.HkConverter
|
||||
.. autoclass:: triqs_dft_tools.converters.hk_converter.HkConverter
|
||||
:members:
|
||||
:special-members:
|
||||
|
||||
Wannier90 Converter
|
||||
-------------------
|
||||
.. autoclass:: dft.converters.wannier90_converter.Wannier90Converter
|
||||
.. autoclass:: triqs_dft_tools.converters.wannier90_converter.Wannier90Converter
|
||||
:members:
|
||||
:special-members:
|
||||
|
||||
Converter Tools
|
||||
---------------
|
||||
.. autoclass:: dft.converters.converter_tools.ConverterTools
|
||||
.. autoclass:: triqs_dft_tools.converters.converter_tools.ConverterTools
|
||||
:members:
|
||||
:special-members:
|
||||
|
|
|
@ -2,7 +2,7 @@ SumK DFT
|
|||
========
|
||||
|
||||
|
||||
.. autoclass:: dft.sumk_dft.SumkDFT
|
||||
.. autoclass:: triqs_dft_tools.sumk_dft.SumkDFT
|
||||
:members:
|
||||
:special-members:
|
||||
:show-inheritance:
|
||||
|
|
|
@ -2,7 +2,7 @@ SumK DFT Tools
|
|||
==============
|
||||
|
||||
|
||||
.. autoclass:: dft.sumk_dft_tools.SumkDFTTools
|
||||
.. autoclass:: triqs_dft_tools.sumk_dft_tools.SumkDFTTools
|
||||
:members:
|
||||
:special-members:
|
||||
:show-inheritance:
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
Symmetry
|
||||
========
|
||||
|
||||
.. autoclass:: dft.Symmetry
|
||||
.. autoclass:: triqs_dft_tools.Symmetry
|
||||
:members:
|
||||
:special-members:
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
TransBasis
|
||||
==========
|
||||
|
||||
.. autoclass:: dft.trans_basis.TransBasis
|
||||
.. autoclass:: triqs_dft_tools.trans_basis.TransBasis
|
||||
:members:
|
||||
:special-members:
|
||||
|
|
|
@ -0,0 +1,25 @@
|
|||
.. module:: triqs_dft_tools
|
||||
|
||||
.. _tutorials:
|
||||
|
||||
Tutorials
|
||||
=========
|
||||
|
||||
A simple example: SrVO3
|
||||
-----------------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
tutorials/srvo3
|
||||
|
||||
|
||||
Full charge self consistency with Wien2k: :math:`\gamma`-Ce
|
||||
-----------------------------------------------------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
|
||||
tutorials/ce-gamma-fscs_wien2k
|
||||
|
|
@ -0,0 +1,252 @@
|
|||
.. _DFTDMFTtutorial:
|
||||
|
||||
DFT+DMFT tutorial: Ce with Hubbard-I approximation
|
||||
==================================================
|
||||
|
||||
In this tutorial we will perform Wien2k DFT+DMFT
|
||||
calculations from scratch, including all steps described in the
|
||||
previous sections. As an example, we take the high-temperature
|
||||
:math:`\gamma`-phase of Ce employing the Hubbard-I approximation for
|
||||
its localized *4f* shell.
|
||||
|
||||
Wien2k setup
|
||||
------------
|
||||
|
||||
First we create the Wien2k :file:`Ce-gamma.struct` file as described in the
|
||||
`Wien2k manual <http://www.wien2k.at/reg_user/textbooks/usersguide.pdf>`_
|
||||
for the :math:`\gamma`-Ce fcc structure with lattice parameter of 9.75 a.u.
|
||||
|
||||
.. literalinclude:: images_scripts/Ce-gamma.struct
|
||||
|
||||
We initialize non-magnetic Wien2k calculations using the :program:`init` script as
|
||||
described in the same manual. For this example we specify 3000 :math:`\mathbf{k}`-points
|
||||
in the full Brillouin zone and the LDA exchange-correlation potential (*vxc=5*); other
|
||||
parameters are defaults. The Ce *4f* electrons are treated as valence states.
|
||||
Hence, the initialization script is executed as follows ::
|
||||
|
||||
init -b -vxc 5 -numk 3000
|
||||
|
||||
and then LDA calculations of non-magnetic :math:`\gamma`-Ce are performed by launching
|
||||
the Wien2k :program:`run` script. These self-consistent LDA calculations will typically
|
||||
take a couple of minutes.
|
||||
|
||||
Wannier orbitals: dmftproj
|
||||
--------------------------
|
||||
|
||||
Then we create the :file:`Ce-gamma.indmftpr` file specifying parameters for construction
|
||||
of Wannier orbitals representing *4f* states:
|
||||
|
||||
.. literalinclude:: images_scripts/Ce-gamma.indmftpr
|
||||
|
||||
As we learned in the section :ref:`conversion`, the first three lines
|
||||
give the number of inequivalent sites, their multiplicity (to be in
|
||||
accordance with the *struct* file) and the maximum orbital quantum
|
||||
number :math:`l_{max}`. The following four lines describe the treatment of
|
||||
Ce *spdf* orbitals by the :program:`dmftproj` program::
|
||||
|
||||
complex
|
||||
1 1 1 2 ! l included for each sort
|
||||
0 0 0 0 ! l included for each sort
|
||||
0
|
||||
|
||||
where `complex` is the choice of angular basis to be used (spherical complex harmonics).
In the next line we specify, for each orbital quantum number, whether it is treated as correlated ('2'),
in which case the corresponding Wannier orbitals will be generated, or uncorrelated ('1'). In the latter
|
||||
case the :program:`dmftproj` program will generate projectors to be used in calculations of
|
||||
corresponding partial densities of states (see below). In the present case we choose the fourth
|
||||
(i.e. *f*) orbitals as correlated. The next line specifies the number of irreducible representations
|
||||
into which a given correlated shell should be split (or '0' if no splitting is desired, as in the
|
||||
present case). The fourth line specifies whether the spin-orbit interaction should be switched
|
||||
on ('1') or off ('0', as in the present case).
|
||||
|
||||
Finally, the last line of the file ::
|
||||
|
||||
-.40 0.40 ! Energy window relative to E_f
|
||||
|
||||
specifies the energy window for Wannier functions' construction. For a
|
||||
more complete description of :program:`dmftproj` options see its manual.
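The effect of the window can be illustrated with a small plain-Python sketch (`states_in_window` is a hypothetical helper, not part of :program:`dmftproj`): only states whose Kohn-Sham energy relative to :math:`E_F` lies inside the window enter the Wannier construction.

```python
# Hypothetical helper: keep only band energies (relative to E_F, in Ry)
# that lie inside the projection window [-0.40, 0.40].
def states_in_window(energies, emin=-0.40, emax=0.40):
    return [e for e in energies if emin <= e <= emax]

print(states_in_window([-0.9, -0.3, 0.05, 0.55]))  # -> [-0.3, 0.05]
```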
|
||||
|
||||
To prepare input data for :program:`dmftproj` we execute lapw2 with the `-almd` option ::
|
||||
|
||||
x lapw2 -almd
|
||||
|
||||
Then :program:`dmftproj` is executed in its default mode (i.e. without
|
||||
spin-polarization or spin-orbit included) ::
|
||||
|
||||
dmftproj
|
||||
|
||||
This program produces the following files:
|
||||
|
||||
* :file:`Ce-gamma.ctqmcout` and :file:`Ce-gamma.symqmc` containing projector operators and symmetry
|
||||
operations for orthonormalized Wannier orbitals, respectively.
|
||||
* :file:`Ce-gamma.parproj` and :file:`Ce-gamma.sympar` containing projector operators and symmetry
|
||||
operations for uncorrelated states, respectively. These files are needed for projected
|
||||
density-of-states or spectral-function calculations.
|
||||
* :file:`Ce-gamma.oubwin` needed for the charge density recalculation in the case of a fully
|
||||
self-consistent DFT+DMFT run (see below).
|
||||
|
||||
Now we have all necessary input from Wien2k for running DMFT calculations.
|
||||
|
||||
|
||||
DMFT setup: Hubbard-I calculations in TRIQS
|
||||
--------------------------------------------
|
||||
|
||||
In order to run DFT+DMFT calculations within Hubbard-I we need the corresponding python script,
|
||||
:ref:`Ce-gamma_script`. It is generally similar to the script for the case of DMFT calculations
|
||||
with the CT-QMC solver (see :ref:`singleshot`), however there are also some differences. First
|
||||
difference is that we import the Hubbard-I solver by::
|
||||
|
||||
from pytriqs.applications.impurity_solvers.hubbard_I.hubbard_solver import Solver
|
||||
|
||||
The Hubbard-I solver is very fast and we do not need to take into account the DFT block structure
|
||||
or use any approximation for the *U*-matrix. We load and convert the :program:`dmftproj` output
|
||||
and initialize the :class:`SumkDFT <dft.sumk_dft.SumkDFT>` class as described in :ref:`conversion` and
|
||||
:ref:`singleshot` and then set up the Hubbard-I solver ::
|
||||
|
||||
S = Solver(beta = beta, l = l)
|
||||
|
||||
where the solver is initialized with the value of `beta`, and the orbital quantum
|
||||
number `l` (equal to 3 in our case).
|
||||
|
||||
|
||||
The Hubbard-I `Solver` initialization also has optional parameters one may use:
|
||||
|
||||
* `n_msb`: the number of Matsubara frequencies used. The default is `n_msb=1025`.
|
||||
* `use_spin_orbit`: if set to 'True' the solver is run with spin-orbit coupling included.
|
||||
To perform actual DFT+DMFT calculations with spin-orbit one should also run Wien2k and
|
||||
:program:`dmftproj` in spin-polarized mode and with spin-orbit included. By default,
|
||||
`use_spin_orbit=False`.
|
||||
* `Nmoments`: the number of moments used to describe high-frequency tails of the Hubbard-I
|
||||
Green function and self energy. By default `Nmoments = 5`
|
||||
|
||||
The `Solver.solve(U_int, J_hund)` statement has two necessary parameters, the Hubbard U
|
||||
parameter `U_int` and Hund's rule coupling `J_hund`. Notice that the solver constructs the
|
||||
full 4-index `U`-matrix by default, and the `U_int` parameter is in fact the Slater `F0` integral.
|
||||
Other optional parameters are:
|
||||
|
||||
* `T`: matrix that transforms the interaction matrix from complex spherical harmonics to a symmetry
|
||||
adapted basis. By default, the complex spherical harmonics basis is used and `T=None`.
|
||||
* `verbosity`: tunes output from the solver. If `verbosity=0` only basic information is printed,
|
||||
if `verbosity=1` the ground state atomic occupancy and its energy are printed, if `verbosity=2`
|
||||
additional information is printed for all occupancies that were diagonalized. By default, `verbosity=0`.
|
||||
* `Iteration_Number`: the iteration number of the DMFT loop. Used only for printing. By default `Iteration_Number=1`
|
||||
* `Test_Convergence`: convergence criterion. Once the self energy is converged below `Test_Convergence`
|
||||
the Hubbard-I solver is not called anymore. By default `Test_Convergence=0.0001`.
|
||||
|
||||
We also need to introduce some changes in the DMFT loop with respect to the one used for CT-QMC calculations
|
||||
in :ref:`singleshot`. The hybridization function is neglected in the Hubbard-I approximation, and only
|
||||
non-interacting level positions (:math:`\hat{\epsilon}=-\mu+\langle H^{ff} \rangle - \Sigma_{DC}`) are
|
||||
required. Hence, instead of computing `S.G0` as in :ref:`singleshot` we set the level positions::
|
||||
|
||||
# set atomic levels:
|
||||
eal = SK.eff_atomic_levels()[0]
|
||||
S.set_atomic_levels( eal = eal )
|
||||
|
||||
The part after the solution of the impurity problem remains essentially the same: we mix the self energy and local
|
||||
Green function and then save them in the hdf5 file.
|
||||
Then the double counting is recalculated and the correlation energy is computed with the Migdal formula and stored in hdf5.
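The mixing step amounts to a simple linear combination of the new and old quantities; as a plain-Python sketch (`mix` is an illustrative helper, not a TRIQS function):

```python
def mix(new, old, factor):
    """Linear mixing: factor * new + (1 - factor) * old."""
    return factor * new + (1.0 - factor) * old

# factor = 1.0 keeps the new quantity unchanged; smaller values damp the update
print(mix(2.0, 1.0, 0.5))  # -> 1.5
```

In the actual script the same combination is applied element-wise to the self energy and the local Green function read back from the hdf5 archive.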
|
||||
|
||||
Finally, we compute the modified charge density and save it as well as correlation correction to the total energy in
|
||||
:file:`Ce-gamma.qdmft` file, which is then read by lapw2 in the case of self-consistent DFT+DMFT calculations.
|
||||
|
||||
You should try to run your script before setting up the fully charge self-consistent calculation
|
||||
(see :ref:`this<runpy>` page).
|
||||
|
||||
Fully charge self-consistent DFT+DMFT calculation
|
||||
-------------------------------------------------
|
||||
|
||||
Instead of doing only one-shot runs, in this tutorial we perform a fully
self-consistent DFT+DMFT calculation. We launch such a calculation with
|
||||
|
||||
`run -qdmft 1`
|
||||
|
||||
where the `-qdmft 1` flag turns on DFT+DMFT calculations within Wien2k
using one computing core. We use here the default convergence criterion
|
||||
in Wien2k (convergence to 0.1 mRy in energy).
|
||||
|
||||
After calculations are done we may check the value of correlation ('Hubbard') energy correction to the total energy::
|
||||
|
||||
>grep HUBBARD Ce-gamma.scf|tail -n 1
|
||||
HUBBARD ENERGY(included in SUM OF EIGENVALUES): -0.012866
|
||||
|
||||
In the case of Ce, with the correlated shell occupancy close to 1, the Hubbard energy is close to 0, while the
|
||||
DC correction to energy is about J/4 in accordance with the fully-localized-limit formula, hence, giving the
|
||||
total correction :math:`\Delta E_{HUB}=E_{HUB}-E_{DC} \approx -J/4`, which in our case is equal
|
||||
to -0.175 eV :math:`\approx`-0.013 Ry.
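This estimate is easy to verify numerically (assuming :math:`J = 0.7` eV for the Ce *4f* shell):

```python
RY_IN_EV = 13.605693  # 1 Rydberg in eV

J = 0.7                   # Hund's rule coupling in eV (assumed value)
dE_hub_eV = -J / 4.0      # fully-localized-limit estimate of E_HUB - E_DC
dE_hub_Ry = dE_hub_eV / RY_IN_EV

print(round(dE_hub_eV, 3), round(dE_hub_Ry, 3))  # -> -0.175 -0.013
```

The result of about -0.013 Ry indeed matches the HUBBARD ENERGY line above.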
|
||||
|
||||
The band ("kinetic") energy with DMFT correction is ::
|
||||
|
||||
>grep DMFT Ce-gamma.scf |tail -n 1
|
||||
KINETIC ENERGY with DMFT correction: -5.370632
|
||||
|
||||
One may also check the convergence in total energy::
|
||||
|
||||
>grep :ENE Ce-gamma.scf |tail -n 5
|
||||
:ENE : ********** TOTAL ENERGY IN Ry = -17717.56318334
|
||||
:ENE : ********** TOTAL ENERGY IN Ry = -17717.56342250
|
||||
:ENE : ********** TOTAL ENERGY IN Ry = -17717.56271503
|
||||
:ENE : ********** TOTAL ENERGY IN Ry = -17717.56285812
|
||||
:ENE : ********** TOTAL ENERGY IN Ry = -17717.56287381
|
||||
|
||||
|
||||
Post-processing and data analysis
|
||||
---------------------------------
|
||||
|
||||
Within Hubbard-I one may also easily obtain the angle-resolved spectral function
|
||||
(band structure) and integrated spectral function (density of states or DOS).
|
||||
In contrast to the CT-QMC approach, one does not need to perform
analytic continuation to get the real-frequency self energy, as it can be
|
||||
calculated directly in the Hubbard-I solver.
|
||||
|
||||
The corresponding script :ref:`Ce-gamma_DOS_script` contains several new parameters ::
|
||||
|
||||
ommin=-4.0 # bottom of the energy range for DOS calculations
|
||||
ommax=6.0 # top of the energy range for DOS calculations
|
||||
N_om=2001 # number of points on the real-energy axis mesh
|
||||
broadening = 0.02 # broadening (the imaginary shift of the real-energy mesh)
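Assuming a uniform real-energy grid that includes both endpoints, these parameters imply the following mesh (a sketch; the actual mesh object used by the script may be constructed differently):

```python
ommin, ommax, N_om = -4.0, 6.0, 2001

# Uniform grid including both endpoints (an assumption about the mesh
# construction), giving a spacing of (ommax - ommin) / (N_om - 1) = 5 meV
mesh = [ommin + i * (ommax - ommin) / (N_om - 1) for i in range(N_om)]
print(len(mesh), mesh[0], mesh[-1])  # -> 2001 -4.0 6.0
```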
|
||||
|
||||
Then one needs to load the projectors required for calculating the
projected densities of states, as well as the corresponding
symmetries::
|
||||
|
||||
Converter.convert_parproj_input()
|
||||
|
||||
To get access to the analysis tools we initialize the
|
||||
:class:`SumkDFTTools <dft.sumk_dft_tools.SumkDFTTools>` class ::
|
||||
|
||||
SK = SumkDFTTools(hdf_file=dft_filename+'.h5', use_dft_blocks=False)
|
||||
|
||||
After the solver initialization, we load the previously calculated
|
||||
chemical potential and double-counting correction. Having set up
|
||||
atomic levels we then compute the atomic Green function and
|
||||
self energy on the real axis::
|
||||
|
||||
S.set_atomic_levels(eal=eal)
|
||||
S.GF_realomega(ommin=ommin, ommax=ommax, N_om=N_om, U_int=U_int, J_hund=J_hund)
|
||||
|
||||
put them into the SK class and then calculate the actual DOS::
|
||||
|
||||
SK.dos_parproj_basis(broadening=broadening)
|
||||
|
||||
We may first increase the number of **k**-points in the BZ to 10000 by executing the Wien2k
|
||||
program :program:`kgen` ::
|
||||
|
||||
x kgen
|
||||
|
||||
and then by executing the :ref:`Ce-gamma_DOS_script` with :program:`python`::
|
||||
|
||||
python Ce-gamma_DOS.py
|
||||
|
||||
As a result we get the total DOS for spins `up` and `down` (identical in our paramagnetic case)
|
||||
in :file:`DOScorrup.dat` and :file:`DOScorrdown.dat` files, respectively, as well as the projected DOS
|
||||
written in the corresponding files as described in :ref:`analysis`. In our case, for example, the files
|
||||
:file:`DOScorrup.dat` and :file:`DOScorrup_proj3.dat` contain the total DOS for spin *up* and the
|
||||
corresponding projected DOS for Ce *4f* orbital, respectively. They are plotted below.
|
||||
|
||||
.. image:: images_scripts/Ce_DOS.png
|
||||
:width: 700
|
||||
:align: center
|
||||
|
||||
As one may clearly see, the Ce *4f* band is split by the local Coulomb interaction into the filled lower
|
||||
Hubbard band and empty upper Hubbard band (the latter is additionally split into several peaks due to the
|
||||
Hund's rule coupling and multiplet effects).
|
|
@ -22,15 +22,14 @@ mpi.barrier()
|
|||
previous_runs = 0
|
||||
previous_present = False
|
||||
if mpi.is_master_node():
|
||||
f = HDFArchive(dft_filename+'.h5','a')
|
||||
if 'dmft_output' in f:
|
||||
ar = f['dmft_output']
|
||||
if 'iterations' in ar:
|
||||
previous_present = True
|
||||
previous_runs = ar['iterations']
|
||||
else:
|
||||
f.create_group('dmft_output')
|
||||
del f
|
||||
with HDFArchive(dft_filename+'.h5','a') as f:
|
||||
if 'dmft_output' in f:
|
||||
ar = f['dmft_output']
|
||||
if 'iterations' in ar:
|
||||
previous_present = True
|
||||
previous_runs = ar['iterations']
|
||||
else:
|
||||
f.create_group('dmft_output')
|
||||
previous_runs = mpi.bcast(previous_runs)
|
||||
previous_present = mpi.bcast(previous_present)
|
||||
|
||||
|
@ -47,9 +46,8 @@ chemical_potential=chemical_potential_init
|
|||
# load previous data: old self-energy, chemical potential, DC correction
|
||||
if previous_present:
|
||||
if mpi.is_master_node():
|
||||
ar = HDFArchive(dft_filename+'.h5','a')
|
||||
S.Sigma << ar['dmft_output']['Sigma']
|
||||
del ar
|
||||
with HDFArchive(dft_filename+'.h5','r') as ar:
|
||||
S.Sigma << ar['dmft_output']['Sigma']
|
||||
SK.chemical_potential,SK.dc_imp,SK.dc_energ = SK.load(['chemical_potential','dc_imp','dc_energ'])
|
||||
S.Sigma << mpi.bcast(S.Sigma)
|
||||
SK.chemical_potential = mpi.bcast(SK.chemical_potential)
|
||||
|
@ -87,11 +85,10 @@ for iteration_number in range(1,Loops+1):
|
|||
# Now mix Sigma and G with factor Mix, if wanted:
|
||||
if (iteration_number>1 or previous_present):
|
||||
if (mpi.is_master_node() and (mixing<1.0)):
|
||||
ar = HDFArchive(dft_filename+'.h5','a')
|
||||
mpi.report("Mixing Sigma and G with factor %s"%mixing)
|
||||
S.Sigma << mixing * S.Sigma + (1.0-mixing) * ar['dmft_output']['Sigma']
|
||||
S.G << mixing * S.G + (1.0-mixing) * ar['dmft_output']['G']
|
||||
del ar
|
||||
with HDFArchive(dft_filename+'.h5','r') as ar:
|
||||
mpi.report("Mixing Sigma and G with factor %s"%mixing)
|
||||
S.Sigma << mixing * S.Sigma + (1.0-mixing) * ar['dmft_output']['Sigma']
|
||||
S.G << mixing * S.G + (1.0-mixing) * ar['dmft_output']['G']
|
||||
S.G << mpi.bcast(S.G)
|
||||
S.Sigma << mpi.bcast(S.Sigma)
|
||||
|
||||
|
@ -106,11 +103,10 @@ for iteration_number in range(1,Loops+1):
|
|||
|
||||
# store the impurity self-energy, GF as well as correlation energy in h5
|
||||
if mpi.is_master_node():
|
||||
ar = HDFArchive(dft_filename+'.h5','a')
|
||||
ar['dmft_output']['iterations'] = iteration_number + previous_runs
|
||||
ar['dmft_output']['G'] = S.G
|
||||
ar['dmft_output']['Sigma'] = S.Sigma
|
||||
del ar
|
||||
with HDFArchive(dft_filename+'.h5','a') as ar:
|
||||
ar['dmft_output']['iterations'] = iteration_number + previous_runs
|
||||
ar['dmft_output']['G'] = S.G
|
||||
ar['dmft_output']['Sigma'] = S.Sigma
|
||||
|
||||
#Save essential SumkDFT data:
|
||||
SK.save(['chemical_potential','dc_imp','dc_energ','correnerg'])
|
|
@ -13,5 +13,3 @@ complex ! choice of angular harmonics
|
|||
1 1 0 0 ! l included for each sort
|
||||
0 0 0 0 ! If split into ireps, gives number of ireps. for a given orbital (otherwise 0)
|
||||
-0.11 0.14
|
||||
|
||||
|
|
@ -0,0 +1,25 @@
|
|||
SrVO3
|
||||
P LATTICE,NONEQUIV.ATOMS: 3221_Pm-3m
|
||||
MODE OF CALC=RELA unit=bohr
|
||||
7.261300 7.261300 7.261300 90.000000 90.000000 90.000000
|
||||
ATOM 1: X=0.00000000 Y=0.00000000 Z=0.00000000
|
||||
MULT= 1 ISPLIT= 2
|
||||
Sr NPT= 781 R0=0.00001000 RMT= 2.50000 Z: 38.0
|
||||
LOCAL ROT MATRIX: 1.0000000 0.0000000 0.0000000
|
||||
0.0000000 1.0000000 0.0000000
|
||||
0.0000000 0.0000000 1.0000000
|
||||
ATOM 2: X=0.50000000 Y=0.50000000 Z=0.50000000
|
||||
MULT= 1 ISPLIT= 2
|
||||
V NPT= 781 R0=0.00005000 RMT= 1.91 Z: 23.0
|
||||
LOCAL ROT MATRIX: 1.0000000 0.0000000 0.0000000
|
||||
0.0000000 1.0000000 0.0000000
|
||||
0.0000000 0.0000000 1.0000000
|
||||
ATOM -3: X=0.00000000 Y=0.50000000 Z=0.50000000
|
||||
MULT= 3 ISPLIT=-2
|
||||
-3: X=0.50000000 Y=0.00000000 Z=0.50000000
|
||||
-3: X=0.50000000 Y=0.50000000 Z=0.00000000
|
||||
O NPT= 781 R0=0.00010000 RMT= 1.70 Z: 8.0
|
||||
LOCAL ROT MATRIX: 0.0000000 0.0000000 1.0000000
|
||||
0.0000000 1.0000000 0.0000000
|
||||
-1.0000000 0.0000000 0.0000000
|
||||
0 NUMBER OF SYMMETRY OPERATIONS
|
|
@ -4,19 +4,38 @@ from pytriqs.archive import HDFArchive
|
|||
from triqs_cthyb import *
|
||||
from pytriqs.gf import *
|
||||
from triqs_dft_tools.sumk_dft import *
|
||||
from triqs_dft_tools.converters.wien2k_converter import *
|
||||
|
||||
dft_filename='SrVO3'
|
||||
U = 4.0
|
||||
J = 0.65
|
||||
beta = 40
|
||||
loops = 15 # Number of DMFT sc-loops
|
||||
sigma_mix = 1.0 # Mixing factor of Sigma after solution of the AIM
|
||||
delta_mix = 1.0 # Mixing factor of Delta as input for the AIM
|
||||
dc_type = 1 # DC type: 0 FLL, 1 Held, 2 AMF
|
||||
use_blocks = True # use bloc structure from DFT input
|
||||
prec_mu = 0.0001
|
||||
h_field = 0.0
|
||||
|
||||
## KANAMORI DENSITY-DENSITY (for full Kanamori use h_int_kanamori)
|
||||
# Define interaction paramters, DC and Hamiltonian
|
||||
U = 4.0
|
||||
J = 0.65
|
||||
dc_type = 1 # DC type: 0 FLL, 1 Held, 2 AMF
|
||||
# Construct U matrix for density-density calculations
|
||||
Umat, Upmat = U_matrix_kanamori(n_orb=n_orb, U_int=U, J_hund=J)
|
||||
# Construct density-density Hamiltonian
|
||||
h_int = h_int_density(spin_names, orb_names, map_operator_structure=SK.sumk_to_solver[0], U=Umat, Uprime=Upmat)
|
||||
|
||||
## SLATER HAMILTONIAN
|
||||
## Define interaction paramters, DC and Hamiltonian
|
||||
#U = 9.6
|
||||
#J = 0.8
|
||||
#dc_type = 0 # DC type: 0 FLL, 1 Held, 2 AMF
|
||||
## Construct Slater U matrix
|
||||
#U_sph = U_matrix(l=2, U_int=U, J_hund=J)
|
||||
#U_cubic = transform_U_matrix(U_sph, spherical_to_cubic(l=2, convention='wien2k'))
|
||||
#Umat = t2g_submatrix(U_cubic, convention='wien2k')
|
||||
## Construct Slater Hamiltonian
|
||||
#h_int = h_int_slater(spin_names, orb_names, map_operator_structure=SK.sumk_to_solver[0], U_matrix=Umat)
|
||||
|
||||
# Solver parameters
|
||||
p = {}
|
||||
p["max_time"] = -1
|
||||
|
@ -24,8 +43,8 @@ p["random_seed"] = 123 * mpi.rank + 567
|
|||
p["length_cycle"] = 200
|
||||
p["n_warmup_cycles"] = 100000
|
||||
p["n_cycles"] = 1000000
|
||||
p["perfrom_tail_fit"] = True
|
||||
p["fit_max_moments"] = 4
|
||||
p["perform_tail_fit"] = True
|
||||
p["fit_max_moment"] = 4
|
||||
p["fit_min_n"] = 30
|
||||
p["fit_max_n"] = 60
|
||||
|
||||
|
@ -38,15 +57,14 @@ p["fit_max_n"] = 60
|
|||
previous_runs = 0
|
||||
previous_present = False
|
||||
if mpi.is_master_node():
|
||||
f = HDFArchive(dft_filename+'.h5','a')
|
||||
if 'dmft_output' in f:
|
||||
ar = f['dmft_output']
|
||||
if 'iterations' in ar:
|
||||
previous_present = True
|
||||
previous_runs = ar['iterations']
|
||||
else:
|
||||
f.create_group('dmft_output')
|
||||
del f
|
||||
with HDFArchive(dft_filename+'.h5','a') as f:
|
||||
if 'dmft_output' in f:
|
||||
ar = f['dmft_output']
|
||||
if 'iterations' in ar:
|
||||
previous_present = True
|
||||
previous_runs = ar['iterations']
|
||||
else:
|
||||
f.create_group('dmft_output')
|
||||
previous_runs = mpi.bcast(previous_runs)
|
||||
previous_present = mpi.bcast(previous_present)
|
||||
|
||||
|
@ -60,11 +78,7 @@ orb_names = [i for i in range(n_orb)]
|
|||
# Use GF structure determined by DFT blocks
|
||||
gf_struct = [(block, indices) for block, indices in SK.gf_struct_solver[0].iteritems()]
|
||||
|
||||
# Construct U matrix for density-density calculations
|
||||
Umat, Upmat = U_matrix_kanamori(n_orb=n_orb, U_int=U, J_hund=J)
|
||||
|
||||
# Construct density-density Hamiltonian and solver
|
||||
h_int = h_int_density(spin_names, orb_names, map_operator_structure=SK.sumk_to_solver[0], U=Umat, Uprime=Upmat, H_dump="H.txt")
|
||||
# Construct Solver
|
||||
S = Solver(beta=beta, gf_struct=gf_struct)
|
||||
|
||||
if previous_present:
|
||||
|
@@ -72,9 +86,8 @@ if previous_present:
     dc_imp = 0
     dc_energ = 0
     if mpi.is_master_node():
-        ar = HDFArchive(dft_filename+'.h5','a')
-        S.Sigma_iw << ar['dmft_output']['Sigma_iw']
-        del ar
+        with HDFArchive(dft_filename+'.h5','r') as ar:
+            S.Sigma_iw << ar['dmft_output']['Sigma_iw']
         chemical_potential,dc_imp,dc_energ = SK.load(['chemical_potential','dc_imp','dc_energ'])
     S.Sigma_iw << mpi.bcast(S.Sigma_iw)
     chemical_potential = mpi.bcast(chemical_potential)
@@ -99,19 +112,7 @@ for iteration_number in range(1,loops+1):
         S.Sigma_iw << SK.dc_imp[0]['up'][0,0]

     # Calculate new G0_iw to input into the solver:
-    if mpi.is_master_node():
-        # We can do a mixing of Delta in order to stabilize the DMFT iterations:
-        S.G0_iw << S.Sigma_iw + inverse(S.G_iw)
-        ar = HDFArchive(dft_filename+'.h5','a')
-        if (iteration_number>1 or previous_present):
-            mpi.report("Mixing input Delta with factor %s"%delta_mix)
-            Delta = (delta_mix * delta(S.G0_iw)) + (1.0-delta_mix) * ar['dmft_output']['Delta_iw']
-            S.G0_iw << S.G0_iw + delta(S.G0_iw) - Delta
-        ar['dmft_output']['Delta_iw'] = delta(S.G0_iw)
-        S.G0_iw << inverse(S.G0_iw)
-        del ar
-
-    S.G0_iw << mpi.bcast(S.G0_iw)
+    S.G0_iw << inverse(S.Sigma_iw + inverse(S.G_iw))

     # Solve the impurity problem:
     S.solve(h_int=h_int, **p)
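The replacement one-liner in the hunk above is the Dyson equation for the Weiss field, G0^{-1} = Sigma + G^{-1}. Illustrated on plain numbers (the real objects are Matsubara Green functions; this scalar version is only a sketch):

```python
# Dyson update for the Weiss field: G0 = 1 / (Sigma + 1/G).
def new_weiss_field(sigma, g):
    return 1.0 / (sigma + 1.0 / g)

g0 = new_weiss_field(0.5, 2.0)  # 1 / (0.5 + 0.5)
```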
@@ -122,25 +123,24 @@ for iteration_number in range(1,loops+1):
     # Now mix Sigma and G with factor sigma_mix, if wanted:
     if (iteration_number>1 or previous_present):
         if mpi.is_master_node():
-            ar = HDFArchive(dft_filename+'.h5','a')
-            mpi.report("Mixing Sigma and G with factor %s"%sigma_mix)
-            S.Sigma_iw << sigma_mix * S.Sigma_iw + (1.0-sigma_mix) * ar['dmft_output']['Sigma_iw']
-            S.G_iw << sigma_mix * S.G_iw + (1.0-sigma_mix) * ar['dmft_output']['G_iw']
-            del ar
+            with HDFArchive(dft_filename+'.h5','r') as ar:
+                mpi.report("Mixing Sigma and G with factor %s"%sigma_mix)
+                S.Sigma_iw << sigma_mix * S.Sigma_iw + (1.0-sigma_mix) * ar['dmft_output']['Sigma_iw']
+                S.G_iw << sigma_mix * S.G_iw + (1.0-sigma_mix) * ar['dmft_output']['G_iw']
         S.G_iw << mpi.bcast(S.G_iw)
         S.Sigma_iw << mpi.bcast(S.Sigma_iw)

     # Write the final Sigma and G to the hdf5 archive:
     if mpi.is_master_node():
-        ar = HDFArchive(dft_filename+'.h5','a')
-        ar['dmft_output']['iterations'] = iteration_number + previous_runs
-        ar['dmft_output']['G_tau'] = S.G_tau
-        ar['dmft_output']['G_iw'] = S.G_iw
-        ar['dmft_output']['Sigma_iw'] = S.Sigma_iw
-        ar['dmft_output']['G0-%s'%(iteration_number)] = S.G0_iw
-        ar['dmft_output']['G-%s'%(iteration_number)] = S.G_iw
-        ar['dmft_output']['Sigma-%s'%(iteration_number)] = S.Sigma_iw
-        del ar
+        with HDFArchive(dft_filename+'.h5','a') as ar:
+            ar['dmft_output']['iterations'] = iteration_number + previous_runs
+            ar['dmft_output']['G_tau'] = S.G_tau
+            ar['dmft_output']['G_iw'] = S.G_iw
+            ar['dmft_output']['Sigma_iw'] = S.Sigma_iw
+            ar['dmft_output']['G0-%s'%(iteration_number)] = S.G0_iw
+            ar['dmft_output']['G-%s'%(iteration_number)] = S.G_iw
+            ar['dmft_output']['Sigma-%s'%(iteration_number)] = S.Sigma_iw

     # Set the new double counting:
     dm = S.G_iw.density() # compute the density matrix of the impurity problem
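The mixing in the hunk above is a plain linear update, new <- sigma_mix*new + (1-sigma_mix)*old, applied to Sigma and G. On numbers instead of Green functions (a sketch only):

```python
# Linear mixing used to stabilize the DMFT iterations: with sigma_mix=1
# the new value is taken unchanged, with smaller values the update is damped.
def linear_mix(new, old, sigma_mix):
    return sigma_mix * new + (1.0 - sigma_mix) * old

mixed = linear_mix(2.0, 1.0, 0.8)
```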
@@ -1,22 +1,75 @@
 .. _SrVO3:

 SrVO3 (single-shot)
 ===================

-We will discuss now how to set up a full working calculation,
+On the example of SrVO3 we will discuss now how to set up a full working calculation,
 including the initialization of the :ref:`CTHYB solver <triqscthyb:welcome>`.
 Some additional parameters are introduced to make the calculation
 more efficient. This is a more advanced example, which is
-also suited for parallel execution. The conversion, which
-we assume to be carried out already, is discussed :ref:`here <conversion>`.
+also suited for parallel execution.

-For the convenience of the user, we provide also two
-working python scripts in this documentation. One for a calculation
-using Kanamori definitions (:download:`dft_dmft_cthyb.py <images_scripts/dft_dmft_cthyb.py>`) and one with a
-rotational-invariant Slater interaction Hamiltonian (:download:`dft_dmft_cthyb_slater.py <images_scripts/dft_dmft_cthyb.py>`).
-The user has to adapt these scripts to his own needs.
+For the convenience of the user, we provide also a full
+python script (:download:`dft_dmft_cthyb.py <images_scripts/dft_dmft_cthyb.py>`).
+The user has to adapt it to his own needs. How to execute your script is described :ref:`here <runpy>`.
+
+The conversion will now be discussed in detail for the Wien2k and VASP packages.
+For more details we refer to the :ref:`documentation <conversion>`.
+
+
+DFT (Wien2k) and Wannier orbitals
+=================================
+
+DFT setup
+---------
+
+First, we do a DFT calculation, using the Wien2k package. As main input file we have to provide the so-called struct file :file:`SrVO3.struct`. We use the following:
+
+.. literalinclude:: images_scripts/SrVO3.struct
+
+Instead of going through the whole initialisation process, we can use ::
+
+   init -b -vxc 5 -numk 5000
+
+This is setting up a non-magnetic calculation, using the LDA and 5000 k-points in the full Brillouin zone. As usual, we start the DFT self-consistent cycle with the Wien2k script ::
+
+   run
+
+Wannier orbitals
+----------------
+
+As a next step, we calculate localised orbitals for the t2g orbitals with :program:`dmftproj`.
+We create the following input file, :file:`SrVO3.indmftpr`
+
+.. literalinclude:: images_scripts/SrVO3.indmftpr
+
+Details on this input file and how to use :program:`dmftproj` are described :ref:`here <convWien2k>`.
+
+To prepare the input data for :program:`dmftproj` we first execute lapw2 with the `-almd` option ::
+
+   x lapw2 -almd
+
+Then :program:`dmftproj` is executed in its default mode (i.e. without spin-polarization or spin-orbit included) ::
+
+   dmftproj
+
+This program produces the necessary files for the conversion to the hdf5 file structure. This is done using
+the python module :class:`Wien2kConverter <dft.converters.wien2k_converter.Wien2kConverter>`.
+A simple python script that initialises the converter is::
+
+   from triqs_dft_tools.converters.wien2k_converter import *
+   Converter = Wien2kConverter(filename = "SrVO3")
+
+After initializing the interface module, we can now convert the input
+text files to the hdf5 archive by::
+
+   Converter.convert_dft_input()
+
+This reads all the data, and stores everything that is necessary for the DMFT calculation in the file :file:`SrVO3.h5`.
+

 The DMFT calculation
 ====================

 The DMFT script itself is, except very few details, independent of the DFT package that was used to calculate the local orbitals.
 As soon as one has converted everything to the hdf5 format, the following procedure is practically the same.

 Loading modules
 ---------------
@@ -28,6 +81,7 @@ First, we load the necessary modules::

    from pytriqs.archive import HDFArchive
+   from pytriqs.operators.util import *
    from triqs_cthyb import *
    import pytriqs.utility.mpi as mpi

 The last two lines load the modules for the construction of the
 :ref:`CTHYB solver <triqscthyb:welcome>`.
@@ -56,7 +110,7 @@ Initializing the solver
 -----------------------

 We also have to specify the :ref:`CTHYB solver <triqscthyb:welcome>` related settings.
 We assume that the DMFT script for SrVO3 is executed on 16 cores. A sufficient set
 of parameters for a first guess is::

    p = {}
@@ -102,7 +156,7 @@ We assumed here that we want to use an interaction matrix with
 Kanamori definitions of :math:`U` and :math:`J`.

 Next, we construct the Hamiltonian and the solver::

    h_int = h_int_density(spin_names, orb_names, map_operator_structure=SK.sumk_to_solver[0], U=Umat, Uprime=Upmat)
    S = Solver(beta=beta, gf_struct=gf_struct)
@@ -116,6 +170,13 @@ For other choices of the interaction matrices (e.g Slater representation) or
 Hamiltonians, we refer to the reference manual of the :ref:`TRIQS <triqslibs:welcome>`
 library.

+As a last step, we initialize the subgroup in the hdf5 archive to store the results::
+
+   if mpi.is_master_node():
+       with HDFArchive(dft_filename+'.h5') as ar:
+           if (not ar.is_group('dmft_output')):
+               ar.create_group('dmft_output')
+
 DMFT cycle
 ----------
@@ -125,49 +186,47 @@ some additional refinements::

     for iteration_number in range(1,loops+1):
         if mpi.is_master_node(): print "Iteration = ", iteration_number

         SK.symm_deg_gf(S.Sigma_iw,orb=0)                       # symmetrizing Sigma
         SK.set_Sigma([ S.Sigma_iw ])                           # put Sigma into the SumK class
         chemical_potential = SK.calc_mu( precision = prec_mu ) # find the chemical potential for given density
         S.G_iw << SK.extract_G_loc()[0]                        # calc the local Green function
         mpi.report("Total charge of Gloc : %.6f"%S.G_iw.total_density())

-        # Init the DC term and the real part of Sigma, if no previous runs found:
-        if (iteration_number==1 and previous_present==False):
+        # In the first loop, init the DC term and the real part of Sigma:
+        if (iteration_number==1):
             dm = S.G_iw.density()
             SK.calc_dc(dm, U_interact = U, J_hund = J, orb = 0, use_dc_formula = dc_type)
             S.Sigma_iw << SK.dc_imp[0]['up'][0,0]

         # Calculate new G0_iw to input into the solver:
         S.G0_iw << S.Sigma_iw + inverse(S.G_iw)
         S.G0_iw << inverse(S.G0_iw)

         # Solve the impurity problem:
         S.solve(h_int=h_int, **p)

         # Solved. Now do post-solution stuff:
         mpi.report("Total charge of impurity problem : %.6f"%S.G_iw.total_density())

         # Now mix Sigma and G with factor mix, if wanted:
-        if (iteration_number>1 or previous_present):
+        if (iteration_number>1):
             if mpi.is_master_node():
-                ar = HDFArchive(dft_filename+'.h5','a')
-                mpi.report("Mixing Sigma and G with factor %s"%mix)
-                S.Sigma_iw << mix * S.Sigma_iw + (1.0-mix) * ar['dmft_output']['Sigma_iw']
-                S.G_iw << mix * S.G_iw + (1.0-mix) * ar['dmft_output']['G_iw']
-                del ar
+                with HDFArchive(dft_filename+'.h5','r') as ar:
+                    mpi.report("Mixing Sigma and G with factor %s"%mix)
+                    S.Sigma_iw << mix * S.Sigma_iw + (1.0-mix) * ar['dmft_output']['Sigma_iw']
+                    S.G_iw << mix * S.G_iw + (1.0-mix) * ar['dmft_output']['G_iw']
             S.G_iw << mpi.bcast(S.G_iw)
             S.Sigma_iw << mpi.bcast(S.Sigma_iw)

         # Write the final Sigma and G to the hdf5 archive:
         if mpi.is_master_node():
-            ar = HDFArchive(dft_filename+'.h5','a')
-            ar['dmft_output']['iterations'] = iteration_number
-            ar['dmft_output']['G_0'] = S.G0_iw
-            ar['dmft_output']['G_tau'] = S.G_tau
-            ar['dmft_output']['G_iw'] = S.G_iw
-            ar['dmft_output']['Sigma_iw'] = S.Sigma_iw
-            del ar
+            with HDFArchive(dft_filename+'.h5') as ar:
+                ar['dmft_output']['iterations'] = iteration_number
+                ar['dmft_output']['G_0'] = S.G0_iw
+                ar['dmft_output']['G_tau'] = S.G_tau
+                ar['dmft_output']['G_iw'] = S.G_iw
+                ar['dmft_output']['Sigma_iw'] = S.Sigma_iw

         # Set the new double counting:
         dm = S.G_iw.density() # compute the density matrix of the impurity problem
@@ -183,20 +242,19 @@ will be stored in a separate subgroup in the hdf5 file, called `dmft_output`.
 Note that this script performs 15 DMFT cycles, but does not check for
 convergence. Of course, it would be possible to build in convergence criteria.
 A simple check for convergence can be also done if you store multiple quantities
-of each iteration and analyze the convergence by hand. In general, it is advisable
+of each iteration and analyse the convergence by hand. In general, it is advisable
 to start with a lower statistics (less measurements), but then increase it at a
 point close to converged results (e.g. after a few initial iterations). This helps
 to keep computational costs low during the first iterations.

 Using the Kanamori Hamiltonian and the parameters above (but on 16 cores),
 your self energy after the **first iteration** should look like the
 self energy shown below.

 .. image:: images_scripts/SrVO3_Sigma_iw_it1.png
    :width: 700
    :align: center


 .. _tailfit:

 Tail fit parameters
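The by-hand convergence check described above can be sketched in a few lines: store one scalar per iteration (e.g. a norm of the self energy) and stop once successive values agree within a tolerance. The list below stands in for the per-iteration quantities written to the archive (an assumption of this sketch):

```python
# Simple convergence test on a history of per-iteration scalars.
def is_converged(history, tol=1e-3):
    if len(history) < 2:
        return False                      # need at least two iterations
    return abs(history[-1] - history[-2]) < tol

history = [1.0, 0.5, 0.5004]              # fabricated per-iteration values
```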
@@ -209,8 +267,8 @@ Therefore disable the tail fitting first::

 and perform only one DMFT iteration. The resulting self energy can be tail fitted by hand::

-   for name, sig in S.Sigma_iw:
-       S.Sigma_iw[name].fit_tail(fit_n_moments = 4, fit_min_n = 60, fit_max_n = 140)
+   Sigma_iw_fit = S.Sigma_iw.copy()
+   Sigma_iw_fit << tail_fit(S.Sigma_iw, fit_max_moment = 4, fit_min_n = 40, fit_max_n = 160)[0]

 Plot the self energy and adjust the tail fit parameters such that you obtain a
 proper fit. The :meth:`fit_tail function <pytriqs.gf.tools.tail_fit>` is part
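What a tail fit does can be illustrated without TRIQS: over a window of high Matsubara frequencies, fit Sigma(iw_n) ≈ c0 + c1/(iw_n) by least squares and read off the moments. Everything below is a stand-in (fabricated data, plain numpy), not the library's `tail_fit`:

```python
import numpy as np

# Fermionic Matsubara frequencies iw_n = i(2n+1)pi/beta on the fit window.
beta = 10.0
n = np.arange(40, 160)
iw = 1j * (2 * n + 1) * np.pi / beta

sigma = 0.3 + 1.5 / iw                 # fabricated data with known moments

# Least-squares fit of the model c0 + c1/(iw) on the window.
A = np.column_stack([np.ones_like(iw), 1.0 / iw])
coeffs, *_ = np.linalg.lstsq(A, sigma, rcond=None)
```

Plotting data and fit over the window is exactly the "adjust the parameters until the fit is proper" step described in the text.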
@@ -46,7 +46,7 @@ electronic structure data. At this stage simple consistency checks are performed

 All electronic structure from VASP is stored in a class ElectronicStructure:

-.. autoclass:: elstruct.ElectronicStructure
+.. autoclass:: triqs_dft_tools.converters.plovasp.elstruct.ElectronicStructure
    :members:
@@ -95,7 +95,7 @@ Order of operations:
 * distribute back the arrays assuming that the order is preserved

-.. autoclass:: proj_shell.ProjectorShell
+.. autoclass:: triqs_dft_tools.converters.plovasp.proj_shell.ProjectorShell
    :members:
@@ -1,4 +1,4 @@
-.. sec_vaspio
+.. _vaspio:

 VASP input-output
 #################
@@ -17,10 +17,10 @@ SET(D ${CMAKE_CURRENT_SOURCE_DIR}/SRC_templates/)
 SET(WIEN_SRC_TEMPL_FILES ${D}/case.cf_f_mm2 ${D}/case.cf_p_cubic ${D}/case.indmftpr ${D}/run_triqs ${D}/runsp_triqs)
 message(STATUS "-----------------------------------------------------------------------------")
 message(STATUS " ******** WARNING ******** ")
-message(STATUS " Wien2k users : after installation of TRIQS, copy the files from ")
+message(STATUS " Wien2k 14.2 and older : after installation of TRIQS, copy the files from ")
 message(STATUS " ${CMAKE_INSTALL_PREFIX}/share/triqs/Wien2k_SRC_files/SRC_templates ")
 message(STATUS " to your Wien2k installation WIENROOT/SRC_templates (Cf documentation). ")
-message(STATUS " This is not handled automatically by the installation process. ")
+message(STATUS " For newer versions these files are already shipped with Wien2k. ")
 message(STATUS "-----------------------------------------------------------------------------")
 install (FILES ${WIEN_SRC_TEMPL_FILES} DESTINATION share/triqs/Wien2k_SRC_files/SRC_templates )
@@ -260,13 +260,12 @@ class HkConverter(ConverterTools):
         R.close()

         # Save to the HDF5:
-        ar = HDFArchive(self.hdf_file, 'a')
-        if not (self.dft_subgrp in ar):
-            ar.create_group(self.dft_subgrp)
-        things_to_save = ['energy_unit', 'n_k', 'k_dep_projection', 'SP', 'SO', 'charge_below', 'density_required',
+        with HDFArchive(self.hdf_file, 'a') as ar:
+            if not (self.dft_subgrp in ar):
+                ar.create_group(self.dft_subgrp)
+            things_to_save = ['energy_unit', 'n_k', 'k_dep_projection', 'SP', 'SO', 'charge_below', 'density_required',
                           'symm_op', 'n_shells', 'shells', 'n_corr_shells', 'corr_shells', 'use_rotations', 'rot_mat',
                           'rot_mat_time_inv', 'n_reps', 'dim_reps', 'T', 'n_orbitals', 'proj_mat', 'bz_weights', 'hopping',
                           'n_inequiv_shells', 'corr_to_inequiv', 'inequiv_to_corr']
-        for it in things_to_save:
-            ar[self.dft_subgrp][it] = locals()[it]
-        del ar
+            for it in things_to_save:
+                ar[self.dft_subgrp][it] = locals()[it]
@@ -15,6 +15,6 @@ module.add_preamble("""
 #include <triqs/cpp2py_converters/arrays.hpp>
 """)

-module.add_function ("array_view<double,2> dos_tetra_weights_3d (array_view<double,1> eigk, double en, array_view<long,2> itt)", doc = """DOS of a band by analytical tetrahedron method\n\n Returns corner weights for all tetrahedra for a given band and real energy.""")
+module.add_function ("array<double,2> dos_tetra_weights_3d (array_view<double,1> eigk, double en, array_view<long,2> itt)", doc = """DOS of a band by analytical tetrahedron method\n\n Returns corner weights for all tetrahedra for a given band and real energy.""")

 module.generate_code()
@@ -67,7 +67,7 @@ def main():
     This function should not be called directly but via a bash script
     'plovasp' invoking the main function as follows:

-    pytriqs -m applications.dft.converters.plovasp.converter $@
+    python -m applications.dft.converters.plovasp.converter $@
     """
     narg = len(sys.argv)
     if narg < 2:
@@ -137,7 +137,7 @@ class ElectronicStructure:
         """
         plo = self.proj_raw
         nproj, ns, nk, nb = plo.shape
-        ions = list(set([param['isite'] for param in self.proj_params]))
+        ions = sorted(list(set([param['isite'] for param in self.proj_params])))
        nions = len(ions)
        norb = nproj / nions
@@ -44,10 +44,9 @@ class TestSumkDFT(SumkDFT):
         fermi_weights = 0
         band_window = 0
         if mpi.is_master_node():
-            ar = HDFArchive(self.hdf_file,'r')
-            fermi_weights = ar['dft_misc_input']['dft_fermi_weights']
-            band_window = ar['dft_misc_input']['band_window']
-            del ar
+            with HDFArchive(self.hdf_file,'r') as ar:
+                fermi_weights = ar['dft_misc_input']['dft_fermi_weights']
+                band_window = ar['dft_misc_input']['band_window']
         fermi_weights = mpi.bcast(fermi_weights)
         band_window = mpi.bcast(band_window)
@@ -184,10 +183,9 @@ class TestSumkDFT(SumkDFT):
         fermi_weights = 0
         band_window = 0
         if mpi.is_master_node():
-            ar = HDFArchive(self.hdf_file,'r')
-            fermi_weights = ar['dft_misc_input']['dft_fermi_weights']
-            band_window = ar['dft_misc_input']['band_window']
-            del ar
+            with HDFArchive(self.hdf_file,'r') as ar:
+                fermi_weights = ar['dft_misc_input']['dft_fermi_weights']
+                band_window = ar['dft_misc_input']['band_window']
         fermi_weights = mpi.bcast(fermi_weights)
         band_window = mpi.bcast(band_window)
@@ -282,14 +280,13 @@ def dmft_cycle():
     previous_present = False

     if mpi.is_master_node():
-        ar = HDFArchive(HDFfilename,'a')
-        if 'iterations' in ar:
-            previous_present = True
-            previous_runs = ar['iterations']
-        else:
-            previous_runs = 0
-            previous_present = False
-        del ar
+        with HDFArchive(HDFfilename,'a') as ar:
+            if 'iterations' in ar:
+                previous_present = True
+                previous_runs = ar['iterations']
+            else:
+                previous_runs = 0
+                previous_present = False

     mpi.barrier()
     previous_runs = mpi.bcast(previous_runs)
@@ -315,9 +312,8 @@ def dmft_cycle():
     if (previous_present):
         mpi.report("Using stored data for initialisation")
         if (mpi.is_master_node()):
-            ar = HDFArchive(HDFfilename,'a')
-            S.Sigma <<= ar['SigmaF']
-            del ar
+            with HDFArchive(HDFfilename,'a') as ar:
+                S.Sigma <<= ar['SigmaF']
             things_to_load=['chemical_potential','dc_imp']
             old_data=SK.load(things_to_load)
             chemical_potential=old_data[0]
@@ -365,13 +361,12 @@ def dmft_cycle():
         # Now mix Sigma and G:
         if ((itn>1)or(previous_present)):
             if (mpi.is_master_node()and (Mix<1.0)):
-                ar = HDFArchive(HDFfilename,'r')
-                mpi.report("Mixing Sigma and G with factor %s"%Mix)
-                if ('SigmaF' in ar):
-                    S.Sigma <<= Mix * S.Sigma + (1.0-Mix) * ar['SigmaF']
-                if ('GF' in ar):
-                    S.G <<= Mix * S.G + (1.0-Mix) * ar['GF']
-                del ar
+                with HDFArchive(HDFfilename,'r') as ar:
+                    mpi.report("Mixing Sigma and G with factor %s"%Mix)
+                    if ('SigmaF' in ar):
+                        S.Sigma <<= Mix * S.Sigma + (1.0-Mix) * ar['SigmaF']
+                    if ('GF' in ar):
+                        S.G <<= Mix * S.G + (1.0-Mix) * ar['GF']
             S.G = mpi.bcast(S.G)
             S.Sigma = mpi.bcast(S.Sigma)
@@ -386,14 +381,13 @@ def dmft_cycle():

         # store the impurity self-energy, GF as well as correlation energy in h5
         if (mpi.is_master_node()):
-            ar = HDFArchive(HDFfilename,'a')
-            ar['iterations'] = itn
-            ar['chemical_cotential%s'%itn] = chemical_potential
-            ar['SigmaF'] = S.Sigma
-            ar['GF'] = S.G
-            ar['correnerg%s'%itn] = correnerg
-            ar['DCenerg%s'%itn] = SK.dc_energ
-            del ar
+            with HDFArchive(HDFfilename,'a') as ar:
+                ar['iterations'] = itn
+                ar['chemical_cotential%s'%itn] = chemical_potential
+                ar['SigmaF'] = S.Sigma
+                ar['GF'] = S.G
+                ar['correnerg%s'%itn] = correnerg
+                ar['DCenerg%s'%itn] = SK.dc_energ

         #Save essential SumkDFT data:
         things_to_save=['chemical_potential','dc_energ','dc_imp']
@@ -428,11 +422,10 @@ def dmft_cycle():

     # store correlation energy contribution to be read by Wien2k and then included to DFT+DMFT total energy
     if (mpi.is_master_node()):
-        ar = HDFArchive(HDFfilename)
-        itn = ar['iterations']
-        correnerg = ar['correnerg%s'%itn]
-        DCenerg = ar['DCenerg%s'%itn]
-        del ar
+        with HDFArchive(HDFfilename) as ar:
+            itn = ar['iterations']
+            correnerg = ar['correnerg%s'%itn]
+            DCenerg = ar['DCenerg%s'%itn]
         correnerg -= DCenerg[0]
         f=open(lda_filename+'.qdmft','a')
         f.write("%.16f\n"%correnerg)
@@ -54,7 +54,7 @@ def issue_warning(message):
 class ConfigParameters:
     r"""
     Class responsible for parsing of the input config-file.

     Parameters:

     - *sh_required*, *sh_optional* : required and optional parameters of shells
@@ -79,7 +79,7 @@ class ConfigParameters:
         self.parameters = {}

         self.sh_required = {
-            'ions': ('ion_list', self.parse_string_ion_list),
+            'ions': ('ions', self.parse_string_ion_list),
             'lshell': ('lshell', int)}

         self.sh_optional = {
@@ -109,14 +109,20 @@ class ConfigParameters:
     ################################################################################
     def parse_string_ion_list(self, par_str):
         """
-        The ion list accepts two formats:
+        The ion list accepts the following formats:
         1). A list of ion indices according to POSCAR.
            The list can be defined as a range '9..20'.
-        2). An element name, in which case all ions with
-           this name are included.
+
+        2). A list of ion groups (e.g. '[1 4] [2 3]') in which
+           case each group defines a set of equivalent sites.
+
+        3). An element name, in which case all ions with
+           this name are included. NOT YET IMPLEMENTED.

         The second option requires an input from POSCAR file.
         """
         ion_info = {}

         # First check if a range is given
         patt = '([0-9]+)\.\.([0-9]+)'
         match = re.match(patt, par_str)
@@ -125,7 +131,8 @@ class ConfigParameters:
             mess = "First index of the range must be smaller or equal to the second"
             assert i1 <= i2, mess
             # Note that we need to subtract 1 from VASP indices
-            ion_list = np.array(range(i1 - 1, i2))
+            ion_info['ion_list'] = [[ion - 1] for ion in range(i1, i2 + 1)]
+            ion_info['nion'] = len(ion_info['ion_list'])
         else:
             # Check if a set of indices is given
             try:
@@ -133,15 +140,40 @@ class ConfigParameters:
                 l_tmp.sort()
                 # Subtract 1 so that VASP indices (starting with 1) are converted
                 # to Python indices (starting with 0)
-                ion_list = np.array(l_tmp) - 1
+                ion_info['ion_list'] = [[ion - 1] for ion in l_tmp]
+                ion_info['nion'] = len(ion_info['ion_list'])
             except ValueError:
-                err_msg = "Only an option with a list of ion indices is implemented"
-                raise NotImplementedError(err_msg)
+                pass
+
+        # Check if equivalence classes are given
+        if not ion_info:
+            try:
+                patt = '[0-9][0-9,\s]*'
+                patt2 = '[0-9]+'
+                classes = re.findall(patt, par_str)
+                ion_list = []
+                nion = 0
+                for cl in classes:
+                    ions = map(int, re.findall(patt2, cl))
+                    ion_list.append([ion - 1 for ion in ions])
+                    nion += len(ions)
+
+                if not ion_list:
+                    raise ValueError
+
+                ion_info['ion_list'] = ion_list
+                ion_info['nion'] = nion
+            except ValueError:
+                err_msg = "Error parsing list of ions"
+                raise NotImplementedError(err_msg)

-        err_mess = "Lowest ion index is smaller than 1 in '%s'"%(par_str)
-        assert np.all(ion_list >= 0), err_mess
+        if 'ion_list' in ion_info:
+            ion_list = ion_info['ion_list']
+            assert all([all([ion >= 0 for ion in gr]) for gr in ion_list]), (
+                "Lowest ion index is smaller than 1 in '%s'"%(par_str))

-        return ion_list
+        return ion_info

 ################################################################################
 #
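The two input formats handled above (a range '9..20' and equivalence-class groups '[1 4] [2 3]') can be sketched as a small standalone parser. This is a simplified re-implementation for illustration, not the library code; in particular a plain space-separated list is lumped into a single group here, unlike the real parser:

```python
import re

def parse_ion_list(par_str):
    """Return {'ion_list': groups, 'nion': count} with 0-based indices."""
    # Format 1: a VASP-style range '9..20', one group per ion.
    m = re.match(r'([0-9]+)\.\.([0-9]+)', par_str)
    if m:
        i1, i2 = int(m.group(1)), int(m.group(2))
        groups = [[ion - 1] for ion in range(i1, i2 + 1)]
        return {'ion_list': groups, 'nion': len(groups)}
    # Format 2: groups like '[1 4] [2 3]', each run of digits/spaces is a class.
    classes = re.findall(r'[0-9][0-9,\s]*', par_str)
    groups = [[int(s) - 1 for s in re.findall(r'[0-9]+', cl)] for cl in classes]
    return {'ion_list': groups, 'nion': sum(len(g) for g in groups)}
```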
@@ -225,7 +257,7 @@ class ConfigParameters:

     ################################################################################
     #
-    # parse_string_ion_list()
+    # parse_string_dosmesh()
     #
     ################################################################################
     def parse_string_dosmesh(self, par_str):
@@ -470,13 +502,13 @@ class ConfigParameters:
             if len(self.gr_optional[par]) > 2:
                 self.groups[0][key] = self.gr_optional[par][2]
             continue
         # Add the index of the single shell into the group
         self.groups[0].update({'shells': [1]})

         #
         # Consistency checks
         #
-        # Check the existance of shells referenced in the groups
+        # Check the existence of shells referenced in the groups
         def find_shell_by_user_index(uindex):
             for ind, shell in enumerate(self.shells):
                 if shell['user_index'] == uindex:
@@ -529,7 +561,7 @@ class ConfigParameters:
         # Check that all shells are referenced in the groups
         assert sh_refs_used == range(self.nshells), "Some shells are not inside any of the groups"


     ################################################################################
     #
     # parse_general()
@@ -64,12 +64,14 @@ def check_data_consistency(pars, el_struct):
     """
     # Check that ions inside each shell are of the same sort
     for sh in pars.shells:
-        assert max(sh['ion_list']) <= el_struct.natom, "Site index in the projected shell exceeds the number of ions in the structure"
-        sorts = set([el_struct.type_of_ion[io] for io in sh['ion_list']])
+        max_ion_index = max([max(gr) for gr in sh['ions']['ion_list']])
+        assert max_ion_index < el_struct.natom, "Site index in the projected shell exceeds the number of ions in the structure"
+        ion_list = list(it.chain(*sh['ions']['ion_list']))
+
+        sorts = set([el_struct.type_of_ion[io] for io in ion_list])
         assert len(sorts) == 1, "Each projected shell must contain only ions of the same sort"

         # Check that ion and orbital lists in shells match those of projectors
-        ion_list = sh['ion_list']
         lshell = sh['lshell']
         for ion in ion_list:
             for par in el_struct.proj_params:
@@ -113,7 +115,7 @@ def generate_plo(conf_pars, el_struct):
         print
         print "  Shell         : %s"%(pshell.user_index)
         print "  Orbital l     : %i"%(pshell.lorb)
-        print "  Number of ions: %i"%(len(pshell.ion_list))
+        print "  Number of ions: %i"%(pshell.nion)
         print "  Dimension     : %i"%(pshell.ndim)
         pshells.append(pshell)
@@ -323,8 +325,9 @@ def plo_output(conf_pars, el_struct, pshells, pgroups):
         # Convert ion indices from the internal representation (starting from 0)
         # to conventional VASP representation (starting from 1)
         ion_output = [io + 1 for io in shell.ion_list]
+        # Derive sorts from equivalence classes
         sh_dict['ion_list'] = ion_output
-        sh_dict['ion_sort'] = el_struct.type_of_ion[shell.ion_list[0]]
+        sh_dict['ion_sort'] = shell.ion_sort

         # TODO: add the output of transformation matrices
@@ -94,7 +94,7 @@ class ProjectorGroup:
         for isp in xrange(ns_band):
             for ik in xrange(nk):
                 ib1 = self.ib_win[ik, isp, 0]
-                ib2 = self.ib_win[ik, isp, 1]
+                ib2 = self.ib_win[ik, isp, 1]+1
                 occ = el_struct.ferw[isp, ik, ib1:ib2]
                 kwght = el_struct.kmesh['kweights'][ik]
                 self.nelect += occ.sum() * kwght * rspin
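The one-character fix above is an off-by-one: `ib_win` stores an inclusive band window [ib1, ib2], while Python slices exclude the upper bound, so the slice must run to ib2+1 to include the last band. A self-contained illustration (the occupation array below is fabricated):

```python
import numpy as np

ferw = np.array([1.0, 1.0, 0.5, 0.0])  # fabricated occupations per band
ib1, ib2 = 1, 2                        # inclusive window: bands 1 and 2

occ_wrong = ferw[ib1:ib2]              # drops band ib2
occ_fixed = ferw[ib1:ib2 + 1]          # includes both window edges
```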
@@ -70,7 +70,7 @@ class ProjectorShell:
     """
     def __init__(self, sh_pars, proj_raw, proj_params, kmesh, structure, nc_flag):
         self.lorb = sh_pars['lshell']
-        self.ion_list = sh_pars['ion_list']
+        self.ions = sh_pars['ions']
         self.user_index = sh_pars['user_index']
         self.nc_flag = nc_flag
 #        try:
@@ -81,8 +81,17 @@ class ProjectorShell:
         self.lm1 = self.lorb**2
         self.lm2 = (self.lorb+1)**2

+        self.nion = self.ions['nion']
+        # Extract ion list and equivalence classes (ion sorts)
+        self.ion_list = sorted(it.chain(*self.ions['ion_list']))
+        self.ion_sort = []
+        for ion in self.ion_list:
+            for icl, eq_cl in enumerate(self.ions['ion_list']):
+                if ion in eq_cl:
+                    self.ion_sort.append(icl + 1)  # Enumerate classes starting from 1
+                    break
+
         self.ndim = self.extract_tmatrices(sh_pars)
-        self.nion = len(self.ion_list)

         self.extract_projectors(proj_raw, proj_params, kmesh, structure)
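The sort-assignment loop added in the hunk above can be exercised on its own: each ion receives the 1-based index of the equivalence class (group) it belongs to. Extracted into a small function for illustration (the function name is an assumption of this sketch):

```python
from itertools import chain

def ion_sorts(ion_groups):
    """Flatten equivalence classes into a sorted ion list plus per-ion sort indices."""
    ion_list = sorted(chain(*ion_groups))
    sorts = []
    for ion in ion_list:
        for icl, eq_cl in enumerate(ion_groups):
            if ion in eq_cl:
                sorts.append(icl + 1)  # classes enumerated starting from 1
                break
    return ion_list, sorts
```

For groups `[[0, 3], [1, 2]]` the ions come out as `[0, 1, 2, 3]` with sorts `[1, 2, 2, 1]`, i.e. ions 0 and 3 are equivalent, as are ions 1 and 2.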
@@ -106,7 +115,7 @@ class ProjectorShell:
         Flag 'self.do_transform' is introduced for the optimization purposes
         to avoid superfluous matrix multiplications.
         """
-        nion = len(self.ion_list)
+        nion = self.nion
         nm = self.lm2 - self.lm1

         if 'tmatrices' in sh_pars:
@@ -213,7 +222,8 @@ class ProjectorShell:
         """
         assert self.nc_flag == False, "Non-collinear case is not implemented"

-        nion = len(self.ion_list)
+#        nion = len(self.ion_list)
+        nion = self.nion
         nlm = self.lm2 - self.lm1
         _, ns, nk, nb = proj_raw.shape
@@ -345,7 +345,7 @@ class Poscar:
     ----------

     vasp_dir (str) : path to the VASP working directory [default = `./']
-    plocar_filename (str) : filename [default = `PLOCAR']
+    plocar_filename (str) : filename [default = `POSCAR']

     """
     # Convenience local function
@@ -465,7 +465,7 @@ class Kpoints:
     ----------

     vasp_dir (str) : path to the VASP working directory [default = `./']
-    plocar_filename (str) : filename [default = `PLOCAR']
+    plocar_filename (str) : filename [default = `IBZKPT']

     """
@@ -166,8 +166,7 @@ class VaspConverter(ConverterTools):
             pars = {}
             pars['atom'] = ion
-            # We set all sites inequivalent
-            # pars['sort'] = sh['ion_sort']
-            pars['sort'] = ion
+            pars['sort'] = sh['ion_sort'][i]
             pars['l'] = sh['lorb']
             pars['dim'] = sh['ndim']
             pars['SO'] = SO
@@ -270,22 +269,23 @@ class VaspConverter(ConverterTools):
         # Save it to the HDF:
-        ar = HDFArchive(self.hdf_file,'a')
-        if not (self.dft_subgrp in ar): ar.create_group(self.dft_subgrp)
-        # The subgroup containing the data. If it does not exist, it is created. If it exists, the data is overwritten!
-        things_to_save = ['energy_unit','n_k','k_dep_projection','SP','SO','charge_below','density_required',
+        with HDFArchive(self.hdf_file,'a') as ar:
+            if not (self.dft_subgrp in ar): ar.create_group(self.dft_subgrp)
+            # The subgroup containing the data. If it does not exist, it is created. If it exists, the data is overwritten!
+            things_to_save = ['energy_unit','n_k','k_dep_projection','SP','SO','charge_below','density_required',
                               'symm_op','n_shells','shells','n_corr_shells','corr_shells','use_rotations','rot_mat',
                               'rot_mat_time_inv','n_reps','dim_reps','T','n_orbitals','proj_mat','bz_weights','hopping',
                               'n_inequiv_shells', 'corr_to_inequiv', 'inequiv_to_corr']
-        for it in things_to_save: ar[self.dft_subgrp][it] = locals()[it]
+            for it in things_to_save: ar[self.dft_subgrp][it] = locals()[it]

-        # Store Fermi weights to 'dft_misc_input'
-        if not (self.misc_subgrp in ar): ar.create_group(self.misc_subgrp)
-        ar[self.misc_subgrp]['dft_fermi_weights'] = f_weights
-        ar[self.misc_subgrp]['band_window'] = band_window
-        del ar
+            # Store Fermi weights to 'dft_misc_input'
+            if not (self.misc_subgrp in ar): ar.create_group(self.misc_subgrp)
+            ar[self.misc_subgrp]['dft_fermi_weights'] = f_weights
+            ar[self.misc_subgrp]['band_window'] = band_window

         # Symmetries are used, so now convert symmetry information for *correlated* orbitals:
         self.convert_symmetry_input(ctrl_head, orbits=self.corr_shells, symm_subgrp=self.symmcorr_subgrp)

         # TODO: Implement misc_input
         # self.convert_misc_input(bandwin_file=self.bandwin_file,struct_file=self.struct_file,outputs_file=self.outputs_file,
         #                         misc_subgrp=self.misc_subgrp,SO=self.SO,SP=self.SP,n_k=self.n_k)
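The change above (and the many like it below) replaces the manual `ar = HDFArchive(...); ...; del ar` pattern with a `with` block, so the archive is closed even when an exception interrupts the writes. `HDFArchive` is TRIQS's HDF5 interface and is not importable here, so the sketch below demonstrates the same guarantee with a hypothetical dict-backed stand-in:

```python
class ToyArchive:
    """Hypothetical stand-in for TRIQS's HDFArchive: a dict-backed 'file'
    that records whether it was properly closed."""
    def __init__(self, store):
        self.store = store
        self.closed = False
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        self.closed = True   # close even if an exception occurred
        return False         # do not swallow the exception
    def __setitem__(self, key, value):
        self.store[key] = value

store = {}
ar = ToyArchive(store)
try:
    with ar:
        ar['dft_input'] = {'n_k': 64}
        raise RuntimeError("simulated failure while writing")
except RuntimeError:
    pass

# the with-block closed the archive despite the error, and the data written
# before the failure is still in the store
```

With the old `del ar` style, the exception would have skipped the cleanup entirely, leaving the HDF5 file handle open.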
@@ -382,10 +382,9 @@ class VaspConverter(ConverterTools):
             raise "convert_misc_input: reading file %s failed" %self.outputs_file

         # Save it to the HDF:
-        ar=HDFArchive(self.hdf_file,'a')
-        if not (misc_subgrp in ar): ar.create_group(misc_subgrp)
-        for it in things_to_save: ar[misc_subgrp][it] = locals()[it]
-        del ar
+        with HDFArchive(self.hdf_file,'a') as ar:
+            if not (misc_subgrp in ar): ar.create_group(misc_subgrp)
+            for it in things_to_save: ar[misc_subgrp][it] = locals()[it]

     def convert_symmetry_input(self, ctrl_head, orbits, symm_subgrp):

@@ -406,10 +405,8 @@ class VaspConverter(ConverterTools):
             mat_tinv = [numpy.identity(1)]

         # Save it to the HDF:
-        ar=HDFArchive(self.hdf_file,'a')
-        if not (symm_subgrp in ar): ar.create_group(symm_subgrp)
-        things_to_save = ['n_symm','n_atoms','perm','orbits','SO','SP','time_inv','mat','mat_tinv']
-        for it in things_to_save:
-            # print "%s:"%(it), locals()[it]
-            ar[symm_subgrp][it] = locals()[it]
-        del ar
+        with HDFArchive(self.hdf_file,'a') as ar:
+            if not (symm_subgrp in ar): ar.create_group(symm_subgrp)
+            things_to_save = ['n_symm','n_atoms','perm','orbits','SO','SP','time_inv','mat','mat_tinv']
+            for it in things_to_save:
+                ar[symm_subgrp][it] = locals()[it]
@@ -345,18 +345,17 @@ class Wannier90Converter(ConverterTools):
             iorb += norb

         # Finally, save all required data into the HDF archive:
-        ar = HDFArchive(self.hdf_file, 'a')
-        if not (self.dft_subgrp in ar):
-            ar.create_group(self.dft_subgrp)
-        # The subgroup containing the data. If it does not exist, it is
-        # created. If it exists, the data is overwritten!
-        things_to_save = ['energy_unit', 'n_k', 'k_dep_projection', 'SP', 'SO', 'charge_below', 'density_required',
+        with HDFArchive(self.hdf_file, 'a') as ar:
+            if not (self.dft_subgrp in ar):
+                ar.create_group(self.dft_subgrp)
+            # The subgroup containing the data. If it does not exist, it is
+            # created. If it exists, the data is overwritten!
+            things_to_save = ['energy_unit', 'n_k', 'k_dep_projection', 'SP', 'SO', 'charge_below', 'density_required',
                               'symm_op', 'n_shells', 'shells', 'n_corr_shells', 'corr_shells', 'use_rotations', 'rot_mat',
                               'rot_mat_time_inv', 'n_reps', 'dim_reps', 'T', 'n_orbitals', 'proj_mat', 'bz_weights', 'hopping',
                               'n_inequiv_shells', 'corr_to_inequiv', 'inequiv_to_corr']
-        for it in things_to_save:
-            ar[self.dft_subgrp][it] = locals()[it]
-        del ar
+            for it in things_to_save:
+                ar[self.dft_subgrp][it] = locals()[it]

     def read_wannier90hr(self, hr_filename="wannier_hr.dat"):
         """
@@ -252,24 +252,23 @@ class Wien2kConverter(ConverterTools):
                 for it in things_to_set:
                     setattr(self, it, locals()[it])
         except StopIteration:  # a more explicit error if the file is corrupted.
-            raise "Wien2k_converter : reading file %s failed!" % self.dft_file
+            raise IOError, "Wien2k_converter : reading file %s failed!" % self.dft_file

         R.close()
         # Reading done!

         # Save it to the HDF:
-        ar = HDFArchive(self.hdf_file, 'a')
-        if not (self.dft_subgrp in ar):
-            ar.create_group(self.dft_subgrp)
-        # The subgroup containing the data. If it does not exist, it is
-        # created. If it exists, the data is overwritten!
-        things_to_save = ['energy_unit', 'n_k', 'k_dep_projection', 'SP', 'SO', 'charge_below', 'density_required',
+        with HDFArchive(self.hdf_file, 'a') as ar:
+            if not (self.dft_subgrp in ar):
+                ar.create_group(self.dft_subgrp)
+            # The subgroup containing the data. If it does not exist, it is
+            # created. If it exists, the data is overwritten!
+            things_to_save = ['energy_unit', 'n_k', 'k_dep_projection', 'SP', 'SO', 'charge_below', 'density_required',
                               'symm_op', 'n_shells', 'shells', 'n_corr_shells', 'corr_shells', 'use_rotations', 'rot_mat',
                               'rot_mat_time_inv', 'n_reps', 'dim_reps', 'T', 'n_orbitals', 'proj_mat', 'bz_weights', 'hopping',
                               'n_inequiv_shells', 'corr_to_inequiv', 'inequiv_to_corr']
-        for it in things_to_save:
-            ar[self.dft_subgrp][it] = locals()[it]
-        del ar
+            for it in things_to_save:
+                ar[self.dft_subgrp][it] = locals()[it]

         # Symmetries are used, so now convert symmetry information for
         # *correlated* orbitals:
@@ -292,15 +291,14 @@ class Wien2kConverter(ConverterTools):
             return

         # get needed data from hdf file
-        ar = HDFArchive(self.hdf_file, 'a')
-        things_to_read = ['SP', 'SO', 'n_shells',
+        with HDFArchive(self.hdf_file, 'a') as ar:
+            things_to_read = ['SP', 'SO', 'n_shells',
                               'n_k', 'n_orbitals', 'shells']

-        for it in things_to_read:
-            if not hasattr(self, it):
-                setattr(self, it, ar[self.dft_subgrp][it])
-        self.n_spin_blocs = self.SP + 1 - self.SO
-        del ar
+            for it in things_to_read:
+                if not hasattr(self, it):
+                    setattr(self, it, ar[self.dft_subgrp][it])
+            self.n_spin_blocs = self.SP + 1 - self.SO

         mpi.report("Reading input from %s..." % self.parproj_file)
@@ -368,16 +366,15 @@ class Wien2kConverter(ConverterTools):
         # Reading done!

         # Save it to the HDF:
-        ar = HDFArchive(self.hdf_file, 'a')
-        if not (self.parproj_subgrp in ar):
-            ar.create_group(self.parproj_subgrp)
-        # The subgroup containing the data. If it does not exist, it is
-        # created. If it exists, the data is overwritten!
-        things_to_save = ['dens_mat_below', 'n_parproj',
+        with HDFArchive(self.hdf_file, 'a') as ar:
+            if not (self.parproj_subgrp in ar):
+                ar.create_group(self.parproj_subgrp)
+            # The subgroup containing the data. If it does not exist, it is
+            # created. If it exists, the data is overwritten!
+            things_to_save = ['dens_mat_below', 'n_parproj',
                               'proj_mat_all', 'rot_mat_all', 'rot_mat_all_time_inv']
-        for it in things_to_save:
-            ar[self.parproj_subgrp][it] = locals()[it]
-        del ar
+            for it in things_to_save:
+                ar[self.parproj_subgrp][it] = locals()[it]

         # Symmetries are used, so now convert symmetry information for *all*
         # orbitals:
@@ -395,15 +392,14 @@ class Wien2kConverter(ConverterTools):

         try:
             # get needed data from hdf file
-            ar = HDFArchive(self.hdf_file, 'a')
-            things_to_read = ['SP', 'SO', 'n_corr_shells',
+            with HDFArchive(self.hdf_file, 'a') as ar:
+                things_to_read = ['SP', 'SO', 'n_corr_shells',
                                   'n_shells', 'corr_shells', 'shells', 'energy_unit']

-            for it in things_to_read:
-                if not hasattr(self, it):
-                    setattr(self, it, ar[self.dft_subgrp][it])
-            self.n_spin_blocs = self.SP + 1 - self.SO
-            del ar
+                for it in things_to_read:
+                    if not hasattr(self, it):
+                        setattr(self, it, ar[self.dft_subgrp][it])
+                self.n_spin_blocs = self.SP + 1 - self.SO

             mpi.report("Reading input from %s..." % self.band_file)
             R = ConverterTools.read_fortran_file(
@@ -475,23 +471,22 @@ class Wien2kConverter(ConverterTools):
             R.close()

         except KeyError:
-            raise "convert_bands_input : Needed data not found in hdf file. Consider calling convert_dft_input first!"
+            raise IOError, "convert_bands_input : Needed data not found in hdf file. Consider calling convert_dft_input first!"
         except StopIteration:  # a more explicit error if the file is corrupted.
-            raise "Wien2k_converter : reading file band_file failed!"
+            raise IOError, "Wien2k_converter : reading file %s failed!" % self.band_file

         # Reading done!

         # Save it to the HDF:
-        ar = HDFArchive(self.hdf_file, 'a')
-        if not (self.bands_subgrp in ar):
-            ar.create_group(self.bands_subgrp)
-        # The subgroup containing the data. If it does not exist, it is
-        # created. If it exists, the data is overwritten!
-        things_to_save = ['n_k', 'n_orbitals', 'proj_mat',
+        with HDFArchive(self.hdf_file, 'a') as ar:
+            if not (self.bands_subgrp in ar):
+                ar.create_group(self.bands_subgrp)
+            # The subgroup containing the data. If it does not exist, it is
+            # created. If it exists, the data is overwritten!
+            things_to_save = ['n_k', 'n_orbitals', 'proj_mat',
                               'hopping', 'n_parproj', 'proj_mat_all']
-        for it in things_to_save:
-            ar[self.bands_subgrp][it] = locals()[it]
-        del ar
+            for it in things_to_save:
+                ar[self.bands_subgrp][it] = locals()[it]

     def convert_misc_input(self):
         """
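A recurring fix in these hunks replaces bare string raises (`raise "..."`, invalid in modern Python) with a concrete exception type; `raise IOError, msg` is the Python 2 spelling of `raise IOError(msg)`. Raising a real exception class lets callers catch the failure selectively, as a minimal sketch (with a hypothetical `read_header` helper and file name) shows:

```python
def read_header(path):
    # Raising a concrete exception type (IOError, an alias of OSError in
    # Python 3) instead of a bare string lets callers catch it selectively.
    raise IOError("Wien2k_converter : reading file %s failed!" % path)

try:
    read_header("case.dft")
except IOError as err:
    message = str(err)   # the failing file name is preserved in the message
```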
@@ -510,13 +505,12 @@ class Wien2kConverter(ConverterTools):
             return

         # Check if SP, SO and n_k are already in h5
-        ar = HDFArchive(self.hdf_file, 'r')
-        if not (self.dft_subgrp in ar):
-            raise IOError, "convert_misc_input: No %s subgroup in hdf file found! Call convert_dft_input first." % self.dft_subgrp
-        SP = ar[self.dft_subgrp]['SP']
-        SO = ar[self.dft_subgrp]['SO']
-        n_k = ar[self.dft_subgrp]['n_k']
-        del ar
+        with HDFArchive(self.hdf_file, 'r') as ar:
+            if not (self.dft_subgrp in ar):
+                raise IOError, "convert_misc_input: No %s subgroup in hdf file found! Call convert_dft_input first." % self.dft_subgrp
+            SP = ar[self.dft_subgrp]['SP']
+            SO = ar[self.dft_subgrp]['SO']
+            n_k = ar[self.dft_subgrp]['n_k']

         things_to_save = []
@@ -525,12 +519,19 @@ class Wien2kConverter(ConverterTools):
         # band_window: Contains the index of the lowest and highest band within the
         # projected subspace (used by dmftproj) for each k-point.

-        if (SP == 0 or SO == 1):
+        if (SP == 0 and SO == 0):  # read .oubwin file
             files = [self.bandwin_file]
-        elif SP == 1:
+        elif (SP == 1 and SO == 0):  # read .oubwinup and .oubwindn
             files = [self.bandwin_file + 'up', self.bandwin_file + 'dn']
-        else:  # SO and SP can't both be 1
-            assert 0, "convert_misc_input: Reading oubwin error! Check SP and SO!"
+        elif (SP == 1 and SO == 1):  # read either .oubwinup or .oubwindn
+            if os.path.exists(self.bandwin_file + 'up'):
+                files = [self.bandwin_file + 'up']
+            elif os.path.exists(self.bandwin_file + 'dn'):
+                files = [self.bandwin_file + 'dn']
+            else:
+                assert 0, "convert_misc_input: If SO and SP are 1 provide either .oubwinup or .oubwindn file"
+        else:
+            assert 0, "convert_misc_input: Reading oubwin error! Check SP and SO, if SO=1 SP must be 1."

         band_window = [None for isp in range(SP + 1 - SO)]
         for isp, f in enumerate(files):
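The hunk above replaces the old two-way branch (which wrongly lumped `SO == 1` with the unpolarized case) by an explicit decision table over the spin-polarization flag SP and the spin-orbit flag SO. A hypothetical helper distilling that table, with the file-existence check injected so the logic is testable without real files:

```python
def select_spin_files(base, sp, so, exists):
    """Pick the .oubwin-style file(s) to read for given spin-polarization
    (sp) and spin-orbit (so) flags, mirroring the branch logic above.
    `exists` is an injected predicate standing in for os.path.exists."""
    if sp == 0 and so == 0:
        return [base]                      # unpolarized: one file
    if sp == 1 and so == 0:
        return [base + 'up', base + 'dn']  # polarized: one file per spin
    if sp == 1 and so == 1:
        # with spin-orbit coupling a single combined file is expected,
        # carrying either the 'up' or the 'dn' suffix
        for suffix in ('up', 'dn'):
            if exists(base + suffix):
                return [base + suffix]
        raise IOError("provide either %sup or %sdn" % (base, base))
    raise ValueError("SO=1 requires SP=1")

# Example: SP=1, SO=1 with only the 'up' file present
files = select_spin_files('case.oubwin', 1, 1, lambda f: f.endswith('up'))
```

The same table, with `.pmat` file names, reappears in `convert_transport_input` further down.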
@@ -577,7 +578,7 @@ class Wien2kConverter(ConverterTools):
             things_to_save.extend(
                 ['lattice_type', 'lattice_constants', 'lattice_angles'])
         except IOError:
-            raise "convert_misc_input: reading file %s failed" % self.struct_file
+            raise IOError, "convert_misc_input: reading file %s failed" % self.struct_file

         # Read relevant data from .outputs file
         #######################################
@@ -609,15 +610,14 @@ class Wien2kConverter(ConverterTools):
-            things_to_save.extend(['n_symmetries', 'rot_symmetries'])
+            things_to_save.append('rot_symmetries')
         except IOError:
-            raise "convert_misc_input: reading file %s failed" % self.outputs_file
+            raise IOError, "convert_misc_input: reading file %s failed" % self.outputs_file

         # Save it to the HDF:
-        ar = HDFArchive(self.hdf_file, 'a')
-        if not (self.misc_subgrp in ar):
-            ar.create_group(self.misc_subgrp)
-        for it in things_to_save:
-            ar[self.misc_subgrp][it] = locals()[it]
-        del ar
+        with HDFArchive(self.hdf_file, 'a') as ar:
+            if not (self.misc_subgrp in ar):
+                ar.create_group(self.misc_subgrp)
+            for it in things_to_save:
+                ar[self.misc_subgrp][it] = locals()[it]

     def convert_transport_input(self):
         """
@@ -633,13 +633,12 @@ class Wien2kConverter(ConverterTools):
             return

         # Check if SP, SO and n_k are already in h5
-        ar = HDFArchive(self.hdf_file, 'r')
-        if not (self.dft_subgrp in ar):
-            raise IOError, "convert_transport_input: No %s subgroup in hdf file found! Call convert_dft_input first." % self.dft_subgrp
-        SP = ar[self.dft_subgrp]['SP']
-        SO = ar[self.dft_subgrp]['SO']
-        n_k = ar[self.dft_subgrp]['n_k']
-        del ar
+        with HDFArchive(self.hdf_file, 'r') as ar:
+            if not (self.dft_subgrp in ar):
+                raise IOError, "convert_transport_input: No %s subgroup in hdf file found! Call convert_dft_input first." % self.dft_subgrp
+            SP = ar[self.dft_subgrp]['SP']
+            SO = ar[self.dft_subgrp]['SO']
+            n_k = ar[self.dft_subgrp]['n_k']

         # Read relevant data from .pmat/up/dn files
         ###########################################
@@ -648,12 +647,19 @@ class Wien2kConverter(ConverterTools):
         # velocities_k: velocity (momentum) matrix elements between all bands in band_window_optics
        # and each k-point.

-        if (SP == 0 or SO == 1):
+        if (SP == 0 and SO == 0):  # read .pmat file
             files = [self.pmat_file]
-        elif SP == 1:
+        elif (SP == 1 and SO == 0):  # read .pmatup and pmatdn
             files = [self.pmat_file + 'up', self.pmat_file + 'dn']
-        else:  # SO and SP can't both be 1
-            assert 0, "convert_transport_input: Reading velocity file error! Check SP and SO!"
+        elif (SP == 1 and SO == 1):  # read either .pmatup or .pmatdn
+            if os.path.exists(self.pmat_file + 'up'):
+                files = [self.pmat_file + 'up']
+            elif os.path.exists(self.pmat_file + 'dn'):
+                files = [self.pmat_file + 'dn']
+            else:
+                assert 0, "convert_transport_input: If SO and SP are 1 provide either .pmatup or .pmatdn file"
+        else:
+            assert 0, "convert_transport_input: Reading velocity file error! Check SP and SO, if SO=1 SP must be 1."

         velocities_k = [[] for f in files]
         band_window_optics = []
@@ -691,15 +697,14 @@ class Wien2kConverter(ConverterTools):
             R.close()  # Reading done!

         # Put data to HDF5 file
-        ar = HDFArchive(self.hdf_file, 'a')
-        if not (self.transp_subgrp in ar):
-            ar.create_group(self.transp_subgrp)
-        # The subgroup containing the data. If it does not exist, it is
-        # created. If it exists, the data is overwritten!!!
-        things_to_save = ['band_window_optics', 'velocities_k']
-        for it in things_to_save:
-            ar[self.transp_subgrp][it] = locals()[it]
-        del ar
+        with HDFArchive(self.hdf_file, 'a') as ar:
+            if not (self.transp_subgrp in ar):
+                ar.create_group(self.transp_subgrp)
+            # The subgroup containing the data. If it does not exist, it is
+            # created. If it exists, the data is overwritten!!!
+            things_to_save = ['band_window_optics', 'velocities_k']
+            for it in things_to_save:
+                ar[self.transp_subgrp][it] = locals()[it]

     def convert_symmetry_input(self, orbits, symm_file, symm_subgrp, SO, SP):
         """
@@ -775,17 +780,16 @@ class Wien2kConverter(ConverterTools):
                 R.next()  # imaginary part

         except StopIteration:  # a more explicit error if the file is corrupted.
-            raise "Wien2k_converter : reading file symm_file failed!"
+            raise IOError, "Wien2k_converter : reading file %s failed!" %symm_file

         R.close()
         # Reading done!

         # Save it to the HDF:
-        ar = HDFArchive(self.hdf_file, 'a')
-        if not (symm_subgrp in ar):
-            ar.create_group(symm_subgrp)
-        things_to_save = ['n_symm', 'n_atoms', 'perm',
+        with HDFArchive(self.hdf_file, 'a') as ar:
+            if not (symm_subgrp in ar):
+                ar.create_group(symm_subgrp)
+            things_to_save = ['n_symm', 'n_atoms', 'perm',
                               'orbits', 'SO', 'SP', 'time_inv', 'mat', 'mat_tinv']
-        for it in things_to_save:
-            ar[symm_subgrp][it] = locals()[it]
-        del ar
+            for it in things_to_save:
+                ar[symm_subgrp][it] = locals()[it]
@@ -58,7 +58,7 @@ class SumkDFT(object):
             If True, the local Green's function matrix for each spin is divided into smaller blocks
             with the block structure determined from the DFT density matrix of the corresponding correlated shell.

-            Alternatively and additionally, the block structure can be analyzed using :meth:`analyse_block_structure <dft.sumk_dft.SumkDFT.analyse_block_structure>`
+            Alternatively and additionally, the block structure can be analysed using :meth:`analyse_block_structure <dft.sumk_dft.SumkDFT.analyse_block_structure>`
             and manipulated using the SumkDFT.block_structre attribute (see :class:`BlockStructure <dft.block_structure.BlockStructure>`).
         dft_data : string, optional
             Name of hdf5 subgroup in which DFT data for projector and lattice Green's function construction are stored.
@@ -187,23 +187,22 @@ class SumkDFT(object):
         subgroup_present = 0

         if mpi.is_master_node():
-            ar = HDFArchive(self.hdf_file, 'r')
-            if subgrp in ar:
-                subgroup_present = True
-                # first read the necessary things:
-                for it in things_to_read:
-                    if it in ar[subgrp]:
-                        setattr(self, it, ar[subgrp][it])
-                    else:
-                        mpi.report("Loading %s failed!" % it)
-                        value_read = False
-            else:
-                if (len(things_to_read) != 0):
-                    mpi.report(
-                        "Loading failed: No %s subgroup in hdf5!" % subgrp)
-                subgroup_present = False
-                value_read = False
-            del ar
+            with HDFArchive(self.hdf_file, 'r') as ar:
+                if subgrp in ar:
+                    subgroup_present = True
+                    # first read the necessary things:
+                    for it in things_to_read:
+                        if it in ar[subgrp]:
+                            setattr(self, it, ar[subgrp][it])
+                        else:
+                            mpi.report("Loading %s failed!" % it)
+                            value_read = False
+                else:
+                    if (len(things_to_read) != 0):
+                        mpi.report(
+                            "Loading failed: No %s subgroup in hdf5!" % subgrp)
+                    subgroup_present = False
+                    value_read = False
         # now do the broadcasting:
         for it in things_to_read:
             setattr(self, it, mpi.bcast(getattr(self, it)))
@@ -226,18 +225,16 @@ class SumkDFT(object):

         if not (mpi.is_master_node()):
             return  # do nothing on nodes
-        ar = HDFArchive(self.hdf_file, 'a')
-        if not subgrp in ar:
-            ar.create_group(subgrp)
-        for it in things_to_save:
-            if it in [ "gf_struct_sumk", "gf_struct_solver",
-                       "solver_to_sumk", "sumk_to_solver", "solver_to_sumk_block"]:
-                warn("It is not recommended to save '{}' individually. Save 'block_structure' instead.".format(it))
-            try:
-                ar[subgrp][it] = getattr(self, it)
-            except:
-                mpi.report("%s not found, and so not saved." % it)
-        del ar
+        with HDFArchive(self.hdf_file, 'a') as ar:
+            if not subgrp in ar: ar.create_group(subgrp)
+            for it in things_to_save:
+                if it in [ "gf_struct_sumk", "gf_struct_solver",
+                           "solver_to_sumk", "sumk_to_solver", "solver_to_sumk_block"]:
+                    warn("It is not recommended to save '{}' individually. Save 'block_structure' instead.".format(it))
+                try:
+                    ar[subgrp][it] = getattr(self, it)
+                except:
+                    mpi.report("%s not found, and so not saved." % it)

     def load(self, things_to_load, subgrp='user_data'):
         r"""
@@ -258,16 +255,15 @@ class SumkDFT(object):

         if not (mpi.is_master_node()):
             return  # do nothing on nodes
-        ar = HDFArchive(self.hdf_file, 'r')
-        if not subgrp in ar:
-            mpi.report("Loading %s failed!" % subgrp)
-        list_to_return = []
-        for it in things_to_load:
-            try:
-                list_to_return.append(ar[subgrp][it])
-            except:
-                raise ValueError, "load: %s not found, and so not loaded." % it
-        del ar
+        with HDFArchive(self.hdf_file, 'r') as ar:
+            if not subgrp in ar:
+                mpi.report("Loading %s failed!" % subgrp)
+            list_to_return = []
+            for it in things_to_load:
+                try:
+                    list_to_return.append(ar[subgrp][it])
+                except:
+                    raise ValueError, "load: %s not found, and so not loaded." % it
         return list_to_return

 ################
@@ -1727,7 +1723,9 @@ class SumkDFT(object):
         dens = mpi.all_reduce(mpi.world, dens, lambda x, y: x + y)
         mpi.barrier()

-        return dens
+        if abs(dens.imag) > 1e-20:
+            mpi.report("Warning: Imaginary part in density will be ignored ({})".format(str(abs(dens.imag))))
+        return dens.real

     def set_mu(self, mu):
         r"""
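The change above makes `total_density` return a real number: the Brillouin-zone sum of the Green's function trace is complex in floating point, and any residual imaginary part above the 1e-20 threshold is reported before being discarded. A minimal sketch of that check as a standalone function (the function name and `report` hook are chosen here for illustration):

```python
def real_density(dens, tol=1e-20, report=print):
    """Drop the imaginary part of a complex density, warning when it is
    above `tol` -- the same 1e-20 threshold used in the diff. `report`
    stands in for mpi.report so the sketch runs without MPI."""
    if abs(dens.imag) > tol:
        report("Warning: Imaginary part in density will be ignored (%s)" % abs(dens.imag))
    return dens.real

# a tiny numerical imaginary part is reported (suppressed here) and dropped
n = real_density(8.0 + 1e-12j, report=lambda msg: None)
```

Returning `dens.real` also keeps the dichotomy search for the chemical potential (which compares densities against a real target) free of complex arithmetic.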
@@ -1766,7 +1764,7 @@ class SumkDFT(object):

         """
         F = lambda mu: self.total_density(
-            mu=mu, iw_or_w=iw_or_w, broadening=broadening)
+            mu=mu, iw_or_w=iw_or_w, broadening=broadening).real
         density = self.density_required - self.charge_below

         self.chemical_potential = dichotomy.dichotomy(function=F,
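`calc_mu` hands the real-valued density function F to a dichotomy (bisection) search for the chemical potential. The sketch below illustrates the idea under the assumption that the density is monotonically increasing in mu; the function name matches the call above but the signature and bracketing strategy are illustrative, not TRIQS's exact implementation:

```python
def dichotomy(function, x_init, y_value, precision, delta_x, max_loops=1000):
    """Find x with function(x) ~ y_value for a monotonically increasing
    function: first widen a bracket around the target in steps of delta_x,
    then bisect it until |function(x) - y_value| < precision."""
    a, b = x_init, x_init
    while function(a) > y_value:   # step down until f(a) <= target
        a -= delta_x
    while function(b) < y_value:   # step up until f(b) >= target
        b += delta_x
    for _ in range(max_loops):
        m = 0.5 * (a + b)
        if abs(function(m) - y_value) < precision:
            return m
        if function(m) < y_value:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

# toy density n(mu) = 2*mu, target filling 3.0 -> mu = 1.5
mu = dichotomy(lambda x: 2.0 * x, x_init=0.0, y_value=3.0,
               precision=1e-8, delta_x=1.0)
```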
@@ -1822,10 +1820,9 @@ class SumkDFT(object):
         fermi_weights = 0
         band_window = 0
         if mpi.is_master_node():
-            ar = HDFArchive(self.hdf_file,'r')
-            fermi_weights = ar['dft_misc_input']['dft_fermi_weights']
-            band_window = ar['dft_misc_input']['band_window']
-            del ar
+            with HDFArchive(self.hdf_file,'r') as ar:
+                fermi_weights = ar['dft_misc_input']['dft_fermi_weights']
+                band_window = ar['dft_misc_input']['band_window']
         fermi_weights = mpi.bcast(fermi_weights)
         band_window = mpi.bcast(band_window)
@@ -937,6 +937,9 @@ class SumkDFTTools(SumkDFT):

         seebeck : dictionary of double
             Seebeck coefficient in each direction. If zero is not present in Om_mesh the Seebeck coefficient is set to NaN.
+
+        kappa : dictionary of double.
+            thermal conductivity in each direction. If zero is not present in Om_mesh the thermal conductivity is set to NaN
         """

         if not (mpi.is_master_node()):

@@ -950,7 +953,10 @@ class SumkDFTTools(SumkDFT):
                 for direction in self.directions}
         A1 = {direction: numpy.full((n_q,), numpy.nan)
               for direction in self.directions}
+        A2 = {direction: numpy.full((n_q,), numpy.nan)
+              for direction in self.directions}
         self.seebeck = {direction: numpy.nan for direction in self.directions}
+        self.kappa = {direction: numpy.nan for direction in self.directions}
         self.optic_cond = {direction: numpy.full(
             (n_q,), numpy.nan) for direction in self.directions}
@@ -960,21 +966,28 @@ class SumkDFTTools(SumkDFT):
                     direction, iq=iq, n=0, beta=beta, method=method)
                 A1[direction][iq] = self.transport_coefficient(
                     direction, iq=iq, n=1, beta=beta, method=method)
+                A2[direction][iq] = self.transport_coefficient(
+                    direction, iq=iq, n=2, beta=beta, method=method)
                 print "A_0 in direction %s for Omega = %.2f %e a.u." % (direction, self.Om_mesh[iq], A0[direction][iq])
                 print "A_1 in direction %s for Omega = %.2f %e a.u." % (direction, self.Om_mesh[iq], A1[direction][iq])
+                print "A_2 in direction %s for Omega = %.2f %e a.u." % (direction, self.Om_mesh[iq], A2[direction][iq])
                 if ~numpy.isnan(A1[direction][iq]):
-                    # Seebeck is overwritten if there is more than one Omega =
+                    # Seebeck and kappa are overwritten if there is more than one Omega =
                     # 0 in Om_mesh
                     self.seebeck[direction] = - \
                         A1[direction][iq] / A0[direction][iq] * 86.17
+                    self.kappa[direction] = A2[direction][iq] - A1[direction][iq]*A1[direction][iq]/A0[direction][iq]
+                    self.kappa[direction] *= 293178.0
             self.optic_cond[direction] = beta * \
                 A0[direction] * 10700.0 / numpy.pi
             for iq in xrange(n_q):
                 print "Conductivity in direction %s for Omega = %.2f %f x 10^4 Ohm^-1 cm^-1" % (direction, self.Om_mesh[iq], self.optic_cond[direction][iq])
                 if not (numpy.isnan(A1[direction][iq])):
                     print "Seebeck in direction %s for Omega = 0.00 %f x 10^(-6) V/K" % (direction, self.seebeck[direction])
+                    print "kappa in direction %s for Omega = 0.00 %f W/(m * K)" % (direction, self.kappa[direction])

-        return self.optic_cond, self.seebeck
+        return self.optic_cond, self.seebeck
+        return self.optic_cond, self.seebeck, self.kappa

     def fermi_dis(self, w, beta):
         r"""
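The hunk above extends the transport routine from the moments A0 and A1 to A2, which is what the thermal conductivity needs. The combinations used in the diff, collected into one hypothetical helper (unit-conversion constants 86.17, 293178 and 10700 taken verbatim from the code above):

```python
import numpy as np

def transport_from_moments(A0, A1, A2, beta):
    """Combine the transport moments A_n at Omega = 0 into observables,
    following the formulas in the diff:
      Seebeck  S     = -A1/A0 * 86.17            (10^-6 V/K)
      kappa          = (A2 - A1^2/A0) * 293178   (W/(m K))
      conductivity   = beta * A0 * 10700 / pi    (10^4 Ohm^-1 cm^-1)"""
    seebeck = -A1 / A0 * 86.17
    kappa = (A2 - A1 * A1 / A0) * 293178.0
    cond = beta * A0 * 10700.0 / np.pi
    return cond, seebeck, kappa

# toy moment values, purely illustrative
cond, S, kappa = transport_from_moments(A0=2.0, A1=0.5, A2=1.0, beta=40.0)
```

Note that kappa uses the combination A2 - A1^2/A0, i.e. the heat-current response with the particle-current contribution (already accounted for by the Seebeck term) subtracted.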
@@ -58,16 +58,15 @@ class Symmetry:

         if mpi.is_master_node():
             # Read the stuff on master:
-            ar = HDFArchive(hdf_file, 'r')
-            if subgroup is None:
-                ar2 = ar
-            else:
-                ar2 = ar[subgroup]
+            with HDFArchive(hdf_file, 'r') as ar:
+                if subgroup is None:
+                    ar2 = ar
+                else:
+                    ar2 = ar[subgroup]

-            for it in things_to_read:
-                setattr(self, it, ar2[it])
-            del ar2
-            del ar
+                for it in things_to_read:
+                    setattr(self, it, ar2[it])
+                del ar2

         # Broadcasting
         for it in things_to_read:
@@ -0,0 +1 @@
+# Required python packages for this application (these should also be added to Dockerfile for Jenkins)
@@ -1,4 +1,4 @@
 #!/bin/bash

-@CMAKE_INSTALL_PREFIX@/bin/pytriqs -m triqs_dft_tools.converters.plovasp.converter $@
+python -m triqs_dft_tools.converters.plovasp.converter $@

@@ -81,7 +81,6 @@ echo " Script name: $DMFT_SCRIPT"
 rm -f vasp.lock
 stdbuf -o 0 $MPIRUN_CMD -np $NPROC "$VASP_DIR" &

-PYTRIQS=@CMAKE_INSTALL_PREFIX@/bin/pytriqs
-
-$MPIRUN_CMD -np $NPROC $PYTRIQS -m triqs_dft_tools.converters.plovasp.sc_dmft $(jobs -p) $NITER $DMFT_SCRIPT 'plo.cfg' || kill %1
+$MPIRUN_CMD -np $NPROC python -m triqs_dft_tools.converters.plovasp.sc_dmft $(jobs -p) $NITER $DMFT_SCRIPT 'plo.cfg' || kill %1
@@ -5,10 +5,16 @@ file(COPY ${CMAKE_CURRENT_SOURCE_DIR}/${all_h5_files} DESTINATION ${CMAKE_CURREN
 FILE(COPY SrVO3.pmat SrVO3.struct SrVO3.outputs SrVO3.oubwin SrVO3.ctqmcout SrVO3.symqmc SrVO3.sympar SrVO3.parproj SrIrO3_rot.h5 hk_convert_hamiltonian.hk LaVO3-Pnma_hr.dat LaVO3-Pnma.inp DESTINATION ${CMAKE_CURRENT_BINARY_DIR})

 # List all tests
-set(all_tests wien2k_convert hk_convert w90_convert sumkdft_basic srvo3_Gloc srvo3_transp sigma_from_file blockstructure analyze_block_structure_from_gf analyze_block_structure_from_gf2)
+set(all_tests wien2k_convert hk_convert w90_convert sumkdft_basic srvo3_Gloc srvo3_transp sigma_from_file blockstructure analyse_block_structure_from_gf analyse_block_structure_from_gf2)
+
+set(python_executable python)
+
+if(${TEST_COVERAGE})
+  set(python_executable ${PYTHON_COVERAGE} run --append --source "${CMAKE_BINARY_DIR}/python" )
+endif()

 foreach(t ${all_tests})
-  add_test(NAME ${t} COMMAND python ${CMAKE_CURRENT_SOURCE_DIR}/${t}.py)
+  add_test(NAME ${t} COMMAND ${python_executable} ${CMAKE_CURRENT_SOURCE_DIR}/${t}.py)
 endforeach()

 # Set the PythonPath : put the build dir first (in case there is an installed version).

@@ -17,4 +23,3 @@ set_property(TEST ${all_tests} PROPERTY ENVIRONMENT PYTHONPATH=${CMAKE_BINARY_DI

 # VASP converter tests
 add_subdirectory(plovasp)
@@ -1,5 +1,5 @@
 from pytriqs.gf import *
-from sumk_dft import SumkDFT
+from triqs_dft_tools.sumk_dft import SumkDFT
 from scipy.linalg import expm
 import numpy as np
 from pytriqs.utility.comparison_tests import assert_gfs_are_close, assert_arrays_are_close, assert_block_gfs_are_close
@@ -32,13 +32,13 @@ G_new = SK.analyse_block_structure_from_gf(G)
 # the new block structure
 block_structure2 = SK.block_structure.copy()

-with HDFArchive('analyze_block_structure_from_gf.out.h5','w') as ar:
+with HDFArchive('analyse_block_structure_from_gf.out.h5','w') as ar:
     ar['bs1'] = block_structure1
     ar['bs2'] = block_structure2

 # check whether the block structure is the same as in the reference
-with HDFArchive('analyze_block_structure_from_gf.out.h5','r') as ar,\
-     HDFArchive('analyze_block_structure_from_gf.ref.h5','r') as ar2:
+with HDFArchive('analyse_block_structure_from_gf.out.h5','r') as ar,\
+     HDFArchive('analyse_block_structure_from_gf.ref.h5','r') as ar2:
     assert ar['bs1'] == ar2['bs1'], 'bs1 not equal'
     a1 = ar['bs2']
     a2 = ar2['bs2']
@ -73,12 +73,6 @@ for d in SK.deg_shells[0]:
|
|||
for i in range(len(normalized_gfs)):
|
||||
for j in range(i+1,len(normalized_gfs)):
|
||||
assert_arrays_are_close(normalized_gfs[i].data, normalized_gfs[j].data, 1.e-5)
|
||||
# the tails have to be compared using a relative error
|
||||
for o in range(normalized_gfs[i].tail.order_min,normalized_gfs[i].tail.order_max+1):
|
||||
if np.abs(normalized_gfs[i].tail[o][0,0]) < 1.e-10:
|
||||
continue
|
||||
assert np.max(np.abs((normalized_gfs[i].tail[o]-normalized_gfs[j].tail[o])/(normalized_gfs[i].tail[o][0,0]))) < 1.e-5, \
|
||||
"tails are different"
|
||||
|
||||
#######################################################################
|
||||
# Second test #
|
|
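The last hunk above drops a tail check that compared high-frequency expansion coefficients with a relative error, skipping orders whose leading element is numerically zero. A minimal standalone sketch of that comparison logic (pure numpy; the function name and sample data are illustrative, not part of the test suite):

```python
import numpy as np

def tails_close(tail_i, tail_j, abs_tol=1e-10, rel_tol=1e-5):
    """Compare two lists of tail coefficient matrices order by order,
    using a relative error normalized by the [0,0] element, and skipping
    orders whose leading element is numerically zero."""
    for a, b in zip(tail_i, tail_j):
        if np.abs(a[0, 0]) < abs_tol:
            continue  # coefficient is effectively zero; nothing to compare
        if np.max(np.abs((a - b) / a[0, 0])) >= rel_tol:
            return False
    return True
```

This mirrors why the removed code divided by `tail[o][0,0]`: tail coefficients can differ by orders of magnitude between orders, so an absolute tolerance would be meaningless for the large ones.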
@@ -1,9 +1,9 @@
 from pytriqs.gf import *
-from sumk_dft import SumkDFT
+from triqs_dft_tools.sumk_dft import SumkDFT
 import numpy as np
 from pytriqs.utility.comparison_tests import assert_block_gfs_are_close

-# here we test the SK.analyze_block_structure_from_gf function
+# here we test the SK.analyse_block_structure_from_gf function
 # with GfReFreq, GfReTime


@@ -35,13 +35,13 @@ Hloc[8:,8:] = Hloc1
 V = get_random_hermitian(2) # the hopping elements from impurity to bath
 b1 = np.random.rand() # the bath energy of the first bath level
 b2 = np.random.rand() # the bath energy of the second bath level
-delta = GfReFreq(window=(-5,5), indices=range(2), n_points=1001)
+delta = GfReFreq(window=(-10,10), indices=range(2), n_points=1001)
 delta[0,0] << (V[0,0]*V[0,0].conjugate()*inverse(Omega-b1)+V[0,1]*V[0,1].conjugate()*inverse(Omega-b2+0.02j))/2.0
 delta[0,1] << (V[0,0]*V[1,0].conjugate()*inverse(Omega-b1)+V[0,1]*V[1,1].conjugate()*inverse(Omega-b2+0.02j))/2.0
 delta[1,0] << (V[1,0]*V[0,0].conjugate()*inverse(Omega-b1)+V[1,1]*V[0,1].conjugate()*inverse(Omega-b2+0.02j))/2.0
 delta[1,1] << (V[1,0]*V[1,0].conjugate()*inverse(Omega-b1)+V[1,1]*V[1,1].conjugate()*inverse(Omega-b2+0.02j))/2.0
 # construct G
-G = BlockGf(name_block_generator=[('ud',GfReFreq(window=(-5,5), indices=range(10), n_points=1001))], make_copies=False)
+G = BlockGf(name_block_generator=[('ud',GfReFreq(window=(-10,10), indices=range(10), n_points=1001))], make_copies=False)
 for i in range(0,10,2):
     G['ud'][i:i+2,i:i+2] << inverse(Omega-delta+0.02j)
 G['ud'] << inverse(inverse(G['ud']) - Hloc)
@@ -88,7 +88,9 @@ Gt = BlockGf(name_block_generator = [(name,
                                      n_points=len(block.mesh),
                                      indices=block.indices)) for name, block in G], make_copies=False)

-Gt['ud'].set_from_inverse_fourier(G['ud'])
+known_moments = np.zeros((2,10,10), dtype=np.complex)
+known_moments[1,:] = np.eye(10)
+Gt['ud'].set_from_inverse_fourier(G['ud'], known_moments)

 G_new = SK.analyse_block_structure_from_gf([Gt])
 G_new_symm = G_new[0].copy()
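The last hunk above now passes explicit high-frequency moments to the inverse Fourier transform. A minimal numpy sketch of the array it builds (shape and values taken from the diff; the builtin `complex` is used in place of the deprecated `np.complex`): for a Green's function decaying as 1/w, the zeroth moment vanishes and the first moment is the identity.

```python
import numpy as np

n_orb = 10  # matches the 10x10 'ud' block in the test above

# known_moments[k] holds the k-th high-frequency moment of G(w):
# moment 0 is zero and moment 1 is the identity for G(w) ~ 1/w.
known_moments = np.zeros((2, n_orb, n_orb), dtype=complex)
known_moments[1] = np.eye(n_orb)
```

Supplying these moments pins down the asymptotic behavior that the time-domain transform would otherwise have to estimate from the data.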
@@ -1,8 +1,8 @@
-from sumk_dft import *
+from triqs_dft_tools.sumk_dft import *
 from pytriqs.utility.h5diff import h5diff
 from pytriqs.gf import *
 from pytriqs.utility.comparison_tests import assert_block_gfs_are_close
-from block_structure import BlockStructure
+from triqs_dft_tools.block_structure import BlockStructure

 SK = SumkDFT('blockstructure.in.h5',use_dft_blocks=True)

@@ -25,7 +25,7 @@ from pytriqs.archive import *
 from pytriqs.utility.h5diff import h5diff
 import pytriqs.utility.mpi as mpi

-from converters import *
+from triqs_dft_tools.converters import *

 Converter = HkConverter(filename='hk_convert_hamiltonian.hk',hdf_filename='hk_convert.out.h5')

@@ -1,18 +1,19 @@
 # load triqs helper to set up tests
 set(all_tests
     inpconf
     # plocar_io
     plotools
     proj_group
     proj_shell
     vaspio
-    atm)
+    atm
+    plotools
+    converter
+    )

 FILE(COPY ${all_tests} DESTINATION ${CMAKE_CURRENT_BINARY_DIR})
 FILE(COPY run_suite.py DESTINATION ${CMAKE_CURRENT_BINARY_DIR})

 foreach(t ${all_tests})
-  add_test(NAME ${t} COMMAND python run_suite.py ${t})
+  add_test(NAME ${t} COMMAND ${python_executable} run_suite.py ${t})
 endforeach()

 set_property(TEST ${all_tests} PROPERTY ENVIRONMENT PYTHONPATH=${CMAKE_BINARY_DIR}/python:$ENV{PYTHONPATH} )

@@ -2,7 +2,7 @@
 import os

 import numpy as np
-from converters.plovasp.atm import dos_tetra_weights_3d
+from triqs_dft_tools.converters.plovasp.atm import dos_tetra_weights_3d
 import mytest

 ################################################################################

@@ -0,0 +1,9 @@
+[General]
+BASENAME = converter/one_site
+
+[Shell 1]
+LSHELL = 2
+IONS = 2
+EWINDOW = -15.0 5.0
+

@@ -0,0 +1,10 @@
+[General]
+BASENAME = converter/lunio3
+
+[Shell 1]
+LSHELL = 2
+IONS = [5, 6] [7, 8]
+EWINDOW = -0.6 2.7
+TRANSFILE = converter/lunio3/rot_dz2_dx2
+NORMALIZE = True
+

@@ -0,0 +1,131 @@
+Automatically generated mesh
+      18
+Reciprocal lattice
+ 0.00000000000000 0.00000000000000 0.00000000000000 1
+ 0.33333333333333 0.00000000000000 0.00000000000000 1
+ -0.33333333333333 0.00000000000000 -0.00000000000000 1
+ 0.00000000000000 0.33333333333333 0.00000000000000 1
+ 0.33333333333333 0.33333333333333 0.00000000000000 1
+ -0.33333333333333 0.33333333333333 -0.00000000000000 1
+ 0.00000000000000 -0.33333333333333 0.00000000000000 1
+ 0.33333333333333 -0.33333333333333 0.00000000000000 1
+ -0.33333333333333 -0.33333333333333 -0.00000000000000 1
+ 0.00000000000000 0.00000000000000 0.50000000000000 1
+ 0.33333333333333 0.00000000000000 0.50000000000000 1
+ -0.33333333333333 0.00000000000000 0.50000000000000 1
+ 0.00000000000000 0.33333333333333 0.50000000000000 1
+ 0.33333333333333 0.33333333333333 0.50000000000000 1
+ -0.33333333333333 0.33333333333333 0.50000000000000 1
+ 0.00000000000000 -0.33333333333333 0.50000000000000 1
+ 0.33333333333333 -0.33333333333333 0.50000000000000 1
+ -0.33333333333333 -0.33333333333333 0.50000000000000 1
+Tetrahedra
+ 108 0.00925925925926
+ 1 1 2 5 10
+ 1 1 4 5 10
+ 1 4 5 10 13
+ 1 2 5 10 11
+ 1 5 10 11 14
+ 1 5 10 13 14
+ 1 2 3 6 11
+ 1 2 5 6 11
+ 1 5 6 11 14
+ 1 3 6 11 12
+ 1 6 11 12 15
+ 1 6 11 14 15
+ 1 1 3 4 12
+ 1 3 4 6 12
+ 1 4 6 12 15
+ 1 1 4 10 12
+ 1 4 10 12 13
+ 1 4 12 13 15
+ 1 4 5 8 13
+ 1 4 7 8 13
+ 1 7 8 13 16
+ 1 5 8 13 14
+ 1 8 13 14 17
+ 1 8 13 16 17
+ 1 5 6 9 14
+ 1 5 8 9 14
+ 1 8 9 14 17
+ 1 6 9 14 15
+ 1 9 14 15 18
+ 1 9 14 17 18
+ 1 4 6 7 15
+ 1 6 7 9 15
+ 1 7 9 15 18
+ 1 4 7 13 15
+ 1 7 13 15 16
+ 1 7 15 16 18
+ 1 2 7 8 16
+ 1 1 2 7 16
+ 1 1 2 10 16
+ 1 2 8 16 17
+ 1 2 11 16 17
+ 1 2 10 11 16
+ 1 3 8 9 17
+ 1 2 3 8 17
+ 1 2 3 11 17
+ 1 3 9 17 18
+ 1 3 12 17 18
+ 1 3 11 12 17
+ 1 1 7 9 18
+ 1 1 3 9 18
+ 1 1 3 12 18
+ 1 1 7 16 18
+ 1 1 10 16 18
+ 1 1 10 12 18
+ 1 1 10 11 14
+ 1 1 10 13 14
+ 1 1 4 13 14
+ 1 1 2 11 14
+ 1 1 2 5 14
+ 1 1 4 5 14
+ 1 2 11 12 15
+ 1 2 11 14 15
+ 1 2 5 14 15
+ 1 2 3 12 15
+ 1 2 3 6 15
+ 1 2 5 6 15
+ 1 3 10 12 13
+ 1 3 12 13 15
+ 1 3 6 13 15
+ 1 1 3 10 13
+ 1 1 3 4 13
+ 1 3 4 6 13
+ 1 4 13 14 17
+ 1 4 13 16 17
+ 1 4 7 16 17
+ 1 4 5 14 17
+ 1 4 5 8 17
+ 1 4 7 8 17
+ 1 5 14 15 18
+ 1 5 14 17 18
+ 1 5 8 17 18
+ 1 5 6 15 18
+ 1 5 6 9 18
+ 1 5 8 9 18
+ 1 6 13 15 16
+ 1 6 15 16 18
+ 1 6 9 16 18
+ 1 4 6 13 16
+ 1 4 6 7 16
+ 1 6 7 9 16
+ 1 7 11 16 17
+ 1 7 10 11 16
+ 1 1 7 10 11
+ 1 7 8 11 17
+ 1 2 7 8 11
+ 1 1 2 7 11
+ 1 8 12 17 18
+ 1 8 11 12 17
+ 1 2 8 11 12
+ 1 8 9 12 18
+ 1 3 8 9 12
+ 1 2 3 8 12
+ 1 9 10 16 18
+ 1 9 10 12 18
+ 1 3 9 10 12
+ 1 7 9 10 16
+ 1 1 7 9 10
+ 1 1 3 9 10

@@ -0,0 +1,28 @@
+LuNiO3 low-T
+1.0
+ 5.1234998703 0.0000000000 0.0000000000
+ 0.0000000000 5.5089001656 0.0000000000
+ -0.0166880521 0.0000000000 7.3551808822
+Lu Ni O
+4 4 12
+Direct
+ 0.977199972 0.077000000 0.252999991
+ 0.022800028 0.922999978 0.746999979
+ 0.522800028 0.577000022 0.247000009
+ 0.477199972 0.423000008 0.753000021
+ 0.500000000 0.000000000 0.000000000
+ 0.000000000 0.500000000 0.500000000
+ 0.500000000 0.000000000 0.500000000
+ 0.000000000 0.500000000 0.000000000
+ 0.110100001 0.462700009 0.244100004
+ 0.889899969 0.537299991 0.755900025
+ 0.389899999 0.962700009 0.255899996
+ 0.610100031 0.037299991 0.744099975
+ 0.693300009 0.313699991 0.053900000
+ 0.306699991 0.686300039 0.946099997
+ 0.806699991 0.813699961 0.446099997
+ 0.193300009 0.186300009 0.553900003
+ 0.185100004 0.201600000 0.943799973
+ 0.814899981 0.798399985 0.056200027
+ 0.314899981 0.701600015 0.556200027
+ 0.685100019 0.298399985 0.443799973