From ade00aa4e802e84cae22519124be2095e69f423f Mon Sep 17 00:00:00 2001
From: Pierre-Francois Loos
Date: Wed, 16 Nov 2022 09:25:52 +0100
Subject: [PATCH] equations in response letter

---
 Response_Letter/Response_Letter.tex | 44 +++++++++++++++++++++++++----
 1 file changed, 38 insertions(+), 6 deletions(-)

diff --git a/Response_Letter/Response_Letter.tex b/Response_Letter/Response_Letter.tex
index c5e1652..02ef1b6 100644
--- a/Response_Letter/Response_Letter.tex
+++ b/Response_Letter/Response_Letter.tex
@@ -77,12 +77,44 @@ bla bla bla
 {I quite like the discussion beginning with Eq.~(29), but it makes me wonder about the major findings of the present manuscript. Specifically, I think this manuscript exploits the following numerical observation. Consider any eigenvalue problem and partition it into two spaces,
-
-Assuming existence of the inverse, this can be trivially rewritten as an eigenvalue problem for a matrix in the “A” subspace,
-where $T = Y X^{-1}$. Of course, this is not useful without $T$. So what determines T without access to all the eigenvectors contained in X, Y? We can rewrite the two equations contained in (1) as
+\begin{equation}
+  \begin{pmatrix}
+    \boldsymbol{A} & \boldsymbol{V} \\
+    \boldsymbol{V}^T & \boldsymbol{B} \\
+  \end{pmatrix}
+  \begin{pmatrix}
+    \boldsymbol{X} \\
+    \boldsymbol{Y} \\
+  \end{pmatrix}
+  =
+  \begin{pmatrix}
+    \boldsymbol{X} \\
+    \boldsymbol{Y} \\
+  \end{pmatrix}
+  \boldsymbol{\Omega}
+\end{equation}
+Assuming existence of the inverse, this can be trivially rewritten as an eigenvalue problem for a matrix in the $\boldsymbol{A}$ subspace,
+\begin{equation}
+  \boldsymbol{A} + \boldsymbol{V} \boldsymbol{T}
+\end{equation}
+where $\boldsymbol{T} = \boldsymbol{Y} \boldsymbol{X}^{-1}$. Of course, this is not useful without $\boldsymbol{T}$. So what determines $\boldsymbol{T}$ without access to all the eigenvectors contained in $\boldsymbol{X}$, $\boldsymbol{Y}$?
+We can rewrite the two equations contained in (1) as
+\begin{gather}
+  \boldsymbol{A} + \boldsymbol{V} \boldsymbol{T} = \boldsymbol{X} \boldsymbol{\Omega} \boldsymbol{X}^{-1} \\
+  \boldsymbol{V}^T + \boldsymbol{B} \boldsymbol{T} = \boldsymbol{T} \boldsymbol{X} \boldsymbol{\Omega} \boldsymbol{X}^{-1}
+\end{gather}
 or, upon combining the above equations,
-The above is some nonlinear equation that can be solved for the unknown T. It looks superficially like a CCSD equation (which could be solved, e.g., by iteration), but clearly there is no formal connection. Does the fact that any linear eigenvalue equation can be partitioned to produce a nonlinear eigenvalue equation, or equivalently recast into the determination of the solution to a system of nonlinear equations, imply any formal connection with CC theory (as the section title implies)? Or is it a mathematically interesting observation that indicates an alternative route toward the determination of eigenvalues from a partitioned matrix?
-As far as I can tell, the above general observation is applied in two distinct settings to derive the results in the present manuscript. For the BSE, the partitioning is used to eliminate the “deexcitation” space, similar to its elimination in the RPA problem. For the GW approximation, the partitioning is used to eliminate the 2h1p/2p1h space. However, I reemphasize that in both cases, the “CC-like” equations that result are analogous to ground-state CC theory, and so the formal connection between the theories is tenuous.
+\begin{equation}
+  \boldsymbol{V}^T + \boldsymbol{B} \boldsymbol{T} - \boldsymbol{T} \boldsymbol{A} - \boldsymbol{T} \boldsymbol{V} \boldsymbol{T} = \boldsymbol{0}
+\end{equation}
+The above is some nonlinear equation that can be solved for the unknown $\boldsymbol{T}$.
+It looks superficially like a CCSD equation (which could be solved, e.g., by iteration), but clearly there is no formal connection.
+Does the fact that any linear eigenvalue equation can be partitioned to produce a nonlinear eigenvalue equation, or equivalently recast into the determination of the solution to a system of nonlinear equations, imply any formal connection with CC theory (as the section title implies)?
+Or is it a mathematically interesting observation that indicates an alternative route toward the determination of eigenvalues from a partitioned matrix?
+As far as I can tell, the above general observation is applied in two distinct settings to derive the results in the present manuscript.
+For the BSE, the partitioning is used to eliminate the ``deexcitation'' space, similar to its elimination in the RPA problem.
+For the $GW$ approximation, the partitioning is used to eliminate the 2h1p/2p1h space.
+However, I reemphasize that in both cases, the ``CC-like'' equations that result are analogous to ground-state CC theory, and so the formal connection between the theories is tenuous.
 }
 \\
 \alert{
@@ -112,7 +144,7 @@ I cannot fault the presentation of the analytic results, their correctness, or t
 I would recommend acceptance without reservation.}
 \\
 \alert{
-bla bla bla
+We thank the reviewer for these kind comments and supporting publication of the present Communication.
 }
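Note (appended for illustration, not part of the patch): a minimal numerical sketch of the downfolding described in the referee comment above. It solves the Riccati-type equation V^T + B T - T A - T V T = 0 for T by a simple Jacobi-like iteration and checks that the eigenvalues of the effective matrix A + V T reproduce the corresponding subset of the eigenvalues of the full partitioned matrix. The block sizes, the spectral gap between the A and B blocks, and the convergence threshold are illustrative assumptions; Python/NumPy is used only for the demonstration.

import numpy as np

# Assumed test setup: random symmetric matrix with a gap between the A and B
# blocks so that the simple fixed-point iteration below converges (weak coupling V).
rng = np.random.default_rng(1)
nA, nB = 4, 6
A = rng.normal(size=(nA, nA)); A = 0.5 * (A + A.T) + 10.0 * np.eye(nA)
B = rng.normal(size=(nB, nB)); B = 0.5 * (B + B.T) - 10.0 * np.eye(nB)
V = 0.1 * rng.normal(size=(nA, nB))

M = np.block([[A, V], [V.T, B]])                 # full partitioned matrix
full_eigs = np.linalg.eigvalsh(M)                # eigenvalues in ascending order

# Jacobi-like iteration for T (shape nB x nA, i.e. T = Y X^{-1}).
T = np.zeros((nB, nA))
denom = A.diagonal()[None, :] - B.diagonal()[:, None]   # A_jj - B_ii
for _ in range(200):
    R = V.T + B @ T - T @ A - T @ V @ T          # residual of the nonlinear equation
    T += R / denom
    if np.abs(R).max() < 1e-12:
        break

# Eigenvalues of the (non-symmetric) downfolded matrix A + V T.
eff_eigs = np.sort(np.linalg.eigvals(A + V @ T).real)

# They reproduce the nA eigenvalues of the full problem that derive from the
# A block (here the nA largest ones, because of the +/- 10 shifts above).
print(np.allclose(eff_eigs, full_eigs[-nA:]))    # expected: True

The element-wise denominator A_jj - B_ii plays the same role as the energy denominators in a standard CC-type Jacobi update; with a small gap or strong coupling, a damped or Newton-type solver would be needed instead of this bare iteration.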