import numpy as np
import matplotlib.pyplot as pl
import sparse_ir
import sys 
import math
T=0.1
wmax =1

beta = 1/T
#construction of the Kernel K

#Fermionic Basis

basisf = sparse_ir.FiniteTempBasis('F', beta, wmax)

matsf=sparse_ir.MatsubaraSampling(basisf)
tausf=sparse_ir.TauSampling(basisf)

#Bosonic Basis

basisb = sparse_ir.FiniteTempBasis('B', beta, wmax)
      
matsb=sparse_ir.MatsubaraSampling(basisb)
tausb=sparse_ir.TauSampling(basisb)
#Semicircular spectral function rho(omega)=(2/pi)*sqrt(1-(omega/wmax)^2)
def rho(x):
    return 2/np.pi*np.sqrt(1-(x/wmax)**2)

#Expansion coefficients of rho in the IR basis
rho_l=basisf.v.overlap(rho)

#Non-interacting Green's function in the IR basis: G_l = -s_l*rho_l
G_l_0=-basisf.s*rho_l


#We keep two copies of G_iw: G_iw_0 stays fixed as the non-interacting Green's function
#needed later in the Dyson equation, G_iw_f is updated in every iteration

G_iw_0=matsf.evaluate(G_l_0)
G_iw_f=matsf.evaluate(G_l_0)
#Iterations
i=0

#storage for E_iw to show convergence after iterations
E_iw_f_arr=[]

The GF2 & GW method#

Above we have successfully derived \(G^F(\bar{\tau}_k^F)\) from \(\hat{G}^F(i\bar{\omega}_n^F)\) by first obtaining the Green’s function’s basis representation \(G_l^F\) and then evaluating \(G_l^F\) on the \(\tau\)-sampling points provided by sparse_ir. The self-consistent second-order Green’s function theory (GF2) and the self-consistent GW method are based on this very mechanism (and on the so-called Hedin equations).[5][7]

The following image gives an overview of their iterative schemes[5]:


GF2_GW_pic.PNG

Evaluation and transformation in GW and GF2 are done in the same way (fitting and evaluating) for the quantities \(G,P,W\) and \(\Sigma\). The transitions between these quantities are given by the five Hedin equations, shown in the panel below (there the polarization \(P\) is denoted by \(\tilde{\chi}\)).[12] Because the vertex function (between \(P\) and \(W\)) is approximated as \(\Gamma=1\), we will only deal with four of them in the GW & GF2 method. By using sparse sampling and the IR basis it is possible to solve these diagrammatic equations efficiently without sustaining a loss in accuracy.[9]
Hedin_pic.PNG

The following code shows a step by step example to compute each of these quantities :

Let us start with computing \(G_l^F\) from the imaginary-frequency Green’s function we have obtained above. This will also be the starting point for every new iteration one might want to perform.

#Calculation of G_l_f using Least-Squares Fitting

G_l_f=matsf.fit(G_iw_f)

                
#G_tau,fermionic   

G_tau_f=tausf.evaluate(G_l_f)

The next step towards a full iteration of GF2 & GW is the evaluation of the basis representation \(G_l^F\) on \(\tau\)-sampling points. In contrast to before, when we computed \(G(\bar{\tau}_k^F)\), these sampling points will now be the bosonic sampling points \(\bar{\tau}_k^B\). This allows us to switch to bosonic statistics and thus calculate associated quantities such as the polarization \(P\).[5]

\[G(\bar{\tau}_k^B)= \sum_{l=0}^{L-1}G_l^F U_l^F(\bar{\tau}_k^B)\]
#G_tau, bosonic: fermionic IR coefficients G_l^F evaluated on the bosonic tau-sampling points

G_tau_b=np.einsum('lt,l->t',basisf.u(tausb.tau),G_l_f)

#G_tau_b=tausb.evaluate(G_l_f)
pl.plot(tausb.tau,G_tau_b.real,label='real')
pl.plot(tausb.tau,G_tau_b.imag,label='imaginary')

pl.title(r'$G(\tau)$')
pl.xlabel(r'$\tau$')
pl.legend()

The so-called polarization \(P(\bar\tau_k^B)\) is given within the random phase approximation as a product of two Green’s functions:

\[P(\bar{\tau}_k^B)=G(\bar{\tau}_k^B)*G(\beta-\bar{\tau}_k^B)\]
#Polarisation P, bosonic: P(tau)=G(tau)*G(beta-tau),
#with G(beta-tau) evaluated directly from the IR coefficients G_l_f

P_tau_b=G_tau_b*(basisf.u(beta-tausb.tau).T@G_l_f)
pl.plot(tausb.tau,P_tau_b.imag,label='imaginary')
pl.plot(tausb.tau,P_tau_b.real,label='real')

pl.title(r' $P(\tau)$')
pl.xlabel(r'$\tau$')
pl.legend()

In the same way that we transformed \(\hat{G}^F(i\bar{\omega}_n^F)\) into \(G(\bar\tau_k^B)\), we are now able to transform \(P(\bar\tau_k^B)\) into \(\hat{P}(i\bar{\omega}_n^B)\), again using least-squares fitting and evaluating \(P_l^B\) on the given sampling points (Matsubara frequencies).

\[P_l^B=\underset{P_l}{\arg\min} \sum_{k}|P(\bar{{\tau}}_k^B)-\sum_{l=0}^{N-1}\hat{U}_l^B (\bar{\tau}_k^B)P_l|^2\]
#P_l, bosonic


P_l_b=tausb.fit(P_tau_b)
pl.semilogy(np.abs(P_l_b),'+-')
pl.title('$P_l^B$ over l ')
pl.xticks(range(0,20))
pl.xlabel('l')
\[\hat{P}(i\bar{\omega}_n^B)= \sum_{l=0}^{N-1} P_l^B\hat{U}_l^B(i\bar{\omega}_n^B)\]
#P_iw, bosonic

#P_iw_b=np.einsum('lv,l->v', basisb.uhat(matsb.wn),P_l_b)

P_iw_b=matsb.evaluate(P_l_b)
pl.plot(matsb.wn,P_iw_b.imag,label='imaginary')
pl.plot(matsb.wn,P_iw_b.real,label='real')
pl.title(r'$P(i\omega)$')
pl.xlabel(r'$\omega$')
pl.legend()

Following \(P\) we will now calculate the Screened Interaction \(W\) with the following formula:

\[\hat{{W}}(i\bar{\omega}_n^B)= U + U\hat{P}(i\bar{\omega}_n^B)\hat{{W}}(i\bar{\omega}_n^B)\]

or

\[\hat{{W}}(i\bar{\omega}_n^B)= \frac{U}{1-U\hat{P}(i\bar{\omega}_n^B)}\]

This equation has exactly the same form as the Dyson equation, but instead of connecting the Green’s function and the self-energy it connects the screened Coulomb interaction \(W\) and the polarization \(P\). Because \(U\) is the bare Coulomb interaction and \(P\) contains only irreducible processes (meaning that, in terms of Feynman diagrams, cutting a single interaction line \(U\) does not split the diagram into two parts), \(W\) can be understood as the sum of all possible Feynman diagrams connected by the interaction \(U\).[5][11] This becomes quite intuitive if we look at

\[W= U + UPW= U+UP(U+UP(U+UP(...)))=U+UPU+...\]
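As a quick numerical aside (not part of the GW workflow itself), the equivalence of the truncated series and the closed form can be checked with plain scalars; the values U_test and P_test below are arbitrary illustrative choices.

#Sanity check: the geometric series U + UPU + UPUPU + ... converges to
#U/(1-U*P) for |U*P|<1. U_test and P_test are arbitrary scalar values.
U_test, P_test = 0.5, -0.3

W_series=0.0
term=U_test
for _ in range(30):
    W_series+=term
    term*=P_test*U_test

print(W_series, U_test/(1-U_test*P_test))   #both approximately 0.434783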
#W_iw, bosonic  

U=1/2

W_iw_b_U=U/(1-(U*P_iw_b))

W_iw_b=W_iw_b_U-U

#W_iw_b is the part depending on the frequency, any further calculations 
#will be done using this and not W_iw_b_U
pl.plot(matsb.wn,W_iw_b.imag,label='imaginary')
pl.plot(matsb.wn,W_iw_b.real,label='real')
pl.title(r'$W(i\omega)$')
pl.xlabel(r'$\omega$')
pl.legend()
\[W_l^B=\underset{W_l}{\arg\min} \sum_{n}|\hat{W}(i\bar{\omega}_n^B)-\sum_{l=0}^{N-1}\hat{U}_l^B (i\bar{\omega}_n^B)W_l|^2\]
#W_l, bosonic

W_l_b=matsb.fit(W_iw_b)
pl.semilogy(np.abs(W_l_b),'+-')
pl.title('$W_l^B$ over l ')
pl.xticks(range(0,20))
pl.xlabel('l')

In the next step we are changing back into fermionic statistics:

\[W(\bar{\tau}_k^F)= \sum_{l=0}^{N-1} W_l^B U_l^B(\bar{\tau}_k^F)\]
#W_tau_f, fermionic

W_tau_f=np.einsum('lt,l->t',basisb.u(tausf.tau),W_l_b)

#W_tau_f=tausf.evaluate(W_l_b)
pl.plot(tausf.tau,W_tau_f.imag,label='imaginary')
pl.plot(tausf.tau,W_tau_f.real,label='real')
pl.title(r'$W(\tau)$')
pl.xlabel(r'$\tau$')
pl.legend()

After changing back into fermionic statistics, the next quantity to deal with is the so-called self-energy \(\Sigma\). When we first introduced the Dyson equation in the first chapter, we saw \(V\) as some sort of perturbation. The self-energy plays a very similar role: it describes the correlation effects of the many-body system. Here we will calculate \(\Sigma(\bar{\tau}_k^F)\) within the GW approximation \(\Sigma^{GW}\), so that

\[{\Sigma}(\bar{\tau}_k^F)=-G(\bar{\tau}_k^F)*{W}(\bar{\tau}_k^F).\]

This can again be rewritten into its equivalent form

\[\Sigma=-G(U+UP(U+UP(...)))= -(GU+GUPU+GUPUPU+...)\]
#E_tau , fermionic   

E_tau_f=G_tau_f*W_tau_f
pl.plot(tausf.tau,E_tau_f.imag,label='imaginary')
pl.plot(tausf.tau,E_tau_f.real,label='real')
pl.title(r'$\Sigma(\tau)$')
pl.xlabel(r'$\tau$')
pl.legend()
\[\Sigma_l^F=\underset{\Sigma_l}{\arg\min} \sum_{k}|\Sigma(\bar{\tau}_k^F)-\sum_{l=0}^{N-1}U_l^F(\bar{\tau}_k^F)\Sigma_l|^2\]
#E_l, fermionic

E_l_f=tausf.fit(E_tau_f)
pl.semilogy(np.abs(E_l_f),'+-')
pl.title(r'$\Sigma_l^F$ over l ')
pl.xticks(range(0,20))
pl.xlabel('l')

We will calculate

\[\hat{\Sigma}(i\bar{\omega}_n^F)=-UG(\beta)+\sum_{l=0}^{N-1}{\Sigma}_l^F \hat{U}_l^F(i\bar{\omega}_n^F)\]

with \(UG(\beta)\) as the so-called Hartree-Fock term.

#E_iw, fermionic

#E_iw_f=np.einsum('lw,l->w',basisf.uhat(matsf.wn),E_l_f)-U*(basisf.u(beta).T@G_l_f)

#E_iw_f_U contains the static Hartree-Fock term, E_iw_f only the frequency-dependent part
E_iw_f_U=matsf.evaluate(E_l_f)-U*(basisf.u(beta).T@G_l_f)
E_iw_f=matsf.evaluate(E_l_f)


#store E_iw_f of this iteration in E_iw_f_arr
E_iw_f_arr.append(E_iw_f_U)
fig, ax = pl.subplots(1,2, figsize=(12,5) )
ax[0].plot(matsf.wn,E_iw_f_U.imag)
ax[0].set_title(r'Imaginary part of $\Sigma(i\omega)$')
ax[0].set_xlabel(r'$\omega$')
ax[1].plot(matsf.wn,E_iw_f_U.real)
ax[1].set_title(r'Real part of $\Sigma(i\omega)$')
ax[1].set_xlabel(r'$\omega$')

Last but not least we will obtain \(G^F(i\bar{\omega}_n^F)\) through the Dyson Equation of GW:

\[\hat{G}(i\bar{\omega}_n^F)=\hat{G}_0(i\bar{\omega}_n^F)+\hat{G}_0(i\bar{\omega}_n^F)\hat{\Sigma}(i\bar{\omega}_n^F)\hat{G}(i\bar{\omega}_n^F)\]

or

\[\hat{G}(i\bar{\omega}_n^F)=\frac{1}{\hat{G}_0(i\bar{\omega}_n^F)^{-1}-\hat{\Sigma}(i\bar{\omega}_n^F)}\]

We have now calculated the full Green’s function of the system with \(\Sigma\) as a set of (one-particle) irreducible processes connected by \(G_0\).[11]

#G_iw, fermionic  -> Dyson equation

G_iw_f=((G_iw_0)**-1-E_iw_f)**-1    
pl.plot(matsf.wn,G_iw_f.imag,label='imaginary')
pl.plot(matsf.wn,G_iw_f.real,label='real')
pl.title(r'$G(i\omega)$')
pl.xlabel(r'$\omega$')
pl.legend()
if (i>0):
    pl.plot(matsf.wn,E_iw_f_arr[i].imag,label='current')
    pl.title(r'$\Sigma(i\omega)$ in the current and previous iteration')
    pl.plot(matsf.wn,E_iw_f_arr[i-1].imag,'.',label='previous')
    pl.legend()
    pl.show()
    pl.plot(matsf.wn,np.abs(E_iw_f_arr[i]-E_iw_f_arr[i-1]))
    pl.title(r'$|\Sigma_i(i\omega)-\Sigma_{i-1}(i\omega)|$' )
    
#Iterations 
i+=1
print('full iterations: ',i)
full iterations:  1
print(G_iw_f)
[ 2.07806360e-18+0.03657118j  3.31916878e-19+0.10932936j
 -8.59280574e-18+0.18513256j  5.47680338e-18+0.24022776j
 -4.50903710e-18+0.28186922j  2.10892056e-17+0.34036864j
  6.10886753e-18+0.4278915j   2.25032005e-18+0.57065346j
 -1.69861463e-17+0.83239627j  9.05172560e-17+1.38269615j
  7.62400416e-17-1.38269615j -1.54955306e-17-0.83239627j
  5.43243553e-18-0.57065346j  6.63080433e-18-0.4278915j
  2.09349407e-17-0.34036864j -4.54682983e-18-0.28186922j
  5.31668120e-18-0.24022776j -8.62614007e-18-0.18513256j
  3.05041330e-19-0.10932936j  2.05045978e-18-0.03657118j]

With this we have completed one full iteration of GF2 & GW. Repeating this procedure until convergence yields the desired self-consistent approximation to the self-energy.
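To iterate until convergence, the cells above can be wrapped into a loop such as the minimal sketch below. It reuses the basis and sampling objects defined at the top of this notebook; the helper name gw_iteration, the tolerance tol and the cap of 50 iterations are illustrative choices rather than part of sparse_ir.

#Sketch of a self-consistency loop built from the steps above.
def gw_iteration(G_iw):
    G_l     = matsf.fit(G_iw)                                        #G_l^F
    G_tau_f = tausf.evaluate(G_l)                                    #G(tau_k^F)
    G_tau_b = np.einsum('lt,l->t',basisf.u(tausb.tau),G_l)           #G(tau_k^B)
    P_tau_b = G_tau_b*(basisf.u(beta-tausb.tau).T@G_l)               #P(tau_k^B)
    P_iw_b  = matsb.evaluate(tausb.fit(P_tau_b))                     #P(iw_n^B)
    W_iw_b  = U/(1-U*P_iw_b)-U                                       #frequency-dependent part of W
    W_tau_f = np.einsum('lt,l->t',basisb.u(tausf.tau),matsb.fit(W_iw_b))
    E_tau_f = G_tau_f*W_tau_f                                        #Sigma(tau_k^F)
    E_iw    = matsf.evaluate(tausf.fit(E_tau_f))                     #Sigma(iw_n^F)
    return (G_iw_0**-1-E_iw)**-1, E_iw                               #Dyson equation

G_iw, E_old = G_iw_0, None
tol = 1e-10
for it in range(50):
    G_iw, E_new = gw_iteration(G_iw)
    if E_old is not None and np.max(np.abs(E_new-E_old)) < tol:
        print('converged after', it+1, 'iterations')
        break
    E_old = E_new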

Conclusion#

The above code illustrated a full iteration of GW & GF2 with sparse_ir.

Taking advantage of the singular value expansion of the kernel connecting imaginary-time and real-frequency data allowed for the construction of the intermediate representation: a compact basis for the Matsubara Green’s function that makes it possible to switch back and forth between the time and frequency domains.

Pairing this with the GW & GF2 approximations and the Hedin equations, we were able to make full use of the IR. Because each Hedin equation is diagonal in either the time or the frequency representation, the IR basis works as a translator between the two and ensures exact as well as fast calculations of quantities like the self-energy, the polarization and the screened Coulomb interaction, all while suffering only a minimal loss of information.

Each quantity was computed following the same scheme: calculation, transformation, and evaluation. First we derived the quantity in its time or frequency representation through a Hedin equation. Then we transformed it into the IR using least-squares fitting. Once we obtained the IR expansion coefficients, we were able to evaluate the function on the corresponding sampling points given by sparse_ir.
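This recurring fit-and-evaluate pattern can be condensed into a small helper, sketched below; change_representation is a placeholder name, and the two arguments can be any pair of sparse_ir sampling objects built from the same basis (for example tausf and matsf).

#Minimal sketch of the recurring pattern: least-squares fit on one set of
#sampling points, evaluation on another. The sampling objects must share a basis.
def change_representation(values, sampling_from, sampling_to):
    coeffs = sampling_from.fit(values)       #IR expansion coefficients
    return sampling_to.evaluate(coeffs)      #values on the new sampling points

#Example: from G(tau_k^F) back to G(iw_n^F)
G_iw_check = change_representation(G_tau_f, tausf, matsf)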

Of course, in order to increase the accuracy of our calculations more iterations will be needed.

Naturally, this idea of using the intermediate representation to facilitate calculations can be extended in various directions. One could go beyond single-orbital calculations by working with matrix-valued quantities, or implement another type of vertex approximation. Undoubtedly, nowhere near all options have been exhausted, neither for the IR basis nor for other machine-learning-inspired methods used in physics.

Sources#

[1] H. Bruus and K. Flensberg, Many-Body Quantum Theory in Condensed Matter Physics (Oxford University Press Inc., New York, 2004)
[2] J. W. Negele and H. Orland, Quantum Many-Particle Systems (Westview Press, 1998)
[3] G. D. Mahan, Many-Particle Physics (Kluwer Academic/Plenum Publishers, New York, 2000)
[4] P. C. Hansen, Discrete Inverse Problems (Society for Industrial and Applied Mathematics, Philadelphia, 2010)
[5] J. Li, M. Wallerberger, N. Chikano, C. Yeh, E. Gull and H. Shinaoka, Sparse sampling approach to efficient ab initio calculations at finite temperature, Phys. Rev. B 101, 035144 (2020)
[6] M. Wallerberger, H. Shinaoka and A. Kauch, Solving the Bethe–Salpeter equation with exponential convergence, Phys. Rev. Research 3, 033168 (2021)
[7] H. Shinaoka, N. Chikano, E. Gull, J. Li, T. Nomoto, J. Otsuki, M. Wallerberger, T. Wang and K. Yoshimi, Efficient ab initio many-body calculations based on sparse modeling of Matsubara Green’s function (2021)
[8] F. Aryasetiawan and O. Gunnarsson, The GW method, Rep. Prog. Phys. 61, 237 (1998)
[9] M. Gatti, The GW approximation, TDDFT school, Benasque (2014)
[10] C. Friedrich, Many-Body Perturbation Theory: The GW approximation, Peter Grünberg Institut and Institute for Advanced Simulation, Jülich
[11] K. Held, C. Taranto, G. Rohringer and A. Toschi, Hedin Equations, GW, GW+DMFT, and All That (Modeling and Simulation Vol. 1, Forschungszentrum Jülich, 2011)
[12] X. Gonze, B. Amadon, P.-M. Anglade, J.-M. Beuken, F. Bottin, P. Boulanger, F. Bruneval, D. Caliste, R. Caracas, M. Côté, T. Deutsch, L. Genovese, Ph. Ghosez, M. Giantomassi, S. Goedecker, D. R. Hamann, P. Hermet, F. Jollet, G. Jomard, S. Leroux, M. Mancini, S. Mazevet, M. J. T. Oliveira, G. Onida, Y. Pouillon, T. Rangel, G.-M. Rignanese, D. Sangalli, R. Shaltaf, M. Torrent, M. J. Verstraete, G. Zerah and J. W. Zwanziger, ABINIT: First-principles approach to material and nanosystem properties, Computer Physics Communications 180, 2582–2615 (2009)
[13] H. Shinaoka, J. Otsuki, M. Ohzeki and K. Yoshimi, Compressing Green’s function using intermediate representation between imaginary-time and real-frequency domains, Phys. Rev. B 96(3), 035147 (2017), doi:10.1103/physrevb.96.035147
[14] J. Otsuki, M. Ohzeki, H. Shinaoka and K. Yoshimi, Sparse modeling approach to analytical continuation of imaginary-time quantum Monte Carlo data, Phys. Rev. E 95(6), 061302(R) (2017), doi:10.1103/physreve.95.061302