Kernels
The IR basis is nothing but the singular value expansion of a suitable integral kernel K mediating the change from real frequencies to imaginary times:
\[G(\tau) = -\int_{-\omega_\mathrm{max}}^{\omega_\mathrm{max}} d\omega\, K(\tau, \omega)\, w(\omega)\, \rho(\omega),\]
where ρ(ω) is the spectral function and w(ω) is a weight function. The integral is defined on the interval [-ωmax, ωmax], where ωmax (= Λ/β) is a frequency cutoff.
Different kernels yield different IR basis functions. The sparse-ir library defines two kernels:
sparse_ir.LogisticKernel
: continuation of fermionic/bosonic spectral functions with w(ω) = 1 for fermions and w(ω) = 1/tanh(βω/2) for bosons.
sparse_ir.RegularizedBoseKernel
: continuation of bosonic spectral functions with w(ω)=1/ω.
By default, sparse_ir.LogisticKernel is used. Kernels can be fed directly into sparse_ir.FiniteTempBasis to get the intermediate representation.
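For orientation, the logistic kernel defined below can be written down in a few lines of plain NumPy. This is a standalone sketch of the formula only, independent of the library; the constant `LAMBDA` and the function name are illustrative:

```python
import numpy as np

LAMBDA = 10.0  # dimensionless cutoff, Λ = β * ωmax

def logistic_kernel(x, y):
    """K(x, y) = exp(-Λ y (x + 1)/2) / (1 + exp(-Λ y)),
    in dimensionless variables x = 2τ/β - 1, y = ω/ωmax on [-1, 1] x [-1, 1]."""
    return np.exp(-0.5 * LAMBDA * y * (x + 1)) / (1 + np.exp(-LAMBDA * y))

# At y = 0 the kernel equals 1/2 for every x, and it is centrosymmetric:
# K(x, y) == K(-x, -y).
```

Note that this naive form can overflow or lose precision for very large Λ; the library's implementation is written more carefully.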
Predefined kernels
- class sparse_ir.LogisticKernel(lambda_)
Fermionic/bosonic analytical continuation kernel.
In dimensionless variables x = 2*τ/β - 1, y = β*ω/Λ, the integral kernel is a function on [-1, 1] x [-1, 1]:
\[K(x, y) = \frac{\exp(-\Lambda y(x + 1)/2)}{1 + \exp(-\Lambda y)}\]
LogisticKernel is a fermionic analytic continuation kernel. Nevertheless, one can model the τ dependence of a bosonic correlation function as follows:
\[\int \frac{\exp(-\Lambda y(x + 1)/2)}{1 - \exp(-\Lambda y)} \rho(y) dy = \int K(x, y) \frac{\rho(y)}{\tanh(\Lambda y/2)} dy,\]
i.e., a rescaling of the spectral function with the weight function
\[w(y) = \frac1{\tanh(\Lambda y/2)}.\]
- __call__(x, y, x_plus=None, x_minus=None)
Evaluate kernel at point (x, y)
For given x, y, return the value of K(x, y). The arguments may be numpy arrays, in which case the function is evaluated over the broadcast arrays.
The parameters x_plus and x_minus, if given, shall contain the values of x - xmin and xmax - x, respectively. This is useful when either difference is small and cancellation is expected.
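The broadcasting behaviour described above can be mimicked with a plain-NumPy sketch of the logistic kernel (`LAMBDA` and the function name are illustrative, not part of the API): a column of x values against a row of y values yields the full matrix of kernel values.

```python
import numpy as np

LAMBDA = 10.0

def K(x, y):
    # plain-NumPy sketch of the logistic kernel formula
    return np.exp(-0.5 * LAMBDA * y * (x + 1)) / (1 + np.exp(-LAMBDA * y))

x = np.linspace(-1, 1, 5)
y = np.linspace(-1, 1, 7)

# Broadcasting a (5, 1) array against a (1, 7) array evaluates the
# kernel on the full 5 x 7 grid in one call.
matrix = K(x[:, None], y[None, :])
```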
- property conv_radius
Convergence radius of the Matsubara basis asymptotic model.
For improved relative numerical accuracy, the IR basis functions on the Matsubara axis, basis.uhat(n), can be evaluated from an asymptotic expression for abs(n) > conv_radius. If conv_radius is None, then the asymptotics are unused (the default).
- get_symmetrized(sign)
Return symmetrized kernel
K(x, y) + sign * K(x, -y).
By default, this returns a simple wrapper over the current instance which naively performs the sum. You may want to override this to avoid cancellation.
- property is_centrosymmetric
Kernel is centrosymmetric.
Returns true if and only if K(x, y) == K(-x, -y) for all values of x and y. This allows the kernel to be block-diagonalized, speeding up the singular value expansion by a factor of 4. Defaults to false.
- sve_hints(eps)
Provide discretisation hints for the SVE routines.
Advises the SVE routines of discretisation parameters suitable for transforming the (infinite) SVE into a (finite) SVD problem.
See: :class:`AbstractSVEHints`.
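The bosonic rescaling identity quoted above is easy to verify numerically. The sketch below compares the bosonic integrand exp(-Λy(x+1)/2)/(1 - exp(-Λy)) with K(x, y)/tanh(Λy/2) pointwise; it is standalone NumPy with illustrative names, not library code:

```python
import numpy as np

LAMBDA = 10.0

def K_fermi(x, y):
    # logistic (fermionic) kernel
    return np.exp(-0.5 * LAMBDA * y * (x + 1)) / (1 + np.exp(-LAMBDA * y))

def K_bose(x, y):
    # bosonic integrand; 1 - exp(-Λy) computed via expm1 for accuracy at small y
    return np.exp(-0.5 * LAMBDA * y * (x + 1)) / -np.expm1(-LAMBDA * y)

x = np.linspace(-1, 1, 9)[:, None]
y = np.array([0.05, 0.3, 0.9])[None, :]   # y != 0
```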
- class sparse_ir.RegularizedBoseKernel(lambda_)
Regularized bosonic analytical continuation kernel.
In dimensionless variables x = 2*τ/β - 1, y = β*ω/Λ, the integral kernel is a function on [-1, 1] x [-1, 1]:
\[K(x, y) = \frac{y \exp(-\Lambda y(x + 1)/2)}{\exp(-\Lambda y) - 1}\]
Care has to be taken in evaluating this expression around y == 0.
- __call__(x, y, x_plus=None, x_minus=None)
Evaluate kernel at point (x, y)
For given x, y, return the value of K(x, y). The arguments may be numpy arrays, in which case the function is evaluated over the broadcast arrays.
The parameters x_plus and x_minus, if given, shall contain the values of x - xmin and xmax - x, respectively. This is useful when either difference is small and cancellation is expected.
- property conv_radius
Convergence radius of the Matsubara basis asymptotic model.
For improved relative numerical accuracy, the IR basis functions on the Matsubara axis, basis.uhat(n), can be evaluated from an asymptotic expression for abs(n) > conv_radius. If conv_radius is None, then the asymptotics are unused (the default).
- get_symmetrized(sign)
Return symmetrized kernel
K(x, y) + sign * K(x, -y).
By default, this returns a simple wrapper over the current instance which naively performs the sum. You may want to override this to avoid cancellation.
- property is_centrosymmetric
Kernel is centrosymmetric.
Returns true if and only if K(x, y) == K(-x, -y) for all values of x and y. This allows the kernel to be block-diagonalized, speeding up the singular value expansion by a factor of 4. Defaults to false.
- sve_hints(eps)
Provide discretisation hints for the SVE routines.
Advises the SVE routines of discretisation parameters suitable for transforming the (infinite) SVE into a (finite) SVD problem.
See: :class:`AbstractSVEHints`.
- weight_func(statistics: str) → Callable[[ndarray], ndarray]
Return the weight function for given statistics
- property ypower
Power with which the y coordinate scales.
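The caveat about y == 0 can be handled with expm1 and the analytic limit K(x, 0) = -1/Λ (since y/(exp(-Λy) - 1) → -1/Λ as y → 0). The following is a standalone NumPy sketch with illustrative names, not the library's implementation:

```python
import numpy as np

LAMBDA = 10.0

def reg_bose_kernel(x, y):
    """K(x, y) = y exp(-Λ y (x + 1)/2) / (exp(-Λ y) - 1), stable near y == 0."""
    x, y = np.broadcast_arrays(np.asarray(x, float), np.asarray(y, float))
    num = y * np.exp(-0.5 * LAMBDA * y * (x + 1.0))
    den = np.expm1(-LAMBDA * y)              # exp(-Λ y) - 1, accurate for small y
    with np.errstate(divide="ignore", invalid="ignore"):
        val = num / den
    # Taylor limit at y == 0: y / (-Λ y) = -1/Λ
    return np.where(y == 0.0, -1.0 / LAMBDA, val)
```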
Custom kernels
Adding kernels to sparse_ir is simple - at the very basic level, the library expects a kernel K to be able to provide two things:
- the values through K(x, y)
- a set of SVE discretization hints through K.sve_hints(eps)
Let us suppose you simply want to include a Gaussian default model on the real
axis instead of the default (flat) one. We create a new kernel by inheriting
from sparse_ir.kernel.AbstractKernel
and then simply wrap around a
fermionic kernel, modifying the values as needed:
import numpy as np

import sparse_ir
import sparse_ir.kernel

class KernelFGauss(sparse_ir.kernel.AbstractKernel):
    def __init__(self, lambda_, std):
        self._inner = sparse_ir.LogisticKernel(lambda_)
        self.lambda_ = lambda_
        self.std = std

    def __call__(self, x, y, x_plus=None, x_minus=None):
        # First, get the value of the inner fermionic kernel
        val = self._inner(x, y, x_plus, x_minus)
        # Then multiply with the Gaussian default model
        val *= np.exp(-.5 * (y / self.std)**2)
        return val

    def sve_hints(self, eps):
        return self._inner.sve_hints(eps)
You can now feed this kernel directly to sparse_ir.FiniteTempBasis:
K = KernelFGauss(10., 1.)
basis = sparse_ir.FiniteTempBasis(K, 'F')
print(basis.s)
This should get you started. For a fully-fledged and robust implementation, you should:
Make sure that your kernel does not lose precision in computing K(x, y), as this directly affects the quality of the basis. This is also where the arguments x_plus and x_minus may become useful.
Optimize your discretization hints. To do so, it is useful to choose the segments in x and y close to the asymptotic distribution of the roots of the respective singular functions. The Gauss order can then be determined from a convergence analysis.
Check whether your kernel is centrosymmetric, and if so, override the is_centrosymmetric property. This yields an approximately four-fold performance boost. However, check that the symmetrized versions of the kernels do not lose precision.
Base classes
- class sparse_ir.kernel.AbstractKernel
Integral kernel
K(x, y).
Abstract base class for an integral kernel, i.e., a real binary function K(x, y) used in a Fredholm integral equation of the first kind:
\[u(x) = \int K(x, y) v(y) dy\]
where x ∈ [xmin, xmax] and y ∈ [ymin, ymax]. For its SVE to exist, the kernel must be square-integrable; for its singular values to decay exponentially, it must be smooth.
In general, the kernel is applied to a scaled spectral function ρ'(y) as:
\[\int K(x, y) \rho'(y) dy,\]
where ρ'(y) = w(y) ρ(y).
- __call__(x, y, x_plus=None, x_minus=None)
Evaluate kernel at point (x, y)
For given x, y, return the value of K(x, y). The arguments may be numpy arrays, in which case the function is evaluated over the broadcast arrays.
The parameters x_plus and x_minus, if given, shall contain the values of x - xmin and xmax - x, respectively. This is useful when either difference is small and cancellation is expected.
- property conv_radius
Convergence radius of the Matsubara basis asymptotic model.
For improved relative numerical accuracy, the IR basis functions on the Matsubara axis, basis.uhat(n), can be evaluated from an asymptotic expression for abs(n) > conv_radius. If conv_radius is None, then the asymptotics are unused (the default).
- get_symmetrized(sign)
Return symmetrized kernel
K(x, y) + sign * K(x, -y).
By default, this returns a simple wrapper over the current instance which naively performs the sum. You may want to override this to avoid cancellation.
- property is_centrosymmetric
Kernel is centrosymmetric.
Returns true if and only if K(x, y) == K(-x, -y) for all values of x and y. This allows the kernel to be block-diagonalized, speeding up the singular value expansion by a factor of 4. Defaults to false.
- sve_hints(eps)
Provide discretisation hints for the SVE routines.
Advises the SVE routines of discretisation parameters suitable for transforming the (infinite) SVE into a (finite) SVD problem.
See: :class:`AbstractSVEHints`.
- weight_func(statistics: str) → Callable[[ndarray], ndarray]
Return the weight function for given statistics
- property xrange
Tuple (xmin, xmax) delimiting the range of allowed x values.
- property ypower
Power with which the y coordinate scales.
- property yrange
Tuple (ymin, ymax) delimiting the range of allowed y values.
- class sparse_ir.kernel.ReducedKernel(inner, sign=1)
Restriction of centrosymmetric kernel to positive interval.
For a kernel K on [-1, 1] x [-1, 1] that is centrosymmetric, i.e. K(x, y) == K(-x, -y), it is straightforward to show that the left/right singular vectors can be chosen as either odd or even functions.
Consequently, they are singular functions of a reduced kernel K_red on [0, 1] x [0, 1] given as either:
K_red(x, y) == K(x, y) + sign * K(x, -y)
This kernel is what this class represents. The full singular functions can be reconstructed by (anti-)symmetrically continuing them to the negative axis.
- __call__(x, y, x_plus=None, x_minus=None)
Evaluate kernel at point (x, y)
For given x, y, return the value of K(x, y). The arguments may be numpy arrays, in which case the function is evaluated over the broadcast arrays.
The parameters x_plus and x_minus, if given, shall contain the values of x - xmin and xmax - x, respectively. This is useful when either difference is small and cancellation is expected.
- property conv_radius
Convergence radius of the Matsubara basis asymptotic model.
For improved relative numerical accuracy, the IR basis functions on the Matsubara axis, basis.uhat(n), can be evaluated from an asymptotic expression for abs(n) > conv_radius. If conv_radius is None, then the asymptotics are unused (the default).
- get_symmetrized(sign)
Return symmetrized kernel
K(x, y) + sign * K(x, -y).
By default, this returns a simple wrapper over the current instance which naively performs the sum. You may want to override this to avoid cancellation.
- property is_centrosymmetric
True iff K(x,y) = K(-x, -y)
- sve_hints(eps)
Provide discretisation hints for the SVE routines.
Advises the SVE routines of discretisation parameters suitable for transforming the (infinite) SVE into a (finite) SVD problem.
See: :class:`AbstractSVEHints`.
- property xrange
Tuple (xmin, xmax) delimiting the range of allowed x values.
- property ypower
Power with which the y coordinate scales.
- property yrange
Tuple (ymin, ymax) delimiting the range of allowed y values.
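Since the even (sign = +1) and odd (sign = -1) reductions together carry the full information of a centrosymmetric kernel, their sum recovers 2*K(x, y) on the positive quadrant. A standalone NumPy check using the logistic kernel formula (illustrative names, not library code):

```python
import numpy as np

LAMBDA = 10.0

def K(x, y):
    # logistic kernel, centrosymmetric: K(x, y) == K(-x, -y)
    return np.exp(-0.5 * LAMBDA * y * (x + 1)) / (1 + np.exp(-LAMBDA * y))

def K_red(x, y, sign):
    # reduced kernel on [0, 1] x [0, 1]
    return K(x, y) + sign * K(x, -y)

x = np.linspace(0, 1, 6)[:, None]
y = np.linspace(0, 1, 6)[None, :]
even = K_red(x, y, +1)
odd = K_red(x, y, -1)
```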
- class sparse_ir.kernel.AbstractSVEHints
Discretization hints for singular value expansion of a given kernel.
- property ngauss
Gauss-Legendre order to use to guarantee accuracy
- property nsvals
Upper bound for number of singular values
Upper bound on the number of singular values above the given threshold, i.e., where s[l] >= eps * s[0].
- property segments_x
Segments for piecewise polynomials on the
x
axis.List of segments on the
x
axis for the associated piecewise polynomial. Should reflect the approximate position of roots of a high-order singular function inx
.
- property segments_y
Segments for piecewise polynomials on the
y
axis.List of segments on the
y
axis for the associated piecewise polynomial. Should reflect the approximate position of roots of a high-order singular function iny
.
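To see how such hints are consumed, one can discretize a kernel with a composite Gauss-Legendre rule over a set of segments, absorb the quadrature weights, and take an ordinary SVD; its singular values then approximate those of the SVE. The sketch below uses the logistic kernel formula with equally spaced panels as a crude stand-in for real hints (which place segments near the roots of the singular functions); all names are illustrative:

```python
import numpy as np

LAMBDA = 10.0

def K(x, y):
    # logistic kernel on [-1, 1] x [-1, 1]
    return np.exp(-0.5 * LAMBDA * y * (x + 1)) / (1 + np.exp(-LAMBDA * y))

def panel_gauss(segments, order):
    """Composite Gauss-Legendre rule: nodes and weights over each segment."""
    t, w = np.polynomial.legendre.leggauss(order)
    nodes, weights = [], []
    for a, b in zip(segments[:-1], segments[1:]):
        nodes.append(0.5 * (b - a) * t + 0.5 * (a + b))
        weights.append(0.5 * (b - a) * w)
    return np.concatenate(nodes), np.concatenate(weights)

segs = np.linspace(-1, 1, 9)      # 8 equal panels; real hints cluster the panels
x, wx = panel_gauss(segs, 8)      # "segments_x" + "ngauss" analogue
y, wy = panel_gauss(segs, 8)      # "segments_y" + "ngauss" analogue

# Weighted kernel matrix A = sqrt(wx) K(x, y) sqrt(wy); its SVD
# approximates the singular value expansion of K.
A = np.sqrt(wx)[:, None] * K(x[:, None], y[None, :]) * np.sqrt(wy)[None, :]
s = np.linalg.svd(A, compute_uv=False)
```

The rapid decay of `s` is what lets a small finite basis represent the kernel.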