low_rank_toolbox.matrices

Low-Rank Matrix Representations.

This submodule provides efficient representations for low-rank matrices, enabling memory-efficient storage and fast operations on matrices that can be expressed as products of thin factors.

Classes

LowRankMatrix : Generic low-rank matrix base class
QuasiSVD : Quasi-SVD factorization (U @ S @ V.T) where S is dense
SVD : Singular Value Decomposition (U @ diag(s) @ V.T)
QR : QR factorization (Q @ R)

Authors: Benjamin Carrel and Rik Vorhaar, University of Geneva, 2022

class low_rank_toolbox.matrices.LowRankMatrix(*matrices, **extra_data)[source]

Bases: LinearOperator

Meta class for dealing with low rank matrices in different formats.

Inherits from scipy.sparse.linalg.LinearOperator for seamless integration with iterative solvers and standard numerical linear algebra tools.

Do not use this class directly, but rather use its subclasses.

We always decompose a matrix as a product of smaller matrices. These smaller matrices are stored in self._matrices.

LinearOperator Integration

As a subclass of scipy.sparse.linalg.LinearOperator, LowRankMatrix provides efficient matrix-vector and matrix-matrix products without forming the full dense matrix. This enables direct use with scipy’s iterative solvers and other numerical algorithms.

Key LinearOperator features:

  • Matrix-vector multiplication: A @ v or A.matvec(v)

  • Adjoint multiplication: A.H @ v or A.rmatvec(v)

  • Matrix-matrix multiplication: A @ B or A.matmat(B)

  • Lazy composition: A + B, A @ B (when B is a LinearOperator)

  • Shape and dtype attributes: A.shape, A.dtype

Examples

Using with scipy iterative solvers:

>>> from scipy.sparse.linalg import gmres, LinearOperator
>>> from low_rank_toolbox import SVD
>>> import numpy as np
>>>
>>> # Create a well-conditioned low-rank matrix
>>> U, _ = np.linalg.qr(np.random.randn(1000, 10))
>>> s = np.logspace(0, -1, 10)  # Better conditioned
>>> V, _ = np.linalg.qr(np.random.randn(1000, 10))
>>> A = SVD(U, s, V)
>>> # Shift by 0.1 * identity for better conditioning
>>> A_reg = A + 0.1 * LinearOperator((1000, 1000), matvec=lambda x: x)
>>>
>>> # Solve Ax = b using GMRES (never forms the full matrix)
>>> b = np.random.randn(1000)
>>> x, info = gmres(A_reg, b, rtol=1e-6, atol=1e-6, maxiter=100)
>>> assert info == 0  # Verify convergence

Lazy composition with other operators:

>>> from scipy.sparse import diags
>>> from scipy.sparse.linalg import aslinearoperator
>>>
>>> # Create a diagonal operator for better conditioning
>>> D = diags(0.5 * np.ones(1000))
>>> D_op = aslinearoperator(D)
>>>
>>> # Lazy sum - doesn't form full matrix
>>> B = A + D_op  # Returns _SumLinearOperator
>>>
>>> # Use in iterative solver
>>> x2, info2 = gmres(B, b, rtol=1e-6, atol=1e-6, maxiter=100)
>>> assert info2 == 0  # Verify convergence

Matrix-vector products (efficient, no dense formation):

>>> v = np.random.randn(1000)
>>> y = A @ v  # Efficient: never forms full 1000x1000 matrix
>>>
>>> # Adjoint product
>>> z = A.H @ v  # Hermitian transpose product

Custom preconditioners:

>>> # Use the regularized matrix from above
>>> def precondition(v):
...     # Identity preconditioner (placeholder for e.g. a diagonal scaling)
...     return v
>>>
>>> # Create LinearOperator from function
>>> M = LinearOperator((1000, 1000), matvec=precondition)
>>>
>>> # Use as preconditioner
>>> x3, info3 = gmres(A_reg, b, M=M, rtol=1e-6, atol=1e-6, maxiter=100)
>>> assert info3 == 0  # Verify convergence

Notes

  • All matrix-vector products are computed efficiently in O(rank) operations

  • Full matrix is never formed unless explicitly requested with .full()

  • Subclasses (SVD, QR, QuasiSVD) inherit all LinearOperator functionality

  • Compatible with all scipy.sparse.linalg iterative solvers (gmres, cg, bicgstab, etc.)

property H

Hermitian transpose (conjugate transpose) of the matrix.

property T: LowRankMatrix

Transpose of the matrix (reverse order and transpose each factor).

__init__(*matrices, **extra_data)[source]

Initialize a low-rank matrix from a sequence of factor matrices.

Parameters:
  • *matrices (ndarray) – Sequence of matrices whose product forms the low-rank matrix. At least two matrices must be provided, and their shapes must align for matrix multiplication (i.e., matrices[i].shape[1] == matrices[i+1].shape[0]).

  • **extra_data (dict) – Additional data to store with the matrix (e.g., poles, residues).

Raises:

ValueError – If fewer than 2 matrices are provided or if matrix shapes do not align.
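
A minimal sketch of the shape-alignment rule stated above (plain NumPy, independent of the class itself): each factor's column count must match the next factor's row count, and the represented matrix takes its shape from the outer dimensions.

```python
import numpy as np

# Hedged sketch of the alignment check described in Parameters:
# matrices[i].shape[1] == matrices[i+1].shape[0] for all i.
matrices = [np.zeros((10, 3)), np.zeros((3, 4)), np.zeros((4, 8))]
for a, b in zip(matrices, matrices[1:]):
    assert a.shape[1] == b.shape[0]

# The represented matrix has shape (rows of first, cols of last).
shape = (matrices[0].shape[0], matrices[-1].shape[1])
assert shape == (10, 8)
```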

allclose(other, rtol=1e-05, atol=1e-08)[source]

Check approximate equality with another matrix.

Parameters:
  • other (LowRankMatrix or ndarray) – Matrix to compare with.

  • rtol (float, optional) – Relative tolerance, by default 1e-5.

  • atol (float, optional) – Absolute tolerance, by default 1e-8.

Returns:

True if matrices are approximately equal within tolerances.

Return type:

bool

Notes

This forms the full matrix for comparison, which may be inefficient.

approximation_error(reference, ord='fro')[source]

Compute approximation error ||X - reference|| efficiently.

Parameters:
  • reference (ndarray) – Reference matrix to compare against.

  • ord (str or int, optional) – Norm type, by default ‘fro’ (Frobenius).

Returns:

Approximation error in the specified norm.

Return type:

float

Notes

This forms the full matrix, which may be inefficient for very large matrices. For Frobenius norm, consider using norm_squared() for efficiency.

compress()[source]

Compress the factorization to maximize memory efficiency.

Return type:

LowRankMatrix

compress_()[source]

Compress the factorization in-place to maximize memory efficiency.

Returns:

Self (modified in-place).

Return type:

LowRankMatrix

Notes

This modifies the matrix in-place. Use compress() for a non-destructive version.

compression_ratio()[source]

Compute compression ratio relative to dense storage.

Returns:

Ratio of low-rank storage to dense storage (< 1 means savings).

Return type:

float

Notes

A ratio of 0.1 means the low-rank format uses 10% of the memory of the dense format, i.e., a 10x compression.
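
The comparison can be sketched directly, assuming the ratio is (elements stored in the factors) / (m × n):

```python
import numpy as np

# Hedged sketch of the storage comparison behind compression_ratio(),
# assuming ratio = (elements stored in the factors) / (m * n).
U = np.zeros((1000, 10))
V = np.zeros((10, 800))

stored = U.size + V.size           # 10,000 + 8,000 = 18,000
dense = U.shape[0] * V.shape[1]    # 1,000 * 800 = 800,000
ratio = stored / dense

print(ratio)   # 0.0225, i.e. roughly 44x compression
```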

cond_estimate(method='power_iteration', n_iter=10)[source]

Estimate condition number without computing full SVD.

Parameters:
  • method (str, optional) – Method to use: ‘power_iteration’ (default) or ‘norm_ratio’.

  • n_iter (int, optional) – Number of iterations for power iteration method, by default 10.

Returns:

Estimated condition number (ratio of largest to smallest singular value).

Return type:

float

Notes

The ‘power_iteration’ method estimates the largest singular value using power iteration. The smallest is estimated using the inverse (via solving linear systems). This is approximate but avoids full SVD.

The ‘norm_ratio’ method uses matrix norms as bounds.
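
The largest-singular-value part of the 'power_iteration' idea can be sketched with plain NumPy, applying (X.T @ X) factor by factor so that X is never formed. The smallest singular value (needed for the full condition number) additionally requires solving linear systems and is omitted here.

```python
import numpy as np

# Hedged sketch (independent of the toolbox): estimate the largest
# singular value of X = U @ V by power iteration on X.T @ X,
# evaluated factor by factor so X is never formed.
rng = np.random.default_rng(1)
U = rng.standard_normal((500, 8))
V = rng.standard_normal((8, 400))

v = rng.standard_normal(400)
for _ in range(300):
    w = V.T @ (U.T @ (U @ (V @ v)))   # (X.T @ X) @ v via the factors
    v = w / np.linalg.norm(w)
sigma_max_est = np.linalg.norm(U @ (V @ v))

# Reference value via a full SVD (what the estimate avoids).
sigma_max_true = np.linalg.svd(U @ V, compute_uv=False)[0]
assert abs(sigma_max_est - sigma_max_true) / sigma_max_true < 0.05
```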

conj()[source]

Complex conjugate of the matrix.

Return type:

LowRankMatrix

copy()[source]

Create a deep copy of the matrix.

Returns:

A deep copy of this matrix with independent factor matrices.

Return type:

LowRankMatrix

static create_data_alias(key)[source]

Create a property that provides access to an entry in extra_data.

Parameters:

key (str) – Key for accessing the data in self._extra_data.

Returns:

Property object for accessing the extra data.

Return type:

property

Examples

Create a property to access extra data stored with the matrix:

>>> class RationalKrylov(LowRankMatrix):
...     poles = LowRankMatrix.create_data_alias('poles')
...     residues = LowRankMatrix.create_data_alias('residues')

Now my_matrix.poles accesses self._extra_data[‘poles’].

static create_matrix_alias(index, transpose=False, conjugate=False)[source]

Create a property that provides access to a factor matrix with optional transformations.

Parameters:
  • index (int) – Index of the factor matrix in self._matrices.

  • transpose (bool, optional) – Whether to transpose the matrix when accessed, by default False.

  • conjugate (bool, optional) – Whether to conjugate the matrix when accessed, by default False.

Returns:

Property object for accessing the transformed matrix.

Return type:

property

Examples

Create a property to access the first factor matrix:

>>> class MyLowRank(LowRankMatrix):
...     U = LowRankMatrix.create_matrix_alias(0)
...     V = LowRankMatrix.create_matrix_alias(1, transpose=True)

Now my_matrix.U accesses self._matrices[0] and my_matrix.V accesses self._matrices[1].T.

property deepshape: tuple

Shape tuple including all intermediate dimensions of the factorization.

Returns:

Tuple of dimensions showing the shape of each factor in the chain. For factors A (m×k), B (k×n), returns (m, k, n).

Return type:

tuple

diag()[source]

Extract diagonal without forming full matrix.

Returns:

Diagonal elements of the matrix.

Return type:

ndarray

Notes

This is computed efficiently by extracting only the diagonal elements during the matrix chain multiplication, avoiding forming the full matrix.
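
The underlying trick can be sketched with plain NumPy: for X = U @ V, the i-th diagonal entry is U[i, :] @ V[:, i], so the whole diagonal is a row-wise sum of an elementwise product.

```python
import numpy as np

# Sketch of the diagonal-extraction trick: diag(U @ V)[i] equals
# U[i, :] @ V[:, i], costing O(n r) total instead of O(n^2 r)
# for the full product.
rng = np.random.default_rng(2)
n, r = 300, 7
U = rng.standard_normal((n, r))
V = rng.standard_normal((r, n))

d = np.einsum('ir,ri->i', U, V)   # equivalently (U * V.T).sum(axis=1)
assert np.allclose(d, np.diag(U @ V))
```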

dot(other, side='right', dense_output=False)[source]

Matrix-matrix and matrix-vector multiplication.

Parameters:
  • other (LowRankMatrix, ndarray, spmatrix) – Matrix or vector to multiply with.

  • side (str, optional) – Whether to multiply on the left or right, by default ‘right’.

  • dense_output (bool, optional) – Whether to return a dense matrix or a low-rank matrix, by default False.

Return type:

ndarray | LowRankMatrix

dot_sparse(sparse_other, side='usual', dense_output=False)[source]

Efficient multiplication with a sparse matrix.

Parameters:
  • sparse_other (spmatrix) – Sparse matrix to multiply with.

  • side (str, optional) – Multiplication side, by default ‘usual’. ‘right’ or ‘usual’: output = self @ sparse_other; ‘left’ or ‘opposite’: output = sparse_other @ self.

  • dense_output (bool, optional) – Whether to return a dense matrix, by default False.

Returns:

Result of the multiplication.

Return type:

ndarray | LowRankMatrix

expm_multiply(A, h, side='left', dense_output=False, **extra_args)[source]

Efficient action of a sparse matrix exponential.

left: output = exp(h*A) @ self
right: output = self @ exp(h*A)

Parameters:
  • A (spmatrix) – Sparse matrix in the exponential.

  • h (float) – Time step.

  • side (str, optional) – Whether to multiply on the left or right, by default ‘left’.

  • dense_output (bool, optional) – Whether to return a dense matrix or a low-rank matrix, by default False.

  • extra_args (dict, optional) – Extra arguments to pass to scipy.sparse.linalg.expm_multiply.

Returns:

Resulting matrix after applying the exponential action.

Return type:

ndarray | LowRankMatrix

flatten()[source]

Flatten the matrix to a 1D array.

Return type:

ndarray

classmethod from_dense(matrix)[source]

Alias for from_matrix(). Must be implemented by subclasses.

Parameters:

matrix (ndarray)

classmethod from_full(matrix)[source]

Alias for from_matrix(). Must be implemented by subclasses.

Parameters:

matrix (ndarray)

classmethod from_low_rank(low_rank_matrix)[source]

Convert a LowRankMatrix to this specific subclass format.

Parameters:

low_rank_matrix (LowRankMatrix) – Existing low-rank matrix to convert.

Returns:

New matrix of the target subclass type.

Return type:

LowRankMatrix

Notes

Subclasses should override this to perform format-specific conversions. For example, SVD.from_low_rank() would compute the SVD of the input. The base class implementation creates a generic LowRankMatrix.

Examples

>>> X_generic = LowRankMatrix(A, B)
>>> X_svd = SVD.from_low_rank(X_generic)  # Converts to SVD format
classmethod from_matrix(matrix)[source]

Create a low-rank matrix from a full matrix.

This method must be implemented by subclasses (e.g., SVD, QR). The base LowRankMatrix class requires at least 2 matrices for factorization.

Return type:

LowRankMatrix

Parameters:

matrix (ndarray)

full()[source]

Form the full dense matrix by multiplying all factors in optimal order.

Return type:

ndarray

gather(indices)[source]

Access specific matrix entries without forming the full matrix.

Parameters:

indices (ndarray or list) – Index specification. For single element: [row_idx, col_idx]. For multiple elements (fancy indexing): [row_indices, col_indices].

Returns:

The requested matrix element(s).

Return type:

float | ndarray

Notes

This is faster and more memory-efficient than forming the full matrix. Very useful for matrix completion tasks or estimating reconstruction error.
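
The access pattern can be sketched with plain NumPy: each entry of X = U @ V is an O(r) inner product between a row of U and a column of V.

```python
import numpy as np

# Sketch of gather-style access: X[i, j] = U[i, :] @ V[:, j],
# so a batch of entries costs O(k r) for k requested entries.
rng = np.random.default_rng(4)
U = rng.standard_normal((100, 6))
V = rng.standard_normal((6, 120))

rows = np.array([3, 17, 42])
cols = np.array([0, 5, 99])
entries = np.einsum('kr,rk->k', U[rows], V[:, cols])

assert np.allclose(entries, (U @ V)[rows, cols])
```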

get_block(rows, cols)[source]

Extract block submatrix.

Parameters:
  • rows (slice) – Row slice specification.

  • cols (slice) – Column slice specification.

Returns:

The requested block submatrix.

Return type:

ndarray

Notes

This method forms the full matrix, which may be inefficient. It’s provided for convenience and compatibility.

hadamard(other)[source]

Element-wise (Hadamard) product with another matrix.

Parameters:

other (LowRankMatrix or ndarray) – Matrix to multiply element-wise.

Returns:

Dense result of element-wise multiplication.

Return type:

ndarray

property is_memory_efficient: bool

Check if the low-rank representation uses less memory than dense storage.

Returns:

True if low-rank format uses less memory than dense format, False otherwise.

Return type:

bool

Notes

This compares the total number of elements stored in the factorization versus the full m×n matrix. For empty matrices (zero size), returns True.

Examples

>>> A = LowRankMatrix(np.random.randn(1000, 10), np.random.randn(10, 1000))
>>> A.is_memory_efficient  # True: 20,000 elements vs 1,000,000
True
>>> B = LowRankMatrix(np.random.randn(100, 90), np.random.randn(90, 100))
>>> B.is_memory_efficient  # False: 18,000 elements vs 10,000
False
is_symmetric()[source]

Check if the matrix is symmetric.

Returns:

True if the matrix is square and symmetric, False otherwise.

Return type:

bool

Notes

This method forms the full matrix to check symmetry. For large matrices, this may be memory-intensive.

is_well_conditioned(threshold=10000000000.0)[source]

Check if the factorization is numerically well-conditioned.

Parameters:

threshold (float, optional) – Condition number threshold, by default 1e10.

Returns:

True if estimated condition number is below threshold.

Return type:

bool

Notes

This uses an approximate condition number estimate and may not be accurate for all cases. Consider this a heuristic check.

property length: int

Number of factor matrices in the factorization.

Returns:

The number of matrices in the product chain.

Return type:

int

classmethod load(filename)[source]

Load low-rank matrix from disk.

Parameters:

filename (str) – Path to the saved matrix file.

Returns:

Loaded low-rank matrix.

Return type:

LowRankMatrix

matvec(v)[source]

Matrix-vector product (public interface, backward compatibility).

Parameters:

v (ndarray) – Vector to multiply with.

Returns:

Result of matrix-vector multiplication.

Return type:

ndarray

memory_usage(unit='MB')[source]

Report actual memory used by the factorization.

Parameters:

unit (str, optional) – Unit for memory size: ‘B’, ‘KB’, ‘MB’, ‘GB’, by default ‘MB’.

Returns:

Memory usage in the specified unit.

Return type:

float

multi_add(others)[source]

Add multiple low-rank matrices efficiently.

Parameters:

others (Sequence[LowRankMatrix]) – Sequence of low-rank matrices to add.

Returns:

Dense matrix representing the sum.

Return type:

ndarray

Notes

While more efficient than repeated use of +, this still requires forming full dense matrices and may be memory-intensive.

multi_dot(others)[source]

Matrix multiplication of a sequence of matrices.

Parameters:

others (Sequence[LowRankMatrix | ndarray]) – Sequence of matrices to multiply.

Returns:

Low rank matrix representing the product.

Return type:

LowRankMatrix

property ndim: int

Number of dimensions (always 2 for matrices).

norm(ord='fro')[source]

Compute the matrix norm. This is the default implementation; subclasses may override it with more efficient format-specific versions.

Return type:

float

Parameters:

ord (str | int)

norm_squared()[source]

Compute squared Frobenius norm efficiently: ||X||²_F = tr(X^H X).

Returns:

Squared Frobenius norm of the matrix.

Return type:

float

Notes

This is more efficient than computing the norm and squaring it, as it avoids the square root operation. For complex matrices, uses Hermitian transpose (X^H X) to ensure real result.
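
The efficiency comes from a Gram-matrix identity that can be sketched with plain NumPy: for X = U @ V, the trace tr(X.T @ X) reduces by cyclicity to a trace of a product of two r × r Gram matrices.

```python
import numpy as np

# Sketch of the Gram-matrix trick: for X = U @ V,
# ||X||_F^2 = tr(X.T @ X) = tr((U.T @ U) @ (V @ V.T)),
# which only involves r x r matrices, so the cost is
# O((m + n) r^2) instead of O(m n r) for forming X.
rng = np.random.default_rng(5)
U = rng.standard_normal((400, 9))
V = rng.standard_normal((9, 350))

norm_sq = np.trace((U.T @ U) @ (V @ V.T))
assert np.isclose(norm_sq, np.linalg.norm(U @ V, 'fro') ** 2)
```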

power(n)[source]

Compute matrix power X^n efficiently using repeated squaring.

Parameters:

n (int) – Power to raise the matrix to. Must be non-negative.

Returns:

Matrix raised to the n-th power. Returns identity matrix (as ndarray) for n=0, self for n=1, and LowRankMatrix for n>1.

Return type:

LowRankMatrix | ndarray

Raises:

ValueError – If matrix is not square or n is negative.

Notes

This uses the binary exponentiation algorithm for efficiency. Time complexity is O(log n) matrix multiplications.
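
Binary exponentiation itself can be sketched on dense arrays (the toolbox additionally keeps intermediate results in low-rank form, which this sketch omits):

```python
import numpy as np

# Sketch of binary exponentiation for matrix powers: O(log n)
# multiplications instead of n - 1.
def matrix_power(X, n):
    result = np.eye(X.shape[0])
    base = X
    while n > 0:
        if n & 1:                # current bit set: multiply in base
            result = result @ base
        base = base @ base       # square for the next bit
        n >>= 1
    return result

X = np.array([[1.0, 1.0], [0.0, 1.0]])
assert np.allclose(matrix_power(X, 5), np.linalg.matrix_power(X, 5))
assert np.allclose(matrix_power(X, 0), np.eye(2))
```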

property rank: int

Rank of the low-rank factorization.

Returns:

The maximum possible rank given the factorization structure, computed as the minimum dimension across all factor matrices.

Return type:

int

Notes

This is an upper bound on the true numerical rank. The actual rank may be lower if the factors are rank-deficient. For the true numerical rank, compute SVD of the full matrix.

rmatvec(v)[source]

Adjoint matrix-vector product (public interface, backward compatibility).

Parameters:

v (ndarray) – Vector to multiply with.

Returns:

Result of adjoint matrix-vector multiplication.

Return type:

ndarray

save(filename)[source]

Save low-rank matrix to disk efficiently.

Parameters:

filename (str) – Path to save the matrix. Extension ‘.npz’ will be added if not present.

Notes

The matrix is saved in compressed NumPy format with all factor matrices and extra data preserved.

scale_(scalar)[source]

Scale the matrix in-place by a scalar.

Parameters:

scalar (float) – Scalar to multiply the matrix by.

Returns:

Self (modified in-place).

Return type:

LowRankMatrix

property size: int

Total number of elements stored in the factorization.

Returns:

Sum of elements across all factor matrices.

Return type:

int

Notes

This returns the storage cost, not the matrix size (m×n). For matrix dimensions, use .shape. For compression ratio, use .compression_ratio().

to_dense()[source]

Convert to dense matrix (alias for full).

Return type:

ndarray

to_full()[source]

Convert to dense matrix (alias for full).

Return type:

ndarray

to_sparse(format='csr', threshold=1e-10)[source]

Convert to sparse format, zeroing small entries.

Parameters:
  • format (str, optional) – Sparse matrix format: ‘csr’, ‘csc’, ‘coo’, by default ‘csr’.

  • threshold (float, optional) – Entries with absolute value below threshold are set to zero, by default 1e-10.

Returns:

Sparse matrix representation.

Return type:

spmatrix

Notes

This forms the full dense matrix first, which may be inefficient. Only use this if you expect the resulting matrix to be sparse.

todense()[source]

Convert to dense matrix (alias for full).

Return type:

ndarray

trace()[source]

Compute trace efficiently using cyclic property: tr(ABC) = tr(CAB) = tr(BCA).

Returns:

Trace of the matrix.

Return type:

float

Notes

For a product of matrices A₁A₂…Aₙ, this method uses the cyclic property of the trace to minimize computational cost. The trace is computed as tr(A₂…AₙA₁) by cyclically permuting to minimize the intermediate matrix sizes.
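
For the two-factor case the trick reduces to a single small trace, sketched here with plain NumPy:

```python
import numpy as np

# Sketch of the cyclic-trace trick: for X = U @ V,
# tr(U @ V) = tr(V @ U), and V @ U is only r x r.
# Equivalently tr(X) = sum_i U[i, :] @ V[:, i], at O(n r) cost.
rng = np.random.default_rng(6)
n, r = 250, 4
U = rng.standard_normal((n, r))
V = rng.standard_normal((r, n))

tr = np.trace(V @ U)
assert np.isclose(tr, np.trace(U @ V))
```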

transpose()[source]

Transpose the matrix (alias for .T property).

Return type:

LowRankMatrix

Parameters:

matrices (ndarray)

class low_rank_toolbox.matrices.QR(Q, R, transposed=False, **extra_data)[source]

Bases: LowRankMatrix

QR decomposition for low-rank matrices.

Efficient storage and operations for matrices represented as the product of an orthogonal matrix Q and an upper triangular matrix R. Particularly well-suited for solving linear systems and least squares problems.

Mathematical Representation

Standard form: X = Q @ R
Transposed form: X = R.H @ Q.H

where:

  • Q ∈ ℂ^(m×r) has orthonormal columns (Q.H @ Q = I_r)

  • R ∈ ℂ^(r×n) is upper triangular

  • r = rank ≤ min(m, n)

For real matrices, conjugate transpose (.H) is equivalent to regular transpose (.T).

Key Differences from SVD

  • Triangular structure: R is upper triangular (not diagonal)

  • Solve efficiency: O(r²) back substitution vs O(r³) for SVD

  • No singular values: Diagonal of R approximates importance but not sorted

  • Operations: Addition/subtraction preserve QR structure (unlike SVD which needs recomputation)

  • Memory: Same O(mr + rn) storage as SVD

Storage Efficiency

Full matrix: O(mn) storage
QR: O(mr + rn) storage
Memory savings when r << min(m, n)

For a 1000×800 matrix with rank 10:

  • Dense: 800,000 elements

  • QR: 18,000 elements (~44x compression)

  • SVD: 18,010 elements (similar)

Important Notes

  • CRITICAL: Q MUST have orthonormal columns (Q.H @ Q = I_r). This is NOT verified at initialization for performance. Invalid inputs produce incorrect results without warning.

  • Use .is_orthogonal() to verify orthonormality if needed

  • Use .is_upper_triangular() to verify R structure

  • R diagonal elements |R[i,i]| serve as importance indicators (but not sorted)

  • After operations, may return LowRankMatrix (orthogonality not preserved)

  • Transposed mode enables efficient representation of A.H without recomputation

Q

Orthogonal matrix with orthonormal columns

Type:

ndarray, shape (m, r)

R

Upper triangular matrix

Type:

ndarray, shape (r, n)

shape

Shape of the represented matrix (m, n)

Type:

tuple

rank

Number of columns in Q (rank of the decomposition)

Type:

int

_transposed

Whether in transposed mode (X = R.H @ Q.H)

Type:

bool

Properties
----------
T

Transpose of the matrix

Type:

QR

H

Conjugate transpose (Hermitian) of the matrix

Type:

QR

conj

Complex conjugate of the matrix

Type:

QR

Methods Overview
----------------
Solving Linear Systems
  • solve(b) : Solve Xx = b via back/forward substitution

  • lstsq(b) : Least squares solution

  • pseudoinverse() : Moore-Penrose pseudoinverse

Matrix Operations
  • __add__ : Addition (preserves QR for matching transposed modes)

  • __sub__ : Subtraction (preserves QR for matching transposed modes)

  • __mul__ : Scalar or Hadamard multiplication

  • dot : Matrix-vector/matrix-matrix multiplication

  • hadamard : Element-wise multiplication

Validation
  • is_orthogonal() : Check Q orthonormality

  • is_upper_triangular() : Check R triangular structure

  • cond() : Condition number estimation

Truncation
  • truncate() : Remove columns with small R diagonal elements

Conversions
  • to_svd() : Convert to SVD format

  • from_svd() : Create from SVD

  • from_matrix() : Compute QR decomposition

  • from_low_rank() : QR of generic low-rank matrix

Norms
  • norm() : Matrix norms (Frobenius optimized for QR)
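
The solve path listed above can be sketched with plain SciPy, independent of the toolbox: for a square matrix X = Q @ R, the system X x = b reduces to the triangular system R x = Q.T @ b, solved by back substitution.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

# Hedged sketch of a QR-based solve: factor once, then each solve
# is one thin matvec plus O(r^2) back substitution.
rng = np.random.default_rng(7)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)

Q, R = qr(A, mode='economic')
x = solve_triangular(R, Q.T @ b)   # back substitution

assert np.allclose(A @ x, b)
```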

Class Methods
-------------
  • from_matrix(A) : Compute QR decomposition of matrix A

  • from_low_rank(LR) : Compute QR of a low-rank matrix

  • from_svd(svd) : Convert SVD to QR (Q=U, R=S@V.H)

  • qr(A) : Alias for from_matrix

  • generate_random(shape) : Generate a random QR for testing

Examples

Creating QR from a matrix:

>>> import numpy as np
>>> from low_rank_toolbox.matrices import QR
>>>
>>> A = np.random.randn(100, 80)
>>> X = QR.from_matrix(A)
>>> print(X.shape)  # (100, 80)
>>> print(X.rank)   # 80
>>> np.allclose(X.full(), A)  # True

Solving linear systems (faster than SVD):

>>> b = np.random.randn(100)
>>> x = X.solve(b)
>>> np.allclose(A @ x, b)  # True
>>>
>>> # Least squares for overdetermined systems
>>> x_ls = X.lstsq(b)
>>> residual = np.linalg.norm(A @ x_ls - b)

Efficient operations:

>>> # Addition preserves QR structure (same transposed mode)
>>> Y = QR.from_matrix(np.random.randn(100, 80))
>>> Z = X + Y  # Returns QR (may auto-truncate to min(m,n))
>>>
>>> # Frobenius norm computed from R only
>>> norm_fro = X.norm('fro')  # O(rn) vs O(mnr) for full matrix

Transposed mode for efficient A.H representation:

>>> # Instead of computing QR(A.H), use transposed flag
>>> X_t = QR.from_matrix(A, transposed=True)
>>> np.allclose(X_t.full(), A.T)  # True (for real A)
>>>
>>> # Conjugate transpose flips mode
>>> X_h = X.H  # Returns transposed QR
>>> assert X_h._transposed != X._transposed

Validation and diagnostics:

>>> X.is_orthogonal()  # Verify Q orthonormality
>>> X.is_upper_triangular()  # Verify R structure
>>> cond_approx = X.cond(exact=False)  # Fast diagonal approximation
>>> cond_exact = X.cond(exact=True)   # Exact via SVD of R

Truncation for rank reduction:

>>> X_trunc = X.truncate(r=10)  # Keep first 10 columns
>>> X_trunc = X.truncate(atol=1e-10)  # Remove small R[i,i]

Conversion between formats:

>>> from low_rank_toolbox.matrices import SVD
>>> X_svd = X.to_svd()  # Convert to SVD (requires SVD of R)
>>> Y = QR.from_svd(X_svd)  # Convert back (Q=U, R=S@V.H)

Memory efficiency:

>>> X.compression_ratio()  # < 1.0 means memory savings
>>> X.memory_usage('MB')  # Actual memory used

See also

SVD

Singular Value Decomposition (diagonal middle matrix)

QuasiSVD

Generalized SVD with non-diagonal middle matrix

LowRankMatrix

Base class for low-rank representations

Notes

  • QR is particularly efficient for solving linear systems and least squares

  • For repeated solves with same A, QR is faster than LU or SVD

  • Column pivoting available via scipy.linalg.qr(pivoting=True) but not integrated into this class (see documentation for details)

  • Operations exploit orthogonality and triangular structure for efficiency

  • Condition number can be estimated quickly from R diagonal (O(r) vs O(r³))

property H: QR

Return the conjugate transpose (Hermitian) of the QR matrix.

Returns:

Conjugate transpose with flipped transposed mode

Return type:

QR

Notes

If X = Q @ R, then X.H = R.H @ Q.H (transposed mode).
If X = R.H @ Q.H, then X.H = (R.H @ Q.H).H = Q @ R (standard mode).

The transposed flag is always flipped (unlike .conj, which preserves it). For real matrices, .H is equivalent to .T.

Examples

>>> A = np.random.randn(100, 80) + 1j * np.random.randn(100, 80)
>>> X = QR.from_matrix(A)
>>> X_h = X.H
>>> np.allclose(X_h.full(), A.T.conj())  # True
>>> assert X_h._transposed != X._transposed  # Mode flipped
>>>
>>> # Double conjugate transpose returns to original mode
>>> X_hh = X.H.H
>>> assert X_hh._transposed == X._transposed
>>> np.allclose(X_hh.full(), X.full())  # True
property Q: ndarray

Orthogonal matrix Q with orthonormal columns.

Returns:

The orthogonal factor with Q.H @ Q = I_r.

Return type:

ndarray, shape (m, r)

Notes

In standard mode (transposed=False): directly returns stored Q. In transposed mode (transposed=True): reconstructs Q from internal storage.

ASSUMPTION: Orthonormality is assumed, not verified. Use .is_orthogonal() to validate if needed.

Examples

>>> X = QR.from_matrix(np.random.randn(100, 80))
>>> Q = X.Q
>>> print(Q.shape)  # (100, 80)
>>> print(np.allclose(Q.T @ Q, np.eye(80)))  # True
property R: ndarray

Upper triangular matrix R.

Returns:

The upper triangular factor where R[i, j] ≈ 0 for i > j.

Return type:

ndarray, shape (r, n)

Notes

In standard mode (transposed=False): directly returns stored R. In transposed mode (transposed=True): reconstructs R from internal storage.

The diagonal elements |R[i,i]| indicate column importance (but unlike SVD, they are NOT sorted by magnitude).

ASSUMPTION: Upper triangular structure is assumed. Use .is_upper_triangular() to validate if needed.

Examples

>>> X = QR.from_matrix(np.random.randn(100, 80))
>>> R = X.R
>>> print(R.shape)  # (80, 80)
>>> print(np.allclose(np.tril(R, -1), 0))  # True (lower triangle is zero)
>>> print(np.diag(R))  # Diagonal elements (importance indicators)
property T: QR

Return the transpose of the QR matrix.

For real matrices, .T is equivalent to .H. For complex matrices, .T transposes without conjugating.

Returns:

Transposed QR matrix

Return type:

QR

Notes

If X = Q @ R, then X.T = R.T @ Q.T = conj(R).H @ conj(Q).H.
If X = R.H @ Q.H, then X.T = (R.H @ Q.H).T = conj(Q) @ conj(R).

To obtain the transpose (rather than the conjugate transpose), the factors Q and R are conjugated and the transposed mode is flipped.

Examples

>>> A = np.random.randn(100, 80)
>>> X = QR.from_matrix(A)
>>> X_t = X.T
>>> np.allclose(X_t.full(), A.T)  # True
>>> assert X_t._transposed != X._transposed  # Mode flipped
__init__(Q, R, transposed=False, **extra_data)[source]

Initialize a QR decomposition.

Parameters:
  • Q (ndarray) – Orthogonal matrix (m x r) with orthonormal columns. ASSUMED to have orthonormal columns: Q.H @ Q = I_r. Not verified at initialization for performance.

  • R (ndarray) – Upper triangular matrix (r x n)

  • transposed (bool, optional) – If True, represents X = R.H @ Q.H instead of X = Q @ R, by default False. For real matrices, this is equivalent to X = R.T @ Q.T.

  • **extra_data – Additional data to store with the matrix

Notes

Standard form (transposed=False): X = Q @ R
Transposed form (transposed=True): X = R.H @ Q.H

The transposed form is useful when you have computed QR(A.H) and want to represent A efficiently.

cond(p=2, exact=False)[source]

Compute condition number efficiently using R matrix.

For QR = Q @ R with orthogonal Q:

cond(Q @ R) = cond(R)

The condition number is preserved by orthogonal transformations.

Parameters:
  • p (int or str, optional) – Norm type, by default 2 (spectral norm). 2: spectral condition number (ratio of largest to smallest singular value); ‘fro’: Frobenius norm condition number; 1, -1, inf, -inf: other matrix norms.

  • exact (bool, optional) – If True, compute exact condition number using numpy.linalg.cond. If False (default), use fast diagonal approximation for p=2. For other norms, always uses exact computation.

Returns:

Condition number of the matrix

Return type:

float

Raises:

ValueError – If R has zeros on diagonal (singular matrix)

Notes

Fast approximation (p=2, exact=False): For upper triangular R, estimates the 2-norm condition number from diagonal elements: cond_2(R) ≈ max(|diag(R)|) / min(|diag(R)|)

This is a LOWER BOUND for the true condition number. The exact value requires SVD of R: cond_2(R) = σ_max(R) / σ_min(R)

The approximation is exact for diagonal matrices and reasonable for well-conditioned matrices, but can significantly underestimate the condition number for ill-conditioned non-diagonal triangular matrices.

Exact computation (exact=True or other norms): Uses numpy.linalg.cond(R, p) which computes the exact condition number. This requires SVD for p=2, which is O(r³).

Computational cost:

  • p=2, exact=False: O(r) – diagonal extraction only

  • p=2, exact=True: O(r³) – requires SVD of R

  • Other norms: O(r²) to O(r³) depending on norm type

Caching: Results are cached for repeated queries with the same (norm, exact) pair.

Singularity: If any diagonal element is zero (within machine precision), the approximation returns np.inf to indicate singular matrix.

Why use approximation? For large ranks, computing exact condition number via SVD is expensive. The diagonal approximation provides a quick lower bound that’s sufficient for many applications (checking if matrix is well-conditioned).

Examples

>>> A = np.random.randn(100, 80)
>>> X = QR.from_matrix(A)
>>>
>>> # Fast approximation
>>> cond_approx = X.cond(2)  # O(r)
>>>
>>> # Exact computation
>>> cond_exact = X.cond(2, exact=True)  # O(r³)
>>>
>>> # For diagonal matrices, approximation is exact
>>> A_diag = np.diag([1e10, 1e5, 1e0, 1e-5, 1e-10])
>>> X_diag = QR.from_matrix(A_diag)
>>> print(X_diag.cond(2))  # 1e20 (exact for diagonal)
>>>
>>> # Other norms
>>> cond_fro = X.cond('fro')  # Always exact

See also

numpy.linalg.cond

General condition number computation

QuasiSVD.cond_estimate

Similar method for QuasiSVD

is_upper_triangular

Check if R is upper triangular

property conj: QR

Return the complex conjugate of the QR matrix.

Returns:

Complex conjugate of the QR matrix with same transposed mode

Return type:

QR

Notes

If X = Q @ R, then conj(X) = conj(Q) @ conj(R).
If X = R.H @ Q.H, then conj(X) = conj(R).H @ conj(Q).H = conj(R.H) @ conj(Q.H).

The transposed flag is preserved (unlike .T which flips it).

Examples

>>> A = np.random.randn(100, 80) + 1j * np.random.randn(100, 80)
>>> X = QR.from_matrix(A)
>>> X_conj = X.conj
>>> np.allclose(X_conj.full(), A.conj())  # True
>>> assert X_conj._transposed == X._transposed  # Mode preserved
dot(other, side='right', dense_output=False)[source]

Matrix and vector multiplication

Parameters:
  • other (LowRankMatrix, ndarray, spmatrix) – Matrix or vector to multiply with.

  • side (str, optional) – Whether to multiply on the left or right, by default ‘right’.

  • dense_output (bool, optional) – Whether to return a dense matrix or a low-rank matrix, by default False.

Return type:

QR | LowRankMatrix | ndarray
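Since the exact return type depends on other, side, and dense_output, here is a toolbox-independent NumPy sketch of the underlying identity: right multiplication only touches the second factor, so X @ B = Q @ (R @ B) and the orthogonal factor Q is reused.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 80))  # rank-8 matrix
Q, R = np.linalg.qr(A)  # economic QR: Q is (100, 80), R is (80, 80)

B = rng.standard_normal((80, 5))

# Right multiplication reuses Q and only updates the R factor:
# X @ B = (Q @ R) @ B = Q @ (R @ B)
result_lowrank = Q @ (R @ B)
result_dense = A @ B
assert np.allclose(result_lowrank, result_dense)
```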

classmethod from_low_rank(low_rank, transposed=False, **extra_data)[source]

Compute the QR decomposition from a low-rank representation.

Parameters:
  • low_rank (LowRankMatrix) – Low rank matrix to decompose

  • transposed (bool, optional) – If True, compute transposed form, by default False

  • **extra_data – Additional data to store

Returns:

QR decomposition object

Return type:

QR

classmethod from_matrix(matrix, mode='economic', transposed=False, **extra_data)[source]

Compute the QR decomposition of a matrix.

Parameters:
  • matrix (ndarray) – Matrix to decompose

  • mode (str, optional) – QR mode (‘economic’ or ‘complete’), by default ‘economic’

  • transposed (bool, optional) – If True, return the transposed form X = R.H @ Q.H, by default False Standard form (False): X = Q @ R Transposed form (True): X = R.H @ Q.H = matrix.H

  • **extra_data – Additional data to store

Returns:

QR decomposition object where full() reconstructs the input matrix

Return type:

QR

Examples

>>> A = np.random.randn(100, 80)
>>> X = QR.from_matrix(A)  # X = Q @ R, X.full() == A
>>> X_T = QR.from_matrix(A, transposed=True)  # X_T = R.H @ Q.H, X_T.full() == A.H
classmethod from_svd(svd, transposed=False, **extra_data)[source]

Convert SVD to QR format.

Simply uses Q = U and R = S @ V.H from the SVD representation. This is efficient and preserves orthogonality.

Parameters:
  • svd (SVD) – SVD object to convert from (X = U @ diag(s) @ V.H)

  • transposed (bool, optional) – If True, compute transposed form, by default False

  • **extra_data – Additional data to store (overrides svd._extra_data)

Returns:

QR decomposition object where full() reconstructs the SVD matrix

Return type:

QR

Notes

Algorithm:

For standard mode (transposed=False):
  • Set Q = U (already orthogonal from SVD)

  • Set R = diag(s) @ V.H = S @ V.H

  • Result: X = Q @ R = U @ S @ V.H (same as SVD)

For transposed mode (transposed=True):
  • Same Q and R as above

  • Interpretation: X = R.H @ Q.H = V @ S @ U.H = (U @ S @ V.H).H

Computational cost:
  • Matrix multiplication S @ V.H: O(r²n) where r = rank
  • No QR decomposition needed (Q = U is already orthogonal)
  • Total: O(r²n)

Advantages over to_svd():
  • Cheaper: O(r²n) vs O(r²(m+n))
  • No loss of orthogonality (U is already orthogonal)
  • No SVD computation needed
  • Preserves the singular value structure in R

When to use:
  • When you have an SVD and need the QR format
  • For efficient products that reuse the orthogonal factor Q

Note that R = S @ V.H is generally NOT upper triangular (see the example below), so solve() and lstsq() cannot rely on back substitution for the result of this conversion.

Examples

>>> A = np.random.randn(100, 80)
>>> X_svd = SVD.from_matrix(A)
>>> X_qr = QR.from_svd(X_svd)
>>> print(X_qr.shape)  # (100, 80)
>>> print(X_qr.rank)   # Same as X_svd.rank
>>> np.allclose(X_svd.full(), X_qr.full())  # True
>>> # Verify Q is orthogonal and R is upper triangular
>>> print(X_qr.is_orthogonal())  # True
>>> print(X_qr.is_upper_triangular())  # False (R = S @ V.H not triangular!)
>>> # Round-trip conversion
>>> X_svd2 = X_qr.to_svd()
>>> np.allclose(X_svd.full(), X_svd2.full())  # True

See also

to_svd

Convert QR to SVD format (requires SVD computation of R)

SVD

Singular Value Decomposition class

classmethod generate_random(shape, transposed=False, seed=None, **extra_data)[source]

Generate a random QR decomposition.

Parameters:
  • shape (tuple) – Shape of the matrix to generate (m, n)

  • transposed (bool, optional) – If True, generate transposed form, by default False

  • seed (int, optional) – Random seed for reproducibility, by default None

  • **extra_data – Additional data to store

Returns:

Random QR decomposition object

Return type:

QR

Examples

>>> X = QR.generate_random((100, 80), seed=42)
>>> print(X.shape)  # (100, 80)
>>> print(X.is_orthogonal())  # True
hadamard(other)[source]

Element-wise (Hadamard) product with another matrix.

Parameters:

other (QR or ndarray) – Other matrix to perform Hadamard product with

Returns:

Resulting matrix after Hadamard product

Return type:

QR, or ndarray

Notes

The rank of the result is at most the product of the two ranks. This operation may be inefficient for large ranks; in that case a warning is raised, and one should consider converting to dense with .todense() first.
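The rank-growth claim can be checked directly with NumPy (a sketch independent of the toolbox): the element-wise product of a rank-2 and a rank-3 matrix has rank at most 6.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))  # rank 2
Y = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank 3

H = X * Y  # Hadamard (element-wise) product
rank_H = np.linalg.matrix_rank(H)
assert rank_H <= 2 * 3  # rank of a Hadamard product is at most the product of ranks
```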

is_orthogonal(tol=1e-12)[source]

Check if Q has orthonormal columns within a tolerance.

Parameters:

tol (float, optional) – Tolerance for orthogonality check, by default 1e-12

Returns:

True if Q.H @ Q is approximately equal to identity

Return type:

bool

Notes

Verifies that Q.H @ Q = I_k (orthonormal columns), where k is the number of columns in Q. This is always true when QR is constructed via from_matrix() or qr(), but may not hold if Q and R are provided manually. Note that Q may have more columns than the reported rank after Hadamard products.

Computational cost: O(k²m) where k is the number of columns in Q. Result is cached for the default tolerance.

is_upper_triangular(tol=1e-12)[source]

Check if R is upper triangular within a tolerance.

Parameters:

tol (float, optional) – Tolerance for triangularity check, by default 1e-12

Returns:

True if all elements below the diagonal of R are approximately zero

Return type:

bool

Notes

Verifies that R[i,j] ≈ 0 for all i > j (strict lower triangular part is zero). This is always true when QR is constructed via from_matrix() or qr(), but may not hold if Q and R are provided manually.

For methods like solve() and lstsq(), having an upper triangular R enables efficient back substitution. If R is not upper triangular, these methods may produce incorrect results.

Computational cost: O(r²) where r = min(rank, n) is the number of rows in R. Result is cached for the default tolerance.

Examples

>>> X = QR.from_matrix(A)
>>> X.is_upper_triangular()  # True
True
>>> Q = np.random.randn(10, 5)
>>> R = np.random.randn(5, 8)  # Not upper triangular
>>> Y = QR(Q, R)
>>> Y.is_upper_triangular()  # False
False
lstsq(b, rcond=None)[source]

Least squares solution via QR decomposition.

Computes x that minimizes ||Xx - b||_2

Parameters:
  • b (ndarray) – Right-hand side, shape (m,) or (m, k)

  • rcond (float, optional) – Relative condition number threshold for rank determination. Singular values smaller than rcond * largest_singular_value are treated as zero. If None, machine precision is used.

Returns:

Least squares solution, shape (n,) or (n, k)

Return type:

ndarray

Notes

For overdetermined systems (m > n), finds x minimizing ||Ax - b||_2.
For underdetermined systems (m < n), finds the minimum-norm solution.

Uses the QR decomposition:
  • For X = Q @ R: x = R^+ @ Q.H @ b, where R^+ is the pseudoinverse of R
  • Handles rank deficiency by zeroing small diagonal elements of R

Examples

>>> A = np.random.randn(100, 80)
>>> b = np.random.randn(100)
>>> X = QR.from_matrix(A)
>>> x = X.lstsq(b)
>>> residual = np.linalg.norm(A @ x - b)
norm(ord='fro')[source]

Compute matrix norm with QR-specific optimizations.

Parameters:

ord (str or int, optional) – Norm type, by default ‘fro’ (Frobenius).
  • ‘fro’: Frobenius norm (optimized: ||Q @ R||_F = ||R||_F for orthogonal Q)
  • 2: Spectral norm (requires full matrix)
  • 1, -1, inf, -inf: Other matrix norms (require full matrix)

Returns:

Matrix norm

Return type:

float

Notes

For Q with orthonormal columns:

||Q @ R||_F = ||R||_F, and more generally every unitarily invariant norm (including the spectral norm) satisfies ||Q @ R|| = ||R||. The implementation exploits this only for the Frobenius norm; other norms are computed from the full matrix.

Results are cached for efficiency.

Examples

>>> X = QR.from_matrix(A)
>>> X.norm('fro')  # Efficient: computed from R only
>>> X.norm(2)      # Requires full matrix
pseudoinverse(rcond=None)[source]

Compute Moore-Penrose pseudoinverse using QR factorization.

For X = Q @ R, computes X^+ = R^+ @ Q.H

Parameters:

rcond (float, optional) – Cutoff for small singular values in R. Singular values smaller than rcond * largest_singular_value are set to zero. If None, machine precision is used.

Returns:

Pseudoinverse matrix, shape (n, m)

Return type:

ndarray

Notes

The pseudoinverse satisfies the four Moore-Penrose conditions:
  1. X @ X^+ @ X = X
  2. X^+ @ X @ X^+ = X^+
  3. (X @ X^+).H = X @ X^+
  4. (X^+ @ X).H = X^+ @ X

For full-rank matrices:
  • Overdetermined (m > n): X^+ = (X.H @ X)^{-1} @ X.H = R^{-1} @ Q.H
  • Underdetermined (m < n): X^+ = X.H @ (X @ X.H)^{-1}

Computational cost: O(n²m) for computing R^+ and matrix multiplication

Examples

>>> A = np.random.randn(100, 80)
>>> X = QR.from_matrix(A)
>>> X_pinv = X.pseudoinverse()
>>> np.allclose(A @ X_pinv @ A, A)  # Property 1
True
>>> np.allclose(X_pinv @ A @ X_pinv, X_pinv)  # Property 2
True
classmethod qr(matrix, mode='economic', transposed=False, **extra_data)[source]

Compute a QR decomposition (alias for from_matrix).

Parameters:
  • matrix (ndarray) – Matrix to decompose

  • mode (str, optional) – QR mode (‘economic’ or ‘complete’), by default ‘economic’

  • transposed (bool, optional) – If True, compute transposed form, by default False

  • **extra_data – Additional data to store

Returns:

QR decomposition object

Return type:

QR

solve(b, method='direct')[source]

Solve Xx = b using QR factorization.

For X = Q @ R, solve via:

Q @ R @ x = b
R @ x = Q.H @ b
x = R^{-1} @ Q.H @ b  (back substitution)

Parameters:
  • b (ndarray) – Right-hand side, shape (m,) or (m, k)

  • method (str, optional) – ‘direct’ (default): Use back substitution (fast, requires full rank) ‘lstsq’: Use least squares (handles rank deficiency)

Returns:

Solution x, shape (n,) or (n, k)

Return type:

ndarray

Raises:
  • ValueError – If matrix dimensions don’t match RHS dimensions

  • np.linalg.LinAlgError – If matrix is singular (method=’direct’)

Notes

Given the factorization, QR solve via back substitution costs O(n²), whereas computing an SVD costs O(n³). Best for overdetermined systems (more equations than unknowns).

For standard mode (X = Q @ R):
  • Solve R @ x = Q.H @ b via back substitution

For transposed mode (X = R.H @ Q.H):
  • Solve R.H @ Q.H @ x = b

  • Equivalent to Q @ R @ y = b where y = conj(x)

Examples

>>> A = np.random.randn(100, 80)
>>> b = np.random.randn(100)
>>> X = QR.from_matrix(A)
>>> # For overdetermined systems (more rows than columns), use lstsq instead:
>>> x = X.lstsq(b)
>>> # For square full-rank systems:
>>> A_square = np.random.randn(80, 80)
>>> X_square = QR.from_matrix(A_square)
>>> b_square = np.random.randn(80)
>>> x = X_square.solve(b_square)
>>> np.allclose(A_square @ x, b_square)
True
to_svd()[source]

Convert QR to SVD format.

Computes SVD of R: R = U_R @ S @ V_R.H Then: X = Q @ R = (Q @ U_R) @ S @ V_R.H

Returns:

Equivalent SVD representation with same shape and rank

Return type:

SVD

Notes

Algorithm:

For standard mode (X = Q @ R):
  1. Compute SVD of R: R = U_R @ diag(s) @ V_R.H

  2. Construct: X = Q @ R = (Q @ U_R) @ diag(s) @ V_R.H

  3. Return SVD(Q @ U_R, s, V_R)

For transposed mode (X = R.H @ Q.H):
  1. Compute SVD of R: R = U_R @ diag(s) @ V_R.H

  2. X = (Q @ R).H = V_R @ diag(s) @ (Q @ U_R).H

  3. Return SVD(V_R, s, Q @ U_R)

Computational cost:
  • SVD of R: O(r²n), where r = rank and n = number of columns
  • Matrix multiplication Q @ U_R: O(mr²), where m = number of rows
  • Total: O(r²(m + n))

Orthogonality:
  • Input: Q has orthonormal columns, R is upper triangular
  • Output: U = Q @ U_R has orthonormal columns (product of matrices with orthonormal columns)
  • Output: V = V_R has orthonormal columns (from the SVD)
  • The result is a valid SVD with orthonormal U and V

Transposed mode: In both modes the returned SVD represents X itself, so full() of the result matches full() of the input; in transposed mode the roles of the left and right factors are swapped accordingly.

Examples

>>> A = np.random.randn(100, 80)
>>> X_qr = QR.from_matrix(A)
>>> X_svd = X_qr.to_svd()
>>> print(X_svd.shape)  # (100, 80)
>>> print(X_svd.rank)   # Same as X_qr.rank
>>> np.allclose(X_qr.full(), X_svd.full())  # True
>>> # Transposed mode
>>> X_qr_t = QR.from_matrix(A, transposed=True)
>>> X_svd_t = X_qr_t.to_svd()
>>> np.allclose(X_qr_t.full(), X_svd_t.full())  # True

See also

from_svd

Convert SVD to QR format

SVD

Singular Value Decomposition class

truncate(r=None, rtol=None, atol=np.float64(2.220446049250313e-14), inplace=False)[source]

Truncate QR to lower rank by removing columns with small R diagonal elements.

The rank is prioritized over the tolerance. The relative tolerance is prioritized over absolute tolerance. NOTE: By default, the truncation is done with respect to the machine precision (DEFAULT_ATOL).

Parameters:
  • r (int, optional) – Target rank (number of columns to keep). If None, use tolerance.

  • rtol (float, optional) – Relative tolerance. Remove columns where |R[i,i]| < rtol * max(|R[i,i]|). Default is DEFAULT_RTOL (None).

  • atol (float, optional) – Absolute tolerance. Remove columns where |R[i,i]| < atol. Default is DEFAULT_ATOL (~2.22e-14).

  • inplace (bool, optional) – If True, modify in place. Default False.

Returns:

Truncated QR decomposition

Return type:

QR

Notes

For QR, truncation keeps the first r columns of Q and first r rows of R. The diagonal elements of R, |R[i,i]|, serve as importance indicators.

Unlike SVD where singular values are sorted, R diagonal elements are NOT necessarily in decreasing order. However, for numerically stable QR decompositions, large diagonal elements indicate more important directions.

Algorithm:
  1. If r is specified: keep the first r columns/rows
  2. Else if rtol is specified: keep columns where |R[i,i]| > rtol * max(|R[i,i]|)
  3. Else if atol is specified: keep columns where |R[i,i]| > atol

Priority: r > rtol > atol

Examples

>>> X = QR.from_matrix(A)
>>> X_trunc = X.truncate(r=10)  # Keep only first 10 columns
>>> X_trunc = X.truncate(rtol=1e-10)  # Remove columns with small R[i,i]
>>> X_trunc = X.truncate(atol=1e-12)  # Absolute threshold

See also

SVD.truncate

Similar method for SVD (uses singular values)

Parameters:
  • Q (ndarray)

  • R (ndarray)

  • transposed (bool)

class low_rank_toolbox.matrices.QuasiSVD(U, S, V, **extra_data)[source]

Bases: LowRankMatrix

Quasi Singular Value Decomposition (Quasi-SVD) for low-rank matrices.

A generalization of the SVD where the middle matrix S is not necessarily diagonal. This representation is useful for operations that preserve orthogonality of U and V but may produce non-diagonal or even singular middle matrices.

Mathematical Representation

X = U @ S @ V.T

where: - U ∈ ℝ^(m×r) should have orthonormal columns (U^T @ U = I_r) - V ∈ ℝ^(n×q) should have orthonormal columns (V^T @ V = I_q) - S ∈ ℝ^(r×q) is a general matrix (not necessarily diagonal, may be singular)

Key Differences from SVD

  • SVD: S is diagonal with non-negative singular values, guaranteed non-singular

  • QuasiSVD: S is a general matrix, may be non-diagonal and potentially singular

  • QuasiSVD can represent intermediate results of matrix operations efficiently

  • Converting QuasiSVD → SVD requires O(r³) operations (SVD of S)

Storage Efficiency

Full matrix: O(mn) storage QuasiSVD: O(mr + rq + nq) storage Memory savings when r, q << min(m, n)

Important Notes

  • U and V are ASSUMED to have orthonormal columns (not verified at initialization)

  • Use .is_orthogonal() to verify orthonormality if needed

  • After matrix operations, orthogonality of U and V is preserved when possible, otherwise returns LowRankMatrix

  • After addition/subtraction, S may become singular or ill-conditioned

  • Use .truncate() to convert to SVD and remove small/zero singular values

  • Use .to_svd() to convert to diagonal form without truncation

U

Left matrix (assumed to have orthonormal columns)

Type:

ndarray, shape (m, r)

S

Middle matrix (general matrix, may be singular)

Type:

ndarray, shape (r, q)

V

Right matrix (assumed to have orthonormal columns)

Type:

ndarray, shape (n, q)

Vh

Hermitian conjugate of V (V.T.conj())

Type:

ndarray, shape (q, n)

Vt

Transpose of V (without conjugate)

Type:

ndarray, shape (q, n)

Ut

Transpose of U (without conjugate)

Type:

ndarray, shape (r, m)

Uh

Hermitian conjugate of U (U.T.conj())

Type:

ndarray, shape (r, m)

K

Product U @ S, computed on demand and not cached

Type:

ndarray, shape (m, r)

L

Product V @ S.T, computed on demand and not cached

Type:

ndarray, shape (n, r)

Properties
----------
shape

Shape of the represented matrix (m, n)

Type:

tuple

rank

Current rank of the representation (not necessarily matrix rank)

Type:

int

sing_vals[source]

Singular values of S (computed on demand and cached)

Type:

ndarray

Return type:

ndarray

Methods Overview
----------------
Core Operations
  • __add__, __sub__ : Addition/subtraction maintaining low-rank structure

  • __mul__ : Scalar or Hadamard (element-wise) multiplication

  • dot : Matrix-vector or matrix-matrix multiplication

  • hadamard : Element-wise multiplication with another matrix

Conversion & Truncation
  • to_svd() : Convert to SVD with diagonal S

  • truncate() : Convert to SVD and remove small singular values

Properties & Checks
  • is_symmetric() : Check if matrix is symmetric

  • is_orthogonal() : Check if U and V are orthonormal

  • is_singular() : Check if S is singular

  • norm() : Compute matrix norm (Frobenius, 2-norm, nuclear)

Class Methods
  • multi_add() : Efficient addition of multiple QuasiSVD matrices

  • multi_dot() : Efficient multiplication of multiple QuasiSVD matrices

  • generalized_nystroem() : Randomized low-rank approximation

Configuration
-------------
Default behavior is controlled by:
  • DEFAULT_ATOL (machine precision): absolute tolerance for truncation
  • DEFAULT_RTOL (None): relative tolerance for truncation

Examples

>>> import numpy as np
>>> from low_rank_toolbox.matrices import QuasiSVD
>>>
>>> # Create orthonormal matrices
>>> m, n, r = 100, 80, 10
>>> U, _ = np.linalg.qr(np.random.randn(m, r))
>>> V, _ = np.linalg.qr(np.random.randn(n, r))
>>> S = np.random.randn(r, r)
>>>
>>> # Create QuasiSVD
>>> X = QuasiSVD(U, S, V)
>>> print(X.shape)  # (100, 80)
>>> print(X.rank)   # 10
>>>
>>> # Operations preserve low-rank structure
>>> Y = X + X  # rank = 20 (sum of ranks)
>>> Z = X @ X.T  # matrix multiplication
>>>
>>> # Convert to SVD (diagonal S)
>>> X_svd = X.to_svd()
>>>
>>> # Truncate small singular values
>>> X_trunc = X.truncate(rtol=1e-10)
>>>
>>> # Addition of multiple matrices
>>> W = QuasiSVD.multi_add([X, -X])  # rank = 2*rank(X), call .truncate() to reduce

See also

SVD

Singular Value Decomposition with diagonal middle matrix

LowRankMatrix

Base class for low-rank matrix representations

Notes

  • This class is designed for numerical linear algebra applications

  • Operations may accumulate numerical errors; consider periodic truncation

  • Orthonormality of U and V is assumed but NOT enforced at initialization

  • S may become singular after operations like addition or subtraction

  • Use .is_orthogonal() to verify the orthonormality assumption

  • Use .is_singular() to check if S has become singular

  • For exact SVD with guaranteed diagonal non-singular S, use the SVD class

  • The representation is not unique: (U, S, V) and (U Q, Q^T S R, V R) represent the same matrix for any orthogonal Q, R
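A NumPy sketch of this non-uniqueness (independent of the toolbox): rotating the factors by orthogonal matrices Q and R leaves the represented matrix unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 30, 20, 4
U, _ = np.linalg.qr(rng.standard_normal((m, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = rng.standard_normal((r, r))
Q, _ = np.linalg.qr(rng.standard_normal((r, r)))  # orthogonal Q
R, _ = np.linalg.qr(rng.standard_normal((r, r)))  # orthogonal R

X1 = U @ S @ V.T
X2 = (U @ Q) @ (Q.T @ S @ R) @ (V @ R).T  # rotated representation of the same matrix
assert np.allclose(X1, X2)
```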

property H: QuasiSVD

Hermitian conjugate (conjugate transpose) of the QuasiSVD matrix.

For X = U @ S @ V.H, the Hermitian conjugate is:

X.H = (U @ S @ V.H).H = V @ S.H @ U.H = V @ S*.T @ U.H

Returns QuasiSVD(V, S.T.conj(), U) which stores [V, S*.T, U.H] and represents V @ S*.T @ U.H.

This is equivalent to X.T.conj() or X.conj().T.

Returns:

Hermitian conjugate in QuasiSVD format

Return type:

QuasiSVD

Notes

For real matrices, X.H is equivalent to X.T.
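A toolbox-independent NumPy check of the stated identity X.H = V @ S.H @ U.H for a complex Quasi-SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 30, 20, 4
U, _ = np.linalg.qr(rng.standard_normal((m, r)) + 1j * rng.standard_normal((m, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r)))
S = rng.standard_normal((r, r)) + 1j * rng.standard_normal((r, r))

X = U @ S @ V.conj().T
# The Hermitian conjugate keeps the Quasi-SVD structure: swap U and V, conjugate-transpose S
X_H = V @ S.conj().T @ U.conj().T
assert np.allclose(X_H, X.conj().T)
```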

property K: ndarray

Compute and return the product U @ S on demand.

Returns:

Product U @ S, shape (m, r)

Return type:

ndarray

property L: ndarray

Compute and return the product V @ S.T on demand.

Returns:

Product V @ S.T, shape (n, r)

Return type:

ndarray
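The K and L products satisfy X = K @ V.T and X.T = L @ U.T (real case); a quick NumPy check of these identities, independent of the toolbox:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 30, 20, 4
U, _ = np.linalg.qr(rng.standard_normal((m, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = rng.standard_normal((r, r))

X = U @ S @ V.T
K = U @ S     # the K factor: X = K @ V.T
L = V @ S.T   # the L factor: X.T = L @ U.T
assert np.allclose(K @ V.T, X)
assert np.allclose(L @ U.T, X.T)
```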

property S
property T: QuasiSVD

Transpose of the QuasiSVD matrix (without conjugation).

For X = U @ S @ V.H, the transpose is:

X.T = (U @ S @ V.H).T = V* @ S.T @ U.T

where V* denotes conjugate (not Hermitian).

Returns QuasiSVD(V.conj(), S.T, U.conj()) which stores [V*, S.T, U*.H] and represents V* @ S.T @ U.T.

Returns:

Transposed matrix in QuasiSVD format

Return type:

QuasiSVD

Notes

For real matrices, this is equivalent to QuasiSVD(V, S.T, U). For complex matrices, U and V must be conjugated.

property U
property Uh
property Ut
property V
property Vh
property Vt
__init__(U, S, V, **extra_data)[source]

Create a low-rank matrix stored in Quasi-SVD form: Y = U @ S @ V.T

NOTE: U and V must have orthonormal columns. NOTE: S is not necessarily diagonal and can be rectangular. NOTE: Pass V itself, not V.T or V.H.

Parameters:
  • U (ndarray) – Left singular vectors, shape (m, r)

  • S (ndarray) – Middle matrix (not necessarily diagonal or square, may be singular), shape (r, q)

  • V (ndarray) – Right singular vectors, shape (n, q)

Raises:
  • ValueError – If matrix dimensions do not match for multiplication.

  • TypeError – If S is provided as a 1D array or diagonal matrix (use SVD class instead).

Warning

MemoryEfficiencyWarning

If the low-rank representation uses more memory than dense storage.

cond_estimate()[source]

Condition number of the QuasiSVD matrix, estimated from the singular values.

Returns:

Condition number of the matrix.

Return type:

float

conj()[source]

Complex conjugate of the QuasiSVD matrix.

Returns QuasiSVD instead of generic LowRankMatrix.

Returns:

Complex conjugate in QuasiSVD format

Return type:

QuasiSVD

diag()[source]

Extract the diagonal of the matrix efficiently.

For QuasiSVD: diag[i] = U[i,:] @ S @ V[i,:].T This is O(mr²) instead of O(mn) for the full matrix.

Returns:

The diagonal elements

Return type:

ndarray
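The per-entry formula can be vectorized with einsum; a NumPy sketch (toolbox-independent) that matches np.diag of the full matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 100, 80, 5
U, _ = np.linalg.qr(rng.standard_normal((m, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = rng.standard_normal((r, r))

# diag[i] = U[i, :] @ S @ V[i, :].T, computed for all i at once via einsum
k = min(m, n)
d = np.einsum('ir,rq,iq->i', U[:k], S, V[:k])
assert np.allclose(d, np.diag(U @ S @ V.T))
```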

dot(other, side='right', dense_output=False)[source]

Matrix multiplication between this QuasiSVD and other. The output is a QuasiSVD or a dense matrix, depending on the type of other. If two QuasiSVD matrices are multiplied, the new rank is at most the minimum of the two ranks.

If side is ‘right’ or ‘usual’, compute self @ other. If side is ‘left’ or ‘opposite’, compute other @ self.

Parameters:
  • other (QuasiSVD or ndarray) – Matrix to multiply

  • side (str, optional) – ‘left’ or ‘right’, by default ‘right’

  • dense_output (bool, optional) – If True, return a dense matrix. False by default.

Returns:

Result of the matrix multiplication

Return type:

QuasiSVD or ndarray

expm(inplace=False, **extra_data)[source]

Compute the matrix exponential exp(X) = e^X.

For QuasiSVD X = U @ S @ V.H, if X is Hermitian (V.H == U), then:

exp(X) = U @ exp(S) @ U.H

where exp(S) is the matrix exponential of S.

This is more efficient than general matrix exponentiation when the middle matrix S is small (r << n).

Parameters:
  • inplace (bool, optional) – If True, modify the current object in-place. Default is False.

  • **extra_data – Additional keyword arguments passed to __init__ for the new object.

Returns:

Matrix exponential in QuasiSVD format.

Return type:

QuasiSVD

Notes

Computational cost: O(r³) for matrix exponential of r×r matrix S (vs O(n³) for general n×n matrix).

This method currently only supports Hermitian matrices where V.H == U. For general matrices, the eigenvalue decomposition would be needed.

Examples

>>> S = np.array([[1.0, 0.1], [0.1, 0.5]])
>>> U, _ = np.linalg.qr(np.random.randn(100, 2))
>>> X = QuasiSVD(U, S, U)  # Symmetric/Hermitian
>>> # Note: Current implementation has a bug - references self.s which doesn't exist
>>> # Use SVD.expm() for diagonal S matrices instead
>>> # X_exp = X.expm()  # Will raise AttributeError

See also

sqrtm

Matrix square root

SVD.expm

Optimized version for diagonal S

classmethod from_qr(qr)[source]

Convert QR format to QuasiSVD.

Parameters:

qr (QR) – QR representation to convert

Returns:

QuasiSVD representation

Return type:

QuasiSVD

Examples

>>> X_qr = QR(Q, R)
>>> X = QuasiSVD.from_qr(X_qr)
classmethod generate_random(shape, rank, seed=1234, is_symmetric=False, **extra_data)[source]

Generate a random QuasiSVD matrix with orthonormal U and V.

Parameters:
  • shape (tuple) – Shape of the matrix (m, n)

  • rank (int) – Rank of the matrix

  • seed (int, optional) – Random seed for reproducibility, by default 1234

  • is_symmetric (bool, optional) – If True, generate a symmetric matrix (only for square shapes), by default False

Returns:

Random QuasiSVD matrix

Return type:

QuasiSVD

Raises:

ValueError – If is_symmetric is True but shape is not square.

hadamard(other)[source]

Hadamard product between two QuasiSVD matrices

The new rank is at most the product of the two ranks. NOTE: if the expected rank is too large, dense matrices are used for the computation, but the output is still an SVD.

Parameters:

other (QuasiSVD, LowRankMatrix or ndarray) – Matrix to multiply

Returns:

Result of the Hadamard product.

Return type:

QuasiSVD or SVD or ndarray

is_orthogonal()[source]

Check if U and V have orthonormal columns.

Returns:

True if both U.H @ U ≈ I and V.H @ V ≈ I, False otherwise.

Return type:

bool

Notes

Result is cached after first computation. Uses np.allclose with default tolerances. Orthogonality is ASSUMED at initialization but not enforced. This method verifies the assumption.

is_singular()[source]

Check if middle matrix S is numerically singular.

Returns:

True if S is singular (condition number >= 1/machine_eps), False otherwise.

Return type:

bool

Notes

Result is cached after first computation. Uses condition number test: cond(S) >= 1/ε where ε is machine precision. S may become singular after operations like addition or subtraction.
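A minimal NumPy sketch of the documented test (cond(S) >= 1/ε), independent of the toolbox:

```python
import numpy as np

eps = np.finfo(float).eps
S_good = np.diag([1.0, 0.5, 0.1])
S_bad = np.diag([1.0, 0.5, 0.0])  # exactly rank-deficient middle matrix

def is_singular(S):
    # numerically singular iff cond(S) >= 1/machine_eps, as documented
    return np.linalg.cond(S) >= 1.0 / eps

assert not is_singular(S_good)
assert is_singular(S_bad)
```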

is_symmetric()[source]

Check if the QuasiSVD matrix is symmetric.

Returns:

True if matrix is symmetric (square and U = V), False otherwise.

Return type:

bool

Notes

For QuasiSVD to be symmetric, U and V must be identical (within tolerance). This is a necessary condition when X = U @ S @ V.T. Uses np.allclose for comparison (tolerates small numerical errors).

lstsq(b, rtol=None, atol=np.float64(2.220446049250313e-14))[source]

Compute the least-squares solution to Xx ≈ b.

Minimizes ||Xx - b||₂ using the pseudoinverse: x = X⁺ @ b

Parameters:
  • b (ndarray) – Right-hand side vector or matrix. Shape (m,) or (m, k).

  • rtol (float, optional) – Relative tolerance for pseudoinverse computation. Default is None (uses machine precision * max(m,n)).

  • atol (float, optional) – Absolute tolerance for pseudoinverse. Default is DEFAULT_ATOL.

Returns:

Least-squares solution x. Shape (n,) or (n, k).

Return type:

ndarray

Examples

>>> X = QuasiSVD.generate_random((100, 80), 20)
>>> b = np.random.randn(100)
>>> x = X.lstsq(b)
>>> # x minimizes ||X @ x - b||
>>> residual = np.linalg.norm(X.dot(x) - b)

See also

pseudoinverse

Compute pseudoinverse

solve

Solve linear system

classmethod multi_add(matrices)[source]

Addition of multiple QuasiSVD matrices.

This is efficiently done by stacking the U, S, V matrices and then re-orthogonalizing. The resulting rank is at most the sum of all input ranks.

Parameters:

matrices (List[QuasiSVD]) – Matrices to add

Returns:

Sum of the matrices.

Return type:

QuasiSVD

Notes

  • Adding matrices increases rank: rank(X + Y) <= rank(X) + rank(Y)

  • X - X has rank 2*rank(X) but represents zero. Call .truncate() to reduce rank.

Examples

>>> Z = QuasiSVD.multi_add([X, -X])  # rank = 2*rank(X), but represents zero
>>> Z = Z.truncate(atol=1e-14)  # rank ~ 0
classmethod multi_dot(matrices)[source]

Matrix multiplication of several QuasiSVD matrices.

The rank of the output is at most the minimum of the ranks of the first and last inputs. The matrices are multiplied all at once, so this method is more efficient than multiplying them one by one.

Parameters:

matrices (List[QuasiSVD]) – Matrices to multiply

Returns:

Product of the matrices.

Return type:

QuasiSVD

norm(ord='fro')[source]

Calculate matrix norm with caching and optimization for orthogonal case.

Parameters:

ord (str or int, default='fro') – Order of the norm. Supported values: - ‘fro’: Frobenius norm (default) - 2: Spectral norm (largest singular value) - ‘nuc’: Nuclear norm (sum of singular values) - Other values: computed via np.linalg.norm(self.full(), ord=ord)

Returns:

The computed norm value.

Return type:

float

Notes

Result is cached after first computation for each norm type. For orthogonal U and V, common norms are computed efficiently from the singular values of S (O(r³)) instead of forming the full matrix (O(mnr)):
  • Frobenius: ||X||_F = ||S||_F
  • Spectral (2-norm): ||X||_2 = σ_max(S)
  • Nuclear: ||X||_* = Σ σ_i(S)

norm_squared()[source]

Compute the squared Frobenius norm efficiently.

For orthogonal U and V: ||X||²_F = ||S||²_F This is O(r²) instead of O(mnr) for generic computation.

Returns:

The squared Frobenius norm

Return type:

float
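The identity ||U @ S @ V.T||_F = ||S||_F for orthonormal U and V can be verified directly with NumPy (a toolbox-independent sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((100, 6)))
V, _ = np.linalg.qr(rng.standard_normal((80, 6)))
S = rng.standard_normal((6, 6))

# For orthonormal U and V, the Frobenius norm depends only on S
assert np.isclose(np.linalg.norm(U @ S @ V.T, 'fro'), np.linalg.norm(S, 'fro'))
```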

numerical_health_check(verbose=True)[source]

Check the numerical health of the QuasiSVD representation.

This method checks: - Orthogonality of U and V - Condition number of S - Presence of near-zero singular values - Memory efficiency

Parameters:

verbose (bool, optional) – If True, print a summary. Default is True.

Returns:

Dictionary with health metrics:
  • ‘orthogonal_U’: bool, whether U is orthogonal
  • ‘orthogonal_V’: bool, whether V is orthogonal
  • ‘orthogonality_error_U’: float, ||U.H @ U - I||_F
  • ‘orthogonality_error_V’: float, ||V.H @ V - I||_F
  • ‘condition_number_S’: float, condition number of S
  • ‘is_singular’: bool, whether S is numerically singular
  • ‘min_singular_value’: float, smallest singular value of S
  • ‘max_singular_value’: float, largest singular value of S
  • ‘singular_value_ratio’: float, max/min singular values
  • ‘compression_ratio’: float, storage efficiency
  • ‘memory_efficient’: bool, whether low-rank is beneficial
  • ‘recommendations’: list of str, suggested actions

Return type:

dict

Examples

>>> X = QuasiSVD.generate_random((100, 80), 10)
>>> health = X.numerical_health_check()
>>> if not health['orthogonal_U']:
...     X = X.reorthogonalize()
property numerical_rank: int

Numerical rank of the QuasiSVD matrix.

The numerical rank is defined as the number of non-negligible singular values up to the machine precision.

Returns:

Numerical rank of the matrix.

Return type:

int
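A NumPy sketch of one common convention for numerical rank (counting singular values above machine precision times the largest); the exact threshold used by this property may differ:

```python
import numpy as np

eps = np.finfo(float).eps
s = np.array([3.0, 1.0, 1e-8, 1e-17])  # singular values of S
# numerical rank: singular values above eps * sigma_max are counted
num_rank = int(np.sum(s > eps * s.max()))
assert num_rank == 3  # 1e-17 falls below the threshold
```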

plot_singular_value_decay(semilogy=True, show=True, **kwargs)[source]

Plot the singular value decay of the middle matrix S.

This helps visualize the numerical rank and identify a good truncation threshold.

Parameters:
  • semilogy (bool, optional) – If True, use logarithmic scale for y-axis. Default is True.

  • show (bool, optional) – If True, call plt.show(). Default is True.

  • **kwargs – Additional keyword arguments passed to plt.plot()

Returns:

Matplotlib figure and axes objects

Return type:

fig, ax

Examples

>>> X = QuasiSVD.generate_random((100, 80), 20)
>>> fig, ax = X.plot_singular_value_decay()
>>> ax.axhline(1e-10, color='r', linestyle='--', label='Threshold')
>>> ax.legend()
project_onto_column_space(other, dense_output=False)[source]

Projection of other onto the column space of self.

The rank of the output is the rank of self, at maximum. Output is typically a QR object unless dense_output=True, in which case it is a dense ndarray.

The formula is given by:

P_U Y = U @ Uh @ Y

where X = U S Vh is the SVD of matrix self and Y is the input (other) matrix to project.

Parameters:
  • other (ndarray or LowRankMatrix) – Matrix to project

  • dense_output (bool, optional) – Whether to return a dense matrix. False by default.

Returns:

Projection of other onto the column space of self. Returns QR if dense_output=False, ndarray otherwise.

Return type:

QR or ndarray
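The projection formula can be checked directly in plain NumPy (a sketch independent of the library):

```python
import numpy as np

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((60, 4)))  # orthonormal columns
Y = rng.standard_normal((60, 30))

# Column-space projection from the docstring: P_U Y = U @ U.H @ Y
P_Y = U @ (U.T @ Y)

# The projection has rank at most rank(self) = 4 ...
print(np.linalg.matrix_rank(P_Y))  # 4
# ... and projecting twice changes nothing (idempotence)
print(np.allclose(U @ (U.T @ P_Y), P_Y))  # True
```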

project_onto_interpolated_tangent_space(mode='online', dense_output=False, **kwargs)[source]

Oblique projection onto the tangent space at self using interpolation (DEIM-like methods).

The formula is given by:

P_X Y = M_u Y_u - M_u Y_uv M_v^* + Y_v M_v^*

where M_u = U (P_U^* U)^{-1}, M_v = V (P_V^* V)^{-1}, Y_u = Y[p_u, :], Y_v = Y[:, p_v] and Y_uv = Y[p_u, p_v] with p_u and p_v being the interpolation indices for U and V, respectively.

Here, self = U S Vh is the SVD of matrix self and Y is the input matrix to project.

Two usage modes:

  1. Offline mode (provide pre-computed interpolatory matrices): Provide Y_u, Y_v, Y_uv, M_u, M_v in kwargs.

  2. Online mode (compute interpolation on-the-fly): Provide Y in kwargs. Optionally provide cssp_method_u and/or cssp_method_v. The indices and interpolatory matrices are computed using the specified CSSP methods. If not provided, QDEIM is used by default.

Parameters:
  • mode (str, optional) – Either ‘online’ or ‘offline’. Default is ‘online’.

  • dense_output (bool, optional) – Whether to return a dense matrix. False by default.

  • **kwargs (dict) –
    For offline mode: Y_u, Y_v, Y_uv, M_u, M_v.
    For online mode: Y (required), cssp_method_u (optional, default QDEIM), cssp_method_v (optional, default QDEIM), cssp_kwargs_u (optional), cssp_kwargs_v (optional).

Returns:

Oblique projection of Y onto the tangent space at self using interpolation.

Return type:

QuasiSVD

Examples

Offline mode:

>>> # Pre-compute interpolatory matrices
>>> p_u, M_u = DEIM(X.U, return_projector=True)
>>> p_v, M_v = DEIM(X.V, return_projector=True)
>>> Y_full = Y.full()
>>> Y_u = Y_full[p_u, :]
>>> Y_v = Y_full[:, p_v]
>>> Y_uv = Y_full[np.ix_(p_u, p_v)]
>>> result = X.project_onto_interpolated_tangent_space(
...     mode='offline',
...     Y_u=Y_u, Y_v=Y_v, Y_uv=Y_uv, M_u=M_u, M_v=M_v
... )

Online mode (with default QDEIM):

>>> result = X.project_onto_interpolated_tangent_space(
...     mode='online',
...     Y=Y
... )

Online mode (with custom CSSP methods):

>>> from low_rank_toolbox.cssp import DEIM, QDEIM
>>> result = X.project_onto_interpolated_tangent_space(
...     mode='online',
...     Y=Y,
...     cssp_method_u=DEIM,
...     cssp_method_v=QDEIM
... )
project_onto_row_space(other, dense_output=False)[source]

Projection of other onto the row space of self.

The rank of the output is the rank of self, at maximum. Output is typically a QR object unless dense_output=True, in which case it is a dense ndarray.

The formula is given by:

P_V Y = Y VVh

where X = U S Vh is the SVD of matrix self and Y is the input (other) matrix to project.

Parameters:
  • other (ndarray or LowRankMatrix) – Matrix to project

  • dense_output (bool, optional) – Whether to return a dense matrix. False by default.

Returns:

Projection of other onto the row space of self. Returns QR if dense_output=False, ndarray otherwise.

Return type:

QR or ndarray

project_onto_tangent_space(other, dense_output=False)[source]

Projection of other onto the tangent space at self.

The rank of the output is two times the rank of self, at maximum.

The formula is given by:

P_X Y = UUh Y - UUh Y VVh + Y VVh

where X = U S Vh is the SVD of matrix self and Y is the input (other) matrix to project.

Parameters:
  • other (ndarray or LowRankMatrix) – Matrix to project

  • dense_output (bool, optional) – Whether to return a dense matrix. False by default.

Returns:

Projection of other onto the tangent space at self.

Return type:

QuasiSVD
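A plain-NumPy check of the formula and the rank bound (independent of the library):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 40, 30, 3
U, _ = np.linalg.qr(rng.standard_normal((m, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
Y = rng.standard_normal((m, n))

# Tangent-space projection from the docstring:
# P_X Y = UUh Y - UUh Y VVh + Y VVh
UUtY = U @ (U.T @ Y)
P_Y = UUtY - (UUtY @ V) @ V.T + (Y @ V) @ V.T

# The rank is at most two times the rank of self, here 2r = 6
print(np.linalg.matrix_rank(P_Y))  # 6
```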

pseudoinverse(rtol=None, atol=np.float64(2.220446049250313e-14))[source]

Compute the Moore-Penrose pseudoinverse X⁺ efficiently.

For QuasiSVD X = U @ S @ V.T, the pseudoinverse is:

X⁺ = V @ S⁺ @ U.T

where S⁺ is the pseudoinverse of S computed via SVD.

Small singular values (< max(rtol*σ_max, atol)) are treated as zero.

Parameters:
  • rtol (float, optional) – Relative tolerance for determining zero singular values. Default is None (uses machine precision * max(m,n)).

  • atol (float, optional) – Absolute tolerance. Default is DEFAULT_ATOL (machine precision).

Returns:

Pseudoinverse in QuasiSVD format

Return type:

QuasiSVD

Examples

>>> X = QuasiSVD.generate_random((100, 80), 10)
>>> X_pinv = X.pseudoinverse()
>>> # Check: X @ X_pinv @ X ≈ X
>>> reconstruction = X.dot(X_pinv.dot(X.full(), dense_output=True), dense_output=True)
>>> np.allclose(X.full(), reconstruction)
True

See also

solve

Solve linear system Xx = b

lstsq

Least squares solution

rank_one_update(u, v, alpha=1.0)[source]

Efficient rank-1 update: X_new = X + alpha * u @ v.T

This adds a rank-1 matrix to the current QuasiSVD without forming the full matrix. The result has rank at most rank(X) + 1.

Parameters:
  • u (ndarray, shape (m,)) – Left vector for rank-1 update

  • v (ndarray, shape (n,)) – Right vector for rank-1 update

  • alpha (float, optional) – Scaling factor, default is 1.0

Returns:

Updated matrix with rank increased by at most 1

Return type:

QuasiSVD

Examples

>>> X = QuasiSVD.generate_random((100, 80), 5)
>>> u = np.random.randn(100)
>>> v = np.random.randn(80)
>>> X_new = X.rank_one_update(u, v, alpha=0.5)
>>> # X_new represents X + 0.5 * u @ v.T with rank at most 6
reorthogonalize(method='qr', inplace=False)[source]

Re-orthogonalize U and V if numerical drift has occurred.

After many operations, U and V may lose orthogonality due to numerical errors. This method restores orthogonality.

Parameters:
  • method (str, optional) – Method for re-orthogonalization:
    - ‘qr’: QR decomposition (default, stable and efficient)
    - ‘svd’: full SVD (more expensive but more accurate)

  • inplace (bool, optional) – If True, modify the current object in-place. Otherwise, return a new object. Default is False.

Returns:

Re-orthogonalized QuasiSVD with same matrix values

Return type:

QuasiSVD

Examples

>>> X = QuasiSVD.generate_random((100, 80), 10)
>>> # ... many operations ...
>>> if not X.is_orthogonal():
...     X = X.reorthogonalize()
sing_vals()[source]

Singular values of the QuasiSVD matrix. Computed from S, then cached.

Returns:

Singular values of the middle matrix S in descending order. Shape is (min(r, q),) where S has shape (r, q).

Return type:

ndarray

Notes

Result is cached after first computation for efficiency. Computes via np.linalg.svdvals(S), which is O(r²q) for S with shape (r, q).
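Why computing svd(S) suffices: with orthonormal U and V, the singular values of U @ S @ Vh are exactly those of the small middle matrix S. A plain-NumPy check:

```python
import numpy as np

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((50, 4)))
V, _ = np.linalg.qr(rng.standard_normal((40, 4)))
S = rng.standard_normal((4, 4))

# Singular values of the small factor vs. the assembled matrix
s_small = np.linalg.svd(S, compute_uv=False)
s_full = np.linalg.svd(U @ S @ V.T, compute_uv=False)[:4]
print(np.allclose(s_small, s_full))  # True
```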

solve(b, method='lstsq')[source]

Solve the linear system Xx = b.

For square full-rank matrices, solves Xx = b. For rectangular or rank-deficient matrices, computes the least-squares solution.

Parameters:
  • b (ndarray) – Right-hand side vector or matrix. Shape (m,) or (m, k).

  • method (str, optional) – Solution method:
    - ‘lstsq’: use the pseudoinverse (default, works for any shape)
    - ‘direct’: use the factored form assuming orthogonality (faster, but requires orthogonal U, V and invertible S)

Returns:

Solution x. Shape (n,) or (n, k).

Return type:

ndarray

Raises:

ValueError – If matrix is not square and method=’direct’. If dimensions are incompatible.

Examples

>>> X = QuasiSVD.generate_random((100, 100), 100)  # Full rank for unique solution
>>> b = np.random.randn(100)
>>> x = X.solve(b)
>>> # Check: X @ x ≈ b
>>> np.allclose(X.dot(x), b)
True
>>> # For rank-deficient systems, use lstsq instead:
>>> X_deficient = QuasiSVD.generate_random((100, 100), 20)
>>> x_ls = X_deficient.lstsq(b)

See also

pseudoinverse

Compute pseudoinverse

lstsq

Least squares solution

Notes

The default method is ‘lstsq’ which uses the pseudoinverse and is robust to non-orthogonal factors and rank deficiency. Use method=’direct’ only when you are certain that U and V are orthogonal and S is invertible.

sqrtm(inplace=False, **extra_data)[source]

Compute the matrix square root X^{1/2} such that X^{1/2} @ X^{1/2} = X.

For QuasiSVD X = U @ S @ V.T, the square root is:

X^{1/2} = U @ S^{1/2} @ V.T

where S^{1/2} is the matrix square root of S.

Parameters:

inplace (bool, optional) – If True, modify the current object in-place. Default is False.

Returns:

Matrix square root in QuasiSVD format

Return type:

QuasiSVD
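A plain-NumPy sketch of the underlying identity for the symmetric case (an illustration, not the library's implementation; note that X^{1/2} @ X^{1/2} = X via this factored form requires V == U):

```python
import numpy as np

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((30, 4)))
S = np.diag(rng.uniform(1.0, 2.0, 4))  # diagonal SPD middle matrix

# In the symmetric case X = U @ S @ U.T with orthonormal U, the factored
# square root U @ S^{1/2} @ U.T squares back to X because U.T @ U = I.
X = U @ S @ U.T
X_half = U @ np.diag(np.sqrt(np.diag(S))) @ U.T
print(np.allclose(X_half @ X_half, X))  # True
```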

property svd_type: str

Classify the type of SVD representation based on S dimensions.

Returns:

One of:
  • ‘full’: S has shape (m, n) – full SVD with all singular vectors
  • ‘reduced’: S has shape (min(m,n), min(m,n)) – reduced/economic SVD
  • ‘truncated’: S has shape (r, r) with r < min(m,n) – truncated SVD
  • ‘unconventional’: S has shape (r, k) with r ≠ k – general quasi-SVD

Return type:

str

Examples

>>> X = QuasiSVD.generate_random((100, 80), 50)
>>> print(X.svd_type)  # 'reduced' (50 = min(100,80))
>>> X_trunc = X.truncate(r=10)
>>> print(X_trunc.svd_type)  # 'truncated' (10 < min(100,80))
to_qr()[source]

Convert QuasiSVD to QR format.

Returns X = Q @ R where Q is orthogonal and R is upper triangular. This is useful for efficient column-space operations.

Returns:

QR representation of the matrix

Return type:

QR

Examples

>>> X = QuasiSVD.generate_random((100, 80), 10)
>>> X_qr = X.to_qr()
to_svd()[source]

Convert QuasiSVD to SVD by computing the SVD of the middle matrix S.

This operation computes U_s, s, V_s = svd(S) and returns:

SVD(self.U @ U_s, s, self.V @ V_s)

If you want to truncate small singular values, use the .truncate() method instead.

Returns:

An SVD object with diagonal S matrix

Return type:

SVD

Notes

This method imports SVD only at runtime to avoid circular imports. The computational cost is O(r³) where r is the rank of S.
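The conversion can be sketched in plain NumPy to see why only the small middle matrix needs an SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((50, 6)))
V, _ = np.linalg.qr(rng.standard_normal((40, 6)))
S = rng.standard_normal((6, 6))  # dense middle matrix (quasi-SVD)

# to_svd only diagonalizes the small r x r middle matrix:
Us, s, Vst = np.linalg.svd(S)
U_new, V_new = U @ Us, V @ Vst.T  # products of orthonormal maps stay orthonormal

# Same matrix, now with a diagonal middle factor
print(np.allclose(U_new @ np.diag(s) @ V_new.T, U @ S @ V.T))  # True
```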

trace()[source]

Compute the trace of the matrix efficiently.

For square QuasiSVD: trace(X) = trace(U @ S @ V.T) = trace(V.T @ U @ S). This costs O(nr² + r³) instead of O(mn) for the full matrix.

For non-square matrices, trace is only defined on the square part.

Returns:

The trace of the matrix

Return type:

float

Raises:

ValueError – If matrix is not square
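A plain-NumPy check of the cyclic-trace identity used here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 5
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = rng.standard_normal((r, r))

# Cyclic property of the trace: the n x n product never needs to be
# formed, only the small r x r matrix V.T @ U @ S.
t_small = np.trace(V.T @ U @ S)
t_full = np.trace(U @ S @ V.T)
print(np.isclose(t_small, t_full))  # True
```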

transpose()[source]

Transpose the matrix (returns QuasiSVD).

Return type:

QuasiSVD

truncate(r=None, rtol=None, atol=np.float64(2.220446049250313e-14), inplace=False)[source]

Truncate the QuasiSVD by converting to SVD and removing small singular values.

The QuasiSVD is first converted to SVD (diagonal S), then singular values are truncated based on rank or tolerance criteria.

Parameters:
  • r (int, optional) – Target rank. If specified, keep only the r largest singular values.

  • rtol (float, optional) – Relative tolerance. Singular values < rtol * max(singular_values) are removed.

  • atol (float, optional) – Absolute tolerance. Singular values < atol are removed. Default is DEFAULT_ATOL (machine precision).

  • inplace (bool, optional) – If True, this parameter is ignored (QuasiSVD cannot be truncated in-place, must convert to SVD). Kept for API compatibility. Default is False.

Returns:

Truncated SVD object

Return type:

SVD

Notes

Priority of truncation criteria (from highest to lowest):
  1. r (explicit rank)
  2. rtol (relative tolerance)
  3. atol (absolute tolerance)

Examples

>>> X = QuasiSVD(U, S, V)
>>> X_trunc = X.truncate(r=10)  # Keep top 10 singular values
>>> X_trunc = X.truncate(rtol=1e-6)  # Keep s_i > 1e-6 * s_max
>>> X_trunc = X.truncate(atol=1e-10)  # Keep s_i > 1e-10
Parameters:
  • U (ndarray)

  • S (ndarray)

  • V (ndarray)

class low_rank_toolbox.matrices.SVD(U, s, V, **extra_data)[source]

Bases: QuasiSVD

Singular Value Decomposition (SVD) for low-rank matrices.

The gold standard for low-rank matrix representation with guaranteed diagonal structure and non-negative singular values. Inherits from QuasiSVD but enforces diagonal middle matrix for optimal storage.

Mathematical Representation

X = U @ diag(s) @ V.H = U @ S @ V.H

where:
  • U ∈ ℝ^(m×r) has orthonormal columns (U^H @ U = I_r)
  • s ∈ ℝ^r contains non-negative singular values in descending order (s[0] ≥ s[1] ≥ … ≥ s[r-1] ≥ 0)
  • V ∈ ℝ^(n×r) has orthonormal columns (V^H @ V = I_r)
  • S = diag(s) is an r×r matrix with the singular values on its diagonal

Key Differences from QuasiSVD

  • Diagonal structure: S is always diagonal (guaranteed by construction)

  • Singular values: Non-negative by definition, stored in descending order

  • Efficiency: Optimized operations exploit diagonal structure

  • Operations: Addition, multiplication, and Hadamard preserve SVD format when possible

Storage Efficiency

  • Full matrix: O(mn) storage
  • SVD: O(mr + r² + nr) storage (same as QuasiSVD)
  • Memory savings when r << min(m, n)

For a 1000×1000 matrix with rank 10:
  • Dense: 1,000,000 elements
  • SVD: 20,100 elements (≈50x compression)
  • QuasiSVD: 20,100 elements
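The quoted element counts can be reproduced with a few lines of arithmetic:

```python
# Element counts behind the compression figures quoted above
m = n = 1000
r = 10
dense = m * n                       # 1,000,000 entries
low_rank = m * r + r * r + n * r    # U (m x r), S (r x r), V (n x r)
print(low_rank)                     # 20100
print(round(dense / low_rank, 2))   # 49.75, i.e. roughly 50x compression
```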

Important Notes

  • CRITICAL: U and V MUST have orthonormal columns.

    This is NOT verified at initialization for performance. Invalid inputs produce incorrect results without warning.

  • Use .is_orthogonal() to verify orthonormality if needed

  • Singular values are stored internally as a diagonal matrix S (2D array)

  • The property s extracts the diagonal as a 1D vector when needed

  • After operations, may return QuasiSVD or LowRankMatrix (orthogonality not preserved)

  • For non-diagonal S, use QuasiSVD instead

U

Left singular vectors (orthonormal columns)

Type:

ndarray, shape (m, r)

s

Singular values (1D vector, non-negative, descending order)

Type:

ndarray, shape (r,)

S

Diagonal matrix of singular values (property, computed on demand)

Type:

ndarray, shape (r, r)

V

Right singular vectors (orthonormal columns)

Type:

ndarray, shape (n, r)

Vh

Hermitian conjugate of V (V.T.conj())

Type:

ndarray, shape (r, n)

Vt

Transpose of V (without conjugate)

Type:

ndarray, shape (r, n)

Ut

Transpose of U (without conjugate)

Type:

ndarray, shape (r, m)

Uh

Hermitian conjugate of U (U.T.conj())

Type:

ndarray, shape (r, m)

Properties
----------
shape

Shape of the represented matrix (m, n)

Type:

tuple

rank

Number of singular values (length of s)

Type:

int

sing_vals[source]

Singular values (alias for s)

Type:

ndarray

Return type:

ndarray

T

Transpose of the matrix (returns SVD)

Type:

SVD

Methods Overview
----------------
Core Operations
  • __add__ : Addition preserving SVD structure when possible

  • __sub__ : Subtraction preserving SVD structure when possible

  • __mul__ : Scalar or Hadamard (element-wise) multiplication preserving SVD structure when possible

  • dot : Matrix-vector or matrix-matrix multiplication preserving SVD structure when possible

  • hadamard : Element-wise multiplication with another matrix preserving SVD structure when possible

Truncation
  • truncate() : Remove small singular values

  • truncate_perpendicular() : Remove large singular values and keep smallest singular values

Class Methods
  • from_matrix() : Convert any matrix to SVD

  • from_quasiSVD() : Convert QuasiSVD to SVD

  • from_low_rank() : Convert LowRankMatrix to SVD

  • reduced_svd() : Compute reduced/economic SVD

  • truncated_svd() : Compute truncated SVD with specified rank or (relative/absolute) tolerance

  • generate_random() : Generate random SVD with specified singular values

Optimized Operations
  • norm() : Compute norms efficiently using singular values

  • norm_squared() : Squared Frobenius norm from singular values

  • trace() : Efficient trace computation

  • diag() : Extract diagonal efficiently

Configuration
-------------
Default behavior is controlled by:
  • DEFAULT_ATOL – absolute tolerance for truncation (default: machine precision)
  • DEFAULT_RTOL – relative tolerance for truncation (default: None)

Examples

Creating an SVD from scratch:

>>> import numpy as np
>>> from low_rank_toolbox.matrices import SVD
>>>
>>> # Create orthonormal matrices
>>> m, n, r = 100, 80, 10
>>> U, _ = np.linalg.qr(np.random.randn(m, r))
>>> V, _ = np.linalg.qr(np.random.randn(n, r))
>>> s = np.logspace(0, -2, r)  # Singular values (1D array)
>>>
>>> # Create SVD
>>> X = SVD(U, s, V)
>>> print(X.shape)  # (100, 80)
>>> print(X.rank)   # 10
>>> print(X.s.shape)  # (10,) - 1D vector!
>>> print(X.S.shape)  # (10, 10) - 2D diagonal matrix (property)

Computing SVD from a matrix:

>>> A = np.random.randn(100, 80)
>>> X_reduced = SVD.reduced_svd(A)  # Reduced SVD
>>> X_truncated = SVD.truncated_svd(A, r=10)  # Keep top 10 singular values
>>> X_auto = SVD.truncated_svd(A, rtol=1e-6)  # Adaptive truncation

Efficient operations:

>>> # Operations exploit diagonal structure and orthogonality
>>> norm_fro = X.norm('fro')  # sqrt(sum(s²)) - O(r)
>>> norm_squared = X.norm_squared()  # sum(s²) - O(r)
>>> norm_2 = X.norm(2)  # max(s) - instant!
>>> norm_nuc = X.norm('nuc')  # sum(s) - O(r) instead of O(mnr)
>>>
>>> # Addition preserves SVD structure
>>> Y = X + X  # Returns SVD
>>> Z = X @ X.T  # Matrix multiplication, returns SVD
>>> D = X - X  # Subtraction, returns SVD of rank 2*rank(X). Call .truncate() to reduce rank.

Truncation:

>>> # Remove small singular values
>>> X_trunc = X.truncate(r=5)  # Keep top 5
>>> X_trunc = X.truncate(rtol=1e-10)  # Relative tolerance (based on max singular value)
>>> X_trunc = X.truncate(atol=1e-12)  # Absolute tolerance (raw threshold, independent of max singular value)
>>>
>>> # Get perpendicular component (residual)
>>> X_perp = X.truncate_perpendicular(r=5)  # Keep last (r-5) singular values

Random matrices with controlled spectrum:

>>> # Generate test matrices
>>> s_decay = np.logspace(0, -10, 20)  # Exponential decay
>>> X_test = SVD.generate_random((100, 100), s_decay, is_symmetric=True)
>>> # Perfect for testing algorithms!

Conversion between formats:

>>> from low_rank_toolbox.matrices import QuasiSVD, LowRankMatrix
>>>
>>> # From QuasiSVD (non-diagonal S)
>>> X_quasi = QuasiSVD(U, S_full, V)  # S_full is general matrix
>>> X_svd = SVD.from_quasiSVD(X_quasi)  # Diagonalize S
>>>
>>> # From generic low-rank
>>> X_lr = LowRankMatrix(A, B, C)
>>> X_svd = SVD.from_low_rank(X_lr)  # Compute SVD

Memory efficiency:

>>> X.compression_ratio()  # < 1.0 means memory savings
>>> X.memory_usage('MB')  # Actual memory used
>>> X.is_memory_efficient  # True if saves memory

See also

QuasiSVD

Generalized SVD with non-diagonal middle matrix

LowRankMatrix

Base class for low-rank matrix representations

QR

QR factorization format

Randomized

Notes

  • This class is designed for numerical linear algebra applications

  • The s property always exposes the singular values as a 1D array; for a general (non-diagonal) middle matrix, use QuasiSVD instead

  • The 2D diagonal matrix S is computed on-the-fly when needed (via property)

  • Operations exploit orthogonality and diagonal structure for efficiency

  • After general operations with non-SVD matrices (e.g., X + Y), may return QuasiSVD or LowRankMatrix

  • Use .truncate() to ensure (numerical) invertibility or to reduce rank

property H: SVD

Hermitian conjugate (conjugate transpose) of the SVD matrix.

For X = U @ diag(s) @ V.H, the Hermitian conjugate is:

X.H = (U @ diag(s) @ V.H).H = V @ diag(s) @ U.H

This is equivalent to X.T.conj() or X.conj().T.

Returns:

Hermitian conjugate in SVD format.

Return type:

SVD

Notes

This is O(1) as it only swaps references, doesn’t copy data. For real matrices, X.H == X.T.

Examples

>>> # For complex matrix
>>> A = np.random.randn(10, 8) + 1j * np.random.randn(10, 8)
>>> X = SVD.reduced_svd(A)
>>> X_H = X.H
>>> assert np.allclose(X_H.full(), A.T.conj())
property T: SVD

Transpose of the SVD matrix, returning another SVD.

For X = U @ diag(s) @ V.H (where V.H is the Hermitian conjugate of V), the transpose is:

X.T = (U @ diag(s) @ V.H).T = (V.H).T @ diag(s) @ U.T = V* @ diag(s) @ U.T

where V* is the element-wise conjugate of V (note that diag(s).T = diag(s)).

Note: This is the transpose WITHOUT conjugation. For Hermitian conjugate (conjugate transpose), use X.H instead.

Returns:

Transposed matrix in SVD format (not QuasiSVD).

Return type:

SVD

Notes

This is O(1) as it only swaps references, doesn’t copy data. The result is guaranteed to be an SVD (diagonal structure preserved).

Examples

>>> X = SVD.generate_random((100, 80), np.ones(10))
>>> print(X.shape)  # (100, 80)
>>> print(X.T.shape)  # (80, 100)
>>> assert isinstance(X.T, SVD)  # True
__init__(U, s, V, **extra_data)[source]

Create a low-rank matrix stored by its SVD: X = U @ diag(s) @ V.T

This is the primary constructor for the SVD class. It efficiently stores singular values as a 1D vector rather than a full diagonal matrix.

Parameters:
  • U (ndarray) – Left singular vectors, shape (m, r). ASSUMED to have orthonormal columns: U.T @ U = I_r. Not verified at initialization for performance.

  • s (ndarray) – Singular values. Accepts two formats:
    - 1D array (shape (r,)): vector of singular values (RECOMMENDED)
    - 2D array (shape (r, r) or (r, k)): diagonal or nearly-diagonal matrix

  • V (ndarray) – Right singular vectors, shape (n, k). ASSUMED to have orthonormal columns: V.T @ V = I_k. Not verified at initialization for performance. NOTE: Provide V, not V.T or V.H (conjugate transpose).

  • **extra_data (dict, optional) – Additional metadata to store with the matrix (e.g., poles, residues). Internal flags:
    - ‘_skip_diagonal_check’: skip diagonal verification (used internally)
    - ‘_skip_memory_check’: skip memory efficiency check

Raises:
  • ValueError –
    - If input dimensions are incompatible for multiplication
    - If the number of singular values doesn’t match min(r, k)
    - If U or V are not 2D arrays

  • TypeError –
    - If U, V, or s are not numpy arrays
    - If s is not 1D or 2D

Notes

Storage: Singular values are stored internally as a diagonal matrix S (2D array). The input parameter s can be provided in either 1D or 2D format for convenience.

Input format flexibility:
  • If s is 1D: converted to a diagonal matrix S for storage
  • If s is 2D diagonal: stored directly as S
  • If s is 2D rectangular (r ≠ k): stored as a rectangular matrix with the diagonal filled

Orthogonality assumption: U and V are ASSUMED to have orthonormal columns. This is NOT verified at initialization for performance. Use .is_orthogonal() to verify if needed.

Parent class: Calls QuasiSVD.__init__ with a flag to suppress the “use SVD class” warning (since we ARE the SVD class).

Examples

Recommended usage (1D singular values):

>>> import numpy as np
>>> from low_rank_toolbox.matrices import SVD
>>>
>>> m, n, r = 100, 80, 10
>>> U, _ = np.linalg.qr(np.random.randn(m, r))
>>> s = np.logspace(0, -2, r)  # 1D vector
>>> V, _ = np.linalg.qr(np.random.randn(n, r))
>>>
>>> X = SVD(U, s, V)
>>> print(X.s.shape)  # (10,) - extracted from diagonal of S
>>> print(X.S.shape)  # (10, 10) - stored 2D matrix

Alternative usage (2D diagonal matrix):

>>> S_diag = np.diag(s)  # 2D diagonal matrix
>>> X = SVD(U, S_diag, V)  # Also works
>>> print(X.s.shape)  # (10,) - extracted from diagonal of S

From numpy.linalg.svd:

>>> A = np.random.randn(100, 80)
>>> U, s, Vt = np.linalg.svd(A, full_matrices=False)
>>> X = SVD(U, s, Vt.T)  # Note: Vt.T to get V!
dot(other, side='right', dense_output=False)[source]

Matrix multiplication optimized for SVD x SVD.

Efficiently multiplies two SVD matrices while preserving the SVD structure. The resulting rank is min(rank(X), rank(Y)).

If side is ‘right’ or ‘usual’, compute self @ other. If side is ‘left’ or ‘opposite’, compute other @ self.

Parameters:
  • other (SVD, LowRankMatrix, or ndarray) – Matrix to multiply with.

  • side (str, optional) – ‘left’ or ‘right’, by default ‘right’

  • dense_output (bool, optional) – If True, return dense ndarray. If False, return SVD/LowRankMatrix. Default is False.

Returns:

Result of matrix multiplication.

Return type:

SVD, LowRankMatrix, or ndarray

Notes

Algorithm for SVD @ SVD: Computes M = S1 @ V1.T @ U2 @ S2, then SVD of M. Computational cost: O(r^3) where r = min(r1, r2).

Examples

>>> X = SVD.generate_random((100, 80), np.ones(20))
>>> Y = SVD.generate_random((80, 60), np.ones(15))
>>> Z = X.dot(Y)
>>> print(Z.shape)  # (100, 60)
>>> print(Z.rank)   # min(20, 15) = 15
expm(inplace=False)[source]

Compute the matrix exponential exp(X) = e^X.

For SVD X = U @ diag(s) @ V.H, if X is Hermitian (U == V for real, U == V.conj() for complex), then:

exp(X) = U @ diag(exp(s)) @ U.H

where exp(s) is the element-wise exponential of singular values.

This is MUCH faster than matrix exponentiation for general matrices:
  • SVD.expm (symmetric): O(r) for computing exp(s)
  • General expm: O(n³) using Padé approximation

Parameters:

inplace (bool, optional) – If True, modify the current object in-place. Default is False.

Returns:

Matrix exponential in SVD format.

Return type:

SVD

Notes

This method currently only supports symmetric matrices where U == V. For general matrices, the eigenvalue decomposition would be needed.

Examples

>>> X = SVD.generate_random((100, 100), np.array([1.0, 0.5, 0.1]), is_symmetric=True)
>>> X_exp = X.expm()
>>> print(X_exp.s)  # [e^1.0, e^0.5, e^0.1] ≈ [2.718, 1.649, 1.105]

See also

sqrtm

Matrix square root

classmethod from_low_rank(mat, **extra_data)[source]

Convert any LowRankMatrix to SVD format.

This performs QR decompositions on the outer factors and SVD on the middle product to obtain an efficient SVD representation.

Parameters:
  • mat (LowRankMatrix) – Low-rank matrix in any format (e.g., product of multiple matrices).

  • **extra_data (dict, optional) – Additional metadata to store with the result.

Returns:

SVD representation of the matrix.

Return type:

SVD

classmethod from_matrix(mat, **extra_data)[source]

Convert any matrix type to SVD format (automatic dispatch).

This is the universal converter that automatically selects the best method based on input type. Use this when you don’t know the input type.

Parameters:
  • mat (ndarray, LowRankMatrix, QuasiSVD, or SVD) – Matrix to convert to SVD format.

  • **extra_data (dict, optional) – Additional metadata to store with the result.

Returns:

SVD representation of the matrix.

Return type:

SVD

classmethod from_quasiSVD(mat)[source]

Convert QuasiSVD to SVD by diagonalizing the middle matrix.

This computes the SVD of the non-diagonal middle matrix S to obtain a true SVD representation with diagonal singular values.

Parameters:

mat (QuasiSVD) – QuasiSVD matrix to convert.

Returns:

SVD representation with diagonal singular values.

Return type:

SVD

Notes

Computational cost: O(r³) where r is the rank of S. This is equivalent to calling mat.to_svd().

full()[source]

Reconstruct the full dense matrix from its SVD representation.

Computes X = (U * s) @ V.H to obtain the full matrix.

Returns:

The full dense matrix represented by the SVD.

Return type:

ndarray, shape (m, n)

Notes

This operation is O(mnr) and should be avoided for large matrices. Use low-rank operations whenever possible.

Examples

>>> A = np.random.randn(100, 80)
>>> X = SVD.reduced_svd(A)
>>> A_reconstructed = X.full()
>>> assert np.allclose(A, A_reconstructed)  # True within numerical precision
classmethod full_svd(mat, **extra_data)[source]

Compute a full SVD

Return type:

SVD

Parameters:

mat (ndarray)

classmethod generate_random(shape, sing_vals, seed=1234, is_symmetric=False, **extra_data)[source]

Generate a random SVD with given singular values.

Parameters:
  • shape (tuple) – Shape of the matrix

  • sing_vals (ndarray) – Singular values

  • seed (int, optional) – Random seed, by default 1234

  • is_symmetric (bool, optional) – Whether the generated matrix is symmetric, by default False

Returns:

SVD matrix generated randomly

Return type:

SVD
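A plain-NumPy sketch of what such a generator plausibly does (the actual implementation may differ; random_svd_factors is a hypothetical helper, not part of the library): draw random orthonormal factors and combine them with the requested singular values.

```python
import numpy as np

def random_svd_factors(shape, sing_vals, seed=1234):
    """Hypothetical helper: random orthonormal U, V plus given singular values."""
    rng = np.random.default_rng(seed)
    m, n = shape
    r = len(sing_vals)
    U, _ = np.linalg.qr(rng.standard_normal((m, r)))
    V, _ = np.linalg.qr(rng.standard_normal((n, r)))
    return U, np.asarray(sing_vals), V

U, s, V = random_svd_factors((100, 80), np.logspace(0, -3, 8))
A = (U * s) @ V.T
# The constructed matrix reproduces the requested spectrum
print(np.allclose(np.linalg.svd(A, compute_uv=False)[:8], s))  # True
```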

hadamard(other)[source]

Faster version of the Hadamard product for SVDs.

Return type:

SVD | LowRankMatrix | ndarray

Parameters:

other (SVD | LowRankMatrix | ndarray)
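One reason the return type varies: the Hadamard product of a rank-r1 and a rank-r2 matrix has rank at most r1*r2, so the result may no longer fit the original rank. A plain-NumPy illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two rank-2 matrices built from thin factors
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
B = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))

# Element-wise (Hadamard) product: rank grows up to r1 * r2 = 4
H = A * B
print(np.linalg.matrix_rank(H))  # 4
```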

lstsq(b, rtol=None, atol=np.float64(2.220446049250313e-14))[source]

Compute the least-squares solution to Xx ≈ b.

Minimizes ||Xx - b||₂ using the pseudoinverse: x = X⁺ @ b This is optimized for SVD with diagonal structure.

Parameters:
  • b (ndarray) – Right-hand side vector or matrix. Shape (m,) or (m, k).

  • rtol (float, optional) – Relative tolerance for pseudoinverse computation. Default is None (uses machine precision * max(m,n)).

  • atol (float, optional) – Absolute tolerance for pseudoinverse. Default is DEFAULT_ATOL.

Returns:

Least-squares solution x. Shape (n,) or (n, k).

Return type:

ndarray

Notes

This is faster than QuasiSVD.lstsq() due to the diagonal S:
  • SVD.lstsq: O(mr + rk) for computing x
  • QuasiSVD.lstsq: O(r³ + mr + rk) due to SVD(S)

Examples

>>> X = SVD.generate_random((100, 80), np.logspace(0, -10, 20))
>>> b = np.random.randn(100)
>>> x = X.lstsq(b)
>>> # x minimizes ||X @ x - b||
>>> residual = np.linalg.norm(X @ x - b)

See also

pseudoinverse

Compute pseudoinverse

solve

Solve linear system (for full-rank matrices)

norm(ord='fro')[source]

Calculate matrix norm, optimized for SVD representation.

For orthogonal U and V, common norms are computed directly from singular values without forming the full matrix. This is MUCH faster: O(r) vs O(mnr).

Parameters:

ord (str or int, optional) – Order of the norm. Default is ‘fro’ (Frobenius). Optimized norms (computed from singular values):
  • ‘fro’: Frobenius norm = ||s||₂
  • 2: spectral norm (largest singular value) = max(s)
  • ‘nuc’: nuclear norm (sum of singular values) = sum(s)
Other norms fall back to full matrix computation.

Returns:

The requested norm of the matrix.

Return type:

float

Notes

Result caching: Norms are cached after first computation for efficiency. Each norm type is cached separately.

Orthogonality check: IMPORTANT: This method assumes U and V are orthogonal for efficient computation. If they are not, the computed norm is incorrect.
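The identities behind the optimized norms can be verified in plain NumPy (assuming orthonormal U and V):

```python
import numpy as np

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((50, 6)))
V, _ = np.linalg.qr(rng.standard_normal((40, 6)))
s = np.logspace(0, -2, 6)
A = (U * s) @ V.T

# With orthonormal U, V the norms follow directly from s:
print(np.isclose(np.linalg.norm(A, 'fro'), np.linalg.norm(s)))  # True
print(np.isclose(np.linalg.norm(A, 2), s.max()))                # True
print(np.isclose(np.linalg.norm(A, 'nuc'), s.sum()))            # True
```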

pseudoinverse(rtol=None, atol=np.float64(2.220446049250313e-14))[source]

Compute the Moore-Penrose pseudoinverse X⁺ efficiently.

For SVD X = U @ diag(s) @ V.H, the pseudoinverse is:

X⁺ = V @ diag(s⁺) @ U.H

where s⁺[i] = 1/s[i] if s[i] > threshold, else 0.

This is more efficient than QuasiSVD.pseudoinverse() since S is already diagonal. Small singular values (< max(rtol*σ_max, atol)) are treated as zero.

Parameters:
  • rtol (float, optional) – Relative tolerance for determining zero singular values. Default is None (uses machine precision * max(m,n)).

  • atol (float, optional) – Absolute tolerance. Default is DEFAULT_ATOL (machine precision).

Returns:

Pseudoinverse in SVD format (guaranteed diagonal structure).

Return type:

SVD

Notes

Computational cost: O(r) for computing s⁺ (vs O(r³) for QuasiSVD). The threshold is: max(rtol * max(s), atol)

Examples

>>> X = SVD.generate_random((100, 80), np.logspace(0, -5, 20))
>>> X_pinv = X.pseudoinverse()
>>> # Check: X @ X_pinv @ X ≈ X
>>> reconstruction = X @ X_pinv @ X
>>> np.allclose(X.full(), reconstruction.full())
True

See also

solve

Solve linear system Xx = b

lstsq

Least squares solution

classmethod reduced_svd(mat, **extra_data)[source]

Compute a reduced (economic) SVD of a dense matrix

Return type:

SVD

Parameters:

mat (ndarray)

property s: ndarray

Return the singular values as a 1D array.

Extracts the diagonal from the stored diagonal matrix S.

Returns:

Singular values (diagonal of S). Returns a copy for writability.

Return type:

ndarray, shape (r,)

Notes

This extracts the diagonal from the 2D matrix S stored in _matrices[1]. For SVD, S is guaranteed to be diagonal. Returns a copy to allow modification.

sing_vals()[source]

Return the singular values as a 1D array.

This is an alias for the s property, provided for API consistency with QuasiSVD (which computes singular values on demand).

Returns:

Singular values in descending order (non-negative).

Return type:

ndarray, shape (r,)

Notes

For SVD, this is O(1) as it extracts the diagonal from S. For QuasiSVD, this is O(r³) as it computes svd(S).

Examples

>>> X = SVD.generate_random((100, 80), np.logspace(0, -5, 10))
>>> s = X.sing_vals()
>>> print(s)  # [1.0, 0.1, 0.01, ..., 1e-5]
>>> assert np.array_equal(s, X.s)  # True (same values)
classmethod singular_values(X)[source]

Extract singular values from any matrix type.

Return type:

ndarray

Parameters:

X (SVD | LowRankMatrix | ndarray)
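For a dense ndarray input, this reduces to computing the singular values directly; a NumPy sketch:

```python
import numpy as np

X = np.diag([3.0, 2.0, 1.0])
# Singular values only, no singular vectors (cheaper than a full SVD).
svals = np.linalg.svd(X, compute_uv=False)
print(svals.tolist())  # [3.0, 2.0, 1.0]
```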

solve(b, method='direct')[source]

Solve the linear system Xx = b efficiently using SVD structure.

For square full-rank matrices, solves Xx = b directly. For rectangular or rank-deficient matrices, computes the least-squares solution.

Parameters:
  • b (ndarray) – Right-hand side vector or matrix. Shape (m,) or (m, k).

  • method (str, optional) – Solution method:

    • ‘direct’: Use the SVD factorization (default, fast: O(mr + rk)): x = V @ diag(1/s) @ U.T @ b

    • ‘lstsq’: Use the pseudoinverse (handles rank deficiency)

Returns:

Solution x. Shape (n,) or (n, k).

Return type:

ndarray

Raises:
  • ValueError – If dimensions are incompatible.

  • LinAlgError – If matrix is singular and method=’direct’.

Notes

The ‘direct’ method is faster than QuasiSVD.solve() since S is diagonal:
  • SVD.solve: O(mr + rk), where k is the number of RHS vectors

  • QuasiSVD.solve: O(mr² + r²k) due to the inversion of S

For rank-deficient matrices, use method=’lstsq’ or call lstsq() directly.

Examples

>>> X = SVD.generate_random((100, 100), np.ones(100))  # Full rank
>>> b = np.random.randn(100)
>>> x = X.solve(b)
>>> # Check: X @ x ≈ b
>>> np.allclose(X @ x, b)
True
>>> # For rank-deficient systems, may have larger residual:
>>> X_deficient = SVD.generate_random((100, 100), np.ones(20))  # Rank 20
>>> x_deficient = X_deficient.solve(b, method='lstsq')
>>> # Residual may be non-zero for rank-deficient case

See also

lstsq

Least squares solution (handles rank deficiency)

pseudoinverse

Compute Moore-Penrose pseudoinverse
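The ‘direct’ formula amounts to three thin products; a NumPy sketch for a full-rank square X = U @ diag(s) @ V.T (names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50  # full-rank square case, so 'direct' applies
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.linspace(1.0, 2.0, n)  # well away from zero

b = rng.standard_normal(n)
# x = V @ diag(1/s) @ U.T @ b, evaluated as thin products (never forming X^-1).
x = V @ ((U.T @ b) / s)

X = U @ np.diag(s) @ V.T
assert np.allclose(X @ x, b)
```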

sqrtm(inplace=False)[source]

Compute the matrix square root X^{1/2} such that X^{1/2} @ X^{1/2} = X.

For SVD X = U @ diag(s) @ V.H, the square root is:

X^{1/2} = U @ diag(√s) @ V.H

where √s is the element-wise square root of singular values.

This is much simpler and faster than QuasiSVD.sqrtm() since S is diagonal.

Parameters:

inplace (bool, optional) – If True, modify the current object in-place. Default is False.

Returns:

Matrix square root in SVD format.

Return type:

SVD

Notes

Computational cost: O(r) for computing √s (vs O(r³) for QuasiSVD). For matrices with negative eigenvalues, the square root may be complex.

Examples

>>> X = SVD.generate_random((100, 100), np.array([4.0, 9.0, 16.0]), is_symmetric=True)
>>> X_sqrt = X.sqrtm()
>>> # Check: X_sqrt @ X_sqrt ≈ X
>>> reconstruction = X_sqrt @ X_sqrt
>>> np.allclose(X.full(), reconstruction.full())
True

See also

expm

Matrix exponential
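For the symmetric positive semi-definite case (where V = U), the element-wise square-root rule can be verified directly; a NumPy sketch with hypothetical factors:

```python
import numpy as np

rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((100, 3)))  # orthonormal columns
s = np.array([4.0, 9.0, 16.0])

X = U @ np.diag(s) @ U.T                 # symmetric PSD
X_sqrt = U @ np.diag(np.sqrt(s)) @ U.T   # element-wise sqrt of singular values

# X_sqrt @ X_sqrt = U diag(s) U.T = X, because U.T @ U = I.
assert np.allclose(X_sqrt @ X_sqrt, X)
```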

truncate(r=None, rtol=None, atol=np.float64(2.220446049250313e-14), inplace=False)[source]

Truncate the SVD to a smaller rank. The rank r takes priority over the tolerances, and the relative tolerance takes priority over the absolute tolerance. NOTE: by default, truncation is performed with respect to machine precision (DEFAULT_ATOL).

Parameters:
  • r (int, optional) – Rank, by default None

  • rtol (float, optional) – Relative tolerance, by default None. Uses the largest singular value as reference.

  • atol (float, optional) – Absolute tolerance, by default DEFAULT_ATOL.

  • inplace (bool, optional) – If True, modify the matrix inplace, by default False

Return type:

SVD
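Rank-r truncation keeps the r largest singular triplets; by the Eckart–Young theorem, the spectral-norm error of this best rank-r approximation equals the first discarded singular value. A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((60, 10)))
V, _ = np.linalg.qr(rng.standard_normal((40, 10)))
s = np.logspace(0, -3, 10)  # decreasing singular values
X = U @ np.diag(s) @ V.T

r = 4
X_r = U[:, :r] @ np.diag(s[:r]) @ V.T[:r, :]  # keep the top-r triplets

# The spectral-norm error is exactly sigma_{r+1}, the largest discarded value.
err = np.linalg.norm(X - X_r, ord=2)
assert np.isclose(err, s[r])
```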

truncate_perpendicular(r=None, rtol=None, atol=np.float64(2.220446049250313e-14), inplace=False)[source]

Perpendicular truncation of the SVD (keeps the minimal rank).

The rank takes priority over the tolerances, and the relative tolerance takes priority over the absolute tolerance. NOTE: by default, truncation is performed with respect to machine precision (DEFAULT_ATOL).

Parameters:
  • r (int, optional) – Rank, by default None

  • rtol (float, optional) – Relative tolerance, by default None. Uses the largest singular value as reference.

  • atol (float, optional) – Absolute tolerance, by default DEFAULT_ATOL.

  • inplace (bool, optional) – If True, modify the matrix inplace, by default False

Return type:

SVD

classmethod truncated_svd(mat, r=None, rtol=None, atol=np.float64(2.220446049250313e-14), **extra_data)[source]

Compute truncated SVD with automatic rank selection.

First converts the input to SVD format (if needed), then truncates small singular values based on rank or tolerance criteria.

Parameters:
  • mat (SVD, LowRankMatrix, or ndarray) – Input matrix to decompose and truncate.

  • r (int, optional) – Target rank. If specified, keep only the r largest singular values. Takes priority over tolerances. Default is None.

  • rtol (float, optional) – Relative tolerance. Singular values < rtol * σ_max are removed. Takes priority over atol. Default is DEFAULT_RTOL (None).

  • atol (float, optional) – Absolute tolerance. Singular values < atol are removed. Default is DEFAULT_ATOL (machine precision).

  • **extra_data (dict, optional) – Additional metadata to store.

Returns:

Truncated SVD with reduced rank.

Return type:

SVD

Notes

Truncation priority (from highest to lowest):

  1. r (explicit rank)

  2. rtol (relative tolerance)

  3. atol (absolute tolerance)

Examples

Fixed rank truncation:

>>> A = np.random.randn(100, 80)
>>> X_r10 = SVD.truncated_svd(A, r=10)
>>> print(X_r10.rank)  # 10

Relative tolerance (adaptive):

>>> # Keep singular values > 1e-6 * max(singular values)
>>> X_rel = SVD.truncated_svd(A, rtol=1e-6)
>>> print(X_rel.rank)  # Depends on spectrum

Absolute tolerance:

>>> # Keep singular values > 1e-10
>>> X_abs = SVD.truncated_svd(A, atol=1e-10)
Parameters:
  • U (ndarray)

  • s (ndarray)

  • V (ndarray)

Modules

low_rank_matrix

Generic low-rank matrix base class and utilities.

qr

QR low-rank matrix class and functions.

quasi_svd

QuasiSVD low-rank matrix class and functions.

svd

SVD low-rank matrix class and functions.