low_rank_toolbox.matrices.SVD
- class low_rank_toolbox.matrices.SVD(U, s, V, **extra_data)[source]
Bases: QuasiSVD

Singular Value Decomposition (SVD) for low-rank matrices.
The gold standard for low-rank matrix representation, with guaranteed diagonal structure and non-negative singular values. Inherits from QuasiSVD but enforces a diagonal middle matrix for optimal storage.
Mathematical Representation
X = U @ diag(s) @ V.H = U @ S @ V.H
where:
- U ∈ ℝ^(m×r) has orthonormal columns (U^H @ U = I_r)
- s ∈ ℝ^r contains non-negative singular values in descending order (s[0] ≥ s[1] ≥ … ≥ s[r-1] ≥ 0)
- V ∈ ℝ^(n×r) has orthonormal columns (V^H @ V = I_r)
- S = diag(s) is an r×r matrix with singular values on the diagonal
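The factored form above can be checked with a few lines of plain NumPy (illustration only, no low_rank_toolbox calls; all names here are local to the snippet):

```python
import numpy as np

# Build the factors of X = U @ diag(s) @ V.H and verify the reconstruction
# against an explicit dense product.
rng = np.random.default_rng(0)
m, n, r = 6, 5, 3
U, _ = np.linalg.qr(rng.standard_normal((m, r)))   # orthonormal columns
V, _ = np.linalg.qr(rng.standard_normal((n, r)))   # orthonormal columns
s = np.array([3.0, 2.0, 1.0])                      # descending, non-negative

X = U @ np.diag(s) @ V.conj().T                    # the represented matrix
# (U * s) scales column i of U by s[i]; this avoids forming diag(s).
X_fast = (U * s) @ V.conj().T
ok = np.allclose(X, X_fast)
```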
Key Differences from QuasiSVD
Diagonal structure: S is always diagonal (guaranteed by construction)
Singular values: Non-negative by definition, stored in descending order
Efficiency: Optimized operations exploit diagonal structure
Operations: Addition, multiplication, and Hadamard preserve SVD format when possible
Storage Efficiency
Full matrix: O(mn) storage
SVD: O(mr + r² + nr) storage (same as QuasiSVD)
Memory savings when r << min(m, n)
For 1000×1000 matrix with rank 10: - Dense: 1,000,000 elements - SVD: 20,100 elements (50x compression) - QuasiSVD: 20,100 elements
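The element counts above follow from simple arithmetic (a sketch counting stored float64 entries only, ignoring Python object overhead):

```python
# Storage arithmetic for the 1000 x 1000, rank-10 example above.
m = n = 1000
r = 10
dense = m * n                       # 1_000_000 elements
svd_elems = m * r + r * r + n * r   # U + S + V = 20_100 elements
ratio = dense / svd_elems           # roughly 50x compression
```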
Important Notes
- CRITICAL: U and V MUST have orthonormal columns.
This is NOT verified at initialization for performance. Invalid inputs produce incorrect results without warning.
Use .is_orthogonal() to verify orthonormality if needed
Singular values are stored internally as a diagonal matrix S (2D array)
The property s extracts the diagonal as a 1D vector when needed
After operations, may return QuasiSVD or LowRankMatrix (orthogonality not preserved)
For non-diagonal S, use QuasiSVD instead
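Since orthonormality is not checked at construction, a NumPy-only check in the spirit of .is_orthogonal() can catch invalid factors early (the helper name below is hypothetical, not part of the library API):

```python
import numpy as np

def columns_orthonormal(M, tol=1e-10):
    """Hypothetical helper: check M.conj().T @ M is (close to) the identity."""
    r = M.shape[1]
    return np.allclose(M.conj().T @ M, np.eye(r), atol=tol)

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((50, 5)))  # orthonormal by construction
bad = rng.standard_normal((50, 5))                 # almost surely not orthonormal
good_ok = columns_orthonormal(Q)
bad_ok = columns_orthonormal(bad)
```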
- U
Left singular vectors (orthonormal columns)
- Type:
ndarray, shape (m, r)
- s
Singular values (1D vector, non-negative, descending order)
- Type:
ndarray, shape (r,)
- S
Diagonal matrix of singular values (stored internally as a 2D array)
- Type:
ndarray, shape (r, r)
- V
Right singular vectors (orthonormal columns)
- Type:
ndarray, shape (n, r)
- Vh
Hermitian conjugate of V (V.T.conj())
- Type:
ndarray, shape (r, n)
- Vt
Transpose of V (without conjugate)
- Type:
ndarray, shape (r, n)
- Ut
Transpose of U (without conjugate)
- Type:
ndarray, shape (r, m)
- Uh
Hermitian conjugate of U (U.T.conj())
- Type:
ndarray, shape (r, m)
Methods Overview
- Core Operations
__add__ : Addition preserving SVD structure when possible
__sub__ : Subtraction preserving SVD structure when possible
__mul__ : Scalar or Hadamard (element-wise) multiplication preserving SVD structure when possible
dot : Matrix-vector or matrix-matrix multiplication preserving SVD structure when possible
hadamard : Element-wise multiplication with another matrix preserving SVD structure when possible
- Truncation
truncate() : Remove small singular values
truncate_perpendicular() : Remove the largest singular values, keeping the smallest (complement of truncate())
- Class Methods
from_matrix() : Convert any matrix to SVD
from_quasiSVD() : Convert QuasiSVD to SVD
from_low_rank() : Convert LowRankMatrix to SVD
reduced_svd() : Compute reduced/economic SVD
truncated_svd() : Compute truncated SVD with specified rank or (relative/absolute) tolerance
generate_random() : Generate random SVD with specified singular values
- Optimized Operations
norm() : Compute norms efficiently using singular values
norm_squared() : Squared Frobenius norm from singular values
trace() : Efficient trace computation
diag() : Extract diagonal efficiently
Configuration
Default behavior is controlled by:
- DEFAULT_ATOL (machine precision): Absolute tolerance for truncation
- DEFAULT_RTOL (None): Relative tolerance for truncation
Examples
Creating an SVD from scratch:
>>> import numpy as np
>>> from low_rank_toolbox.matrices import SVD
>>>
>>> # Create orthonormal matrices
>>> m, n, r = 100, 80, 10
>>> U, _ = np.linalg.qr(np.random.randn(m, r))
>>> V, _ = np.linalg.qr(np.random.randn(n, r))
>>> s = np.logspace(0, -2, r)  # Singular values (1D array)
>>>
>>> # Create SVD
>>> X = SVD(U, s, V)
>>> print(X.shape)    # (100, 80)
>>> print(X.rank)     # 10
>>> print(X.s.shape)  # (10,) - 1D vector!
>>> print(X.S.shape)  # (10, 10) - 2D diagonal matrix (property)
Computing SVD from a matrix:
>>> A = np.random.randn(100, 80)
>>> X_reduced = SVD.reduced_svd(A)            # Reduced SVD
>>> X_truncated = SVD.truncated_svd(A, r=10)  # Keep top 10 singular values
>>> X_auto = SVD.truncated_svd(A, rtol=1e-6)  # Adaptive truncation
Efficient operations:
>>> # Operations exploit diagonal structure and orthogonality
>>> norm_fro = X.norm('fro')         # sqrt(sum(s²)) - O(r)
>>> norm_squared = X.norm_squared()  # sum(s²) - O(r)
>>> norm_2 = X.norm(2)               # max(s) - instant!
>>> norm_nuc = X.norm('nuc')         # sum(s) - O(r) instead of O(mnr)
>>>
>>> # Addition preserves SVD structure
>>> Y = X + X    # Returns SVD
>>> Z = X @ X.T  # Matrix multiplication, returns SVD
>>> W = X - X    # Subtraction, returns SVD of rank 2*rank(X). Call .truncate() to reduce rank.
Truncation:
>>> # Remove small singular values
>>> X_trunc = X.truncate(r=5)         # Keep top 5
>>> X_trunc = X.truncate(rtol=1e-10)  # Relative tolerance (scaled by the largest singular value)
>>> X_trunc = X.truncate(atol=1e-12)  # Absolute tolerance (raw threshold)
>>>
>>> # Get perpendicular component (residual)
>>> X_perp = X.truncate_perpendicular(r=5)  # Complement: keep the remaining smallest singular values
Random matrices with controlled spectrum:
>>> # Generate test matrices
>>> s_decay = np.logspace(0, -10, 20)  # Exponential decay
>>> X_test = SVD.generate_random((100, 100), s_decay, is_symmetric=True)
>>> # Perfect for testing algorithms!
Conversion between formats:
>>> from low_rank_toolbox.matrices import QuasiSVD, LowRankMatrix
>>>
>>> # From QuasiSVD (non-diagonal S)
>>> X_quasi = QuasiSVD(U, S_full, V)    # S_full is a general matrix
>>> X_svd = SVD.from_quasiSVD(X_quasi)  # Diagonalize S
>>>
>>> # From generic low-rank
>>> X_lr = LowRankMatrix(A, B, C)
>>> X_svd = SVD.from_low_rank(X_lr)     # Compute SVD
Memory efficiency:
>>> X.compression_ratio()  # < 1.0 means memory savings
>>> X.memory_usage('MB')   # Actual memory used
>>> X.is_memory_efficient  # True if the factorization saves memory
See also
QuasiSVD : Generalized SVD with a non-diagonal middle matrix
LowRankMatrix : Base class for low-rank matrix representations
QR : QR factorization format
Notes
This class is designed for numerical linear algebra applications
The middle matrix S is always diagonal; for a non-diagonal middle matrix, use QuasiSVD
The property s extracts the diagonal of the stored matrix S as a 1D vector when needed
Operations exploit orthogonality and diagonal structure for efficiency
After general operations with non-SVD matrices (e.g., X + Y), may return QuasiSVD or LowRankMatrix
Use .truncate() to ensure (numerical) invertibility or to reduce rank
- __init__(U, s, V, **extra_data)[source]
Create a low-rank matrix stored by its SVD: X = U @ diag(s) @ V.T
This is the primary constructor for the SVD class. Singular values may be passed as a 1D vector; they are stored internally as a diagonal matrix S, and the s property extracts them as a 1D vector.
- Parameters:
U (ndarray) – Left singular vectors, shape (m, r). ASSUMED to have orthonormal columns: U.T @ U = I_r. Not verified at initialization for performance.
s (ndarray) – Singular values. Accepts two formats:
- 1D array (shape (r,)): Vector of singular values (RECOMMENDED)
- 2D array (shape (r, r) or (r, k)): Diagonal or nearly-diagonal matrix
V (ndarray) – Right singular vectors, shape (n, k). ASSUMED to have orthonormal columns: V.T @ V = I_k. Not verified at initialization for performance. NOTE: Provide V, not V.T or V.H (conjugate transpose).
**extra_data (dict, optional) – Additional metadata to store with the matrix (e.g., poles, residues). Internal flags:
- '_skip_diagonal_check': Skip diagonal verification (used internally)
- '_skip_memory_check': Skip memory efficiency check
- Raises:
- If input dimensions are incompatible for multiplication
- If the number of singular values doesn't match min(r, k)
- If U or V are not 2D arrays
- If U, V, or s are not numpy arrays
- If s is not 1D or 2D
Notes
Storage: Singular values are stored internally as a diagonal matrix S (2D array). The input parameter s can be provided in either 1D or 2D format for convenience.
Input format flexibility:
- If s is 1D: converted to a diagonal matrix S for storage
- If s is 2D diagonal: stored directly as S
- If s is 2D rectangular (r ≠ k): stored as a rectangular matrix with filled diagonal
Orthogonality assumption: U and V are ASSUMED to have orthonormal columns. This is NOT verified at initialization for performance. Use .is_orthogonal() to verify if needed.
Parent class: Calls QuasiSVD.__init__ with a flag to suppress the “use SVD class” warning (since we ARE the SVD class).
Examples
Recommended usage (1D singular values):
>>> import numpy as np
>>> from low_rank_toolbox.matrices import SVD
>>>
>>> m, n, r = 100, 80, 10
>>> U, _ = np.linalg.qr(np.random.randn(m, r))
>>> s = np.logspace(0, -2, r)  # 1D vector
>>> V, _ = np.linalg.qr(np.random.randn(n, r))
>>>
>>> X = SVD(U, s, V)
>>> print(X.s.shape)  # (10,) - extracted from diagonal of S
>>> print(X.S.shape)  # (10, 10) - stored 2D matrix
Alternative usage (2D diagonal matrix):
>>> S_diag = np.diag(s)    # 2D diagonal matrix
>>> X = SVD(U, S_diag, V)  # Also works
>>> print(X.s.shape)       # (10,) - extracted from diagonal of S
From numpy.linalg.svd:
>>> A = np.random.randn(100, 80)
>>> U, s, Vt = np.linalg.svd(A, full_matrices=False)
>>> X = SVD(U, s, Vt.T)  # Note: Vt.T to get V!
Methods
__init__(U, s, V, **extra_data) : Create a low-rank matrix stored by its SVD: X = U @ diag(s) @ V.T
adjoint() : Hermitian adjoint.
allclose(other[, rtol, atol]) : Check approximate equality with another matrix.
approximation_error(reference[, ord]) : Compute approximation error ||X - reference|| efficiently.
compress() : Compress the factorization to maximize memory efficiency.
compress_() : Compress the factorization in-place to maximize memory efficiency.
compression_ratio() : Compute compression ratio relative to dense storage.
cond_estimate() : Condition number of the QuasiSVD matrix, estimated from the singular values.
conj() : Complex conjugate of the QuasiSVD matrix.
copy() : Create a deep copy of the matrix.
create_data_alias(key) : Create a property that provides access to an entry in extra_data.
create_matrix_alias(index[, transpose, ...]) : Create a property that provides access to a factor matrix with optional transformations.
diag() : Extract the diagonal of the matrix efficiently.
dot(other[, side, dense_output]) : Matrix multiplication optimized for SVD x SVD.
dot_sparse(sparse_other[, side, dense_output]) : Efficient multiplication with a sparse matrix.
expm([inplace]) : Compute the matrix exponential exp(X) = e^X.
expm_multiply(A, h[, side, dense_output]) : Efficient action of a sparse matrix exponential: exp(h*A) @ self (left) or self @ exp(h*A) (right).
flatten() : Flatten the matrix to a 1D array.
from_dense(matrix) : Alias for from_matrix().
from_full(matrix) : Alias for from_matrix().
from_low_rank(mat, **extra_data) : Convert any LowRankMatrix to SVD format.
from_matrix(mat, **extra_data) : Convert any matrix type to SVD format (automatic dispatch).
from_qr(qr) : Convert QR format to QuasiSVD.
from_quasiSVD(mat) : Convert QuasiSVD to SVD by diagonalizing the middle matrix.
full() : Reconstruct the full dense matrix from its SVD representation.
full_svd(mat, **extra_data) : Compute a full SVD.
gather(indices) : Access specific matrix entries without forming the full matrix.
generate_random(shape, sing_vals[, seed, ...]) : Generate a random SVD with given singular values.
get_block(rows, cols) : Extract a block submatrix.
hadamard(other) : Faster version of the Hadamard product for SVDs.
is_orthogonal() : Check if U and V have orthonormal columns.
is_singular() : Check if the middle matrix S is numerically singular.
is_symmetric() : Check if the QuasiSVD matrix is symmetric.
is_well_conditioned([threshold]) : Check if the factorization is numerically well-conditioned.
load(filename) : Load a low-rank matrix from disk.
lstsq(b[, rtol, atol]) : Compute the least-squares solution to Xx ≈ b.
matmat(X) : Matrix-matrix multiplication.
matvec(v) : Matrix-vector product (public interface, backward compatibility).
memory_usage([unit]) : Report actual memory used by the factorization.
multi_add(matrices) : Addition of multiple QuasiSVD matrices.
multi_dot(matrices) : Matrix multiplication of several QuasiSVD matrices.
norm([ord]) : Calculate matrix norm, optimized for SVD representation.
norm_squared() : Compute the squared Frobenius norm efficiently.
numerical_health_check([verbose]) : Check the numerical health of the QuasiSVD representation.
plot_singular_value_decay([semilogy, show]) : Plot the singular value decay of the middle matrix S.
power(n) : Compute the matrix power X^n efficiently using repeated squaring.
project_onto_column_space(other[, dense_output]) : Projection of other onto the column space of self.
project_onto_interpolated_tangent_space([...]) : Oblique projection onto the tangent space at self using interpolation (DEIM-like methods).
project_onto_row_space(other[, dense_output]) : Projection of other onto the row space of self.
project_onto_tangent_space(other[, dense_output]) : Projection of other onto the tangent space at self.
pseudoinverse([rtol, atol]) : Compute the Moore-Penrose pseudoinverse X⁺ efficiently.
rank_one_update(u, v[, alpha]) : Efficient rank-1 update: X_new = X + alpha * u @ v.T
reduced_svd(mat, **extra_data) : Compute a reduced SVD of rank r.
reorthogonalize([method, inplace]) : Re-orthogonalize U and V if numerical drift has occurred.
rmatmat(X) : Adjoint matrix-matrix multiplication.
rmatvec(v) : Adjoint matrix-vector product (public interface, backward compatibility).
save(filename) : Save a low-rank matrix to disk efficiently.
scale_(scalar) : Scale the matrix in-place by a scalar.
sing_vals() : Return the singular values as a 1D array.
singular_values(X) : Extract singular values from any matrix type.
solve(b[, method]) : Solve the linear system Xx = b efficiently using SVD structure.
sqrtm([inplace]) : Compute the matrix square root X^{1/2} such that X^{1/2} @ X^{1/2} = X.
to_dense() : Convert to a dense matrix (alias for full).
to_full() : Convert to a dense matrix (alias for full).
to_qr() : Convert QuasiSVD to QR format.
to_sparse([format, threshold]) : Convert to a sparse format, zeroing small entries.
to_svd() : Convert QuasiSVD to SVD by computing the SVD of the middle matrix S.
todense() : Convert to a dense matrix (alias for full).
trace() : Compute the trace of the matrix efficiently.
transpose() : Transpose the matrix (returns QuasiSVD).
truncate([r, rtol, atol, inplace]) : Truncate the SVD.
truncate_perpendicular([r, rtol, atol, inplace]) : Perpendicular truncation of the SVD.
truncated_svd(mat[, r, rtol, atol]) : Compute a truncated SVD with automatic rank selection.
Attributes
H : Hermitian conjugate (conjugate transpose) of the SVD matrix.
K : Compute and return the product U @ S on demand.
L : Compute and return the product V @ S.T on demand.
T : Transpose of the SVD matrix, returning another SVD.
deepshape : Shape tuple including all intermediate dimensions of the factorization.
is_memory_efficient : Check if the low-rank representation uses less memory than dense storage.
length : Number of factor matrices in the factorization.
ndim : Number of dimensions (always 2 for matrices).
numerical_rank : Numerical rank of the QuasiSVD matrix.
rank : Rank of the low-rank factorization.
s : Return the singular values as a 1D array.
size : Total number of elements stored in the factorization.
svd_type : Classify the type of SVD representation based on S dimensions.
- property H: SVD
Hermitian conjugate (conjugate transpose) of the SVD matrix.
For X = U @ diag(s) @ V.H, the Hermitian conjugate is:
X.H = (U @ diag(s) @ V.H).H = V @ diag(s) @ U.H
This is equivalent to X.T.conj() or X.conj().T.
- Returns:
Hermitian conjugate in SVD format.
- Return type:
SVD
Notes
This is O(1) as it only swaps references, doesn’t copy data. For real matrices, X.H == X.T.
Examples
>>> # For a complex matrix
>>> A = np.random.randn(10, 8) + 1j * np.random.randn(10, 8)
>>> X = SVD.reduced_svd(A)
>>> X_H = X.H
>>> assert np.allclose(X_H.full(), A.T.conj())
- property T: SVD
Transpose of the SVD matrix, returning another SVD.
For X = U @ diag(s) @ V.H (where V.H is Hermitian conjugate of V), the transpose is:
X.T = (U @ diag(s) @ V.H).T = (V.H).T @ diag(s) @ U.T = V* @ diag(s) @ U.T
where V* denotes the element-wise conjugate of V.
Note: This is the transpose WITHOUT conjugation. For Hermitian conjugate (conjugate transpose), use X.H instead.
- Returns:
Transposed matrix in SVD format (not QuasiSVD).
- Return type:
SVD
Notes
This is O(1) as it only swaps references, doesn’t copy data. The result is guaranteed to be an SVD (diagonal structure preserved).
Examples
>>> X = SVD.generate_random((100, 80), np.ones(10))
>>> print(X.shape)    # (100, 80)
>>> print(X.T.shape)  # (80, 100)
>>> assert isinstance(X.T, SVD)  # True
- dot(other, side='right', dense_output=False)[source]
Matrix multiplication optimized for SVD x SVD.
Efficiently multiplies two SVD matrices while preserving the SVD structure. The resulting rank is min(rank(X), rank(Y)).
If side is ‘right’ or ‘usual’, compute self @ other. If side is ‘left’ or ‘opposite’, compute other @ self.
- Parameters:
other (SVD, LowRankMatrix, or ndarray) – Matrix to multiply with.
side (str, optional) – 'right'/'usual' or 'left'/'opposite', by default 'right'
dense_output (bool, optional) – If True, return dense ndarray. If False, return SVD/LowRankMatrix. Default is False.
- Returns:
Result of matrix multiplication.
- Return type:
SVD, LowRankMatrix, or ndarray
Notes
Algorithm for SVD @ SVD: Computes M = S1 @ V1.T @ U2 @ S2, then SVD of M. Computational cost: O(r^3) where r = min(r1, r2).
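The algorithm in the note above can be sketched in plain NumPy (an illustration, not the library code; all names are local to the snippet):

```python
import numpy as np

# X = U1 diag(s1) V1.H (m x k) times Y = U2 diag(s2) V2.H (k x n):
# only the small core M = diag(s1) @ V1.H @ U2 @ diag(s2) needs an SVD.
rng = np.random.default_rng(2)
m, k, n, r1, r2 = 8, 7, 6, 3, 2
U1, _ = np.linalg.qr(rng.standard_normal((m, r1)))
V1, _ = np.linalg.qr(rng.standard_normal((k, r1)))
U2, _ = np.linalg.qr(rng.standard_normal((k, r2)))
V2, _ = np.linalg.qr(rng.standard_normal((n, r2)))
s1 = np.array([3.0, 2.0, 1.0])
s2 = np.array([2.0, 0.5])

M = (s1[:, None] * (V1.conj().T @ U2)) * s2[None, :]  # r1 x r2 core
Um, sm, Vmh = np.linalg.svd(M, full_matrices=False)
U = U1 @ Um                  # orthonormal: product of orthonormal factors
V = V2 @ Vmh.conj().T
Z = (U * sm) @ V.conj().T    # result in SVD form, rank <= min(r1, r2)

# Reference: the dense product of the two reconstructed matrices.
Z_ref = (U1 * s1) @ V1.conj().T @ ((U2 * s2) @ V2.conj().T)
ok = np.allclose(Z, Z_ref)
```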
Examples
>>> X = SVD.generate_random((100, 80), np.ones(20))
>>> Y = SVD.generate_random((80, 60), np.ones(15))
>>> Z = X.dot(Y)
>>> print(Z.shape)  # (100, 60)
>>> print(Z.rank)   # min(20, 15) = 15
- expm(inplace=False)[source]
Compute the matrix exponential exp(X) = e^X.
- For SVD X = U @ diag(s) @ V.H, if X is Hermitian (U == V for real, U == V.conj() for complex), then:
exp(X) = U @ diag(exp(s)) @ U.H
where exp(s) is the element-wise exponential of singular values.
This is MUCH faster than matrix exponentiation for general matrices:
- SVD.expm (symmetric): O(r) for computing exp(s)
- General expm: O(n³) using Padé approximation
- Parameters:
inplace (bool, optional) – If True, modify the current object in-place. Default is False.
- Returns:
Matrix exponential in SVD format.
- Return type:
SVD
- Raises:
ValueError – If matrix is not square.
NotImplementedError – If matrix is not symmetric (U != V).
Notes
This method currently only supports symmetric matrices where U == V. For general matrices, the eigenvalue decomposition would be needed.
Examples
>>> X = SVD.generate_random((100, 100), np.array([1.0, 0.5, 0.1]), is_symmetric=True)
>>> X_exp = X.expm()
>>> print(X_exp.s)  # [e^1.0, e^0.5, e^0.1] ≈ [2.718, 1.649, 1.105]
See also
sqrtm : Matrix square root
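The symmetric formula above can be checked with plain NumPy (illustration only, not the library implementation). One assumption added here: U is square (full rank), so exp(X) = U @ diag(exp(s)) @ U.T holds exactly; the check uses a truncated Taylor series as an independent reference.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)))  # square orthogonal (full rank)
s = np.array([1.0, 0.5, 0.25, 0.1])
X = (U * s) @ U.T                  # symmetric with spectrum s

E = (U * np.exp(s)) @ U.T          # exponential via O(n) work on the spectrum

# Independent reference: truncated Taylor series sum_k X^k / k!
ref = np.eye(n)
term = np.eye(n)
for k in range(1, 25):
    term = term @ X / k
    ref = ref + term
ok = np.allclose(E, ref)
```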
- classmethod from_low_rank(mat, **extra_data)[source]
Convert any LowRankMatrix to SVD format.
This performs QR decompositions on the outer factors and SVD on the middle product to obtain an efficient SVD representation.
- Parameters:
mat (LowRankMatrix) – Low-rank matrix in any format (e.g., product of multiple matrices).
**extra_data (dict, optional) – Additional metadata to store with the result.
- Returns:
SVD representation of the matrix.
- Return type:
SVD
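The QR-plus-SVD recompression described above can be sketched in plain NumPy for a hypothetical two-factor input X = A @ B.T (an illustration of the technique, not the library code):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 9, 7, 3
A = rng.standard_normal((m, r))
B = rng.standard_normal((n, r))

Qa, Ra = np.linalg.qr(A)        # A = Qa @ Ra, Qa orthonormal
Qb, Rb = np.linalg.qr(B)        # B = Qb @ Rb, Qb orthonormal
core = Ra @ Rb.T                # small r x r middle product
Uc, s, Vch = np.linalg.svd(core)
U = Qa @ Uc                     # left singular vectors of A @ B.T
V = Qb @ Vch.conj().T           # right singular vectors
ok = np.allclose((U * s) @ V.conj().T, A @ B.T)
```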
- classmethod from_matrix(mat, **extra_data)[source]
Convert any matrix type to SVD format (automatic dispatch).
This is the universal converter that automatically selects the best method based on input type. Use this when you don’t know the input type.
- Parameters:
mat (ndarray, LowRankMatrix, QuasiSVD, or SVD) – Matrix to convert to SVD format.
**extra_data (dict, optional) – Additional metadata to store with the result.
- Returns:
SVD representation of the matrix.
- Return type:
SVD
- classmethod from_quasiSVD(mat)[source]
Convert QuasiSVD to SVD by diagonalizing the middle matrix.
This computes the SVD of the non-diagonal middle matrix S to obtain a true SVD representation with diagonal singular values.
- Parameters:
mat (QuasiSVD) – QuasiSVD matrix to convert.
- Returns:
SVD representation with diagonal singular values.
- Return type:
SVD
Notes
Computational cost: O(r³) where r is the rank of S. This is equivalent to calling mat.to_svd().
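The conversion described above can be sketched in plain NumPy (illustration, not the library code): if X = U @ S @ V.H with a general r×r middle matrix S, then svd(S) = Us @ diag(s) @ Vsh yields a true SVD at O(r³) cost in the rank only.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, r = 8, 6, 3
U, _ = np.linalg.qr(rng.standard_normal((m, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = rng.standard_normal((r, r))      # non-diagonal middle matrix

Us, s, Vsh = np.linalg.svd(S)        # diagonalize the small core
U_new = U @ Us                       # still orthonormal columns
V_new = V @ Vsh.conj().T
ok = np.allclose((U_new * s) @ V_new.conj().T, U @ S @ V.conj().T)
```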
- full()[source]
Reconstruct the full dense matrix from its SVD representation.
Computes X = (U * s) @ V.H to obtain the full matrix.
- Returns:
The full dense matrix represented by the SVD.
- Return type:
ndarray, shape (m, n)
Notes
This operation is O(mnr) and should be avoided for large matrices. Use low-rank operations whenever possible.
Examples
>>> A = np.random.randn(100, 80)
>>> X = SVD.reduced_svd(A)
>>> A_reconstructed = X.full()
>>> assert np.allclose(A, A_reconstructed)  # True within numerical precision
- classmethod generate_random(shape, sing_vals, seed=1234, is_symmetric=False, **extra_data)[source]
Generate a random SVD with given singular values.
- hadamard(other)[source]
Faster version of the Hadamard product for SVDs.
- Parameters:
other (SVD | LowRankMatrix | ndarray)
- Return type:
SVD | LowRankMatrix | ndarray
- lstsq(b, rtol=None, atol=np.float64(2.220446049250313e-14))[source]
Compute the least-squares solution to Xx ≈ b.
Minimizes ||Xx - b||₂ using the pseudoinverse: x = X⁺ @ b This is optimized for SVD with diagonal structure.
- Parameters:
b (ndarray) – Right-hand side vector or matrix. Shape (m,) or (m, k).
rtol (float, optional) – Relative tolerance for the pseudoinverse cutoff. Default is None.
atol (float, optional) – Absolute tolerance for the pseudoinverse cutoff. Default is DEFAULT_ATOL.
- Returns:
Least-squares solution x. Shape (n,) or (n, k).
- Return type:
ndarray
Notes
This is faster than QuasiSVD.lstsq() due to diagonal S: - SVD.lstsq: O(mr + rk) for computing x - QuasiSVD.lstsq: O(r³ + mr + rk) due to SVD(S)
Examples
>>> X = SVD.generate_random((100, 80), np.logspace(0, -10, 20))
>>> b = np.random.randn(100)
>>> x = X.lstsq(b)
>>> # x minimizes ||X @ x - b||
>>> residual = np.linalg.norm(X @ x - b)
See also
pseudoinverse : Compute the pseudoinverse
solve : Solve a linear system (for full-rank matrices)
- norm(ord='fro')[source]
Calculate matrix norm, optimized for SVD representation.
For orthogonal U and V, common norms are computed directly from singular values without forming the full matrix. This is MUCH faster: O(r) vs O(mnr).
- Parameters:
ord (str or int, optional) – Order of the norm. Default is 'fro' (Frobenius). Optimized norms (computed from singular values):
- 'fro': Frobenius norm = ||s||₂
- 2: Spectral norm (largest singular value) = max(s)
- 'nuc': Nuclear norm (sum of singular values) = sum(s)
Other norms fall back to full matrix computation.
- Returns:
The requested norm of the matrix.
- Return type:
float
Notes
Result caching: Norms are cached after first computation for efficiency. Each norm type is cached separately.
Orthogonality check: IMPORTANT: This method assumes U and V are orthogonal for efficient computation. If they are not, the computed norm is incorrect.
- pseudoinverse(rtol=None, atol=np.float64(2.220446049250313e-14))[source]
Compute the Moore-Penrose pseudoinverse X⁺ efficiently.
- For SVD X = U @ diag(s) @ V.H, the pseudoinverse is:
X⁺ = V @ diag(s⁺) @ U.H
where s⁺[i] = 1/s[i] if s[i] > threshold, else 0.
This is more efficient than QuasiSVD.pseudoinverse() since S is already diagonal. Small singular values (< max(rtol*σ_max, atol)) are treated as zero.
- Parameters:
rtol (float, optional) – Relative tolerance: singular values below rtol * σ_max are treated as zero. Default is None.
atol (float, optional) – Absolute tolerance: singular values below atol are treated as zero. Default is DEFAULT_ATOL.
- Returns:
Pseudoinverse in SVD format (guaranteed diagonal structure).
- Return type:
SVD
Notes
Computational cost: O(r) for computing s⁺ (vs O(r³) for QuasiSVD). The threshold is: max(rtol * max(s), atol)
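The thresholded reciprocal described in the notes can be sketched in plain NumPy (the helper name below is hypothetical, not part of the library API):

```python
import numpy as np

def reciprocal_with_cutoff(s, rtol=None, atol=2.220446049250313e-14):
    """Hypothetical helper: s_pinv[i] = 1/s[i] if s[i] > max(rtol*s.max(), atol), else 0."""
    s = np.asarray(s, dtype=float)
    cutoff = atol if rtol is None else max(rtol * s.max(), atol)
    s_pinv = np.zeros_like(s)
    keep = s > cutoff
    s_pinv[keep] = 1.0 / s[keep]
    return s_pinv

s = np.array([1.0, 1e-3, 1e-16])
sp = reciprocal_with_cutoff(s)   # the 1e-16 entry is treated as zero
```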
Examples
>>> X = SVD.generate_random((100, 80), np.logspace(0, -5, 20))
>>> X_pinv = X.pseudoinverse()
>>> # Check: X @ X_pinv @ X ≈ X
>>> reconstruction = X @ X_pinv @ X
>>> np.allclose(X.full(), reconstruction.full())
True
- property s: ndarray
Return the singular values as a 1D array.
Extracts the diagonal from the stored diagonal matrix S.
- Returns:
Singular values (diagonal of S). Returns a copy for writability.
- Return type:
ndarray, shape (r,)
Notes
This extracts the diagonal from the 2D matrix S stored in _matrices[1]. For SVD, S is guaranteed to be diagonal. Returns a copy to allow modification.
- sing_vals()[source]
Return the singular values as a 1D array.
This is an alias for the s property, provided for API consistency with QuasiSVD (which computes singular values on demand).
- Returns:
Singular values in descending order (non-negative).
- Return type:
ndarray, shape (r,)
Notes
For SVD, this is O(1) as it extracts the diagonal from S. For QuasiSVD, this is O(r³) as it computes svd(S).
Examples
>>> X = SVD.generate_random((100, 80), np.logspace(0, -5, 10))
>>> s = X.sing_vals()
>>> print(s)  # 10 values decaying geometrically from 1.0 to 1e-5
>>> assert np.array_equal(s, X.s)  # True (same values)
- classmethod singular_values(X)[source]
Extract singular values from any matrix type.
- Return type:
ndarray
- Parameters:
X (SVD | LowRankMatrix | ndarray)
- solve(b, method='direct')[source]
Solve the linear system Xx = b efficiently using SVD structure.
For square full-rank matrices, solves Xx = b directly. For rectangular or rank-deficient matrices, computes the least-squares solution.
- Parameters:
b (ndarray) – Right-hand side vector or matrix. Shape (m,) or (m, k).
method (str, optional) – Solution method:
- 'direct': Use the SVD factorization (default, very fast: O(mr + rk)): x = V @ diag(1/s) @ U.T @ b
- 'lstsq': Use the pseudoinverse (handles rank deficiency)
- Returns:
Solution x. Shape (n,) or (n, k).
- Return type:
ndarray
- Raises:
ValueError – If dimensions are incompatible.
LinAlgError – If matrix is singular and method=’direct’.
Notes
The ‘direct’ method is faster than QuasiSVD.solve() since S is diagonal: - SVD.solve: O(mr + rk) where k is the number of RHS vectors - QuasiSVD.solve: O(mr² + r²k) due to S inversion
For rank-deficient matrices, use method=’lstsq’ or call lstsq() directly.
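The 'direct' path described above can be sketched in plain NumPy for a square full-rank case (illustration only, not the library code): with diagonal S there is no r×r solve, just an element-wise division by s.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
U, _ = np.linalg.qr(rng.standard_normal((n, n)))  # square orthogonal
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.linspace(2.0, 1.0, n)                      # well away from zero
X = (U * s) @ V.T
b = rng.standard_normal(n)

# x = V @ diag(1/s) @ U.T @ b, computed without forming diag(1/s).
x = V @ ((U.T @ b) / s)
ok = np.allclose(X @ x, b)
```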
Examples
>>> X = SVD.generate_random((100, 100), np.ones(100))  # Full rank
>>> b = np.random.randn(100)
>>> x = X.solve(b)
>>> # Check: X @ x ≈ b
>>> np.allclose(X @ x, b)
True

>>> # For rank-deficient systems, the residual may be non-zero:
>>> X_deficient = SVD.generate_random((100, 100), np.ones(20))  # Rank 20
>>> x_deficient = X_deficient.solve(b, method='lstsq')
See also
lstsq : Least-squares solution (handles rank deficiency)
pseudoinverse : Compute the Moore-Penrose pseudoinverse
- sqrtm(inplace=False)[source]
Compute the matrix square root X^{1/2} such that X^{1/2} @ X^{1/2} = X.
For SVD X = U @ diag(s) @ V.H, the square root is:
X^{1/2} = U @ diag(√s) @ V.H
where √s is the element-wise square root of the singular values.
This is MUCH simpler and faster than QuasiSVD.sqrtm() since S is diagonal.
- Parameters:
inplace (bool, optional) – If True, modify the current object in-place. Default is False.
- Returns:
Matrix square root in SVD format.
- Return type:
SVD
Notes
Computational cost: O(r) for computing √s (vs O(r³) for QuasiSVD). For matrices with negative eigenvalues, the square root may be complex.
Examples
>>> X = SVD.generate_random((100, 100), np.array([4.0, 9.0, 16.0]), is_symmetric=True)
>>> X_sqrt = X.sqrtm()
>>> # Check: X_sqrt @ X_sqrt ≈ X
>>> reconstruction = X_sqrt @ X_sqrt
>>> np.allclose(X.full(), reconstruction.full())
True
See also
expm : Matrix exponential
- truncate(r=None, rtol=None, atol=np.float64(2.220446049250313e-14), inplace=False)[source]
Truncate the SVD. The rank is prioritized over the tolerance. The relative tolerance is prioritized over absolute tolerance. NOTE: By default, the truncation is done with respect to the machine precision (DEFAULT_ATOL).
- Parameters:
r (int, optional) – Target rank. Takes priority over tolerances. Default is None.
rtol (float, optional) – Relative tolerance. Takes priority over atol. Default is DEFAULT_RTOL (None).
atol (float, optional) – Absolute tolerance. Default is DEFAULT_ATOL (machine precision).
inplace (bool, optional) – If True, truncate in-place. Default is False.
- Return type:
SVD
- truncate_perpendicular(r=None, rtol=None, atol=np.float64(2.220446049250313e-14), inplace=False)[source]
Perpendicular truncation of the SVD: keeps the smallest singular values (the complement of truncate).
Rank is prioritized over tolerance. Relative tolerance is prioritized over absolute tolerance. NOTE: By default, the truncation is done with respect to the machine precision (DEFAULT_ATOL).
- Parameters:
r (int, optional) – Target rank. Takes priority over tolerances. Default is None.
rtol (float, optional) – Relative tolerance. Takes priority over atol. Default is DEFAULT_RTOL (None).
atol (float, optional) – Absolute tolerance. Default is DEFAULT_ATOL (machine precision).
inplace (bool, optional) – If True, truncate in-place. Default is False.
- Return type:
SVD
- classmethod truncated_svd(mat, r=None, rtol=None, atol=np.float64(2.220446049250313e-14), **extra_data)[source]
Compute truncated SVD with automatic rank selection.
First converts the input to SVD format (if needed), then truncates small singular values based on rank or tolerance criteria.
- Parameters:
mat (SVD, LowRankMatrix, or ndarray) – Input matrix to decompose and truncate.
r (int, optional) – Target rank. If specified, keep only the r largest singular values. Takes priority over tolerances. Default is None.
rtol (float, optional) – Relative tolerance. Singular values < rtol * σ_max are removed. Takes priority over atol. Default is DEFAULT_RTOL (None).
atol (float, optional) – Absolute tolerance. Singular values < atol are removed. Default is DEFAULT_ATOL (machine precision).
**extra_data (dict, optional) – Additional metadata to store.
- Returns:
Truncated SVD with reduced rank.
- Return type:
SVD
Notes
Truncation priority (from highest to lowest):
1. r (explicit rank)
2. rtol (relative tolerance)
3. atol (absolute tolerance)
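The priority rule above can be sketched in plain NumPy, assuming singular values sorted in descending order (select_rank is a hypothetical helper, not the library API):

```python
import numpy as np

def select_rank(s, r=None, rtol=None, atol=2.220446049250313e-14):
    """Hypothetical helper: rank selection with priority r > rtol > atol."""
    s = np.asarray(s, dtype=float)
    if r is not None:                      # explicit rank wins
        return min(r, s.size)
    cutoff = rtol * s.max() if rtol is not None else atol
    return int(np.count_nonzero(s > cutoff))

s = np.logspace(0, -8, 9)                  # 1, 1e-1, ..., 1e-8
```

For this spectrum, r=3 keeps 3 values, rtol=3e-4 keeps the 4 values above 3e-4 * max(s), and the default atol keeps all 9.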
Examples
Fixed rank truncation:
>>> A = np.random.randn(100, 80)
>>> X_r10 = SVD.truncated_svd(A, r=10)
>>> print(X_r10.rank)  # 10
Relative tolerance (adaptive):
>>> # Keep singular values > 1e-6 * max(singular values)
>>> X_rel = SVD.truncated_svd(A, rtol=1e-6)
>>> print(X_rel.rank)  # Depends on spectrum
Absolute tolerance:
>>> # Keep singular values > 1e-10
>>> X_abs = SVD.truncated_svd(A, atol=1e-10)