low_rank_toolbox.matrices.QR
- class low_rank_toolbox.matrices.QR(Q, R, transposed=False, **extra_data)[source]
Bases: LowRankMatrix

QR decomposition for low-rank matrices.
Efficient storage and operations for matrices represented as the product of an orthogonal matrix Q and an upper triangular matrix R. Particularly well-suited for solving linear systems and least squares problems.
Mathematical Representation
Standard form: X = Q @ R
Transposed form: X = R.H @ Q.H

where:
- Q ∈ ℂ^(m×r) has orthonormal columns (Q.H @ Q = I_r)
- R ∈ ℂ^(r×n) is upper triangular
- r = rank ≤ min(m, n)
For real matrices, conjugate transpose (.H) is equivalent to regular transpose (.T).
Key Differences from SVD
Triangular structure: R is upper triangular (not diagonal)
Solve efficiency: O(r²) back substitution vs O(r³) for SVD
No singular values: The diagonal entries of R indicate column importance but are not sorted
Operations: Addition/subtraction preserve QR structure (unlike SVD which needs recomputation)
Memory: Same O(mr + rn) storage as SVD
Storage Efficiency
Full matrix: O(mn) storage
QR: O(mr + rn) storage
Memory savings when r << min(m, n)

For a 1000×800 matrix with rank 10:
- Dense: 800,000 elements
- QR: 18,000 elements (44x compression)
- SVD: 18,010 elements (similar)
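The storage arithmetic above is easy to check directly; this is a plain-Python sketch of the element counts (counts of stored scalars, independent of the library):

```python
# Element counts for the 1000 x 800, rank-10 example above
# (counts of stored scalars, not bytes).
m, n, r = 1000, 800, 10
dense = m * n                    # full matrix
qr_storage = m * r + r * n       # Q (m x r) + R (r x n)
svd_storage = m * r + r + r * n  # U (m x r) + s (r) + V.H (r x n)
print(dense, qr_storage, svd_storage)  # 800000 18000 18010
print(round(dense / qr_storage, 1))    # 44.4 (x compression)
```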
Important Notes
CRITICAL: Q MUST have orthonormal columns (Q.H @ Q = I_r). This is NOT verified at initialization for performance. Invalid inputs produce incorrect results without warning.
Use .is_orthogonal() to verify orthonormality if needed
Use .is_upper_triangular() to verify R structure
R diagonal elements |R[i,i]| serve as importance indicators (but not sorted)
After operations, may return LowRankMatrix (orthogonality not preserved)
Transposed mode enables efficient representation of A.H without recomputation
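Since orthonormality is never checked at construction, it helps to know what the invariant looks like. A minimal NumPy sketch of the check (pure NumPy, not the class's own `.is_orthogonal()`):

```python
import numpy as np

# Q from an economic QR factorization has orthonormal columns;
# an arbitrary rectangular matrix does not.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 80))
Q, _ = np.linalg.qr(A)                         # Q is 100 x 80
ok = np.allclose(Q.T @ Q, np.eye(80))          # the invariant QR assumes
bad = rng.standard_normal((100, 80))           # not orthonormal
not_ok = np.allclose(bad.T @ bad, np.eye(80))
print(ok, not_ok)  # True False
```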
- Q
Orthogonal matrix with orthonormal columns
- Type:
ndarray, shape (m, r)
- R
Upper triangular matrix
- Type:
ndarray, shape (r, n)
- Methods Overview
- Solving Linear Systems
solve(b) : Solve Xx = b via back/forward substitution
lstsq(b) : Least squares solution
pseudoinverse() : Moore-Penrose pseudoinverse
- Matrix Operations
__add__ : Addition (preserves QR for matching transposed modes)
__sub__ : Subtraction (preserves QR for matching transposed modes)
__mul__ : Scalar or Hadamard multiplication
dot : Matrix-vector/matrix-matrix multiplication
hadamard : Element-wise multiplication
- Validation
is_orthogonal() : Check Q orthonormality
is_upper_triangular() : Check R triangular structure
cond() : Condition number estimation
- Truncation
truncate() : Remove columns with small R diagonal elements
- Conversions
to_svd() : Convert to SVD format
from_svd() : Create from SVD
from_matrix() : Compute QR decomposition
from_low_rank() : QR of generic low-rank matrix
- Norms
norm() : Matrix norms (Frobenius optimized for QR)
- Class Methods
from_matrix(A) : Compute QR decomposition of matrix A
from_low_rank(LR) : Compute QR of low-rank matrix
from_svd(svd) : Convert SVD to QR (Q=U, R=S@V.H)
qr(A) : Alias for from_matrix
generate_random(shape) : Generate random QR for testing
Examples
Creating QR from a matrix:
>>> import numpy as np
>>> from low_rank_toolbox.matrices import QR
>>>
>>> A = np.random.randn(100, 80)
>>> X = QR.from_matrix(A)
>>> print(X.shape)  # (100, 80)
>>> print(X.rank)  # 80
>>> np.allclose(X.full(), A)  # True
Solving linear systems (faster than SVD):
>>> b = np.random.randn(100)
>>> # A is 100×80 (overdetermined), so use least squares
>>> x_ls = X.lstsq(b)
>>> residual = np.linalg.norm(A @ x_ls - b)
Efficient operations:
>>> # Addition preserves QR structure (same transposed mode)
>>> Y = QR.from_matrix(np.random.randn(100, 80))
>>> Z = X + Y  # Returns QR (may auto-truncate to min(m, n))
>>>
>>> # Frobenius norm computed from R only
>>> norm_fro = X.norm('fro')  # O(rn) vs O(mnr) for the full matrix
Transposed mode for efficient A.H representation:
>>> # Instead of computing QR(A.H), use the transposed flag
>>> X_t = QR.from_matrix(A, transposed=True)
>>> np.allclose(X_t.full(), A.T)  # True (for real A)
>>>
>>> # Conjugate transpose flips the mode
>>> X_h = X.H  # Returns transposed QR
>>> assert X_h._transposed != X._transposed
Validation and diagnostics:
>>> X.is_orthogonal()  # Verify Q orthonormality
>>> X.is_upper_triangular()  # Verify R structure
>>> cond_approx = X.cond(exact=False)  # Fast diagonal approximation
>>> cond_exact = X.cond(exact=True)  # Exact via SVD of R
Truncation for rank reduction:
>>> X_trunc = X.truncate(r=10)  # Keep first 10 columns
>>> X_trunc = X.truncate(atol=1e-10)  # Remove small R[i,i]
Conversion between formats:
>>> from low_rank_toolbox.matrices import SVD
>>> X_svd = X.to_svd()  # Convert to SVD (requires SVD of R)
>>> Y = QR.from_svd(X_svd)  # Convert back (Q=U, R=S@V.H)
Memory efficiency:
>>> X.compression_ratio()  # < 1.0 means memory savings
>>> X.memory_usage('MB')  # Actual memory used
See also
SVD : Singular Value Decomposition (diagonal middle matrix)
QuasiSVD : Generalized SVD with non-diagonal middle matrix
LowRankMatrix : Base class for low-rank representations
Notes
QR is particularly efficient for solving linear systems and least squares
For repeated solves with same A, QR is faster than LU or SVD
Column pivoting available via scipy.linalg.qr(pivoting=True) but not integrated into this class (see documentation for details)
Operations exploit orthogonality and triangular structure for efficiency
Condition number can be estimated quickly from R diagonal (O(r) vs O(r³))
Methods
__init__(Q, R[, transposed]) : Initialize a QR decomposition.
adjoint() : Hermitian adjoint.
allclose(other[, rtol, atol]) : Check approximate equality with another matrix.
approximation_error(reference[, ord]) : Compute approximation error ||X - reference|| efficiently.
compress() : Compress the factorization to maximize memory efficiency.
compress_() : Compress the factorization in-place to maximize memory efficiency.
compression_ratio() : Compute compression ratio relative to dense storage.
cond([p, exact]) : Compute condition number efficiently using R matrix.
cond_estimate([method, n_iter]) : Estimate condition number without computing full SVD.
copy() : Create a deep copy of the matrix.
create_data_alias(key) : Create a property that provides access to an entry in extra_data.
create_matrix_alias(index[, transpose, ...]) : Create a property that provides access to a factor matrix with optional transformations.
diag() : Extract diagonal without forming full matrix.
dot(other[, side, dense_output]) : Matrix and vector multiplication.
dot_sparse(sparse_other[, side, dense_output]) : Efficient multiplication with a sparse matrix.
expm_multiply(A, h[, side, dense_output]) : Efficient action of sparse matrix exponential; left: output = exp(h*A) @ self, right: output = self @ exp(h*A).
flatten() : Flatten the matrix to a 1D array.
from_dense(matrix) : Alias for from_matrix().
from_full(matrix) : Alias for from_matrix().
from_low_rank(low_rank[, transposed]) : Compute the QR decomposition from a low-rank representation.
from_matrix(matrix[, mode, transposed]) : Compute the QR decomposition of a matrix.
from_svd(svd[, transposed]) : Convert SVD to QR format.
full() : Form the full dense matrix by multiplying all factors in optimal order.
gather(indices) : Access specific matrix entries without forming the full matrix.
generate_random(shape[, transposed, seed]) : Generate a random QR decomposition.
get_block(rows, cols) : Extract block submatrix.
hadamard(other) : Element-wise (Hadamard) product with another matrix.
is_orthogonal([tol]) : Check if Q has orthonormal columns within a tolerance.
is_symmetric() : Check if the matrix is symmetric.
is_upper_triangular([tol]) : Check if R is upper triangular within a tolerance.
is_well_conditioned([threshold]) : Check if the factorization is numerically well-conditioned.
load(filename) : Load low-rank matrix from disk.
lstsq(b[, rcond]) : Least squares solution via QR decomposition.
matmat(X) : Matrix-matrix multiplication.
matvec(v) : Matrix-vector product (public interface, backward compatibility).
memory_usage([unit]) : Report actual memory used by the factorization.
multi_add(others) : Add multiple low-rank matrices efficiently.
multi_dot(others) : Matrix multiplication of a sequence of matrices.
norm([ord]) : Compute matrix norm with QR-specific optimizations.
norm_squared() : Compute squared Frobenius norm efficiently: ||X||²_F = tr(X^H X).
power(n) : Compute matrix power X^n efficiently using repeated squaring.
pseudoinverse([rcond]) : Compute Moore-Penrose pseudoinverse using QR factorization.
qr(matrix[, mode, transposed]) : Compute a QR decomposition (alias for from_matrix).
rmatmat(X) : Adjoint matrix-matrix multiplication.
rmatvec(v) : Adjoint matrix-vector product (public interface, backward compatibility).
save(filename) : Save low-rank matrix to disk efficiently.
scale_(scalar) : Scale the matrix in-place by a scalar.
solve(b[, method]) : Solve Xx = b using QR factorization.
to_dense() : Convert to dense matrix (alias for full).
to_full() : Convert to dense matrix (alias for full).
to_sparse([format, threshold]) : Convert to sparse format, zeroing small entries.
to_svd() : Convert QR to SVD format.
todense() : Convert to dense matrix (alias for full).
trace() : Compute trace efficiently using the cyclic property: tr(ABC) = tr(CAB) = tr(BCA).
transpose() : Transpose the matrix (alias for .T property).
truncate([r, rtol, atol, inplace]) : Truncate QR to lower rank by removing columns with small R diagonal elements.
Attributes
H : Return the conjugate transpose (Hermitian) of the QR matrix.
Q : Orthogonal matrix Q with orthonormal columns.
R : Upper triangular matrix R.
T : Return the transpose of the QR matrix.
conj : Return the complex conjugate of the QR matrix.
deepshape : Shape tuple including all intermediate dimensions of the factorization.
is_memory_efficient : Check if the low-rank representation uses less memory than dense storage.
length : Number of factor matrices in the factorization.
ndim : Number of dimensions (always 2 for matrices).
rank : Rank of the low-rank factorization.
size : Total number of elements stored in the factorization.
- property H: QR
Return the conjugate transpose (Hermitian) of the QR matrix.
- Returns:
Conjugate transpose with flipped transposed mode
- Return type:
Notes
If X = Q @ R, then X.H = R.H @ Q.H (transposed mode).
If X = R.H @ Q.H, then X.H = (R.H @ Q.H).H = Q @ R (standard mode).

The transposed flag is always flipped (unlike .conj which preserves it). For real matrices, .H is equivalent to .T.
Examples
>>> A = np.random.randn(100, 80) + 1j * np.random.randn(100, 80)
>>> X = QR.from_matrix(A)
>>> X_h = X.H
>>> np.allclose(X_h.full(), A.T.conj())  # True
>>> assert X_h._transposed != X._transposed  # Mode flipped
>>>
>>> # Double conjugate transpose returns to the original mode
>>> X_hh = X.H.H
>>> assert X_hh._transposed == X._transposed
>>> np.allclose(X_hh.full(), X.full())  # True
- property Q: ndarray
Orthogonal matrix Q with orthonormal columns.
- Returns:
The orthogonal factor with Q.H @ Q = I_r.
- Return type:
ndarray, shape (m, r)
Notes
In standard mode (transposed=False): directly returns stored Q. In transposed mode (transposed=True): reconstructs Q from internal storage.
ASSUMPTION: Orthonormality is assumed, not verified. Use .is_orthogonal() to validate if needed.
Examples
>>> X = QR.from_matrix(np.random.randn(100, 80))
>>> Q = X.Q
>>> print(Q.shape)  # (100, 80)
>>> print(np.allclose(Q.T @ Q, np.eye(80)))  # True
- property R: ndarray
Upper triangular matrix R.
- Returns:
The upper triangular factor where R[i, j] ≈ 0 for i > j.
- Return type:
ndarray, shape (r, n)
Notes
In standard mode (transposed=False): directly returns stored R. In transposed mode (transposed=True): reconstructs R from internal storage.
The diagonal elements |R[i,i]| indicate column importance (but unlike SVD, they are NOT sorted by magnitude).
ASSUMPTION: Upper triangular structure is assumed. Use .is_upper_triangular() to validate if needed.
Examples
>>> X = QR.from_matrix(np.random.randn(100, 80))
>>> R = X.R
>>> print(R.shape)  # (80, 80)
>>> print(np.allclose(np.tril(R, -1), 0))  # True (lower triangle is zero)
>>> print(np.diag(R))  # Diagonal elements (importance indicators)
- property T: QR
Return the transpose of the QR matrix.
For real matrices, .T is equivalent to .H. For complex matrices, .T transposes without conjugating.
- Returns:
Transposed QR matrix
- Return type:
Notes
If X = Q @ R, then X.T = R.T @ Q.T = conj(R).H @ conj(Q).H.
If X = R.H @ Q.H, then X.T = (R.H @ Q.H).T = conj(Q) @ conj(R).
To get transpose (not conjugate transpose), we conjugate Q and R and flip the mode.
Examples
>>> A = np.random.randn(100, 80)
>>> X = QR.from_matrix(A)
>>> X_t = X.T
>>> np.allclose(X_t.full(), A.T)  # True
>>> assert X_t._transposed != X._transposed  # Mode flipped
- __init__(Q, R, transposed=False, **extra_data)[source]
Initialize a QR decomposition.
- Parameters:
Q (ndarray) – Orthogonal matrix (m x r) with orthonormal columns. ASSUMED to have orthonormal columns: Q.H @ Q = I_r. Not verified at initialization for performance.
R (ndarray) – Upper triangular matrix (r x n)
transposed (bool, optional) – If True, represents X = R.H @ Q.H instead of X = Q @ R, by default False For real matrices, this is equivalent to X = R.T @ Q.T
**extra_data – Additional data to store with the matrix
Notes
For the standard form (transposed=False): X = Q @ R
For the transposed form (transposed=True): X = R.H @ Q.H
The transposed form is useful when you have computed QR(A.H) and want to represent A efficiently.
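The two forms can be verified in plain NumPy: with Q, R = qr(A), the standard product reconstructs A and the Hermitian-transposed product reconstructs A.H (an illustrative sketch, independent of the class itself):

```python
import numpy as np

# Standard form X = Q @ R versus transposed form X = R.H @ Q.H.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))
Q, R = np.linalg.qr(A)                  # economic QR of a complex matrix
ok_std = np.allclose(Q @ R, A)          # standard form reproduces A
ok_t = np.allclose(R.conj().T @ Q.conj().T, A.conj().T)  # transposed gives A.H
print(ok_std, ok_t)  # True True
```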
- cond(p=2, exact=False)[source]
Compute condition number efficiently using R matrix.
- For QR = Q @ R with orthogonal Q:
cond(Q @ R) = cond(R)
The condition number is preserved by orthogonal transformations.
- Parameters:
p (int or str, optional) – Norm type. Default is 2 (spectral norm).
- 2: Spectral condition number (ratio of max/min singular values)
- ‘fro’: Frobenius norm condition number
- 1, -1, inf, -inf: Other matrix norms
exact (bool, optional) – If True, compute exact condition number using numpy.linalg.cond. If False (default), use fast diagonal approximation for p=2. For other norms, always uses exact computation.
- Returns:
Condition number of the matrix
- Return type:
- Raises:
ValueError – If R has zeros on diagonal (singular matrix)
Notes
Fast approximation (p=2, exact=False): For upper triangular R, estimates the 2-norm condition number from diagonal elements: cond_2(R) ≈ max(|diag(R)|) / min(|diag(R)|)
This is a LOWER BOUND for the true condition number. The exact value requires SVD of R: cond_2(R) = σ_max(R) / σ_min(R)
The approximation is exact for diagonal matrices and reasonable for well-conditioned matrices, but can significantly underestimate the condition number for ill-conditioned non-diagonal triangular matrices.
Exact computation (exact=True or other norms): Uses numpy.linalg.cond(R, p) which computes the exact condition number. This requires SVD for p=2, which is O(r³).
Computational cost:
- p=2, exact=False: O(r), diagonal extraction only
- p=2, exact=True: O(r³), requires SVD of R
- Other norms: O(r²) to O(r³) depending on norm type
Caching: Results are cached for repeated queries with the same (norm, exact) pair.
Singularity: If any diagonal element is zero (within machine precision), the approximation returns np.inf to indicate singular matrix.
Why use approximation? For large ranks, computing exact condition number via SVD is expensive. The diagonal approximation provides a quick lower bound that’s sufficient for many applications (checking if matrix is well-conditioned).
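Both facts above — cond(Q @ R) = cond(R) for orthonormal Q, and the diagonal ratio as a lower bound — are easy to confirm in plain NumPy (a sketch, not the library's cond()):

```python
import numpy as np

# cond(Q @ R) equals cond(R), and the diagonal ratio underestimates it.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 10))
Q, R = np.linalg.qr(A)
exact = np.linalg.cond(R)              # exact 2-norm condition, O(r^3)
d = np.abs(np.diag(R))
approx = d.max() / d.min()             # O(r) diagonal estimate
preserved = np.isclose(np.linalg.cond(Q @ R), exact)
print(preserved)                       # True: orthogonal Q preserves cond
print(approx <= exact * (1 + 1e-9))    # True: the estimate is a lower bound
```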
Examples
>>> A = np.random.randn(100, 80)
>>> X = QR.from_matrix(A)
>>>
>>> # Fast approximation
>>> cond_approx = X.cond(2)  # O(r)
>>>
>>> # Exact computation
>>> cond_exact = X.cond(2, exact=True)  # O(r³)
>>>
>>> # For diagonal matrices, the approximation is exact
>>> A_diag = np.diag([1e10, 1e5, 1e0, 1e-5, 1e-10])
>>> X_diag = QR.from_matrix(A_diag)
>>> print(X_diag.cond(2))  # 1e20 (exact for diagonal)
>>>
>>> # Other norms
>>> cond_fro = X.cond('fro')  # Always exact
See also
numpy.linalg.cond : General condition number computation
QuasiSVD.cond_estimate : Similar method for QuasiSVD
is_upper_triangular : Check if R is upper triangular
- property conj: QR
Return the complex conjugate of the QR matrix.
- Returns:
Complex conjugate of the QR matrix with same transposed mode
- Return type:
Notes
If X = Q @ R, then conj(X) = conj(Q) @ conj(R).
If X = R.H @ Q.H, then conj(X) = conj(R.H) @ conj(Q.H) = conj(R).H @ conj(Q).H.
The transposed flag is preserved (unlike .T which flips it).
Examples
>>> A = np.random.randn(100, 80) + 1j * np.random.randn(100, 80)
>>> X = QR.from_matrix(A)
>>> X_conj = X.conj
>>> np.allclose(X_conj.full(), A.conj())  # True
>>> assert X_conj._transposed == X._transposed  # Mode preserved
- dot(other, side='right', dense_output=False)[source]
Matrix and vector multiplication
- Parameters:
other (LowRankMatrix, ndarray, spmatrix) – Matrix or vector to multiply with.
side (str, optional) – Whether to multiply on the left or right, by default ‘right’.
dense_output (bool, optional) – Whether to return a dense matrix or a low-rank matrix, by default False.
- Return type:
QR|LowRankMatrix|ndarray
- classmethod from_low_rank(low_rank, transposed=False, **extra_data)[source]
Compute the QR decomposition from a low-rank representation.
- Parameters:
low_rank (LowRankMatrix) – Low rank matrix to decompose
transposed (bool, optional) – If True, compute transposed form, by default False
**extra_data – Additional data to store
- Returns:
QR decomposition object
- Return type:
- classmethod from_matrix(matrix, mode='economic', transposed=False, **extra_data)[source]
Compute the QR decomposition of a matrix.
- Parameters:
matrix (ndarray) – Matrix to decompose
mode (str, optional) – QR mode (‘economic’ or ‘complete’), by default ‘economic’
transposed (bool, optional) – If True, return the transposed form X = R.H @ Q.H, by default False.
Standard form (False): X = Q @ R = matrix
Transposed form (True): X = R.H @ Q.H = matrix.H
**extra_data – Additional data to store
- Returns:
QR decomposition object; full() reconstructs the input matrix (or its conjugate transpose when transposed=True)
- Return type:
Examples
>>> A = np.random.randn(100, 80)
>>> X = QR.from_matrix(A)  # X = Q @ R, X.full() == A
>>> X_T = QR.from_matrix(A, transposed=True)  # X_T = R.H @ Q.H, X_T.full() == A.H
- classmethod from_svd(svd, transposed=False, **extra_data)[source]
Convert SVD to QR format.
Simply uses Q = U and R = S @ V.H from the SVD representation. This is efficient and preserves orthogonality.
- Parameters:
- Returns:
QR decomposition object where full() reconstructs the SVD matrix
- Return type:
Notes
Algorithm:
- For standard mode (transposed=False):
Set Q = U (already orthogonal from SVD)
Set R = diag(s) @ V.H = S @ V.H
Result: X = Q @ R = U @ S @ V.H (same as SVD)
- For transposed mode (transposed=True):
Same Q and R as above
Interpretation: X = R.H @ Q.H = V @ S @ U.H = (U @ S @ V.H).H
Computational cost:
- Matrix multiplication S @ V.H: O(r²n) where r = rank
- No QR decomposition needed (Q = U is already orthogonal)
- Total: O(r²n)

Advantages over to_svd():
- Cheaper: O(r²n) vs O(r²(m+n))
- No loss of orthogonality (U is already orthogonal)
- No SVD computation needed
- Preserves singular value information in the structure of R

When to use:
- When you have an SVD and need QR format
- For efficient solve operations (QR solves are faster than SVD solves)
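The construction can be sketched with plain NumPy: taking Q = U and R = diag(s) @ V.H reproduces the matrix exactly and keeps Q orthonormal, though R is generally not triangular (illustrative only, not the class method):

```python
import numpy as np

# Build QR-style factors from an SVD: Q = U, R = diag(s) @ V.H.
rng = np.random.default_rng(3)
A = rng.standard_normal((100, 80))
U, s, Vh = np.linalg.svd(A, full_matrices=False)
Q, R = U, np.diag(s) @ Vh          # R is NOT upper triangular in general
ok_recon = np.allclose(Q @ R, A)           # exact reconstruction
ok_orth = np.allclose(Q.T @ Q, np.eye(80)) # orthonormality preserved
print(ok_recon, ok_orth)  # True True
```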
Examples
>>> A = np.random.randn(100, 80)
>>> X_svd = SVD.from_matrix(A)
>>> X_qr = QR.from_svd(X_svd)
>>> print(X_qr.shape)  # (100, 80)
>>> print(X_qr.rank)  # Same as X_svd.rank
>>> np.allclose(X_svd.full(), X_qr.full())  # True

>>> # Verify Q is orthogonal and R is upper triangular
>>> print(X_qr.is_orthogonal())  # True
>>> print(X_qr.is_upper_triangular())  # False (R = S @ V.H is not triangular!)

>>> # Round-trip conversion
>>> X_svd2 = X_qr.to_svd()
>>> np.allclose(X_svd.full(), X_svd2.full())  # True
- classmethod generate_random(shape, transposed=False, seed=None, **extra_data)[source]
Generate a random QR decomposition.
- Parameters:
- Returns:
Random QR decomposition object
- Return type:
Examples
>>> X = QR.generate_random((100, 80), seed=42)
>>> print(X.shape)  # (100, 80)
>>> print(X.is_orthogonal())  # True
- hadamard(other)[source]
Element-wise (Hadamard) product with another matrix.
- Parameters:
other (QR or ndarray) – Other matrix to perform Hadamard product with
- Returns:
Resulting matrix after Hadamard product
- Return type:
QR, or ndarray
Notes
The rank of the result is the product of the two ranks, so this operation may be inefficient for large ranks. In that case a warning is raised, and one should consider converting to dense with .todense() first.
- is_orthogonal(tol=1e-12)[source]
Check if Q has orthonormal columns within a tolerance.
- Parameters:
tol (float, optional) – Tolerance for orthogonality check, by default 1e-12
- Returns:
True if Q.H @ Q is approximately equal to identity
- Return type:
Notes
Verifies that Q.H @ Q = I_k (orthonormal columns), where k is the number of columns in Q. This is always true when QR is constructed via from_matrix() or qr(), but may not hold if Q and R are provided manually. Note that Q may have more columns than the reported rank after Hadamard products.
Computational cost: O(k²m) where k is the number of columns in Q. Result is cached for the default tolerance.
- is_upper_triangular(tol=1e-12)[source]
Check if R is upper triangular within a tolerance.
- Parameters:
tol (float, optional) – Tolerance for triangularity check, by default 1e-12
- Returns:
True if all elements below the diagonal of R are approximately zero
- Return type:
Notes
Verifies that R[i,j] ≈ 0 for all i > j (strict lower triangular part is zero). This is always true when QR is constructed via from_matrix() or qr(), but may not hold if Q and R are provided manually.
For methods like solve() and lstsq(), having an upper triangular R enables efficient back substitution. If R is not upper triangular, these methods may produce incorrect results.
Computational cost: O(r²) where r = min(rank, n) is the number of rows in R. Result is cached for the default tolerance.
Examples
>>> X = QR.from_matrix(A)
>>> X.is_upper_triangular()
True
>>> Q = np.random.randn(10, 5)
>>> R = np.random.randn(5, 8)  # Not upper triangular
>>> Y = QR(Q, R)
>>> Y.is_upper_triangular()
False
- lstsq(b, rcond=None)[source]
Least squares solution via QR decomposition.
Computes x that minimizes ||Xx - b||_2
- Parameters:
b (ndarray) – Right-hand side, shape (m,) or (m, k)
rcond (float, optional) – Relative condition number threshold for rank determination. Singular values smaller than rcond * largest_singular_value are treated as zero. If None, machine precision is used.
- Returns:
Least squares solution, shape (n,) or (n, k)
- Return type:
ndarray
Notes
For overdetermined systems (m > n), finds x minimizing ||Ax - b||_2.
For underdetermined systems (m < n), finds the minimum norm solution.

Uses the QR decomposition:
- For X = Q @ R: x = R^+ @ Q.H @ b, where R^+ is the pseudoinverse of R
- Handles rank deficiency by zeroing small diagonal elements of R
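The formula x = R^+ @ Q.H @ b can be checked against NumPy's reference least-squares solver (a pure-NumPy sketch, not the lstsq() method itself):

```python
import numpy as np

# QR-based least squares: x = pinv(R) @ Q.T @ b for a real matrix A.
rng = np.random.default_rng(4)
A = rng.standard_normal((100, 80))      # overdetermined system
b = rng.standard_normal(100)
Q, R = np.linalg.qr(A)
x_qr = np.linalg.pinv(R) @ (Q.T @ b)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
agree = np.allclose(x_qr, x_ref)
print(agree)  # True
```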
Examples
>>> A = np.random.randn(100, 80)
>>> b = np.random.randn(100)
>>> X = QR.from_matrix(A)
>>> x = X.lstsq(b)
>>> residual = np.linalg.norm(A @ x - b)
- norm(ord='fro')[source]
Compute matrix norm with QR-specific optimizations.
- Parameters:
ord (str or int, optional) – Norm type. Default is ‘fro’ (Frobenius).
- ‘fro’: Frobenius norm (optimized: ||Q @ R||_F = ||R||_F for orthogonal Q)
- 2: Spectral norm (requires full matrix)
- 1, -1, inf, -inf: Other matrix norms (require full matrix)
- Returns:
Matrix norm
- Return type:
Notes
- For Q with orthonormal columns:
||Q @ R||_F = ||R||_F (Frobenius norm preserved)
Because Q.H @ Q = I, every singular value of Q @ R equals the corresponding singular value of R, so the spectral norm is preserved as well; the implementation nevertheless forms the full matrix for non-Frobenius norms.
Results are cached for efficiency.
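The Frobenius shortcut is a one-liner to verify in NumPy (a sketch, not the norm() method):

```python
import numpy as np

# ||Q @ R||_F equals ||R||_F when Q has orthonormal columns.
rng = np.random.default_rng(5)
A = rng.standard_normal((100, 80))
Q, R = np.linalg.qr(A)
same = np.isclose(np.linalg.norm(Q @ R, 'fro'), np.linalg.norm(R, 'fro'))
print(same)  # True
```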
Examples
>>> X = QR.from_matrix(A)
>>> X.norm('fro')  # Efficient: computed from R only
>>> X.norm(2)  # Requires full matrix
- pseudoinverse(rcond=None)[source]
Compute Moore-Penrose pseudoinverse using QR factorization.
For X = Q @ R, computes X^+ = R^+ @ Q.H
- Parameters:
rcond (float, optional) – Cutoff for small singular values in R. Singular values smaller than rcond * largest_singular_value are set to zero. If None, machine precision is used.
- Returns:
Pseudoinverse matrix, shape (n, m)
- Return type:
ndarray
Notes
The pseudoinverse satisfies the Moore-Penrose conditions:
1. X @ X^+ @ X = X
2. X^+ @ X @ X^+ = X^+
3. (X @ X^+).H = X @ X^+
4. (X^+ @ X).H = X^+ @ X

For full rank matrices:
- Overdetermined (m > n): X^+ = (X.H @ X)^{-1} @ X.H = R^{-1} @ Q.H
- Underdetermined (m < n): X^+ = X.H @ (X @ X.H)^{-1}
Computational cost: O(n²m) for computing R^+ and matrix multiplication
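The identity X^+ = R^+ @ Q.H is straightforward to confirm in NumPy (a sketch; np.linalg.pinv plays the role of R^+):

```python
import numpy as np

# Pseudoinverse via QR: pinv(A) == pinv(R) @ Q.T for a real matrix A.
rng = np.random.default_rng(6)
A = rng.standard_normal((100, 80))
Q, R = np.linalg.qr(A)
A_pinv = np.linalg.pinv(R) @ Q.T
ok_match = np.allclose(A_pinv, np.linalg.pinv(A))
ok_penrose1 = np.allclose(A @ A_pinv @ A, A)   # X @ X^+ @ X = X
print(ok_match, ok_penrose1)  # True True
```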
Examples
>>> A = np.random.randn(100, 80)
>>> X = QR.from_matrix(A)
>>> X_pinv = X.pseudoinverse()
>>> np.allclose(A @ X_pinv @ A, A)  # Property 1
True
>>> np.allclose(X_pinv @ A @ X_pinv, X_pinv)  # Property 2
True
- classmethod qr(matrix, mode='economic', transposed=False, **extra_data)[source]
Compute a QR decomposition (alias for from_matrix).
- Parameters:
- Returns:
QR decomposition object
- Return type:
- solve(b, method='direct')[source]
Solve Xx = b using QR factorization.
- For X = Q @ R, solve via:
Q @ R @ x = b
R @ x = Q.T @ b
x = R^{-1} @ Q.T @ b (back substitution)
- Parameters:
b (ndarray) – Right-hand side, shape (m,) or (m, k)
method (str, optional) – ‘direct’ (default): Use back substitution (fast, requires full rank) ‘lstsq’: Use least squares (handles rank deficiency)
- Returns:
Solution x, shape (n,) or (n, k)
- Return type:
ndarray
- Raises:
ValueError – If matrix dimensions don’t match RHS dimensions
np.linalg.LinAlgError – If matrix is singular (method=’direct’)
Notes
QR solve is faster than SVD solve: O(n²) vs O(n³).
Best for overdetermined systems (more equations than unknowns).
- For standard mode (X = Q @ R):
Solve R @ x = Q.H @ b via back substitution
- For transposed mode (X = R.H @ Q.H):
Solve R.H @ Q.H @ x = b
Equivalent to Q @ R @ y = b where y = conj(x)
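For the standard mode, the back-substitution step can be written out explicitly in a few lines of NumPy (an illustrative sketch for the square full-rank case, not the solve() implementation):

```python
import numpy as np

# Solve A @ x = b via QR: reduce to R @ x = Q.T @ b, then back-substitute.
rng = np.random.default_rng(7)
A = rng.standard_normal((80, 80))
b = rng.standard_normal(80)
Q, R = np.linalg.qr(A)
y = Q.T @ b
x = np.zeros_like(y)
for i in range(len(y) - 1, -1, -1):          # back substitution, O(n^2)
    x[i] = (y[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
solved = np.allclose(A @ x, b)
print(solved)  # True
```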
Examples
>>> A = np.random.randn(100, 80)
>>> b = np.random.randn(100)
>>> X = QR.from_matrix(A)
>>> # For overdetermined systems (more rows than columns), use lstsq instead:
>>> x = X.lstsq(b)
>>> # For square full-rank systems:
>>> A_square = np.random.randn(80, 80)
>>> X_square = QR.from_matrix(A_square)
>>> b_square = np.random.randn(80)
>>> x = X_square.solve(b_square)
>>> np.allclose(A_square @ x, b_square)
True
- to_svd()[source]
Convert QR to SVD format.
Computes the SVD of R: R = U_R @ S @ V_R.H
Then: X = Q @ R = (Q @ U_R) @ S @ V_R.H
- Returns:
Equivalent SVD representation with same shape and rank
- Return type:
Notes
Algorithm:
- For standard mode (X = Q @ R):
Compute SVD of R: R = U_R @ diag(s) @ V_R.H
Construct: X = Q @ R = (Q @ U_R) @ diag(s) @ V_R.H
Return SVD(Q @ U_R, s, V_R)
- For transposed mode (X = R.H @ Q.H):
Compute SVD of R: R = U_R @ diag(s) @ V_R.H
X.H = (R.H @ Q.H).H = Q @ R = Q @ U_R @ diag(s) @ V_R.H
Return SVD(Q @ U_R, s, V_R)
Computational cost:
- SVD of R: O(r²n) where r = rank, n = number of columns
- Matrix multiplication Q @ U_R: O(mr²) where m = number of rows
- Total: O(r²(m + n))

Orthogonality:
- Input: Q has orthonormal columns, R is upper triangular
- Output: U = Q @ U_R has orthonormal columns (product of matrices with orthonormal columns)
- Output: V = V_R has orthonormal columns (from SVD)
- Result is a valid SVD with orthonormal U and V
Transposed mode: Both standard and transposed modes produce the same SVD representation. This is because SVD naturally represents X, not X.H.
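The algorithm above is short enough to verify directly in NumPy (sketch only): take the SVD of R and fold U_R into Q.

```python
import numpy as np

# QR -> SVD: R = U_R @ diag(s) @ Vh, so A = (Q @ U_R) @ diag(s) @ Vh.
rng = np.random.default_rng(8)
A = rng.standard_normal((100, 80))
Q, R = np.linalg.qr(A)
U_R, s, Vh = np.linalg.svd(R)      # SVD of the small r x n factor
U = Q @ U_R                        # orthonormal columns are preserved
ok_recon = np.allclose(U @ np.diag(s) @ Vh, A)
ok_orth = np.allclose(U.T @ U, np.eye(U.shape[1]))
print(ok_recon, ok_orth)  # True True
```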
Examples
>>> A = np.random.randn(100, 80)
>>> X_qr = QR.from_matrix(A)
>>> X_svd = X_qr.to_svd()
>>> print(X_svd.shape)  # (100, 80)
>>> print(X_svd.rank)  # Same as X_qr.rank
>>> np.allclose(X_qr.full(), X_svd.full())  # True

>>> # Transposed mode
>>> X_qr_t = QR.from_matrix(A, transposed=True)
>>> X_svd_t = X_qr_t.to_svd()
>>> np.allclose(X_qr_t.full(), X_svd_t.full())  # True
- truncate(r=None, rtol=None, atol=2.220446049250313e-14, inplace=False)[source]
Truncate QR to lower rank by removing columns with small R diagonal elements.
The rank is prioritized over the tolerance. The relative tolerance is prioritized over absolute tolerance. NOTE: By default, the truncation is done with respect to the machine precision (DEFAULT_ATOL).
- Parameters:
r (int, optional) – Target rank (number of columns to keep). If None, use tolerance.
rtol (float, optional) – Relative tolerance. Remove columns where |R[i,i]| < rtol * max(|R[i,i]|). Default is DEFAULT_RTOL (None).
atol (float, optional) – Absolute tolerance. Remove columns where |R[i,i]| < atol. Default is DEFAULT_ATOL (~2.22e-14).
inplace (bool, optional) – If True, modify in place. Default False.
- Returns:
Truncated QR decomposition
- Return type:
Notes
For QR, truncation keeps the first r columns of Q and first r rows of R. The diagonal elements of R, |R[i,i]|, serve as importance indicators.
Unlike SVD where singular values are sorted, R diagonal elements are NOT necessarily in decreasing order. However, for numerically stable QR decompositions, large diagonal elements indicate more important directions.
Algorithm:
1. If r is specified: keep the first r columns/rows
2. If rtol is specified: keep columns where |R[i,i]| > rtol * max(|R[i,i]|)
3. If atol is specified: keep columns where |R[i,i]| > atol
Priority: r > rtol > atol
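The truncation itself is just slicing of the factors; a pure-NumPy sketch of the rank-r case (not the truncate() method):

```python
import numpy as np

# Rank-r truncation: keep the first r columns of Q and first r rows of R.
rng = np.random.default_rng(9)
A = rng.standard_normal((100, 80))
Q, R = np.linalg.qr(A)
r = 10
X_r = Q[:, :r] @ R[:r, :]          # rank-at-most-r matrix of the same shape
print(X_r.shape)                   # (100, 80)
print(np.linalg.matrix_rank(X_r))  # 10
```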
Examples
>>> X = QR.from_matrix(A)
>>> X_trunc = X.truncate(r=10)  # Keep only first 10 columns
>>> X_trunc = X.truncate(rtol=1e-10)  # Remove columns with small R[i,i]
>>> X_trunc = X.truncate(atol=1e-12)  # Absolute threshold
See also
SVD.truncate : Similar method for SVD (uses singular values)
- Parameters:
Q (ndarray)
R (ndarray)
transposed (bool)