Linear Algebra

The torch.linalg module provides linear algebra operations including matrix decompositions, solvers, and eigenvalue computations.

Matrix Properties

norm

torch.linalg.norm(input, ord=None, dim=None, keepdim=False, *, out=None, dtype=None)
Computes the matrix or vector norm.
input
Tensor
The input tensor.
ord
int, float, or str
default:"None"
The order of norm. Options: None, 'fro', 'nuc', inf, -inf, or any int/float.
dim
int or Tuple[int]
default:"None"
Dimensions to compute the norm over.
keepdim
bool
default:"False"
Whether to keep the reduced dimensions.
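
A minimal sketch of the default behaviors: with ord=None, norm computes the 2-norm for vectors and the Frobenius norm for matrices.

```python
import torch

v = torch.tensor([3.0, 4.0])
l2 = torch.linalg.norm(v)         # vector default: 2-norm
l1 = torch.linalg.norm(v, ord=1)  # sum of absolute values

M = torch.ones(2, 2)
fro = torch.linalg.norm(M)        # matrix default: Frobenius norm
```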

det

torch.linalg.det(A, *, out=None)
Computes the determinant of a square matrix.
A
Tensor
Tensor of shape (*, n, n) where * is zero or more batch dimensions.
Returns
Tensor
The determinant

slogdet

torch.linalg.slogdet(A, *, out=None)
Computes the sign and natural logarithm of the absolute value of the determinant.
Returns
namedtuple(sign, logabsdet)
The sign and log absolute value of the determinant
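
A small illustration of why slogdet exists: the determinant below overflows float64, but its logarithm is perfectly representable.

```python
import torch

# 400x400 diagonal matrix with entries 10: det = 10**400, which
# overflows float64 (max ~1.8e308) if computed directly
A = 10.0 * torch.eye(400, dtype=torch.float64)

sign, logabsdet = torch.linalg.slogdet(A)
# sign == 1.0, logabsdet == 400 * ln(10) ~= 921.034
```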

matrix_rank

torch.linalg.matrix_rank(A, *, atol=None, rtol=None, hermitian=False, out=None)
Computes the numerical rank of a matrix.
A
Tensor
Input matrix of shape (*, m, n).
atol
float
default:"None"
Absolute tolerance value.
rtol
float
default:"None"
Relative tolerance value.
hermitian
bool
default:"False"
Whether matrix is Hermitian/symmetric.
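
A quick sketch: a matrix built from two outer products has rank at most 2, which matrix_rank detects numerically (the seed below is only for reproducibility).

```python
import torch

torch.manual_seed(0)
a = torch.randn(5, 1, dtype=torch.float64)
b = torch.randn(5, 1, dtype=torch.float64)
A = a @ a.T + b @ b.T  # symmetric (5, 5) matrix of rank 2

# hermitian=True lets the solver use the cheaper symmetric eigendecomposition
rank = torch.linalg.matrix_rank(A, hermitian=True)
```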

Matrix Decompositions

cholesky

torch.linalg.cholesky(A, *, upper=False, out=None)
Computes the Cholesky decomposition of a Hermitian positive-definite matrix.
A
Tensor
Hermitian positive-definite matrix of shape (*, n, n).
upper
bool
default:"False"
Whether to return upper triangular matrix. If False, returns lower triangular.
Returns
Tensor
Lower (or upper) triangular matrix L such that A = L @ L.mH (or A = U.mH @ U); for real matrices this reduces to A = L @ L.T (or A = U.T @ U)

qr

torch.linalg.qr(A, mode='reduced', *, out=None)
Computes the QR decomposition of a matrix.
A
Tensor
Matrix of shape (*, m, n).
mode
str
default:"'reduced'"
One of 'reduced', 'complete', or 'r'.
Returns
namedtuple(Q, R)
Orthogonal/unitary matrix Q and upper triangular matrix R

svd

torch.linalg.svd(A, full_matrices=True, *, driver=None, out=None)
Computes the singular value decomposition (SVD) of a matrix.
A
Tensor
Matrix of shape (*, m, n).
full_matrices
bool
default:"True"
Whether to compute full or reduced SVD.
Returns
namedtuple(U, S, Vh)
  • U: Left singular vectors of shape (*, m, k)
  • S: Singular values of shape (*, k)
  • Vh: Right singular vectors (conjugate transposed) of shape (*, k, n)
where k = min(m, n)

svdvals

torch.linalg.svdvals(A, *, driver=None, out=None)
Computes only the singular values of a matrix.
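
A minimal check that svdvals agrees with the S returned by the full decomposition, while skipping the computation of U and Vh:

```python
import torch

torch.manual_seed(0)
A = torch.randn(4, 3, dtype=torch.float64)

S = torch.linalg.svdvals(A)          # singular values only
_, S_full, _ = torch.linalg.svd(A)   # same values from the full SVD
```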

eig

torch.linalg.eig(A, *, out=None)
Computes the eigenvalue decomposition of a square matrix.
A
Tensor
Square matrix of shape (*, n, n).
Returns
namedtuple(eigenvalues, eigenvectors)
Complex-valued eigenvalues and eigenvectors
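
A sketch of why the results are complex-valued even for real input: a 2D rotation by 90 degrees has no real eigenvalues, so eig returns ±i.

```python
import torch

R = torch.tensor([[0.0, -1.0],
                  [1.0,  0.0]])  # rotation by 90 degrees

vals, vecs = torch.linalg.eig(R)
# vals are approximately [0+1j, 0-1j] (order not guaranteed)
```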

eigvals

torch.linalg.eigvals(A, *, out=None)
Computes only the eigenvalues of a square matrix.

eigh

torch.linalg.eigh(A, UPLO='L', *, out=None)
Computes the eigenvalue decomposition of a Hermitian or symmetric matrix.
A
Tensor
Hermitian or symmetric matrix of shape (*, n, n).
UPLO
str
default:"'L'"
Whether to use upper ('U') or lower ('L') triangular part.
Returns
namedtuple(eigenvalues, eigenvectors)
Real eigenvalues (sorted in ascending order) and corresponding eigenvectors
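
A minimal sketch of the guarantees for symmetric input: real eigenvalues in ascending order, orthonormal eigenvectors, and exact reconstruction.

```python
import torch

torch.manual_seed(0)
A = torch.randn(4, 4, dtype=torch.float64)
A = (A + A.T) / 2  # symmetrize

vals, vecs = torch.linalg.eigh(A)
# vals are real and ascending; the columns of vecs are orthonormal
```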

eigvalsh

torch.linalg.eigvalsh(A, UPLO='L', *, out=None)
Computes only the eigenvalues of a Hermitian or symmetric matrix.

Matrix Inverse and Solvers

inv

torch.linalg.inv(A, *, out=None)
Computes the inverse of a square matrix.
A
Tensor
Invertible matrix of shape (*, n, n).
Returns
Tensor
The inverse matrix

pinv

torch.linalg.pinv(A, *, atol=None, rtol=None, hermitian=False, out=None)
Computes the pseudoinverse (Moore-Penrose inverse) of a matrix.
A
Tensor
Matrix of shape (*, m, n).
hermitian
bool
default:"False"
Whether matrix is Hermitian/symmetric.
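
A quick sketch for the full-column-rank case, where the pseudoinverse is a left inverse (the seed is only for reproducibility):

```python
import torch

torch.manual_seed(0)
A = torch.randn(4, 2, dtype=torch.float64)  # tall, full column rank

A_pinv = torch.linalg.pinv(A)  # shape (2, 4)
# For full column rank, A_pinv @ A recovers the identity
```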

solve

torch.linalg.solve(A, B, *, left=True, out=None)
Computes the solution of a system of linear equations AX = B.
A
Tensor
Square coefficient matrix of shape (*, n, n).
B
Tensor
Right-hand side matrix of shape (*, n, k).
left
bool
default:"True"
Whether to solve AX = B (True) or XA = B (False).
Returns
Tensor
Solution tensor X
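
A minimal sketch contrasting solve with an explicit inverse; the +3I shift is only an illustrative way to make the random matrix well conditioned.

```python
import torch

torch.manual_seed(0)
A = torch.randn(3, 3, dtype=torch.float64) + 3 * torch.eye(3, dtype=torch.float64)
b = torch.randn(3, 1, dtype=torch.float64)

x = torch.linalg.solve(A, b)     # preferred: solves AX = B directly
x_inv = torch.linalg.inv(A) @ b  # same answer, but slower and less stable
```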

lstsq

torch.linalg.lstsq(A, B, rcond=None, *, driver=None)
Computes the least-squares solution to a linear system.
A
Tensor
Coefficient matrix of shape (*, m, n).
B
Tensor
Right-hand side of shape (*, m, k).
Returns
namedtuple(solution, residuals, rank, singular_values)
Least-squares solution and additional information
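
An illustrative line-fit sketch: an overdetermined system built from exact observations of y = 2x + 1, so the least-squares solution recovers the slope and intercept.

```python
import torch

torch.manual_seed(0)
x = torch.linspace(0, 1, 6, dtype=torch.float64)
A = torch.stack([x, torch.ones_like(x)], dim=1)  # design matrix, shape (6, 2)
y = (2 * x + 1).unsqueeze(1)                     # exact targets

sol = torch.linalg.lstsq(A, y).solution          # approximately [[2.0], [1.0]]
```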

Matrix Operations

matrix_power

torch.linalg.matrix_power(A, n, *, out=None)
Computes the n-th power of a square matrix.
A
Tensor
Square matrix of shape (*, m, m).
n
int
The exponent (can be negative).
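
A small sketch using the classic Fibonacci matrix, including a negative exponent (valid here because the matrix has determinant -1 and is therefore invertible):

```python
import torch

F = torch.tensor([[1.0, 1.0],
                  [1.0, 0.0]], dtype=torch.float64)

F10 = torch.linalg.matrix_power(F, 10)   # F10[0, 0] is the 11th Fibonacci number
Fneg = torch.linalg.matrix_power(F, -1)  # inverse, since det(F) = -1
```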

cross

torch.linalg.cross(input, other, *, dim=-1, out=None)
Computes the cross product of 3-dimensional vectors along the given dimension, whose size must be 3.
input
Tensor
First input tensor.
other
Tensor
Second input tensor.
dim
int
default:"-1"
Dimension along which to compute the cross product.
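
A minimal right-handed sanity check with the standard basis vectors:

```python
import torch

x = torch.tensor([1.0, 0.0, 0.0])
y = torch.tensor([0.0, 1.0, 0.0])

z = torch.linalg.cross(x, y)  # x cross y gives the z-axis
```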

Example Usage

import torch

# Cholesky decomposition
A = torch.randn(3, 3)
A = A @ A.T + torch.eye(3)  # make symmetric positive definite (A @ A.T alone is only semi-definite)
L = torch.linalg.cholesky(A)
print(f"Reconstruction error: {torch.norm(A - L @ L.T)}")

# QR decomposition
B = torch.randn(4, 3)
Q, R = torch.linalg.qr(B)
print(f"Q is orthogonal: {torch.allclose(Q.T @ Q, torch.eye(3))}")
print(f"Reconstruction: {torch.allclose(B, Q @ R)}")

# SVD
U, S, Vh = torch.linalg.svd(B, full_matrices=False)
print(f"Singular values: {S}")
B_reconstructed = U @ torch.diag(S) @ Vh
print(f"SVD reconstruction error: {torch.norm(B - B_reconstructed)}")

Best Practices

  • Use torch.linalg.solve instead of computing inverses explicitly
  • For ill-conditioned systems, consider using lstsq or regularization
  • Use slogdet instead of det for large determinants to avoid overflow
  • Check condition numbers before solving linear systems
  • Batch operations are more efficient than loops
  • For symmetric/Hermitian matrices, use eigh instead of eig
  • Use svdvals if you only need singular values, not vectors
  • Consider using lower precision (float32) if accuracy allows
  • SVD returns Vh (conjugate transpose), not V
  • Eigenvalues from eig are not sorted; use eigh for sorted eigenvalues
  • matrix_power with negative exponents requires invertible matrices
  • Always check for singular matrices before computing inverses