4.3. LU Decomposition
The forward and backward substitution algorithms can be used to solve a non-triangular system by virtue of the following factorization property:
Theorem 1
If \(A\) is an \(n\times n\) matrix, it can (generally) be written as a product

\[A = LU\]

where \(L\) is a lower triangular matrix and \(U\) is an upper triangular matrix. Furthermore, it is possible to construct \(L\) such that all of its diagonal elements satisfy \(l_{ii}=1\).
The code below outlines the Gaussian Elimination algorithm to compute the \(LU\) factorization of an \(n\times n\) matrix.
import numpy as np

# LU decomposition of square systems via in-place Gaussian Elimination
def Gaussian_Elimination(A):
    m = A.shape[0]
    n = A.shape[1]
    if m != n:
        print('Matrix is not square!')
        return
    for k in range(0, n-1):
        if A[k, k] == 0:                 # halt on a zero pivot
            return
        for i in range(k+1, n):          # store the multipliers l_ik
            A[i, k] = A[i, k] / A[k, k]  # in the strict lower triangle
        for j in range(k+1, n):          # update the trailing submatrix
            for i in range(k+1, n):
                A[i, j] -= A[i, k] * A[k, j]
Note that the algorithm above executes in-place, i.e., the matrix \(A\) is overwritten with its \(LU\) factorization in compact form. More specifically, the algorithm produces a factorization \(A=LU\), where

\[L=\begin{pmatrix} 1 & & & \\ l_{21} & 1 & & \\ \vdots & \ddots & \ddots & \\ l_{n1} & \cdots & l_{n,n-1} & 1 \end{pmatrix}, \qquad U=\begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ & u_{22} & \cdots & u_{2n} \\ & & \ddots & \vdots \\ & & & u_{nn} \end{pmatrix}\]

After the in-place factorization algorithm completes, \(A\) is replaced by the following compact encoding of \(L\) and \(U\) together:

\[A \leftarrow \begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ l_{21} & u_{22} & \cdots & u_{2n} \\ \vdots & \ddots & \ddots & \vdots \\ l_{n1} & \cdots & l_{n,n-1} & u_{nn} \end{pmatrix}\]
Example
Consider the matrix

\[A=\begin{pmatrix} 2 & 4 & -2 \\ 4 & 9 & -3 \\ -2 & -3 & 7 \end{pmatrix}\]
Let us append the following code to the Gaussian Elimination algorithm outlined above to compute the corresponding \(LU\) factorization:
def main():
    # use a floating-point array so the multipliers are not truncated
    A = np.array([[ 2.,  4., -2.],
                  [ 4.,  9., -3.],
                  [-2., -3.,  7.]])
    Gaussian_Elimination(A)
    print(A)

if __name__ == "__main__":
    main()
Upon execution, it produces the following result:
[[ 2.  4. -2.]
 [ 2.  1.  1.]
 [-1.  1.  4.]]
As stated above, the factorization is performed in-place, so this result should be interpreted as

\[L=\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 1 & 1 \end{pmatrix}, \qquad U=\begin{pmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{pmatrix}\]
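As a quick sanity check, one can unpack the compact form into \(L\) and \(U\) and confirm that their product reproduces \(A\). The short sketch below assumes the Gaussian_Elimination routine defined above is in scope:

import numpy as np

A = np.array([[ 2.,  4., -2.],
              [ 4.,  9., -3.],
              [-2., -3.,  7.]])
compact = A.copy()
Gaussian_Elimination(compact)          # overwrite the copy with the compact LU form

L = np.tril(compact, -1) + np.eye(3)   # multipliers below the diagonal, unit diagonal
U = np.triu(compact)                   # the diagonal and above hold U
print(np.allclose(L @ U, A))           # True: L times U recovers A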
4.3.1. Elimination Matrices
Here is a slightly different algorithm for computing the \(LU\) factorization of an \(n\times n\) matrix, using elimination matrices. Define the \(n\times 1\) basis vector \(e_k\) as

\[e_k=(0,\ldots,0,1,0,\ldots,0)^T\]

where the \(1\) is in the \(k^{th}\) row and the length of \(e_k\) is \(n\). In order to perform Gaussian Elimination on the \(k^{th}\) column \(a_k\) of \(A\), we define the \(n\times n\) elimination matrix \(M_k = I-m_ke_k^T\) where

\[m_k=\left(0,\ldots,0,\frac{a_{k+1,k}}{a_{kk}},\ldots,\frac{a_{nk}}{a_{kk}}\right)^T\]
\(M_k\) adds multiples of row \(k\) to the rows below it in order to create zeros below the pivot. As an example, for \(a=(2,4,-2)^T\),

\[M_1a=\begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}\begin{pmatrix} 2 \\ 4 \\ -2 \end{pmatrix}=\begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix}\]
Similarly,

\[M_2a=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & \frac{1}{2} & 1 \end{pmatrix}\begin{pmatrix} 2 \\ 4 \\ -2 \end{pmatrix}=\begin{pmatrix} 2 \\ 4 \\ 0 \end{pmatrix}\]
The inverse of an elimination matrix is simply \(L_k=M_k^{-1}=I+m_ke_k^T\). For example,

\[L_1=M_1^{-1}=\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}\]
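These definitions translate directly into a few lines of numpy. The helper elimination_matrix below is a hypothetical name used only for illustration, with \(k\) 0-indexed:

import numpy as np

# Build M_k = I - m_k e_k^T and its inverse L_k = I + m_k e_k^T (k is 0-indexed).
def elimination_matrix(a, k):
    n = len(a)
    m = np.zeros(n)
    m[k+1:] = a[k+1:] / a[k]        # multipliers below the pivot
    e = np.zeros(n)
    e[k] = 1.0
    return np.eye(n) - np.outer(m, e), np.eye(n) + np.outer(m, e)

a = np.array([2., 4., -2.])
M1, L1 = elimination_matrix(a, 0)
print(M1 @ a)      # [2. 0. 0.] -- the entries below the pivot are annihilated
print(M1 @ L1)     # the identity, confirming L1 = M1^{-1}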
The algorithm now proceeds as follows. Consider the example

\[\begin{pmatrix} 2 & 4 & -2 \\ 4 & 9 & -3 \\ -2 & -3 & 7 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}=\begin{pmatrix} 2 \\ 8 \\ 10 \end{pmatrix}\]
First, we eliminate the lower triangular portion of \(A\) one column at a time using \(M_k\) to get \(U=M_{n-1}\cdots M_1A\). We carry out the same operations on \(b\), yielding the new system \(M_2M_1Ax=M_2M_1b\), or \(Ux=M_2M_1b\), which can be solved via back substitution. Here,

\[M_2M_1A=\begin{pmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{pmatrix}=U, \qquad M_2M_1b=\begin{pmatrix} 2 \\ 4 \\ 8 \end{pmatrix}\]

Finally, solve the following system via back substitution:

\[\begin{pmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}=\begin{pmatrix} 2 \\ 4 \\ 8 \end{pmatrix} \quad\Longrightarrow\quad x=\begin{pmatrix} -1 \\ 2 \\ 2 \end{pmatrix}\]
Note that we can write \(A=(L_1\cdots L_{n-1})(M_{n-1}\cdots M_1A)=LU\), using the fact that the \(L_k\) matrices are the inverses of the \(M_k\) matrices, where \(L=L_1\cdots L_{n-1}\) can be formed trivially from the \(M_k\):

\[L=L_1L_2=\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}=\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 1 & 1 \end{pmatrix}\]

And thus, although we never needed it to solve the equations, the \(LU\) factorization of \(A\) is

\[A=\begin{pmatrix} 2 & 4 & -2 \\ 4 & 9 & -3 \\ -2 & -3 & 7 \end{pmatrix}=\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 1 & 1 \end{pmatrix}\begin{pmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{pmatrix}=LU\]
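The whole procedure fits in a few lines of numpy. The sketch below assumes the right-hand side \(b=(2,8,10)^T\) from the example above and uses scipy.linalg.solve_triangular for the back substitution step:

import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[ 2.,  4., -2.],
              [ 4.,  9., -3.],
              [-2., -3.,  7.]])
b = np.array([2., 8., 10.])              # right-hand side from the example above

M1 = np.array([[ 1., 0., 0.],
               [-2., 1., 0.],
               [ 1., 0., 1.]])           # eliminates column 1 below the pivot
M2 = np.array([[1.,  0., 0.],
               [0.,  1., 0.],
               [0., -1., 1.]])           # eliminates column 2 below the pivot

U = M2 @ M1 @ A                          # upper triangular
y = M2 @ M1 @ b                          # transformed right-hand side
x = solve_triangular(U, y)               # back substitution
print(x)                                 # [-1.  2.  2.]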
4.3.2. Existence and Uniqueness
When computing the \(LU\) factorization, the algorithm will halt if it encounters a zero pivot \(a_{kk}=0\). This can be avoided by swapping rows of \(A\) so that, at each step, the pivot \(a_{kk}\) is the entry of largest magnitude among the candidates in the remaining equations. Row swaps are expressed by permutation matrices. For example, with

\[P=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad A=\begin{pmatrix} 0 & 1 \\ 2 & 3 \end{pmatrix}, \qquad PA=\begin{pmatrix} 2 & 3 \\ 0 & 1 \end{pmatrix}\]

the row swap moves the nonzero entry \(2\) into the pivot position. This process is called pivoting: the pivot \(a_{kk}\) is selected to be non-zero (in fact, as large as possible). With pivoting, we can guarantee existence and uniqueness of the \(LU\) factorization.
Theorem 2
If \(P\) is a permutation matrix such that all pivots in the Gaussian Elimination of \(PA\) are non-zero, then the \(LU\) factorization exists and is unique.
So far, we have seen two ways of solving the system \(Ax=b\):
Without pivoting:
\[Ax = L\underbrace{Ux}_{=y} = b\]
- Solve \(Ly = b\) through forward substitution.
- Solve \(Ux = y\) through backward substitution to obtain the solution \(x\).
Note that if we have multiple systems \(Ax_i = b_i\) with the same matrix \(A\), we only need to incur the cost of computing an \(LU\) decomposition of \(A\) once; a SciPy sketch of this reuse pattern appears after the pivoting variant below.
With pivoting:
\[Ax = b \Longleftrightarrow PAx = Pb \Longleftrightarrow LUx = Pb\]
- Solve \(Ly = Pb\) using forward substitution.
- Solve \(Ux = y\) using backward substitution to obtain the solution \(x\).
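As promised above, here is a minimal sketch of factoring once and solving repeatedly, using SciPy's lu_factor/lu_solve (which perform partial pivoting internally); the right-hand sides are arbitrary examples:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[ 2.,  4., -2.],
              [ 4.,  9., -3.],
              [-2., -3.,  7.]])
lu, piv = lu_factor(A)                 # one O(n^3) factorization with pivoting

# Each additional right-hand side costs only two O(n^2) triangular solves.
for b in (np.array([2., 8., 10.]), np.array([1., 0., 0.])):
    print(lu_solve((lu, piv), b))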
Note that switching two rows twice puts the rows back, so an elementary permutation matrix \(P_k\) (a single row swap) is its own inverse. Every permutation matrix is orthogonal, i.e., \(P^{-1} = P^T\); a single-swap \(P_k\) is additionally symmetric, so \(P_k^{-1} = P_k^T = P_k\). The process shown above is called partial pivoting, because it switches rows to always obtain the largest (in magnitude) diagonal element. This is in contrast to full pivoting (see below), which may switch both rows and columns to obtain the largest pivot.

Partial pivoting gives \(A=LU\), where \(U=M_{n-1}P_{n-1}\cdots M_1P_1A\) and \(L=P_1L_1\cdots P_{n-1}L_{n-1}\). Note that \(U\) is upper triangular, but \(L\) is only a permutation of a lower triangular matrix. It turns out that we can write \(L=P_1\cdots P_{n-1}L_1^P\cdots L_{n-1}^P\), where each \(L_k^P=I+(P_{n-1}\cdots P_{k+1}m_k)e_k^T\) has the same form as \(L_k\). Thus, we can write \(PA = L^PU\), where \(L^P=L_1^P\cdots L_{n-1}^P\) is truly lower triangular and \(P=P_{n-1}\cdots P_1\) is the total permutation matrix.
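A minimal sketch of this \(PA=LU\) factorization follows; the function name lu_partial_pivot and the row-tracking details are illustrative, not a fixed API:

import numpy as np

# Sketch of LU with partial pivoting; returns P, L, U with P @ A = L @ U.
def lu_partial_pivot(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    perm = np.arange(n)                       # tracks the row order
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # largest pivot candidate in column k
        if p != k:
            A[[k, p]] = A[[p, k]]             # swap rows k and p
            perm[[k, p]] = perm[[p, k]]
        if A[k, k] == 0.0:                    # whole column is zero; nothing to do
            continue
        A[k+1:, k] /= A[k, k]                 # store the multipliers in place
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    P = np.eye(n)[perm]                       # rows of the identity reordered by perm
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return P, L, U

A = np.array([[2., 4., -2.], [4., 9., -3.], [-2., -3., 7.]])
P, L, U = lu_partial_pivot(A)
print(np.allclose(P @ A, L @ U))              # True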
4.3.3. Full Pivoting
In this case, when we are in the \(k^{th}\) step of the Gaussian Elimination/\(LU\) procedure, we pick the pivot element from the entire \((n-k+1)\times (n-k+1)\) lower-right submatrix of \(A\). For example, suppose that \(n=4\), \(k=2\), and the entry of largest magnitude in this submatrix is \(-8\), located in row \(3\) and column \(4\). We can bring \(-8\) to the pivot position \(a_{22}\) by permuting both rows \(2\)-\(3\) and columns \(2\)-\(4\). Correspondingly, we swap rows \(2\)-\(3\) of the right-hand side \(b\), and rows \(2\)-\(4\) of the vector of unknowns \(x\), to obtain an equivalent permuted system.
This process is encoded in the LU factorization using two permutation matrices \(P\) and \(Q\) such that \(\boxed{PAQ=LU}\). The solution is then computed via
- Solve \(Ly = Pb\) using forward substitution.
- Solve \(Uz = y\) using backward substitution.
- Finally, \(Q^Tx = z \Rightarrow QQ^T x = Qz \Rightarrow \boxed{x=Qz}\) gives the solution.
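These three steps translate directly into code. The helper below is a sketch assuming \(P\), \(Q\), \(L\), \(U\) from some full-pivoting factorization \(PAQ=LU\) are given as numpy arrays:

import numpy as np
from scipy.linalg import solve_triangular

# Solve A x = b given a full-pivoting factorization P A Q = L U.
def solve_full_pivot(P, Q, L, U, b):
    y = solve_triangular(L, P @ b, lower=True)   # forward substitution: L y = P b
    z = solve_triangular(U, y)                   # backward substitution: U z = y
    return Q @ z                                 # x = Q z undoes the column permutation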
To summarize:
Partial pivoting permutes rows, such that the pivot element in the \(k^{th}\) iteration is the largest (in magnitude) of the \(n-k+1\) lowest entries of the \(k^{th}\) column. In the context of \(LU\) decomposition it is written as

\[PA=LU\enspace\enspace\enspace\mbox{($P =$ permutation)}\]

Full pivoting selects the pivot element in the \(k^{th}\) iteration as the largest element of the \((n-k+1)\times (n-k+1)\) lower-right sub-matrix of \(A\). It operates by permuting both rows and columns and leads to an \(LU\) decomposition of

\[PAQ = LU\enspace\enspace\enspace\mbox{($P$, $Q =$ permutations)}\]
However, there are certain categories of matrices for which we can safely use Gaussian elimination or \(LU\) decomposition without the need for pivoting (i.e., the pivot elements will never be problematically small).
Definition
A matrix \(A\) is called diagonally dominant by columns if the magnitude of every diagonal element is larger than the sum of the magnitudes of all other entries in the same column, i.e., for every \(j=1,2,\ldots,n\) we have

\[|a_{jj}| > \sum_{\substack{i=1 \\ i\neq j}}^{n} |a_{ij}|\]

If instead every diagonal element exceeds in magnitude the sum of the magnitudes of all other elements in its row, i.e., for every \(i=1,2,\ldots,n\) we have

\[|a_{ii}| > \sum_{\substack{j=1 \\ j\neq i}}^{n} |a_{ij}|\]

then the matrix is called diagonally dominant by rows.
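These definitions are easy to check numerically; the helper diag_dominant below is an illustrative sketch, not a library routine:

import numpy as np

# Check diagonal dominance; axis=0 sums down columns, axis=1 across rows.
def diag_dominant(A, by="columns"):
    d = np.abs(np.diag(A))
    axis = 0 if by == "columns" else 1
    off = np.abs(A).sum(axis=axis) - d    # off-diagonal magnitude sums
    return bool(np.all(d > off))

A = np.array([[ 4., 1., -1.],
              [ 1., 5.,  2.],
              [-1., 2.,  6.]])
print(diag_dominant(A, by="columns"))     # True: 4 > 2, 5 > 3, 6 > 3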
Definition
A symmetric matrix \(A\in\mathbb R^{n\times n}\) is called positive definite (SPD, for "symmetric positive definite") if for any \(x\in\mathbb R^n\), \(x\neq 0\), we have \(x^TAx>0\). If for any \(x\in\mathbb R^n\), \(x\neq 0\), we have \(x^TAx\geq0\), the matrix is called positive semi-definite. If the respective properties are \(x^TAx<0\) (or \(x^TAx\leq 0\)), the matrix is called negative (semi-)definite.
Definition
The \(k^{th}\) leading principal minor of a matrix \(A\in\mathbb R^{n\times n}\) is the determinant of the top-left \(k\times k\) sub-matrix of \(A\). Thus, if we denote this minor by \(M_k\):

\[M_k=\det\begin{pmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & \ddots & \vdots \\ a_{k1} & \cdots & a_{kk} \end{pmatrix}\]
Theorem 1
If all leading principal minors (i.e., for \(k=1,2,3,\ldots,n\)) of the symmetric matrix \(A\) are positive, then \(A\) is positive definite. If \(M_k<0\) for \(k = \mbox{odd}\) and \(M_k>0\) for \(k = \mbox{even}\), then \(A\) is negative definite.
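A direct (if not the most efficient) way to apply this criterion, sketched with a hypothetical helper and the example matrix from earlier in this section:

import numpy as np

# Leading principal minors M_1, ..., M_n of A.
def leading_principal_minors(A):
    n = A.shape[0]
    return [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]

A = np.array([[ 2., 4., -2.],
              [ 4., 9., -3.],
              [-2., -3., 7.]])
print(leading_principal_minors(A))   # approximately [2, 2, 8]: all positive, so A is SPD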
Theorem 2
Pivoting is not necessary when \(A\) is diagonally dominant by columns, or symmetric and positive (or negative) definite.
These “special” classes of matrices (which appear quite often in engineering and applied sciences) not only make \(LU\) decomposition more robust, but also open some additional possibilities for solving \(Ax=b\).