2. Homework #2

For all programming assignments, please turn in your code along with your solution. Submissions should be made on Sakai.

Problem 1

Let \(A\in\mathbb R^{n\times n}\) be any matrix, and \(I\) be the \(n\times n\) identity matrix. Show that the matrix \(L_k = I + m_ke_k^T\) is the inverse of the matrix \(M_k = I - m_ke_k^T\), where \(M_k\) is the elimination matrix for the \(k^{th}\) column \(a_k\) of \(A\).
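Before writing the proof, it can be helpful to confirm the claim numerically. The sketch below (an illustration, not a proof) builds a random multiplier vector \(m_k\) with zeros in positions \(1,\ldots,k\) — the structure an elimination multiplier vector has — and checks that \(M_kL_k = I\).

```python
import numpy as np

# Numerical sanity check (not a proof): for a multiplier vector m with
# zeros in positions 0..k, verify that (I - m e_k^T)(I + m e_k^T) = I.
n, k = 5, 2                                  # k is 0-based here
rng = np.random.default_rng(0)

m = np.zeros(n)
m[k+1:] = rng.standard_normal(n - k - 1)     # multipliers below the pivot
e = np.zeros(n)
e[k] = 1.0

M = np.eye(n) - np.outer(m, e)               # elimination matrix M_k
L = np.eye(n) + np.outer(m, e)               # claimed inverse L_k

print(np.allclose(M @ L, np.eye(n)))         # True
```

The key observation the check exploits is that \(e_k^Tm_k = 0\), since \(m_k\) is zero in its \(k^{th}\) entry; the proof hinges on the same fact.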

Problem 2

Write a program to compute the LU decomposition of a matrix \(A\) using the concept of elimination matrices. Use it to solve the following system:

\[\begin{split}A=\left[\begin{array}{ccccccccc} 21 & 32 & 14 & 8 & 6 & 9 & 11 & 3 & 5 \\ 17 & 2 & 8 & 14 & 55 & 23 & 19 & 1 & 6 \\ 41 & 23 & 13 & 5 & 11 & 22 & 26 & 7 & 9 \\ 12 & 11 & 5 & 8 & 3 & 15 & 7 & 25 & 19 \\ 14 & 7 & 3 & 5 & 11 & 23 & 8 & 7 & 9 \\ 2 & 8 & 5 & 7 & 1 & 13 & 23 & 11 & 17 \\ 11 & 7 & 9 & 5 & 3 & 8 & 26 & 13 & 17 \\ 23 & 1 & 5 & 19 & 11 & 7 & 9 & 4 & 16 \\ 31 & 5 & 12 & 7 & 13 & 17 & 24 & 3 & 11 \end{array}\right]\left[\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \\ x_8 \\ x_9 \end{array}\right]=\left[\begin{array}{c} 2 \\ 5 \\ 7 \\ 1 \\ 6 \\ 9 \\ 4 \\ 8 \\ 3 \end{array}\right]\end{split}\]
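The structure of such a program can be sketched as follows on a small example (deliberately not the assigned system): at step \(k\), form the multipliers \(m_k\), apply \(M_k = I - m_ke_k^T\) to zero out column \(k\) below the pivot, and accumulate the multipliers into \(L\).

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization via elimination matrices M_k = I - m_k e_k^T
    (no pivoting; assumes all pivots are nonzero)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        m = np.zeros(n)
        m[k+1:] = U[k+1:, k] / U[k, k]             # multipliers below pivot
        M = np.eye(n) - np.outer(m, np.eye(n)[k])  # elimination matrix M_k
        U = M @ U                                  # eliminate column k
        L[k+1:, k] = m[k+1:]                       # L collects the m_k's
    return L, U

# Small 3x3 example (not the assigned matrix):
A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))   # True
```

Once \(A = LU\) is available, the system is solved by forward substitution on \(Ly = b\) followed by back substitution on \(Ux = y\); the factorization itself never needs to touch \(b\).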

Problem 3

Suppose that \(\lVert\cdot\rVert_a\) and \(\lVert\cdot\rVert_b\) are two equivalent vector norms in \(\mathbb R^n\). Thus, by definition, there exist two constants \(c,d>0\) such that for any vector \(x\in\mathbb R^n\), we have

\[c\lVert x\rVert_a \leq \lVert x\rVert_b \leq d\lVert x\rVert_a\]

Prove that their induced matrix norms are also equivalent, i.e., there exist two constants \(c',d'>0\) such that for any matrix \(A\in\mathbb R^{n\times n}\)

\[c'\lVert A\rVert_a \leq \lVert A\rVert_b \leq d'\lVert A\rVert_a\]

Problem 4

  a. Use a single-precision routine for Gaussian elimination to solve the system \(Ax=b\) defined below
\[\begin{split}\left[\begin{array}{cccc} 21.0 & 67.0 & 88.0 & 73.0 \\ 76.0 & 63.0 & 7.0 & 20.0 \\ 0.0 & 85.0 & 56.0 & 54.0 \\ 19.3 & 43.0 & 30.2 & 29.4 \end{array}\right]\left[\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \end{array}\right]=\left[\begin{array}{c} 141.0 \\ 109.0 \\ 218.0 \\ 93.7 \end{array}\right]\end{split}\]
  b. Compute the residual \(r = b - Ax\) using double-precision arithmetic, but store the final result in a single-precision vector \(r\). (Note that the solution routine may destroy the matrix \(A\), so you may need to save a separate copy for computing the residual.)
  c. Solve the linear system \(Az=r\) to obtain the “improved” solution \(x+z\). (Note that the matrix \(A\) need not be refactored.)
  d. Repeat steps (b) and (c) until no further improvement is observed.
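The iterative-refinement loop described above can be sketched as follows. Here `solve32` is a hypothetical stand-in for your single-precision Gaussian-elimination routine (in a real submission you would call your own solver from Problem 2 and reuse the factorization); only the residual is computed in double precision.

```python
import numpy as np

def solve32(A32, b32):
    # stand-in for a single-precision Gaussian-elimination solver;
    # replace with your own routine (and reuse its factorization)
    return np.linalg.solve(A32, b32).astype(np.float32)

A = np.array([[21.0, 67.0, 88.0, 73.0],
              [76.0, 63.0,  7.0, 20.0],
              [ 0.0, 85.0, 56.0, 54.0],
              [19.3, 43.0, 30.2, 29.4]])
b = np.array([141.0, 109.0, 218.0, 93.7])

A32, b32 = A.astype(np.float32), b.astype(np.float32)
x = solve32(A32, b32)                                      # step (a)
for _ in range(5):
    # step (b): double-precision residual, stored in single precision
    r = (b - A @ x.astype(np.float64)).astype(np.float32)
    z = solve32(A32, r)                                    # step (c)
    x = x + z                                              # improved solution
print(x)
```

A fixed iteration count is used here for brevity; per step (d), your program should instead stop once \(\lVert z\rVert\) no longer decreases.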

Problem 5

Use Gaussian elimination without pivoting to solve the linear system

\[\begin{split}\left[\begin{array}{cc} \varepsilon & 1 \\ 1 & 1 \end{array}\right]\left[\begin{array}{c} x_1 \\ x_2 \end{array}\right] = \left[\begin{array}{c} 1+\varepsilon \\ 2 \end{array}\right]\end{split}\]

for \(\varepsilon=10^{-2k}\), where \(k=1,2,\ldots,10\). The exact solution is \(x=(1,1)^T\), independent of the value of \(\varepsilon\). How does the accuracy of the computed solution behave as the value of \(\varepsilon\) decreases?
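For this \(2\times2\) system the elimination can be written out explicitly, which makes the experiment easy to run; the sketch below (one possible setup, with the elimination steps hard-coded rather than calling a general routine) uses the tiny pivot \(\varepsilon\) without row interchange.

```python
import numpy as np

def solve_no_pivot(eps):
    """Gaussian elimination without pivoting on the 2x2 epsilon system."""
    m = 1.0 / eps                               # multiplier from tiny pivot
    # after eliminating x1 from row 2: (1 - m) x2 = 2 - m (1 + eps)
    x2 = (2.0 - m * (1.0 + eps)) / (1.0 - m)
    x1 = ((1.0 + eps) - x2) / eps               # back substitution
    return x1, x2

for k in range(1, 11):
    eps = 10.0 ** (-2 * k)
    x1, x2 = solve_no_pivot(eps)
    print(f"eps = 1e-{2*k:02d}:  x1 = {x1:.6f}  x2 = {x2:.6f}")
```

Watching \(x_1\) in the printed table as \(\varepsilon\) shrinks should make the answer to the accuracy question apparent.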