3. Homework #3
For all programming assignments, please turn in your code along with your solution. Submissions should be made on Sakai.
Problem 1
In this problem, you will construct a proof that the Jacobi method converges when applied to matrices that are diagonally dominant. In the subproblems below, you may use the result of any part (even if you did not prove it) to answer the parts that come after it.
(a) If \(x\in\mathbb R^n\) and \(T\) is an \(n\times n\) matrix, show that for any positive integer \(k\) the following inequality holds:
\[\lVert T^k x\rVert \leq \lVert T\rVert^k \lVert x\rVert\]
(b) Show that the Jacobi method can be written as:
\[x^{(k+1)} = Tx^{(k)} + c\]
where \(T=D^{-1}(L+U)\) and \(c=D^{-1}b\) (using the decomposition \(A = D - L - U\)).
(c) Define the error after the \(k^{th}\) iteration to be \(e^{(k)} = x^{(k)} - x^\star\), where \(x^\star\) is the exact solution to the equation \(Ax=b\). Show that:
\[e^{(k+1)} = Te^{(k)}\]
Hint: You can use (after explaining why it is true) that \(x^\star = Tx^\star+c\).
(d) If \(A\) is diagonally dominant by rows, and \(T\) is the matrix defined in part (b) above, show that \(\lVert T\rVert_\infty < 1\).
Hint: The proof may become more apparent if you write out explicitly what the first few rows of \(T = D^{-1}(L+U)\) look like.
(e) Show that when \(A\) is diagonally dominant by rows, then:
\[\lVert e^{(k)}\rVert_\infty \leq \lVert T\rVert^k_\infty \lVert e^{(0)}\rVert_\infty\]
Explain why this result implies that the Jacobi method is guaranteed to converge for matrices that are diagonally dominant by rows. (A numerical sanity check of this result is sketched below.)
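For intuition only (this is not part of the proof), here is a minimal NumPy sketch that builds \(T = D^{-1}(L+U)\) for a small diagonally dominant matrix, checks that \(\lVert T\rVert_\infty < 1\), and watches the Jacobi error shrink. The matrix \(A\) and vector \(b\) here are made-up examples, not taken from the assignment.

```python
import numpy as np

# Made-up diagonally dominant example (not part of the assignment).
A = np.array([[10.0, 2.0, 1.0],
              [1.0,  8.0, 2.0],
              [2.0,  1.0, 9.0]])
b = np.array([13.0, 11.0, 12.0])

# Jacobi splitting A = D - L - U, so L + U = D - A.
D = np.diag(np.diag(A))
T = np.linalg.solve(D, D - A)   # T = D^{-1}(L + U)
c = np.linalg.solve(D, b)       # c = D^{-1} b

print("||T||_inf =", np.linalg.norm(T, np.inf))   # < 1 for this matrix

x_star = np.linalg.solve(A, b)  # reference solution for measuring the error
x = np.zeros_like(b)
for k in range(10):
    x = T @ x + c
    print(k + 1, np.linalg.norm(x - x_star, np.inf))
```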
Problem 2
A polynomial \(p(t)\) of degree \(n\) is defined as:
\[p(t) = x_0 + x_1 t + x_2 t^2 + \cdots + x_n t^n\]
For \(n = 0,1,\ldots,5\), fit a polynomial of degree \(n\) by least squares to the following data:
Make a plot of the original data points along with each resulting polynomial curve (you may make separate graphs for each curve, or a single graph containing all of the curves). Which polynomial would you say best captures the general trend of the data?
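One possible setup, sketched below assuming NumPy and Matplotlib; the arrays t and y hold placeholder values that must be replaced with the data given in the problem.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder values only -- substitute the (t_i, y_i) pairs from the problem.
t = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
y = np.array([1.0, 0.5, 0.0, 0.5, 2.0])

tt = np.linspace(t.min(), t.max(), 200)   # fine grid for smooth curves
plt.plot(t, y, "ko", label="data")
for n in range(6):
    coeffs = np.polyfit(t, y, n)          # degree-n least-squares fit
    plt.plot(tt, np.polyval(coeffs, tt), label=f"degree {n}")
plt.legend()
plt.show()
```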
Problem 3
(a) Solve the following least squares problem using the normal equations:
\[\begin{split}\left[\begin{array}{cc} 0.16 & 0.10 \\ 0.17 & 0.11 \\ 2.02 & 1.29 \end{array}\right]\left[\begin{array}{c} x_1 \\ x_2 \end{array}\right] \cong \left[\begin{array}{c} 0.26 \\ 0.28 \\ 3.31 \end{array}\right]\end{split}\]
(b) Now solve the same least squares problem again, but this time use the slightly perturbed right-hand side:
\[\begin{split}b = \left[\begin{array}{c} 0.27 \\ 0.25 \\ 3.33 \end{array}\right]\end{split}\]
(c) Compare your results from parts (a) and (b). Can you explain the difference?
Hint: Use the function np.linalg.cond to compute the condition number of \(A^TA\) in different norms.
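A minimal sketch of the mechanics, assuming NumPy (the variable names are mine):

```python
import numpy as np

A = np.array([[0.16, 0.10],
              [0.17, 0.11],
              [2.02, 1.29]])
b1 = np.array([0.26, 0.28, 3.31])   # original right-hand side
b2 = np.array([0.27, 0.25, 3.33])   # perturbed right-hand side

# Normal equations: solve (A^T A) x = A^T b for each right-hand side.
for b in (b1, b2):
    x = np.linalg.solve(A.T @ A, A.T @ b)
    print(x)

# Condition number of A^T A in different norms, as the hint suggests.
print(np.linalg.cond(A.T @ A))          # 2-norm (default)
print(np.linalg.cond(A.T @ A, np.inf))  # infinity norm
```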
Problem 4
(a) Show that the matrix
\[\begin{split}A = \left[\begin{array}{ccc} 0.1 & 0.2 & 0.3 \\ 0.4 & 0.5 & 0.6 \\ 0.7 & 0.8 & 0.9 \end{array}\right]\end{split}\]
is singular. Describe the set of solutions to the system \(Ax = b\) if
\[\begin{split}b = \left[\begin{array}{c} 0.1 \\ 0.3 \\ 0.5 \end{array}\right]\end{split}\]
(b) If we were to use Gaussian Elimination with partial pivoting to solve this system using exact arithmetic, at what point would the process fail?
(c) Because some of the entries of \(A\) are not exactly representable in a binary floating-point system, the matrix is no longer exactly singular when entered into a computer. Thus, solving the system by Gaussian Elimination will not necessarily fail. Compare the computed solution with your description of the solution set in part (a). What is the estimated value of the condition number \(\kappa(A)\)?
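As a starting point (a sketch assuming NumPy; the variable names are mine), the floating-point behavior can be observed with:

```python
import numpy as np

A = np.array([[0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6],
              [0.7, 0.8, 0.9]])
b = np.array([0.1, 0.3, 0.5])

# In exact arithmetic A is singular, but after rounding to binary floating
# point the solve typically succeeds and returns some particular solution.
x = np.linalg.solve(A, b)
print("computed solution:", x)
print("residual:", A @ x - b)

# Estimated condition number of A (2-norm by default).
print("cond(A) =", np.linalg.cond(A))
```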