Solution.

In IEEE single precision, numbers have the form

$\displaystyle x = \pm (1.b_1 b_2 \dots b_{23})_2 \times 2^{e},
\qquad -126 \le e \le 127 .
$


Step 1: Representation of $256$.

$\displaystyle 256 = 2^8 = (1.00000000000000000000000)_2 \times 2^8 .
$

Thus the exponent is $e=8$ and the significand has all zero fraction bits.


Step 2: Spacing (ULP) at $256$.

For a normalized number with exponent $e$ and $N = 24$ bits of significand, the spacing between adjacent floating point numbers is

$\displaystyle \mathrm{ulp}(256) = 2^{e-(N-1)} = 2^{8-23} = 2^{-15}.
$


Step 3: Number that follows $256$.

$\displaystyle \mathrm{next}(256) = 256 + 2^{-15}.
$

Binary:

$\displaystyle (1.00000000000000000000001)_2 \times 2^8 .
$

Decimal:

$\displaystyle 256 + 2^{-15} = 256.000030517578125.
$


Step 4: Number that precedes $256$.

Just below $256$ the exponent drops to $e=7$, so the spacing there is $2^{7-23} = 2^{-16}$ and

$\displaystyle \mathrm{prev}(256) = 256 - 2^{-16}.
$

Binary:

$\displaystyle (1.11111111111111111111111)_2 \times 2^7 .
$

Decimal:

$\displaystyle 256 - 2^{-16} = 255.9999847412109375.
$


Final answer:

\begin{indisplay}
\boxed{
\begin{aligned}
\text{follows }256 &: (1.00000000000000000000001)_2 \times 2^8 = 256.000030517578125,\\
\text{precedes }256 &: (1.11111111111111111111111)_2 \times 2^7 = 255.9999847412109375.
\end{aligned}}
\end{indisplay}
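These neighbors can be checked numerically; a quick sketch using NumPy's `nextafter` on `float32` values (assuming NumPy is available):

```python
import numpy as np

x = np.float32(256.0)

# Successor: the spacing just above 256 is 2**(8-23) = 2**-15.
succ = np.nextafter(x, np.float32(np.inf))
# Predecessor: just below 256 the exponent drops to 7, spacing 2**-16.
pred = np.nextafter(x, np.float32(-np.inf))

print(float(succ) - 256.0)   # 2**-15
print(256.0 - float(pred))   # 2**-16
```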

Out of these numbers, which ones can be represented exactly in the Single Precision Floating Point system? Justify your answer.

  1. $0.1$
  2. $0.125$
  3. 0.4375
  4. 4.2849
  5. $0.775$
  6. $\displaystyle \frac{256+128+ 32 + 1 }{4096} .$

SOLUTION

In IEEE single precision, a real number is represented exactly if and only if its binary expansion is finite and fits within 24 significant bits (including the hidden leading $1$). Equivalently, a rational number is exactly representable if its denominator (in lowest terms) is a power of $2$.

  1. $\mathbf{0.1}$

    $\displaystyle 0.1 = \frac{1}{10} = \frac{1}{2 \cdot 5}.
$

    The denominator contains a factor $5$, so the binary expansion is infinite.

    $\displaystyle \boxed{\text{Not exactly representable.}}
$

  2. $\mathbf{0.125}$

    $\displaystyle 0.125 = \frac{1}{8} = 2^{-3}.
$

    This has a finite binary expansion:

    $\displaystyle 0.125 = (0.001)_2.
$

    $\displaystyle \boxed{\text{Exactly representable.}}
$

  3. $\mathbf{0.4375}$

    $\displaystyle 0.4375 = \frac{7}{16} = 7 \cdot 2^{-4}.
$

    Binary expansion:

    $\displaystyle 0.4375 = (0.0111)_2.
$

    $\displaystyle \boxed{\text{Exactly representable.}}
$

  4. $\mathbf{4.2849}$

    $\displaystyle 4.2849 = \frac{42849}{10000} = \frac{42849}{2^4 \cdot 5^4}.
$

    The denominator contains powers of $5$, so the binary expansion is infinite.

    $\displaystyle \boxed{\text{Not exactly representable.}}
$

  5. $\mathbf{0.775}$

    $\displaystyle 0.775 = \frac{775}{1000} = \frac{31}{40} = \frac{31}{2^3 \cdot 5}.
$

    The denominator contains a factor $5$, hence the binary expansion is infinite.

    $\displaystyle \boxed{\text{Not exactly representable.}}
$
  6. $\mathbf{\frac{256+128+32+1}{4096}}$

    $\displaystyle \frac{256+128+ 32 + 1 }{4096} = \frac{417}{4096} = \frac{417}{2^{12}}.
$

    The denominator is a power of $2$, and the numerator $417 = (110100001)_2$ needs only $9$ significant bits, well within the available $24$.

    $\displaystyle \boxed{\text{Exactly representable.}}
$
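The denominator test used above can be automated; a small sketch (pure Python, `fractions` module) that checks whether a rational value has a power-of-two denominator and a numerator fitting in the 24-bit significand:

```python
from fractions import Fraction

def exactly_representable(q: Fraction) -> bool:
    """True if q (in lowest terms) has a power-of-two denominator and its
    numerator fits in 24 significant bits.  (Sufficient for the fractions
    here; trailing zero bits of an integer numerator would need to be
    discounted in general.)"""
    d = q.denominator
    power_of_two = (d & (d - 1)) == 0
    return power_of_two and abs(q.numerator).bit_length() <= 24

values = {
    "0.1":      Fraction(1, 10),
    "0.125":    Fraction(1, 8),
    "0.4375":   Fraction(7, 16),
    "4.2849":   Fraction(42849, 10000),
    "0.775":    Fraction(31, 40),
    "417/4096": Fraction(256 + 128 + 32 + 1, 4096),
}
for name, q in values.items():
    print(name, exactly_representable(q))
```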

Calculate approximately what fraction of Single Precision Floating Point numbers lie between $0$ and $1024$.

SOLUTION

Single precision IEEE floating point numbers have

$\displaystyle 2^{23}
$

distinct significands for each exponent value (the $23$ explicit fraction bits; subnormals form one additional block).


Step 1: Count numbers between 0 and $1024$.

Since

$\displaystyle 1024 = 2^{10},
$

the positive normalized floating point numbers in the interval $(0,1024)$ have exponents $-126 \le e \le 9$, i.e. $9-(-126)+1 = 136$ exponent values.

Each normalized exponent contributes $2^{23}$ positive numbers.

Thus,

$\displaystyle N_{(0,1024)} \approx (136)\,2^{23}.
$
$

(Subnormals contribute only one additional exponent block and do not affect the estimate at leading order.)


Step 2: Total number of floating point numbers.

The total number of single precision floating point numbers is

$\displaystyle N_{\text{total}} = 2^{32}.
$


Step 3: Fraction.

   Fraction$\displaystyle \approx
\frac{136 \cdot 2^{23}}{2^{32}}
= \frac{136}{2^{9}}
= \frac{136}{512}
\approx 0.266.
$


Final answer:

$\displaystyle \boxed{\text{Approximately } 27\% \text{ of single precision floating point numbers lie between } 0 \text{ and } 1024.}
$
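The count can be confirmed from the bit-level layout: positive `float32` values increase monotonically with their bit pattern, and $1024 = 2^{10}$ has biased exponent $10 + 127 = 137$. A sketch of the count:

```python
# Biased exponent of 1024 is 10 + 127 = 137, fraction bits all zero,
# so its bit pattern is 137 * 2**23.  Patterns 1 .. 137*2**23 - 1 are
# exactly the positive floats in (0, 1024), subnormals included.
pattern_1024 = 137 * 2**23

normals = 136 * 2**23          # exponents -126..9, 2**23 significands each
subnormals = 2**23 - 1         # patterns 1 .. 2**23 - 1
assert pattern_1024 - 1 == normals + subnormals

total = 2**32                  # all 32-bit patterns (the rough denominator)
print(normals / total)         # 0.265625, i.e. about 27 %
```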

Consider the function

$\displaystyle f(x) = x^{1/3}.
$

  1. Derive the Newton iteration method for solving $f(x)=0$ and analyze the resulting iterations. Do you get quadratic convergence to the root $x = 0$? Does the Newton method converge to the root $x = 0$ at all?

  2. For which initial guesses $x_0$ does Newton's method converge?

  3. Does this contradict the theorem stating that Newton's method converges quadratically near a simple root? Carefully explain.

  4. Provide a geometric explanation of the behavior of the iteration by sketching the graph of $f(x)$ and describing the tangent lines near the root.

SOLUTION

  1. Newton iteration:

    Newton's method is defined by

    $\displaystyle x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.
$

    For $f(x) = x^{1/3}$, we have

    $\displaystyle f'(x) = \frac{1}{3} x^{-2/3}.
$

    Thus the Newton iteration becomes

    $\displaystyle x_{n+1} = x_n - \frac{x_n^{1/3}}{\frac{1}{3} x_n^{-2/3}} = x_n - 3 x_n = -2 x_n.
$

    Analysis: The iteration formula is

    $\displaystyle x_{n+1} = -2 x_n.
$

    Clearly, the sequence does not converge to 0. Instead, the iterates alternate in sign and grow in magnitude:

    $\displaystyle x_1 = -2 x_0, \quad x_2 = 4 x_0, \quad x_3 = -8 x_0, \dots
$

    Hence, Newton's method does not converge to the root $x = 0$ for this function.

  2. Initial guesses for convergence:

    From the iteration formula $x_{n+1} = -2 x_n$, the magnitude of the iterates is

    $\displaystyle \vert x_{n+1}\vert = 2 \vert x_n\vert.
$

    Thus, unless $x_0 = 0$, the iterates diverge. The only "initial guess" that leads to the root is

    $\displaystyle x_0 = 0.
$

  3. Quadratic convergence theorem:

    The standard Newton convergence theorem states that if $x^*$ is a simple root of $f$ ($f(x^*)=0$, $f'(x^*) \neq 0$) and the initial guess is sufficiently close to $x^*$, then Newton's method converges quadratically.

    Here, $x = 0$ is a root, but

    $\displaystyle f'(0) = \lim_{x \to 0} \frac{1}{3} x^{-2/3} = \infty,
$

    so the derivative at the root is not finite. Therefore, $x = 0$ is not a simple root in the sense of the theorem, and there is no contradiction: the conditions of the theorem are not satisfied.

  4. Geometric explanation:

    The function $f(x) = x^{1/3}$ has a very steep slope near the origin. Its tangent line at a point $x_n \neq 0$ is

    $\displaystyle y = f(x_n) + f'(x_n)(x - x_n) = x_n^{1/3} + \frac{1}{3} x_n^{-2/3} (x - x_n).
$

    Because $f'(x_n)$ becomes very large as $x_n \to 0$, the tangent line is nearly vertical, and the Newton step

    $\displaystyle x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}
$

    jumps far away from the root, not towards it. This explains the divergence observed in the iteration.

    A sketch of $f(x) = x^{1/3}$ and the tangent lines near the origin would show very steep tangents, causing iterates to overshoot and alternate signs.
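The sign-alternating doubling can be observed directly; a minimal sketch (the real cube root is built with `math.copysign` so that negative arguments work):

```python
import math

def f(x):
    # real cube root: sign(x) * |x|**(1/3)
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

def fprime(x):
    return (1.0 / 3.0) * abs(x) ** (-2.0 / 3.0)

x = 0.1
for _ in range(4):
    x = x - f(x) / fprime(x)   # analytically equal to -2*x
    print(x)                   # -0.2, 0.4, -0.8, 1.6 (up to rounding)
```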

A criticism of the bisection method is that it does not use the values of $f(a_k)$ and $f(b_k)$, only their signs. A way to use these values is to find the equation of the straight line connecting $(a_k, f(a_k))$ and $(b_k, f(b_k))$ and then take $x_k$ to be the point where the line intersects the x-axis. Other than this, the bisection algorithm is unchanged. This is known as the method of false position.

(a) Show that

$\displaystyle x_k = \frac{ a_k f(b_k) -b_k f(a_k)}{f(b_k) - f(a_k)}.$

(b) To solve

$\displaystyle x^2 - 4 = 0,$

take $a_0 = 0$ and $b_0 =3$. Find $x_0$ and $x_1$ using the method of false position, and also find them using the bisection method. Which method produces the more accurate values? Which one requires more flops?

(a)
Derivation of the false position formula:

The method of false position (regula falsi) uses the line connecting $(a_k, f(a_k))$ and $(b_k, f(b_k))$. The equation of the line is

$\displaystyle y - f(a_k) = \frac{f(b_k)-f(a_k)}{b_k - a_k} (x - a_k).
$

Set $y=0$ to find the x-intercept $x_k$:

$\displaystyle 0 - f(a_k) = \frac{f(b_k)-f(a_k)}{b_k - a_k} (x_k - a_k) \implies
x_k - a_k = - f(a_k) \frac{b_k - a_k}{f(b_k) - f(a_k)}.
$

Thus,

$\displaystyle x_k = a_k - f(a_k) \frac{b_k - a_k}{f(b_k) - f(a_k)}
= \frac{a_k f(b_k) - b_k f(a_k)}{f(b_k) - f(a_k)},
$

which is the desired formula.

(b)
Example: $f(x) = x^2 - 4$, $a_0 = 0$, $b_0 =3$.

Method of false position:

$\displaystyle f(a_0) = f(0) = -4, \quad f(b_0) = f(3) = 5
$

$\displaystyle x_0 = \frac{a_0 f(b_0) - b_0 f(a_0)}{f(b_0) - f(a_0)}
= \frac{0 \cdot 5 - 3 \cdot (-4)}{5 - (-4)} = \frac{12}{9} = \frac{4}{3} \approx 1.333
$

$\displaystyle f(x_0) = (4/3)^2 - 4 = 16/9 - 36/9 = -20/9
$

Determine the new interval. Since $f(a_0)f(x_0) = (-4)(-20/9) > 0$, set

$\displaystyle a_1 = x_0 = 4/3, \quad b_1 = b_0 = 3
$

Next iteration:

$\displaystyle x_1 = \frac{a_1 f(b_1) - b_1 f(a_1)}{f(b_1) - f(a_1)}
= \frac{(4/3)(5) - 3(-20/9)}{5 - (-20/9)}
= \frac{40}{3} \cdot \frac{9}{65}
= \frac{360}{195} = \frac{24}{13} \approx 1.846
$

Bisection method:

$\displaystyle x_0 = \frac{a_0 + b_0}{2} = \frac{0+3}{2} = 1.5
$

Check the sign: $f(x_0) = 1.5^2 -4 = -1.75 <0$, so

$\displaystyle a_1 = x_0 = 1.5, \quad b_1 = b_0 = 3
$

$\displaystyle x_1 = \frac{a_1 + b_1}{2} = \frac{1.5+3}{2} = 2.25
$

Comparison:

- Accuracy: The method of false position produces $x_0 = 1.333$ and $x_1 = 1.846$, while bisection produces $x_0 = 1.5$ and $x_1 = 2.25$. The false position iterates are closer to the root $\sqrt{4} = 2$ after the first iteration.
- Flops: Both methods evaluate $f(x)$ once per iteration, but false position requires extra multiplications and divisions to compute $x_k$ from the weighted formula, whereas bisection needs only a simple average. Therefore, bisection is slightly cheaper in flops per iteration.
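Both first iterations can be reproduced in exact rational arithmetic; a sketch with Python's `fractions`:

```python
from fractions import Fraction

def f(x):
    return x * x - 4

def false_position_step(a, b):
    return (a * f(b) - b * f(a)) / (f(b) - f(a))

def bracket_update(a, b, x):
    # keep the subinterval on which f changes sign
    return (x, b) if f(a) * f(x) > 0 else (a, x)

a, b = Fraction(0), Fraction(3)   # exact arithmetic via Fraction
x0 = false_position_step(a, b)
a1, b1 = bracket_update(a, b, x0)
x1 = false_position_step(a1, b1)
print(x0, x1)                     # 4/3 and 24/13

m0 = (Fraction(0) + 3) / 2        # bisection midpoints
m1 = (m0 + 3) / 2                 # f(1.5) < 0, so the new bracket is [1.5, 3]
print(m0, m1)                     # 3/2 and 9/4
```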

Suppose you have traveled to the Ice Age, and the computer in your Time Machine is broken. To come back to 2026 and complete the numerical computing exam, you need to calculate “three to the power one third”, i.e.

$\displaystyle 3^\frac{1}{3}.$

You can only use $+$, $-$, $*$ and $/$. Set up the nonlinear equation that has $3^\frac{1}{3}$ as the solution.
  1. Describe how to use the bisection method to make this calculation. What is your initial bracket? How many iteration steps do you need to perform to get the solution with $10^{-3} \simeq 2^{-10}$ accuracy?
  2. Describe how to use the Newton method for this calculation. How many steps do you need to get the root with $10^{-3}$ accuracy?

(a)
Bisection method:

Set up the nonlinear equation

$\displaystyle f(x) = x^3 - 3 = 0.
$

Initial bracket: Since $1^3 = 1 < 3$ and $2^3 = 8 > 3$, a suitable initial bracket is

$\displaystyle [a_0, b_0] = [1, 2].
$

Bisection algorithm: At each step, compute

$\displaystyle x_k = \frac{a_k + b_k}{2}
$

and evaluate $f(x_k)$. If $f(a_k)f(x_k)<0$, set $b_{k+1}=x_k$; else set $a_{k+1}=x_k$. Repeat until $\vert b_k - a_k\vert <$   tolerance.

Number of steps for accuracy $10^{-3} $:

The bisection error after $n$ steps satisfies

$\displaystyle \vert x_n - x^*\vert \le \frac{b_0 - a_0}{2^n}.
$

Set

$\displaystyle \frac{b_0 - a_0}{2^n} \le 10^{-3} \implies 2^n \ge \frac{b_0 - a_0}{10^{-3}} = \frac{2-1}{10^{-3}} = 1000.
$

Take logarithm base 2:

$\displaystyle n \ge \log_2 1000 \approx 9.97, \qquad \text{so } n = 10 \text{ steps}.
$

(b)
Newton's method:

Newton iteration for $f(x) = x^3 - 3$ is

$\displaystyle x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{x_n^3 - 3}{3 x_n^2}
= x_n - \frac{x_n}{3} + \frac{1}{x_n^2} = \frac{2}{3} x_n + \frac{1}{x_n^2}.
$

Start with $x_0 = 1.5$ (a reasonable guess).

Convergence: Newton's method is quadratically convergent. For tolerance $10^{-3} $, typically 4–5 iterations suffice.

Estimation of steps: Roughly, the error decreases as

$\displaystyle \vert e_{n+1}\vert \approx C \vert e_n\vert^2.
$

Starting with $e_0 \sim 0.134$ ( $1.5 - 3^{1/3} \approx 0.134$), squaring the error rapidly reduces it below $10^{-3} $ in 4 iterations.
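A quick sketch confirming the estimate from $x_0 = 1.5$ (here iterating until the error drops below $10^{-3}$):

```python
root = 3.0 ** (1.0 / 1.0) ** (1 / 3) if False else 3.0 ** (1.0 / 3.0)  # reference value
# (reference value 3**(1/3) for measuring the error)

x = 1.5
steps = 0
while abs(x - root) >= 1e-3:
    x = (2.0 / 3.0) * x + 1.0 / (x * x)   # Newton step for x**3 - 3 = 0
    steps += 1
print(steps, x)   # converges well within the 4-5 step budget
```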

Consider the function

$\displaystyle f(x) = 8 x^4 - 12 x^3 + 6 x^2 - x.$

This function has two roots, $x = 0$ and $x=\frac{1}{2}$. For each of its two roots determine which of the two methods, Newton or bisection, will converge faster to the given root. To be more specific, you may assume that you are looking for, say, $10^{-3}$ accuracy, and you start the bisection with the interval $[-0.1, 0.1]$ for the root $x = 0$ and use the interval $[0.4, 0.6]$ for the root $x=1/2$.

For the secant, Newton, and bisection methods: looking at the iterative error when solving $f(x)=0$, which of these methods converges to an error of $10^{-16}$ the fastest? True or False:
If Newton's method converges to a solution $x^*$ for a particular choice of $x_0$, then it will converge to $x^*$ for any starting point between $x_0$ and $x^*$.
To get credit for this problem, you will need to give a comprehensive explanation of your answer.

For computing the midpoint $m$ of an interval $[a,b]$, which of the following two equations is preferable in a floating point system? Why? When? Devise an example where the midpoint given by the first equation lies outside of the interval $[a,b]$.

  1. $m=(a+b)/2$,
  2. $m=a+ (b-a)/2$.
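In binary floating point the classic failure mode of formula 1 is overflow of the intermediate sum $a+b$ (on decimal machines $(a+b)/2$ can drift outside $[a,b]$ even without overflow). A sketch with values near the double precision overflow threshold:

```python
import math

a, b = 1.6e308, 1.7e308      # both finite, but a + b overflows

m1 = (a + b) / 2.0           # inf: lies outside [a, b]
m2 = a + (b - a) / 2.0       # stays safely inside [a, b]

print(m1, m2)
```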

Solve with three digit accuracy the equation

$\displaystyle x^2 + 3 x -8\times 10^{-14}=0.$

Note that you have to find two roots. Which version of the quadratic formula are you going to use to find these roots?
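The danger here is cancellation in $-b + \sqrt{b^2-4ac}$ for the small root. A sketch of the stable variant: compute the large-magnitude root with the sign choice that avoids cancellation, then get the small root from Vieta's product $x_1 x_2 = c/a$:

```python
import math

# x**2 + 3x - 8e-14 = 0, i.e. a = 1, b = 3, c = -8e-14
a, b, c = 1.0, 3.0, -8e-14

disc = math.sqrt(b * b - 4 * a * c)

# Root of larger magnitude: -b and -disc have the same sign, no cancellation.
x_big = (-b - disc) / (2 * a)
# Small root via Vieta (x_big * x_small = c/a) instead of -b + disc.
x_small = c / (a * x_big)

print(x_big, x_small)
```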

The “divide and average” method for computing the square root of a given number $a$ can be formulated as follows:

$\displaystyle x_{n+1} = \frac{x_n+ \frac{a}{x_n}}{2}.$

Show that this method converges to $\sqrt a$, i.e. that

$\displaystyle \lim\limits_{n\to\infty}x_n = \sqrt{a},$

and calculate the rate of convergence.

Show that if

$\displaystyle f(x) =\frac{1}{x}-b,$

for nonzero $b$, then the Newton method, converging to the root $r = 1/b$, can be implemented without performing any divisions.
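The resulting division-free scheme is $x_{n+1} = x_n(2 - b x_n)$; a sketch for the (arbitrarily chosen) value $b = 3$:

```python
b = 3.0          # want 1/b = 0.333... without dividing
x = 0.3          # starting guess; must lie in (0, 2/b) to converge

for _ in range(6):
    x = x * (2.0 - b * x)   # Newton step for f(x) = 1/x - b, no division
print(x)
```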

Newton method for solving a scalar nonlinear equation $f(x)=0$ requires computation of the derivative of $f$ at each iteration. Suppose that we instead replace the true derivative with a constant value $d$, that is we use the iteration scheme

$\displaystyle x_{k+1} = x_k - f(x_k)/d.$

(a) Under what condition on the value of $d$ will this scheme be locally convergent?

(b) What is the convergence rate of this scheme?

(c) Is there any value of $d$ to give a quadratic convergence?

(a) For given values of $tol$, $a_0$, and $b_0$, will the bisection method use fewer or more steps to solve

$\displaystyle x - 2 = 0$

than for solving

$\displaystyle e^{x-2} - 1 = 0$

?

(b) If $a_0 = 0$ and $b_0 = 3$, then what is the maximum number of steps the bisection method will require to solve $x - 1 = 0$ and have at least six correct binary digits?

(c) Answer part (b) for the equation $(x - 1)^3 = 0.$

A criticism of the bisection method is that it does not use the values of $f(a_k)$ and $f(b_k)$, only their signs. A way to use these values is to find the equation of the straight line connecting $(a_k, f(a_k))$ and $(b_k, f(b_k))$ and then take $x_k$ to be the point where the line intersects the x-axis. Other than this, the bisection algorithm is unchanged. This is known as the method of false position.

(a) Show that

$\displaystyle x_k = \frac{ a_k f(b_k) -b_k f(a_k)}{f(b_k) - f(a_k)}.$

(b) To solve

$\displaystyle x^2 - 4 = 0,$

take $a_0 = 0$ and $b_0 =3$. Find $x_0$ and $x_1$ using the method of false position, and also find them using the bisection method. Which method produces the more accurate values? Which one requires more flops?

True or False: If Newton’s method converges to a solution $x^*$ for a particular choice of $x_0$, then it will converge to $x^*$ for any starting point between $x^*$ and $x_0$.

If it is true, please explain why, if it is not true, give a counter example.

In class we have shown that Newton method

$\displaystyle x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)},$ (4)

has quadratic rate of convergence for a simple root (i.e. when $f(x_*)=0$ and $f'(x_*)\ne 0$). In the homework you have shown that the convergence rate is linear for a double root.

Prove that the Newton method has linear rate of convergence for a triple root, i.e. when $f(x_*)=f'(x_*)=f''(x_*)=0, \ f'''(x_*)\ne 0$.

Extra Credit Prove it for roots of multiplicity $n$, i.e. when

$\displaystyle f(x_*)=0, \ \ \ \left(\frac{d}{d x}\right)^k f(x_*)= 0, \ \ \ k =
1,2,\dots, n-1, \ \ \ \left(\frac{d}{d x}\right)^{n} f(x_*)\ne 0 .$
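The linear rate can be seen numerically: for a root of multiplicity $m$ the Newton error satisfies $e_{k+1} \approx \frac{m-1}{m}\, e_k$, so for a triple root the error ratio tends to $2/3$. A sketch with the assumed test function $f(x) = (x-1)^3$:

```python
f = lambda x: (x - 1.0) ** 3
df = lambda x: 3.0 * (x - 1.0) ** 2

x = 2.0
ratios = []
for _ in range(20):
    x_new = x - f(x) / df(x)          # analytically e_new = (2/3) * e_old
    ratios.append((x_new - 1.0) / (x - 1.0))
    x = x_new
print(ratios[-1])   # close to 2/3: linear convergence, rate (m-1)/m, m = 3
```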

Suppose you have traveled to the Ice Age, and the computer in your Time Machine is broken. To come back to 2016 and complete the numerical computing exam, you need to calculate “two to the power one third”, i.e.

$\displaystyle 2^\frac{1}{3}.$

You can only use $+$, $-$, $*$ and $/$. Set up the nonlinear equation that has $2^\frac{1}{3}$ as the solution.
  1. Describe how to use the bisection method to make this calculation. What is your initial bracket? How many iteration steps do you need to perform to get the solution with $10^{-3} \simeq 2^{-10}$ accuracy?
  2. Describe how to use the Newton method for this calculation. How many steps do you need to do to get the root with $10^{-3} $ accuracy?

Consider the following iteration methods
  1. $\displaystyle x_{n+1} = \frac{20 x_n + \frac{21}{x_n^2}}{21},$

  2. $\displaystyle x_{n+1} = x_n - \frac{x_n^3-21}{3 x_n^2},$

  3. $\displaystyle x_{n+1} = \sqrt{\Big(\frac{21}{x_n}\Big)}.$

  4. $\displaystyle x_{n+1} = \frac{ 21 + 2 x_n^3}{ 3 x_n^2}.$

Assume that all iterations start from $x_0=1$.
  1. (4 points) Verify that each of these fixed-point iterations converges to $\sqrt[3]{21}$, i.e.

    $\displaystyle \lim\limits_{n\to\infty}x_n=X=\sqrt[3]{21},$

    and
  2. rank the methods in order based on their apparent speed of convergence (i.e. find the fastest method to converge, the second fastest, the third and the last to converge).
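The ranking can be checked empirically by counting iterations to a fixed tolerance from $x_0 = 1$ (a sketch; the exact counts depend on the tolerance chosen):

```python
X = 21.0 ** (1.0 / 3.0)

def steps_to_converge(g, x0=1.0, tol=1e-10, max_steps=10_000):
    x, n = x0, 0
    while abs(x - X) > tol and n < max_steps:
        x, n = g(x), n + 1
    return n

g1 = lambda x: (20.0 * x + 21.0 / x**2) / 21.0     # |g'(X)| = 6/7, slowest
g2 = lambda x: x - (x**3 - 21.0) / (3.0 * x**2)    # Newton: quadratic
g3 = lambda x: (21.0 / x) ** 0.5                   # |g'(X)| = 1/2
g4 = lambda x: (21.0 + 2.0 * x**3) / (3.0 * x**2)  # same map as method 2

n1, n2, n3, n4 = map(steps_to_converge, (g1, g2, g3, g4))
print(n1, n2, n3, n4)   # methods 2 and 4 fastest, then 3, then 1
```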

In class we have shown that fixed point iterations

$\displaystyle x_{k+1}= g(x_k),\ x^*=g(x^*), $

converge if

$\displaystyle \vert g'(x^*)\vert<1.$

Under what condition do fixed point iterations converge if

$\displaystyle g'(x^*)=1?$

To gain an intuition on this problem, consider the fixed point iterations

$\displaystyle x_{k+1}= g(x_k),\ \ g(x)=x-x^2/2.$

For this problem the fixed point is $x^*=0$. Start with $x_0=1$. Then

$\displaystyle x_1=g(x_0)=1/2,$

$\displaystyle x_2=g(x_1)=3/8=0.375,$

$\displaystyle x_3=g(x_2)=39/128=0.304688.$

It seems to converge to the fixed point $x^*=0$, yet $g'(x^*=0)=1$. Why is that?
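The slow approach can be quantified; with $g'(0)=1$ the convergence is only sublinear, and for this particular map $x_n \approx 2/n$ for large $n$. A sketch:

```python
g = lambda x: x - x * x / 2.0

x = 1.0
history = [x]
for _ in range(1000):
    x = g(x)
    history.append(x)

# sublinear: x_n behaves like 2/n, far slower than any geometric rate
print(history[10], history[100], history[1000])
```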

Consider the function

$\displaystyle f(x) = e^x - 2 x^2.$

The graph of the function (omitted here) shows that this function has three roots, given by $X_1=-0.539835$, $X_2= 1.48796$ and $X_3=2.61787$.

The following two functions are defined as

$\displaystyle g_1(x) = -\sqrt{\frac{e^x}2}$      
$\displaystyle g_2(x) = \sqrt{\frac{e^x} 2}$     (5)

Consider the following fixed point iterations:
$\displaystyle x_{k+1} = g_1(x_k)$     (6)
$\displaystyle x_{k+1} = g_2(x_k)$     (7)

  1. Show that fixed point iterations (6) with $g_1(x)$ converges to the root $X_1$.
  2. Show that fixed point iterations (7) with $g_2(x)$ converges to the root $X_2$.
  3. Show that iterations with neither $g_1(x)$ nor $g_2(x)$ converge to the root $X_3$, regardless of the starting point.
  4. Propose a fixed point iteration scheme that will converge to the root $X_3$.

We have studied Secant method

$\displaystyle x_{k+1} = x_k - f(x_k)\frac{x_k-x_{k-1}}{f(x_k)-f(x_{k-1})}.$

Suppose you are solving $f(x)=0$ by iterations, and you have obtained $(x_{k-1},f(x_{k-1}))$ and $(x_k,f(x_k))$. Now use linear interpolation to interpolate a straight line through these two points, and choose $x_{k+1}$ so that $f(x_{k+1})\simeq 0$. What is the resulting method of solving $f(x)=0$ that you have derived?

Express the Newton iteration method for solving the following system of nonlinear equations:

$\displaystyle x_1^2 + x_1 x_2 ^3 =9,$      
$\displaystyle 3 x_1^2 x_2 - x_2^3 = 4,$      

and carry out one iteration starting from the starting point $(1,1)$.
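The first iteration can be carried out numerically; a sketch (NumPy assumed) solving $J\,{\bf s} = -{\bf f}$ at $(1,1)$:

```python
import numpy as np

def F(x):
    x1, x2 = x
    return np.array([x1**2 + x1 * x2**3 - 9.0,
                     3.0 * x1**2 * x2 - x2**3 - 4.0])

def J(x):
    x1, x2 = x
    return np.array([[2.0 * x1 + x2**3, 3.0 * x1 * x2**2],
                     [6.0 * x1 * x2,    3.0 * x1**2 - 3.0 * x2**2]])

x = np.array([1.0, 1.0])
s = np.linalg.solve(J(x), -F(x))   # Newton correction
x = x + s
print(x)                           # approximately [4/3, 3]
```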

Carry out one iteration of Newton's method applied to the system

$\displaystyle x_{1}^{2}-x_{2}^{2} = 0, \qquad x_{1}x_{2} = 1,$

with starting value ${\bf x}_{0}=[0,1]^{T}.$

The following values for the solution of $f(x)=0$ were computed using Matlab. What method was used (bisection, Newton or secant)? Make sure you explain in detail why it is the method you claim:

X=50.25125628140704

X=25.378140640072242

X=12.944094811287638

X=6.7320923307183715

X=3.636103634563207

X=2.1079101939699143

X=1.3816957571715662

X=1.0826201384421688

X=1.0058580941730362

X=1.0000339198559194

X=1.0000000011504786

X=1.0000000000000000

X=1.0000000000000000
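The iterates first roughly halve (far from the root) and then gain digits quadratically near $x^* = 1$, which is the signature of Newton rather than bisection (fixed interval halving) or secant (superlinear but slower). One function consistent with the exact values shown is $f(x) = x^2 - x$ with $x_0 = 100$, although the problem does not state it; a sketch reproducing the first iterates under that assumption:

```python
# Hypothesis (not stated in the problem): f(x) = x**2 - x, x0 = 100.
f = lambda x: x * x - x
df = lambda x: 2.0 * x - 1.0

x = 100.0
for _ in range(5):
    x = x - f(x) / df(x)   # roughly halves while x is large
    print(x)
```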

Chapter THREE

Show that for arbitrary nonsingular square matrices $A$ and $B$ the following is true

$\displaystyle (A\cdot B)^{-1} = B^{-1}\cdot A^{-1}.$

Show that for arbitrary square matrices $A$ and $B$ the following is true

$\displaystyle (A\cdot B)^{\rm T} = B^{\rm T}\cdot A^{\rm T},$

where ${\rm T}$ denotes the transposition.

Prove or give a counterexample: if $A$ is a nonsingular matrix, then

$\displaystyle \vert\vert A^{-1}\vert\vert=\vert\vert A\vert\vert^{-1}.$

Consider the following systems of equations:

$\displaystyle x^2+y^2=4, \ \ y = x^6.$

or

$\displaystyle x^3+4 y^3=4, \ \ 2 x^2+y^2 = 7.$

  1. Sketch the two curves and explain where approximately the solutions are located.
  2. Set up and explain the Newton method to find the solutions. Calculate the Jacobian $J$ and the RHS ${\bf f}$ of the Newton method.
  3. What would be a good starting point for your calculations?
  4. Write down in all details the system of equations to make the first iteration. Do not solve the resulting equations.


Suppose that you use Newton method

$\displaystyle x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)},$ (8)

to find the solution of the equation

$\displaystyle f(x^*)=0$

for which

$\displaystyle f'(x^*)\ne 0, \rm {\ and \ } f''(x^*)=f'''(x^*)=f''''(x^*)=0.$

What would be the convergence rate for the Newton method for such a case? HINT: convergence of the Newton method is not necessarily quadratic.

(a) Let ${\bf A}$ be an arbitrary square matrix, and let $c$ be an arbitrary scalar.

Prove or disprove the following statements:

(i)

$\displaystyle \vert\vert c {\bf A}\vert\vert = \vert c\vert \cdot \vert\vert A\vert\vert$

(ii)

$\displaystyle {\rm cond} ( c \cdot {\bf A}) = \vert c\vert \cdot {\rm cond} ({\bf A})$

(b)

Let ${\bf A}$ be an $n\times n$ diagonal matrix with all its diagonal entries equal to $1/2$.

(i) What is the value of $\det({\bf A})$?

(ii) What is the value of ${\rm cond}({\bf A})$?

Consider the linear system

$\displaystyle A x =b,$      
$\displaystyle A=\left(\begin{array}{cc} 1 & 1.1\\ 0.9 & 1\end{array}\right),$      
$\displaystyle b = \left(\begin{array}{c} 1.11 \\ 1\end{array}\right).$      

  1. Solve this system by any method you like using exact arithmetic.
  2. Solve this system by computing an inverse of $A$ using two decimal digit machine arithmetic.
    Hint If

    $\displaystyle A=\left(\begin{array}{cc} a & b\\ c & d\end{array}\right),$

    then

    $\displaystyle A^{-1}=\frac{1}{a d - b c}
\left(\begin{array}{cc} d & -b\\ -c & a\end{array}\right).$

    Also

    $\displaystyle x= A^{-1}b.$

  3. Now solve the system by using LU factorization using two decimal digit machine arithmetic.
  4. Compare and explain the difference between results obtained in items 1, 2 and 3. What conclusion can you make about LU factorization and computing the solution of the system by using the inverse of a matrix?

In this problem

\begin{indisplay}A =
\left[
\begin{array}{rr}
1 & -2 \\
-2 & 3
\end{array}\right]
\end{indisplay}

(a) Find a vector $x$ such that $\vert\vert A x\vert\vert _\infty = \vert\vert A\vert\vert _\infty$.

(b) Find a vector $x$ such that $\vert\vert A x\vert\vert _1 = \vert\vert A\vert\vert _1$

Consider the $A x = b$ problem with $\alpha>0$ and

\begin{indisplay}
A =
\left[
\begin{array}{rr}
-1 & 1 \\
0 & \alpha
\end{array}\right], \ \ \
b =
\left[
\begin{array}{r}
1 \\
1
\end{array}\right] \end{indisplay}

Suppose that the error ${\bf e} = {\bf x } - {\bf x_c}$ in the computed solution is small, but nonzero. Here ${\bf x}$ is the exact solution, and ${\bf x}_c$ is the computed solution. For what values of $\alpha$, if any, will the residual be large?

HINT For what values of $\alpha$, if any, will the matrix $A$ be ill-conditioned?

HINT If

$\displaystyle A=\left(\begin{array}{cc} a & b\\ c & d\end{array}\right),$

then

$\displaystyle A^{-1}=\frac{1}{a d - b c}
\left(\begin{array}{cc} d & -b\\ -c & a\end{array}\right).$

The Kronecker delta (named after Leopold Kronecker) is a function of two variables, usually two integers, defined as


\begin{indisplay}\delta^i_j =\left(
\begin{array}{cc}
1 & {\rm if \ \ } i=j\\
0 & {\rm if\ \ } i\ne j\\
\end{array}\right.\end{indisplay}     (9)

Consider the $n\times n$ matrix $A$ defined as

$\displaystyle A = (a_{ij}), \ \ a_{ij} = i\cdot \delta^i_j.$

In other words, $A$ is a diagonal matrix with diagonal entries being equal to

$\displaystyle 1, 2, 3, \dots (n-1), n.$

  1. What is the one norm of this matrix?
  2. What is the two norm of this matrix?
  3. What is the $\infty$ norm of this matrix?
  4. What is the Condition Number of this matrix?

For the matrix

\begin{indisplay}
{\bf A=}\left[
\begin{array}{rrr}
2 & -4 & 2 \\
1 & 0 & 5 \\
2 & -2 & 2
\end{array}\right]
\end{indisplay}

find the $LU$-factorization (show both $L$ and $U$ matrices explicitly).
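The factorization can be checked numerically; a sketch of Doolittle elimination without pivoting (NumPy assumed; this matrix needs no row exchanges since all pivots turn out nonzero):

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU factorization without pivoting (assumes nonzero pivots)."""
    A = A.astype(float)
    n = A.shape[0]
    L, U = np.eye(n), A.copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # multiplier
            U[i, k:] -= L[i, k] * U[k, k:]   # eliminate below the pivot
    return L, U

A = np.array([[2, -4, 2], [1, 0, 5], [2, -2, 2]])
L, U = lu_no_pivot(A)
print(L)   # [[1, 0, 0], [0.5, 1, 0], [1, 1, 1]]
print(U)   # [[2, -4, 2], [0, 2, 4], [0, 0, -4]]
```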

Consider the system

$\displaystyle {\bf A} \cdot x = b,$ (10)

where

\begin{indisplay}
{\bf A=}\left[
\begin{array}{rrr}
5 & 6 & 7 \\
10 & 20 & 23 \\
15 & 50 & 58
\end{array}\right]
\end{indisplay}

and

\begin{indisplay}
b =\left[
\begin{array}{r}
6\\
13\\
23
\end{array}\right]
\end{indisplay}

or

\begin{indisplay}
{\bf A=}\left[
\begin{array}{rrr}
1 & 3 & 5 \\
2 & 9 & 15 \\
2 & 18 & 37
\end{array}\right]
\end{indisplay}

and

\begin{indisplay}
b =\left[
\begin{array}{r}
22\\
65\\
149
\end{array}\right]
\end{indisplay}

(a) Find explicitly the $LU$ factorization of ${\bf A}$.

(b) Use this $LU$ decomposition to solve (10).

Consider the matrix

\begin{indisplay}
{\bf B=}\left[
\begin{array}{rrr}
1 & 0 & 0 \\
2 & 1 & 0 \\
3 & 2 & 1
\end{array}\right]
\end{indisplay}

Calculate the inverse of this matrix $B^{-1}$ by representing the matrix as a product of two elementary Gauss elimination matrices.

In this problem $\alpha$ is a small positive number. Sketch the two lines in the $x,y$ plane, and describe how they change, including the point of intersection, as $\alpha$ approaches zero. Also, calculate the condition number for the matrix, and describe how it changes, when $\alpha$ approaches zero.

  1. $\displaystyle x-y = -1, \ \ -x+ (1+\alpha) y =1.$

  2. $\displaystyle 2 x + 4 y = 1, \ \ (1-\alpha)x+2 y = -1.$

Prove that the one norm is the maximum absolute column sum; use $2\times 2$ matrices.

Consider

$\displaystyle {\bf A}=\left(\begin{array}{ccc}
1 & 0 & 0 \\
0 &-2&2\\
0 & 2& -1\end{array}\right)$     (11)

a) Find the LU factorization of the given matrix

b) Calculate the norms $\vert\vert A\vert\vert _2$, $\vert\vert A\vert\vert _\infty$, and $\vert\vert A\vert\vert _1$ for this matrix

c) Calculate the condition number for the matrix A.

Consider

\begin{indisplay}{\bf A=}\left(
\begin{array}{cc}
-1 & 1 \\
2 & 2 \\
\end{array}\right)\end{indisplay}      

The $\infty$ norm of a vector is defined as the maximum modulus of the components of the vector. As we learned in class, matrix norms are defined through the vector norms as

$\displaystyle \vert\vert A\vert\vert _\infty = \max_{x\ne 0} \frac{ \vert\vert A x\vert\vert _\infty}{\vert\vert x\vert\vert _\infty}.$

Use this definition of the matrix $\infty$ norm through the vector norm to prove that the $\infty$ norm of a matrix is the maximum absolute row sum.

Complete the proof for $2\times 2$ matrices.

Extra credit (10%): Prove this statement for general $n\times n$ matrices.

Consider the system

\begin{indisplay}\left[
\begin{array}{ll}
4 & 2 \\
2+\varepsilon & 1+\varepsilon
\end{array}\right] \left[
\begin{array}{l}
x_1 \\
x_2
\end{array}\right] =\left[
\begin{array}{l}
2 \\
1
\end{array}\right]
.\end{indisplay}     (12)

where $\varepsilon $ is small. The system has two approximate solutions ${\bf\hat{x}=}[0,1]^{T}$ and ${\bf\tilde{x}=}[1-5\varepsilon ,-1]^{T}.$ Find the norms of the respective residuals. Which one is smaller? Find the condition number of the matrix. Explain why you should not use residuals in this case to determine the quality of a solution.

Hint: Find an exact solution to (12).

Hint

\begin{indisplay}
\left[
\begin{array}{rr}
a & b \\
c & d
\end{array}\right] ^{-1}
= \frac{1}{ad-bc}\left[
\begin{array}{rr}
d & -b \\
-c & a
\end{array}\right]
\end{indisplay}

Calculate an inverse of the following matrix:

\begin{indisplay}{\bf A=}\left(
\begin{array}{cccccc}
1 & 0 &0 &0 &0 &0 \\
0 & ...
...\
0 & 0 &3 &0 &1 &0 \\
0 & 0 &4 &0 &0 &1 \\
\end{array}\right)\end{indisplay}     (13)

Let a $40\times 40$ matrix ${\bf A}$ be factored as follows:
\begin{indisplay}{\bf A=LU=}\left(
\begin{array}{cccccc}
1 & & & & & \\
-1 & 1 & & & & \\
 & -1 & 1 & & & \\
 & & \ddots & \ddots & & \\
 & & & -1 & 1 & \\
 & & & & -1 & 1
\end{array}\right)\left(
\begin{array}{cccccc}
1 & 1 & & & & \\
 & 1 & 1 & & & \\
 & & \ddots & \ddots & & \\
 & & & 1 & 1 & \\
 & & & & 1 & 1 \\
 & & & & & 1
\end{array}\right)\end{indisplay}     (14)

where the blank spaces stand for zeros. Use the LU factorization to solve the system ${\bf Ax=[}0,1,-1,1,...,-1,1]^{T}.$

Consider the matrix

\begin{indisplay}A = \left(
\begin{array}{cc}
1 & 1+\epsilon\\
1-\epsilon & 1
\end{array}\right)\end{indisplay}      

  1. What is the determinant of $A$?
  2. In floating point arithmetic, for what range of $\epsilon$ will the computed value of the determinant be nonzero?
  3. What is the $LU$ factorization of $A$?
  4. In floating point arithmetic, for what value of $\epsilon$ will the computed value of $U$ be singular?

In a computer with no built-in function for floating point division, one might instead use multiplication by the reciprocal of the divisor. Apply Newton's method to produce an iterative scheme to approximate the reciprocal of a number $y>0$, i.e. to solve the equation

$\displaystyle f(x)=\frac{1}{x}-y=0,$

given $y$. Considering the intended application, your scheme should not contain any divisions.

Consider the following system of equations:

$\displaystyle x^2+y^2=4, \ \ y = x^6.$

  1. Sketch the two curves and explain where approximately the solutions are located.
  2. Set up and explain the Newton method to find the solutions. Calculate the Jacobian $J$ and the RHS ${\bf f}$ of the Newton method.
  3. What would be a good starting point for your calculations?
  4. Write down in all details the system of equations to make the first iteration. Do not solve the resulting equations.

Let a $6\times 6$ matrix ${\bf A}$ be factored as follows:

\begin{indisplay}{\bf A=}
\left(
\begin{array}{ccccccc}
1 & 1 & 1 & 1 &1 &1 \\
...
...0 & 0 & 0 & 0 &1 &1 \\
0 & 0 & 0 & 0 &0 &1\\
\end{array}\right)\end{indisplay}      

Using this factorization, solve for $x$ the following equation:
\begin{indisplay}{\bf A x =}
\left(
\begin{array}{cccccc}
21 \\
41 \\
59 \\
74 \\
85 \\
91 \\
\end{array}\right)\end{indisplay}      

Chapter FOUR

Consider the system of equations

$\displaystyle x^2+y^2 = 1, \ \ (x-1)^2+ y^2 = 1.$

Perform one step of the Newton method with the starting point $x=1,\ y=1$.

The Kronecker delta (named after Leopold Kronecker) is a function of two variables, usually two integers, defined as


\begin{indisplay}\delta^i_j =\left(
\begin{array}{cc}
1 & {\rm if \ \ } i=j\\
0 & {\rm if\ \ } i\ne j\\
\end{array}\right.\end{indisplay}     (15)

Consider the $n\times n$ matrix $A$ defined as

$\displaystyle A = (a_{ij}), \ \ a_{ij} = i\cdot \delta^i_j.$

In other words, $A$ is a diagonal matrix with diagonal entries being equal to

$\displaystyle 1, 2, 3, \dots (n-1), n.$

  1. Find eigenvalues of $A$.
  2. Find eigenvectors of $A$.
  3. To which eigenvector would power iterations converge? Explain how to set up such power iterations.
  4. To which eigenvector would inverse power iterations converge? Explain how to set up such iterations.
  5. What algorithm would you use to find the second largest eigenvalue and the corresponding eigenvector? Please explain in detail.

Let the $n\times n$ matrix $A$ have eigenvalues $\lambda_1\dots\lambda_n$, and eigenvectors $X_1\dots X_n$. Let the $n\times n$ matrix $B$ have eigenvalues $\mu_1\dots\mu_n$ and the same eigenvectors $X_1\dots X_n$. What are the eigenvectors and eigenvalues of the matrix $C$ given by

$\displaystyle C = 2 A^3+ 4 B^{-4}.$


$\displaystyle {\bf A}= \left(\begin{array}{ccc}
1 & 2 & -4\\
3 &-2 & 2\\
6 & 2 & 4
\end{array}\right)$     (16)

a) Calculate the Eigenvalues and Eigenvectors for the given matrix

b) What Eigenvalue would the inverse power method converge to? Why?

c) What Eigenvalue would the power method converge to? Why?

d) Define $B = 2A - 3I$. For this matrix, what Eigenvalue would the power method converge to? Why?

Consider the $6\times 6$ matrix

$\displaystyle A = \begin{bmatrix}
5 & 0 & 0 & 0 & 1 & 0 \\
0 & 5 & 0 & 0 & 0...
... 0 & 0 \\
1 & 0 & 0 & 0 & 5 & 0 \\
0 & 0 & 0 & 0 & 0 & 5\\
\end{bmatrix} $

  1. Find Eigenvalues and Eigenvectors of this matrix
  2. Find LU decomposition of this matrix
  3. To what eigenvalue and eigenvector the power iteration will converge?
  4. To what eigenvalue and eigenvector the inverse power iteration will converge?

Suppose that ${\bf A}$ is a symmetric $4\times 4$ matrix with eigenvalues

$\displaystyle -4, 1, 2, 3.$

a) To which of these eigenvalues will the power method converge? Why?

b) To which of these eigenvalues will the inverse power iteration converge? Why?

c) To what eigenvalue the power iteration method applied to the matrix

$\displaystyle {\bf B}=2 {\bf A}+3 I$

will converge? Why?

Consider

$\displaystyle A=\left(\begin{array}{cc} 4 & 8\\ 1 & 6\end{array}\right),$

(a) Find eigenvalues and eigenvectors of $A$.

(b) Find eigenvalues and eigenvectors of

$\displaystyle B=\left(\begin{array}{cc} 5 & 8\\ 1 & 7\end{array}\right)^3
+ \left(\begin{array}{cc} 3 & 8\\ 1 & 5\end{array}\right)^{-1}.$

Note that you do not have to calculate $B$ explicitly.
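A numerical cross-check (assuming NumPy): since $B=(A+I)^3+(A-I)^{-1}$ with $A$ the matrix from part (a), each eigenvalue $\lambda$ of $A$ should map to $(\lambda+1)^3+1/(\lambda-1)$ with the same eigenvector:

```python
import numpy as np

# Check that B = (A+I)^3 + (A-I)^{-1} shares eigenvectors with
# A = [[4,8],[1,6]]: if A v = lam v, then B v = ((lam+1)^3 + 1/(lam-1)) v.
A = np.array([[4.0, 8.0], [1.0, 6.0]])
I = np.eye(2)
B = np.linalg.matrix_power(A + I, 3) + np.linalg.inv(A - I)

lam, V = np.linalg.eig(A)                    # eigenvalues of A are 8 and 2
mu_predicted = (lam + 1)**3 + 1.0 / (lam - 1)
for k in range(2):
    v = V[:, k]
    # B v should equal mu_predicted[k] * v
    print(mu_predicted[k], np.allclose(B @ v, mu_predicted[k] * v))
```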

Chapter FIVE

Given

$\displaystyle f(x) = x^5,$

on the interval $-1\le x\le 2$, use the points $x_1 = -1$, $x_2 = 0$ and $x_3 = 2$.

a) Find the piecewise linear interpolation function

b) find the quadratic interpolation function

Use a Lagrange interpolating polynomial of degree 2 to find an approximate value for the following. Not all of the data points are needed, and you should explain which ones you use and why.

(a) $f(2.4)$ if $f(2.1) = 1, f(2.3) = 1.2, f(2.6) = 1.3, f(2.7) = 2$,

(b) $f(-0.1)$ if $f(0.1)=2,f(0)=0.1,f(-0.2)=-0.1, f(0.4) = - 0.5$,

(c) $f(1)$ if $f(0.5)=-1,f(0.8)=-0.5, f(1.1) = 0.5, f(1.2)=1$.

Answer:

(a) Lagrange Polynomial is

$\displaystyle f(x) = 10(x-2.3)(x-2.6) - 20(x-2.1)(x-2.6) + (26/3)(x-2.1)(x-2.3).$

This gives, for $x = 2.4$,

$\displaystyle f(2.4) = 1.26.$

(b) Lagrange Polynomial is

$\displaystyle f(x) =
-(5/3)(x-0.1)x - 5 ( x+0.2)(x-0.1) + (200/3) x (x+0.2),
$

which gives, for $x=-0.1$

$\displaystyle f(-0.1) = -0.6.$

(c) Lagrange Polynomial is

$\displaystyle f(x) = - (50/12)(x-1.1)(x-1.2) - (50/3)(x-0.8)(x-1.2) + 25(x-0.8)(x-1.1),$

which gives for $x=1$,

$\displaystyle f(1) = \frac{1}{12}.$
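The three answers above can be verified by direct evaluation of the quadratic Lagrange interpolants; a sketch in Python:

```python
# Check of the three quadratic Lagrange interpolants by direct evaluation
# at the requested points.
def lagrange2(pts, x):
    # quadratic Lagrange interpolation through three points
    (x0, y0), (x1, y1), (x2, y2) = pts
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
          + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
          + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

print(lagrange2([(2.1, 1), (2.3, 1.2), (2.6, 1.3)], 2.4))    # part (a): 1.26
print(lagrange2([(-0.2, -0.1), (0, 0.1), (0.1, 2)], -0.1))   # part (b): -0.6
print(lagrange2([(0.8, -0.5), (1.1, 0.5), (1.2, 1)], 1.0))   # part (c): 1/12
```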

Consider
$s_1(x) = -2 + x + 6x^2 + 2x^3$ for $-1\le x\le 0$
$s_2(x)$ for $0 \le x\le 1$.

Solve for $s_2(x)$ so that the given function is a natural cubic spline from $-1\le x\le
1$.

Use the theorems we studied in class to calculate the maximum error in interpolating the function $\sin(t)$ by a polynomial of degree four using five equally spaced points on the interval $[0,2\pi]$. Suppose you want to interpolate the function $\sin(t)$ by a polynomial of degree $N$ by using $N+1$ equally spaced points on the interval $[0,2\pi]$. How many points should you use so that the difference between the sine function and your interpolation is less than $10^{-16}$?

This exercise explores some of the differences between a cubic polynomial and a cubic spline. In this problem the data are: $(x_1, y_1) = (0, 0)$, $(x_2, y_2) = (1, 1)$, $(x_3, y_3) = (2, 0)$, and $(x_4, y_4) = (3, 1)$.

(a) Find the global interpolation polynomial that fits this data, and then evaluate this function at x = 1/2.

(b) Find the natural cubic spline that fits this data, and then evaluate this function at x = 1/2.

(c) The cubic in part (a) satisfies the interpolation and smoothness conditions required of a spline, yet it produces a different result than the cubic spline in part (b). Why?

(d) What boundary conditions should be used so the cubic spline produces the cubic in part (a)?

Solution

5.26

(a)

Global polynomial interpolation is

$\displaystyle y(x) = \frac{10}{3}x - 3 x^2 +\frac{2}{3}x^3.$

Then

$\displaystyle y(1/2) = 1.$

(b)

Writing the spline as

$\displaystyle s_1(x) = a_1+b_1 x + c_1 x^2 + d_1 x^3$      
$\displaystyle s_2(x) = a_2 + b_2 (x-1) + c_2 (x-1)^2 + d_2(x-1)^3$      
$\displaystyle s_3(x) = a_3 + b_3 (x-2)+ c_3(x-2)^2 +d_3 (x-2)^3$      

and following the steps we discussed in class, we obtain the system of equations
$\displaystyle a_1=0, \ \ \ \ c_1=0$      
$\displaystyle b_1+d_1=1, \ \ \ a_2=1, \ \ 1+b_2 + c_2 + d_2 = 0, \ \ a_3 = 0,$      
$\displaystyle b_3+c_3+d_3 = 1 , b_1 + 3 d_1 =b_2, 6 d_1=2c_2 ,$      
$\displaystyle b_2 + 2 c_2 + 3 d_2 = b_3, 2 c_2 + 6 d_2 = 2 c_3,$      
$\displaystyle 2c_3 + 6 d_3 = 0.$      

We solve this system of equations to finally obtain
$\displaystyle s_1(x) = \frac{5}{3} x - \frac{2}{3} x^3,$      
$\displaystyle s_2(x) = 1-\frac{1}{3} (x-1) -2 (x-1)^2+\frac{4}{3}(x-1)^3,$      
$\displaystyle s_3(x) = -\frac{1}{3} (x-2)+ 2 (x-2)^2 -\frac{2}{3}(x-2)^3.$      

We therefore obtain

$\displaystyle s_1(\frac{1}{2}) = \frac{3}{4}.$

(c)

The result of part (a) does indeed satisfy the interpolation and smoothness conditions, but it is not a natural spline.

(d) To obtain the result in (a), one would have to use a clamped spline with

$\displaystyle s_1'(0) = \frac{10}{3}= s_3'(3).$
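The natural-spline computation above can be cross-checked by solving the standard tridiagonal system for the interior second derivatives (a sketch assuming NumPy and unit knot spacing):

```python
import numpy as np

# Natural cubic spline through (0,0), (1,1), (2,0), (3,1) with unit
# spacing: solve the tridiagonal system for the interior second
# derivatives M1, M2 (M0 = M3 = 0 for a natural spline), then evaluate
# the first piece at x = 1/2.
y = np.array([0.0, 1.0, 0.0, 1.0])
# M_{i-1} + 4 M_i + M_{i+1} = 6 (y_{i-1} - 2 y_i + y_{i+1}),  h = 1
A = np.array([[4.0, 1.0], [1.0, 4.0]])
rhs = 6.0 * np.array([y[0] - 2*y[1] + y[2], y[1] - 2*y[2] + y[3]])
M1, M2 = np.linalg.solve(A, rhs)

def s1(x):
    # first piece on [0, 1], with M0 = 0
    b = (y[1] - y[0]) - M1 / 6.0
    return b * x + (M1 / 6.0) * x**3

print(M1, M2)        # second derivatives at the interior knots
print(s1(0.5))       # value of the spline at x = 1/2
```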

Find the clamped cubic spline $s(x)$, which goes through the points

$\displaystyle (0,1), \ (1,3), \ (2,4)$

with the value of the first derivative being equal to $2$ and $3$ at the beginning and the end of the domain, i.e.

$\displaystyle s'(x=0)=2, \ \ {\rm and \ } s'(2)=3.$

Find the clamped cubic spline $s(x)$, which goes through the points

$\displaystyle (0,1), \ (1,3), \ (2,5)$

with the value of the first derivative being equal to $2$ and $-2$ at the beginning and the end of the domain, i.e.

$\displaystyle s'(x=0)=2, \ \ {\rm and\ } s'(2)=-2.$

Consider the following piece-wise cubic function

$\displaystyle S(x) = \begin{bmatrix}
1+ 2 x + 3 x^2 + 4 x^3 & {\rm for } &0\le x\le 1\\
10 + 20(x-1) + 15 (x-1)^2 + 4 (x-1)^3 & {\rm for} & 1\le x\le 2
\end{bmatrix}$ (17)

Is this a cubic spline? If yes, is it natural, not-a-knot, clamped or neither? NEW

Consider the following piece-wise quadratic polynomial

$\displaystyle S(x) = \begin{bmatrix}
3-9x + 4 x^2 & {\rm for } &0\le x\le 1\\
-2 -(x-1) + c (x-1)^2 & {\rm for} & 1\le x\le 2
\end{bmatrix}$ (18)

Is it possible to choose $c$ so that this function is a quadratic spline? If yes, what is the value of $c$, and is it a natural, clamped or not-a-knot spline? NEW

For a set of $n$ given data points $t_1, \dots, t_n$, define the function

$\displaystyle \pi(t)= (t-t_1)(t-t_2)\dots (t-t_n).$

  1. Show that

    $\displaystyle \pi^\prime(t_j) = (t_j-t_1)(t_j-t_2)\dots
(t_j-t_{j-1})(t_j-t_{j+1}) \dots (t_j-t_n).$

  2. Show that the $j$'th Lagrange basis function can be expressed as

    $\displaystyle L_j(t) = \frac{\pi(t)}{(t-t_j)\pi^\prime(t_j)}.$

In general, is it possible to interpolate $n$ data points by a piecewise quadratic polynomial, with knots at the given data points, such that the interpolant is once or twice continuously differentiable? Explain your answer as best as you can. What is the maximum number of points $n$ that can be interpolated by a piecewise quadratic polynomial that is twice continuously differentiable?

Consider the following data:

$\displaystyle (0, 0), (1, 1), (2, 3).$

(a) Find the global interpolation polynomial that fits these data.

(b) Find the piecewise linear interpolation function that fits these data.

(c) Find the natural cubic spline that fits these data.

answer

(a) By using methods we learned in class, we obtain the polynomial of degree two:

$\displaystyle y(x)=\frac{x+x^2}{2}.$

(b) Using Lagrange interpolation for the two subintervals, we obtain

$\displaystyle y(x) =
\begin{bmatrix}x, & 0\le x\le 1,\\
2 x-1, & 1\le x\le 2\end{bmatrix}$     (19)

(c) by repeating calculations that we have done in class, we obtain


$\displaystyle s(x) =
\begin{bmatrix}\frac{3}{4}x+ \frac{x^3}{4} & \ 0\le x\le 1 \\
1+\frac{3}{2}(x-1) + \frac{3}{4}(x-1)^2 - \frac{1}{4}(x-1)^3, & \ 1\le x\le 2
\end{bmatrix}.$     (20)


Consider the following data

$\displaystyle (0,1) \ \ ( 1,2)\ \ \ (2,9).$

  1. Interpolate this data by a quadratic polynomial by using monomial interpolation
  2. Interpolate this data by using Lagrange interpolation
  3. Show that Lagrange interpolation reduces to monomial interpolation
The Gamma function $\Gamma(x)$ has the following known values:

$\displaystyle \Gamma(0.5)=\sqrt{\pi}, \ \Gamma(1)=1, \ \ \Gamma(1.5) =
\sqrt{\pi}/2.$

Use quadratic interpolation to determine the value approximate value of $x$ such that $\Gamma(x)=1.5$. HINT: Instead of using the data $(0.5,\sqrt{\pi}), (1,1), (1.5, \sqrt{\pi}/2),$ you may find it easier to use the data $(\sqrt{\pi},0.5), (1,1), (\sqrt{\pi}/2, 1.5).$

Suppose that some measurements had produced the following data:

$\displaystyle (0, 5), \ (1, 9), \ (2, 15). $

(i)

Write down second degree polynomial passing through all three points by using Lagrange interpolation

(ii)

Write down second degree polynomial passing through all three points by using Newton interpolation

(iii)

Show that the two polynomials obtained in (i) and (ii) are equivalent

Use an appropriate Lagrange interpolating polynomial to interpolate the following data:

$\displaystyle (0,-3), \ (1,0), \ (2,5), \ (3,12) .$

What is the degree of interpolating polynomial? There is a catch in this question.
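One way to see the catch numerically (assuming NumPy): fit a cubic through the four points and inspect the leading coefficient:

```python
import numpy as np

# The data (0,-3), (1,0), (2,5), (3,12) invites a cubic Lagrange
# interpolant, but the second divided differences are constant, so the
# interpolating polynomial actually has degree 2 (the "catch").
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([-3.0, 0.0, 5.0, 12.0])
coeffs = np.polyfit(x, y, 3)       # fit a cubic through the 4 points
print(coeffs)                      # leading (cubic) coefficient is ~0
```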

Consider the following data:

$\displaystyle (0,3), \ \ \ (1,1), \ \ \ (2,3).$

  1. Interpolate this data by a quadratic polynomial by using monomial interpolation
  2. Interpolate this data by using Lagrange interpolation
  3. Show that Lagrange interpolation reduces to monomial interpolation
  4. Interpolate this data by piece wise linear interpolation

Determine the parabola (interpolating polynomial of degree two) that interpolates the values of $\sin x$ for $x=0,\pi/2,\pi.$

Find the clamped cubic spline $s(x)$, which goes through the points

$\displaystyle (0,1), \ (1,3), \ (2,4)$

with the value of the first derivative being equal to $2$ and $3$ at the beginning and the end of the domain, i.e.

$\displaystyle s'(x=0)=2, \ \ {\rm and \ } s'(2)=3.$

Find the clamped cubic spline $s(x)$, which goes through the points

$\displaystyle (0,1), \ (1,3), \ (2,5)$

with the value of the first derivative being equal to $2$ and $-2$ at the beginning and the end of the domain, i.e.

$\displaystyle s'(x=0)=2, \ \ {\rm and\ } s'(2)=-2.$

Consider the following function

\begin{indisplay}g(x) = \left[
\begin{array}{lll} x^3+1, & {\rm for }& 0\le x\le 1,\\
-x^3+6 x^2 -6 x + 1, & {\rm for} &1\le x\le 2
\end{array}\right.
\end{indisplay}

Is $g(x)$ a cubic spline for $0\le x\le 2$? Make sure you justify your answer

Consider the logarithmic function $\ln(x)$ evaluated at the points 1,2 and 3:

$\displaystyle (1, \ln(1)), \ (2, \ln(2)), \ (3,\ln(3)).$

Write down the entries of the matrix and right hand side of the linear system that determines the coefficients for the cubic not-a-knot spline (variation: natural) interpolating these three points.

HINT: the not-a-knot spline requires that the third derivative is continuous at the first and last interior knots. Do not solve this system.

Suppose you were to define a piecewise quadratic spline that interpolates $n+1$ given values

$\displaystyle (x_1, y_1), (x_2, y_2), \dots (x_n, y_n), (x_{n+1}, y_{n+1}).$

Write down in general form $n$ quadratic polynomials that interpolate these $n+1$ points, such that the resulting piece wise quadratic polynomial has continuous first derivative. How many additional conditions are required to make a square system for the coefficients of this quadratic spline?

This problem considers the function

$\displaystyle g(x) = \begin{bmatrix}
x^3-1, & {\rm if} &0\le x\le 1 \\
-x^3+6x^2-6x+1, & {\rm if} & 1\le x\le 2
\end{bmatrix}$     (21)

  1. Is $g(x)$ a cubic spline for $0\le x\le 2$?
  2. If it is a spline, it is natural, clamped, or neither?
Make sure to justify your answers.

This problem considers the function

$\displaystyle g(x) = \begin{bmatrix}
2+3x^2 + \alpha x^3, & {\rm if} &-1\le x\le 0 \\
2+\beta x^2 -x^3, & {\rm if} & 0\le x\le 1
\end{bmatrix}$     (22)

  1. For what values of $\alpha$ and $\beta$, if any, is $g(x)$ a cubic spline for $-1\le x\le
1$? These values are to be used for the rest of this problem.
  2. What were the data points that give rise to this cubic spline?
  3. For what values of $\alpha$ and $\beta$ is $g(x)$ a natural cubic spline?
  4. For what values of $\alpha$ and $\beta$ is $g(x)$ a clamped cubic spline?

Suppose you are given 4 points:

$\displaystyle (t_1,y_1), \ \ (t_2, y_2), \ \ (t_3, y_3), \ \ (t_4, y_4).$

Please explain your reasoning as accurately as you can.

(a)

Suppose that you would like to obtain a quadratic spline that interpolates the function $\cos(x)$ between the nodes 0, $1$ and $2$.

  1. Write down the form of the spline interpolation function. How many coefficients need to be determined?
  2. Write down the conditions that the spline must satisfy, and the corresponding equations for the coefficients. Do not solve the resulting equations.
  3. Are there enough conditions? If not, what extra condition(s) would you recommend to make a best possible fit of $\cos(x)$?

(b) Given a function on a discrete set of $n$ data points, explain the difference between interpolation and approximation. What is the difference between an interpolating polynomial of degree $n-1$ and an approximating polynomial of the same degree $n-1$?

Is it possible to interpolate three points

$\displaystyle (x_1,y_1), (x_2,y_2), (x_3, y_3)$

by a piecewise quadratic polynomial with continuous first and second derivatives?

Is it possible to interpolate four points

$\displaystyle (x_1,y_1), (x_2,y_2), (x_3, y_3),(x_4,y_4),$

by a piecewise quadratic polynomial with continuous first and second derivatives?

Suppose that you would like to obtain a quadratic spline that interpolates the function $\cos(x)$ between the nodes $[0,1,2]$.

  1. Write down the form of the spline interpolation function. How many coefficients need to be determined?
  2. Write down the conditions that the spline must satisfy, and the corresponding equations for the coefficients. Do not solve the resulting equations.
  3. Are there enough conditions? If not, what extra condition(s) would you recommend?

Suppose that you are given 4 experimental points: $(t_1,y_1), \ \ (t_2,y_2), \ \ (t_3,y_3), \ \ (t_4, y_4)$.

Is it possible to interpolate these 4 data points by a piecewise quadratic polynomial with knots at these given data points, such that the interpolant is

  1. Once continuously differentiable?
  2. Twice continuously differentiable?
  3. Three times continuously differentiable?

In each case, if the answer is “yes” explain why, and outline the procedure to find the interpolating function (you may use short form of the equations, do not solve the resulting equations); if the answer is “no”, explain why.

In class we have studied cubic splines, i.e. interpolation by a piecewise cubic polynomial with continuous first and second derivatives. It is possible to also introduce a quadratic spline, i.e. a piecewise quadratic polynomial with a continuous first derivative. Such a quadratic spline is the focus of this problem.

Consider the same data:

$\displaystyle (0,3), \ \ \ (1,1), \ \ \ (2,3).$

Interpolate this data by a piecewise quadratic polynomial with continuous first and second derivatives at the middle point.

Extra Credit (30 percent): Suppose that one additional point is added to the above data, so that there are four points.

$\displaystyle (0,3), \ \ \ (1,1), \ \ \ (2,3), \ \ (3,9).$

Is it possible to interpolate this data by piecewise quadratic interpolation with continuous first and second derivatives at the interior points? Is it possible to do it in general?

Chapter SIX

Let $P_3(x)$ be a degree three polynomial, and let $P_2(x)$ be its degree two interpolating polynomial at the three points $x=-h$, $x = 0$ and $x=h$. Prove by direct calculation that

$\displaystyle \int\limits_{-h}^h P_3(x) d x = \int\limits_{-h}^h P_2(x) d x. $

What does this fact say about the Simpson rule? NEW

Determine the degree of precision of the following Newton-Cotes quadrature rule (often called Boole's rule)

$\displaystyle \int\limits_{x_0}^{x_4} f(x) d x = \frac{2 h}{45}\left(
7 f(x_0)+32 f(x_1) + 12 f (x_2) + 32 f(x_3) + 7 f(x_4) \right).$

Here $x_k = x_0 + k h, \ k = 0,1,2,3,4.$ NEW
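A degree-of-precision check for Boole's rule, sketched in Python on the illustrative interval $[0,4]$ with $h=1$:

```python
# Degree-of-precision check for Boole's rule on [0, 4] (h = 1): the rule
# should integrate x^k exactly for k = 0..5 and fail at k = 6.
h = 1.0
nodes = [0.0, 1.0, 2.0, 3.0, 4.0]
w = [7.0, 32.0, 12.0, 32.0, 7.0]

def boole(f):
    return (2.0 * h / 45.0) * sum(wi * f(xi) for wi, xi in zip(w, nodes))

for k in range(8):
    exact = 4.0**(k + 1) / (k + 1)            # integral of x^k over [0, 4]
    approx = boole(lambda x, k=k: x**k)
    print(k, abs(approx - exact) < 1e-7)
```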

In class we have studied quadrature rules of the form

$\displaystyle \int\limits_a^b f(x) d x \simeq \sum\limits_{i=0}^{n}w_i f(x_i).$

It appears that sometimes it is advantageous to derive quadratures which also use the derivatives of the function, in addition to value of the function at selected points. Find the weights for the quadrature

$\displaystyle \int\limits_{-1}^1f(x) d x \simeq \omega_1 f(-1) + \omega_2 f''(0) +
\omega_3 f(1).$

Choose the weights $\omega_1,\omega_2,\omega_3$ to maximize the precision of the quadrature. What is the order of the resulting quadrature?

Derive the error term of this quadrature.

In class we have studied quadrature rules of the form

$\displaystyle \int\limits_a^b f(x) d x \simeq \sum\limits_{i=0}^{n}w_i f(x_i).$

It appears that sometimes it is advantageous to derive quadratures which also use the derivatives of the function, in addition to value of the function at selected points. Find the weights for the quadrature

$\displaystyle \int\limits_{-1}^1f(x) d x \simeq \omega_1 f(-1) + \omega_2 f'(-1) +
\omega_3 f'(1)+\omega_4 f(1).$

Choose the weights $\omega_1,\omega_2,\omega_3, \omega_4$ to maximize the precision of the quadrature. What is the order of the resulting quadrature?

HINT Solution of this problem will be much easier if you guess the relationship between $\omega_1$, $\omega_2$, $\omega_3$ and $\omega_4$.

variation

$\displaystyle \int\limits_{-1}^1f(x) d x \simeq \omega_1 f(-1) + \omega_2 f''(-1) +
\omega_3 f''(1)+\omega_4 f(1).$

Derive the error term of this quadrature.

Consider the quadrature rule of the form

$\displaystyle \int\limits_{0}^1 f(x) d x \simeq a f(\frac{1}{2}) + b f(1).$

Choose $a$ and $b$ to maximize the accuracy of the resulting quadrature. Calculate the truncation error of the resulting quadrature.

Is it more accurate or less accurate than Midpoint quadrature?
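The weights can be found by the method of undetermined coefficients; a sketch assuming NumPy (exactness for $f=1$ and $f=x$ gives a $2\times 2$ linear system):

```python
import numpy as np

# Undetermined coefficients for  int_0^1 f ~ a f(1/2) + b f(1):
# exactness for f = 1 gives a + b = 1, and for f = x gives a/2 + b = 1/2.
A = np.array([[1.0, 1.0],
              [0.5, 1.0]])
rhs = np.array([1.0, 0.5])
a, b = np.linalg.solve(A, rhs)
print(a, b)
# check the degree of precision: f = x^2 is no longer integrated exactly
print(abs(a * 0.25 + b * 1.0 - 1.0 / 3.0))
```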

Given the points $(x_n,f(x_n))$:

$\displaystyle (0,3), (1,1), (2,0), (4,-2), (6,1).$

a) Evaluate the integral of the function using the midpoint rule

b) Evaluate the integral using Simpson's rule

c) Evaluate the integral using the trapezoid rule

(a) Consider the integration rule of the form

$\displaystyle \int\limits_{0}^{1} f(x) d x
\simeq a_1 f(\frac{1}{3})+ a_2 f(\frac{2}{3}).$

Choose $a_1$ and $a_2$ to maximise the precision of this quadrature rule (16 points)

(b) Calculate truncation error of this quadrature.

This problem concerns using numerical methods to calculate the integral

$\displaystyle I=\int\limits_0^1 x^4 d x.$

Note that the exact value is $I =\frac{1}{5}.$

We are going to compare different ways to calculate this integral by using the value of the function $f(x) = x^4$ on only three points, $x = 0$, $x=\frac{1}{2}$ and $x=1$.

  1. Using the composite trapezoidal rule, and 2 subintervals, find an approximate value for the integral. What is the error?
  2. Using the Simpson rule on the $[0,1]$ interval, find the approximate value of this integral. What is the error?
  3. Out of the two methods used above, which one gives the more accurate answer, and why? Make sure you justify your answer.
  4. Extra Credit 10 percent Do the errors you obtained with Simpson and composite trapezoid above agree with the predictions given by the theorems we studied in class?
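A quick numerical sketch of items 1 and 2 (using only the three nodes $0$, $1/2$, $1$):

```python
# Composite trapezoid (2 subintervals) vs Simpson on [0,1] for f(x) = x^4,
# using only the nodes 0, 1/2, 1; the exact integral is 1/5.
f = lambda x: x**4
h = 0.5
trap = h * (f(0.0) / 2 + f(0.5) + f(1.0) / 2)
simpson = (1.0 / 6.0) * (f(0.0) + 4 * f(0.5) + f(1.0))
print(trap, abs(trap - 0.2))          # 0.28125, error 0.08125
print(simpson, abs(simpson - 0.2))    # ~0.20833, error ~0.00833
```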

Let us denote

$\displaystyle I = \int\limits_{-1}^{1}\cos(x) d x.$

As you know, we can calculate the value of this integral analytically:

$\displaystyle \int\limits_{-1}^{1}\cos(x) d x = 2 \sin (1)\simeq 1.6829419696157930133.$

Calculate the value of this integral numerically by using

  1. Midpoint method
  2. Trapezoid method
  3. Simpson method
  4. Two point Gauss quadrature

Compare the result of your calculations with the exact value. Which of these four methods give the most accurate result? Is this consistent with your expectations?

(a) Consider the integration rule of the form

$\displaystyle \int\limits_{0}^{1} f(x) d x
\simeq a_1 f(\frac{1}{2})+ a_2 f(\frac{3}{4}).$

Choose $a_1$ and $a_2$ to maximise the precision of this quadrature rule (16 points)

(b) Calculate truncation error of this quadrature.

Find the 3-point Gaussian rule for $\int_{0}^{1}F(x)dx$, if the standard 3-point Gaussian rule is given by

$\displaystyle \int_{-1}^{1}g(x)dx\cong \frac{1}{9}\left( 5g(-\sqrt{3/5})+8g(0)+5g(\sqrt{3/5})\right)
$

(b) The trapezoid rule has the error estimate

$\displaystyle \int_{a}^{a+h}g(x)dx=\frac{h}{2}\left[ g(a)+g(a+h)\right] -\frac{1}{12}h^{3}g^{\prime \prime }(s)
$

where $a<s<a+h.$ If interval $[a,b]$ is divided into $n$ equal panels, show that the error of the composite trapezoid rule is bounded by

$\displaystyle \frac{(b-a)^{3}}{12n^{2}}\max_{a\leq x\leq b}\left\vert g^{\prime \prime
}(s)\right\vert
$

(c) Use the result in part (b) to determine the number of panels sufficient to approximate $\int_{0}^{1}\sin x\,dx$ to within $\frac{1}{3}\times 10^{-4}$ by using the composite trapezoid rule.
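Part (c) can be sketched numerically; here $\max|g''|=\sin(1)$ on $[0,1]$ is used (using the cruder bound $\max|g''|\le 1$ instead gives a slightly larger $n$):

```python
import math

# Number of panels n for the composite trapezoid rule to meet the error
# target (1/3)*10^-4 on [0,1] for sin x, using the bound
# (b-a)^3 / (12 n^2) * max|g''|.  Since g'' = -sin x, max|g''| = sin(1)
# on [0, 1].
target = 1e-4 / 3.0
M = math.sin(1.0)                      # max |g''| on [0, 1]
n = math.ceil(math.sqrt(M / (12.0 * target)))
print(n)
```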

(a) (10 points) In the three point quadrature rule

$\displaystyle \int_{-1}^{1} f (x) d x = a_1 f(x_1) + a_2 f(x_2) + a_3 f(x_3),$

choose $a_1$, $a_2$, $a_3$, $x_1$, $x_2$ and $x_3$ to maximize the precision of the quadrature rule. (The result will be three-point Gauss quadrature.)

(b) (5 points) What is the degree of precision of the resulting scheme? Demonstrate this by showing that the scheme correctly integrates a polynomial of that degree, and that it does not correctly integrate a polynomial of one degree higher.

Find $a_{1},a_{2},a_{3},x_1,x_2,x_3$ in

$\displaystyle \int\limits_{-1}^1 f(x) d x = a_1 f(x_1)+ a_2 f(x_2) + a_3 f(x_3)$

to maximize the precision of the quadrature. Find the error term of this quadrature.

In class we have studied the two point Gauss quadrature

$\displaystyle \int\limits_{-1}^{1}f(x)\,dx\simeq
f(-\frac{1}{\sqrt{3}})+f(\frac{1}{\sqrt{3}}).$

Calculate the error term for this quadrature.

Hint Derivation is similar to calculation of the trapezoid error term we did in class. You will need to express $f(\pm\frac{1}{\sqrt{3}})$ via $f(0)$ and its derivatives.

In class we have studied two point Gauss quadrature. In this problem you are to derive one point Gauss quadrature.

Consider the quadrature rule of the form

$\displaystyle \int\limits_{-1}^1 f(x) d x \simeq a f(z).$

Choose $a$ and $z$ to maximize the order (accuracy) of the resulting quadrature. What is the truncation order of this quadrature?

Suppose that you have tabular data, that is to say that the function $f(x)$ is given only on the $n$ equidistant points $x_i$, $i=1\dots n$ such that $x_1 = 0$, $x_n=1$. Propose a way to numerically evaluate

$\displaystyle g(x) = \int\limits_0^x f(s) d s.$

Chapter SEVEN

Find an $O(k^2)$ approximation of $y'(t_j)$ that utilizes $y(t_j)$, $y(t_{j+3})$, and $y(t_{j-1})$.

You are producing a final project for your Master's Degree and need to solve an Initial Value Problem numerically. You prefer to have an accurate numerical solution. Out of the Forward Euler, Backward Euler, Trapezoidal, Heun (RK2), and Runge-Kutta (RK4) methods, which one would you choose and why?

Consider RK2 (Heun's) method:

$\displaystyle k_{1}=f(x_{k},y_{k});\;k_{2}=f(x_{k}+h,y_{k}+hk_{1});\;y_{k+1}=y_{k}+\frac{h}{2}(k_{1}+k_{2}).
$

Show that this method is second order accurate, i.e. it finds exact solution to the initial value problem

$\displaystyle y'(x)=2 x, y(x=0)=0.$

This can be done, for example, by showing that if at step $k$ the numerical solution is

$\displaystyle y_k = x_k^2,$

then at step $k+1$ the numerical solution is equal to

$\displaystyle y_{k+1} = (x_k+h)^2,$

which agrees with the exact analytical solution

$\displaystyle y(x)=x^2.$

Suppose you are solving Initial Value Problem

$\displaystyle y'(t) = f(t,y(t)).$

Please investigate the stability of the following finite difference method:

$\displaystyle y_{j+1}= y_j + \frac{k}{3}\left[ f(t_j,y_j)+ 2 f(t_{j+1},y_{j+1})
\right].$

Determine if this method is A-stable, conditionally A-stable or unstable. If the method is conditionally stable, please determine what the condition is.
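The stability analysis can be sketched by applying the method to the test equation $y'=\lambda y$, which gives the amplification factor $R(z)=(1+z/3)/(1-2z/3)$ with $z=k\lambda$; sampling it on the negative real axis:

```python
# Amplification factor of y_{j+1} = y_j + (k/3)[f_j + 2 f_{j+1}] applied
# to the test equation y' = lambda*y: solving for y_{j+1} gives
# R(z) = (1 + z/3) / (1 - 2z/3), with z = k*lambda.
def R(z):
    return (1.0 + z / 3.0) / (1.0 - 2.0 * z / 3.0)

# sample |R| on the negative real axis (lambda < 0, k > 0)
for z in [-0.1, -1.0, -10.0, -100.0]:
    print(z, abs(R(z)))
```

The sampled values all have magnitude below one; the full analysis in the problem should confirm this for the entire left half-plane.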

Derive an $O(k^2)$ finite difference approximation to $y'(t_j)$ that uses $y(t_{j+1})$, $y(t_{j-1})$ and $y(t_{j+2})$. Calculate the truncation error term of the resulting finite difference approximation.

variation: $y(t_{j-2})$, $y(t_{j})$ and $y(t_{j+1})$. Calculate the truncation error term of the resulting finite difference approximation.

Find approximation of the first derivative $f'(x)$ that uses $f(x)$, $f(x+h)$ and $f(x-3 h)$. What is the error term of your approximation?
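The weights for such one-sided formulas can be obtained by matching Taylor expansions; a sketch assuming NumPy, with offsets measured in units of $h$:

```python
import numpy as np

# Weights for  f'(x) ~ w0 f(x) + w1 f(x+h) + w2 f(x-3h),  found by
# matching Taylor expansions (equivalently: exactness on 1, s, s^2,
# with offsets s = 0, h, -3h; h is scaled out below).
offsets = np.array([0.0, 1.0, -3.0])          # in units of h
V = np.vander(offsets, 3, increasing=True).T  # rows: s^0, s^1, s^2
rhs = np.array([0.0, 1.0, 0.0])               # match the f, f', f'' terms
w = np.linalg.solve(V, rhs)                   # weights in units of 1/h
print(w)   # expected: [-2/3, 3/4, -1/12], each divided by h
```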

For the equation $y^{\prime }=f(y)$ consider the following numerical method

$\displaystyle k_{1}=f(y_{k});\;k_{2}=f(y_{k}+hk_{1});\;y_{k+1}=y_{k}+\frac{h}{2}(k_{1}+k_{2})
$

(a) Is this method explicit or implicit? Is it one-step or multi-step?

(b) Perform one step of the method for the equation $y^{\prime }=\lambda y.$

(c) Find the order of the method.

Consider the following $n$-step method for solving $y^{\prime }=f(y)$

$\displaystyle y_{k+1}=y_{k}+af(y_{k-1})+bf(y_{k+1})$ (23)

  1. What is the ”number of steps” $n$ for this method? Is this method explicit or implicit?
  2. Determine $a$ and $b$, for which the method has the highest possible accuracy.
  3. Determine the order of the method of the highest possible accuracy in (23).

    SOLUTION Method of undetermined coefficients: $f(x)=1$ is satisfied automatically, $f(x)=x$ gives $a+b = h$, and $f(x)=x^2$ gives $a-b=-h/2$. Solving these equations we get $a= h/4$, $b=3h/4$; the method is second order accurate.

Determine whether the method

$\displaystyle y_{k+1}=y_{k}+hf(t_{k},y_{k})+\frac{h^{2}}{2}\left[ \frac{\partial f(t_{k},y_{k})}{\partial t}+f(t_{k},y_{k})\frac{\partial f(t_{k},y_{k})}{\partial y}\right]
$

with $h=0.1$ is stable for the equation $y^{\prime}=\lambda y$ with $\lambda=-30$.
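Applying the method to $y'=\lambda y$ gives the one-step amplification factor $R=1+h\lambda+(h\lambda)^2/2$; a quick check for the given values:

```python
# Amplification factor of the 2nd-order Taylor method for y' = lambda*y:
# one step gives y_{k+1} = (1 + h*lam + (h*lam)^2 / 2) * y_k.
h, lam = 0.1, -30.0
z = h * lam
R = 1.0 + z + z**2 / 2.0
print(R, abs(R) <= 1.0)   # R = 2.5, so the method is unstable here
```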

Suppose you want to solve numerically $y'(t)=e^{-y(t)} + t^5,$ for $0<t<1$ using 100 time steps (so, $k=0.01$). The methods to be tried are (i) the Euler method, (ii) the backward Euler method, (iii) the trapezoidal method, (iv) the RK2 (Heun) method, and (v) the RK4 method.

(a) Which method do you expect to finish the calculation the fastest? Why?

(b) Which would be the second fastest method? Why?

(c) Which would be the most accurate method? Why?

(d) If stability is a concern, which method would be the best? Why?

This is a general question. It is sufficient to give an answer without proof.

Consider the following initial value problem

$\displaystyle y'(t) = f(t,y(t)), y(t=0)=y_0.$

The following algorithm is proposed for its numerical solution:

$\displaystyle y_{n+1} = y_n + \frac{h}{2}\left[ f(t_n,y_n)+f(t_{n+1}, y_{n+1})\right].$

  1. Define the term stability for a numerical algorithm in the context of initial value problems for ODE's.
  2. Determine if the above algorithm is stable, and if it is, what restrictions if any is implied on the size of step size $h$ for stability.
  3. List one advantage and one disadvantage of the above algorithm over the Euler method.

Consider the following initial value problem:

$\displaystyle y'(t) = t e^{-3 t} - y, \ \ \ y(0)=1, \ 0\le t\le 1, \ {\rm with\ } h=0.5 $

Using the Euler method with $\Delta t = 0.5$, calculate $y(0.5)$ and $y(1)$ (10 points).

Is the proposed method numerically stable for the proposed time step? (2 points)

Extra credit (2 points): compare the result with the exact analytical solution.
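The two Euler steps can be sketched in Python as a cross-check:

```python
import math

# Two Euler steps with h = 0.5 for y' = t*exp(-3t) - y, y(0) = 1.
f = lambda t, y: t * math.exp(-3.0 * t) - y
h = 0.5
t, y = 0.0, 1.0
for _ in range(2):
    y = y + h * f(t, y)
    t = t + h
    print(t, y)   # y(0.5) = 0.5, then y(1) ~ 0.30578
```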

Compute the solution of

$\displaystyle \frac{d y} { d t} =
e^{-2 y} - 4 t^3$

for $0\le t\le 1$ using 100 time steps (i.e. with $h=0.01$). The methods to be tried are
  1. The 2nd order Taylor method
  2. RK4
  3. Trapezoid method

Please answer the following questions:

  1. Which one would you expect to complete the calculation the fastest? Why?
  2. Which one would you expect to complete the calculation last? Why?
  3. Which one do you expect to be more accurate? Why?
  4. If stability is a concern, which method should be used? Why?

Suppose you want to solve numerically $y'(t)=e^{-y(t)} + t^5,$ for $0<t<1$ using 100 time steps (so, $k=0.01$). The method to be tried are (i) the Euler method (ii) the backward Euler method (iii) the trapezoidal method (iv) the RK2 (Heun) method (v) the RK4 method.

(a) Which method do you expect to finish the calculation the fastest? Why?

(b) Which would be the second fastest method? Why?

(c) Which would be the most accurate method? Why?

(d) If stability is a concern, which method would be the best? Why?

Is Heun's method

$\displaystyle k_{1}=f(t_{k},y_{k});\;k_{2}=f(t_{k}+h,y_{k}+hk_{1});\;y_{k+1}=y_{k}+\frac{h}{2}(k_{1}+k_{2}),
$

stable for the equation $y^{\prime }=-20y$ with $h=0.1$?
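
One way to check is through the amplification factor of Heun's method on $y'=\lambda y$; the sketch below (function name is ours) evaluates it at $h\lambda = -2$:

```python
# Heun (RK2) applied to y' = lam*y advances y_{k+1} = G*y_k with
#   G = 1 + (h*lam) + (h*lam)**2 / 2.
def heun_growth(h, lam):
    z = h * lam
    return 1 + z + z * z / 2

G = heun_growth(0.1, -20.0)
# h*lam = -2 gives G = 1 - 2 + 2 = 1: |G| = 1, so h = 0.1 sits exactly
# on the stability boundary -- the iterates neither grow nor decay, and
# any larger step would be unstable.
```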

Consider RK2 (Heun's) method:

$\displaystyle k_{1}=f(t_{k},y_{k});\;k_{2}=f(t_{k}+h,y_{k}+hk_{1});\;y_{k+1}=y_{k}+\frac{h}{2}(k_{1}+k_{2}).
$

Show that this method is second order accurate, i.e. it finds the exact solution to the initial value problem

$\displaystyle y'(x)=2x, \qquad y(x=0)=0.$

This can be done, for example, by showing that if at step $k$ the numerical solution is

$\displaystyle y_k = x_k^2,$

then at step $k+1$ the numerical solution is equal to

$\displaystyle y_{k+1} = (x_k+h)^2,$

which agrees with the exact analytical solution

$\displaystyle y(x)=x^2.$
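
The stated exact solution $y(x)=x^2$ corresponds to the right-hand side $f(x,y)=2x$; under that assumption, a quick numerical check (helper names are ours) confirms that each Heun step adds exactly $2x_k h + h^2$, carrying $x_k^2$ to $(x_k+h)^2$:

```python
# Heun's method on f(x, y) = 2*x reproduces y = x**2 exactly:
# k1 = 2*x, k2 = 2*(x + h), so the update adds 2*x*h + h**2.
def heun_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h, y + h * k1)
    return y + h / 2 * (k1 + k2)

f = lambda x, y: 2 * x
x, y, h = 0.0, 0.0, 0.1
for _ in range(10):
    y = heun_step(f, x, y, h)
    x += h
# at every step y == x**2 up to rounding
```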

Consider the following initial value problem:

$\displaystyle y'(x) = 2 x y , \ \ \ y(0)=1. $

Using the Taylor second order scheme, calculate $y(0.1)$ with $\Delta x= 0.1$ (i.e. calculate one step).

(4 points) Compare the result with the exact analytical solution.
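
A worked sketch of the single Taylor step (variable names are ours); the second derivative follows from differentiating the equation, $y'' = 2y + 2xy'$:

```python
import math

# One second-order Taylor step for y' = 2*x*y, y(0) = 1, h = 0.1.
# Differentiating: y'' = 2*y + 2*x*y' = (2 + 4*x**2)*y.
x0, y0, h = 0.0, 1.0, 0.1
yp = 2 * x0 * y0                      # y'(0)  = 0
ypp = (2 + 4 * x0 ** 2) * y0          # y''(0) = 2
y1 = y0 + h * yp + h ** 2 / 2 * ypp   # = 1.01

# Exact solution y = exp(x**2): y(0.1) = exp(0.01) ~ 1.01005, so the
# one-step error is about 5e-5, consistent with a local O(h^3) error.
exact = math.exp(h ** 2)
```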

State whether the following methods are (i) explicit or implicit, (ii) single step or multi-step, (iii) self-starting or not. Cross out the wrong statements and underline the correct ones:

$\displaystyle y_{k+1} = y_k + \frac{h}{24}(55 y'_k - 59 y'_{k-1} + 37 y'_{k-2} - 9 y'_{k-3}),$

(i) explicit/implicit (ii) single step/multi-step (iii) self-starting/not self-starting.

$\displaystyle y_{k+1}=\frac{1}{11}(18 y_k - 9 y_{k-1} + 2 y_{k-2}) + \frac{6 h}{11}y'_{k+1}$

(i) explicit/implicit (ii) single step/multi-step (iii) self-starting/not self-starting.

$\displaystyle y_{k+1} = y_k + \frac{h}{2} (y'_{k}+ y'_{k+1})$

(i) explicit/implicit (ii) single step/multi-step (iii) self-starting/not self-starting.

The Bernoulli equation is

$\displaystyle y'(t)+y^3(t)=\frac{y(t)}{1+t}.$

  1. If the Forward Euler method is used to solve this equation, what is the resulting finite difference equation (i.e. equation expressing $y_{k+1}$ through $y_k$)?
  2. If the trapezoid method is used to solve this equation, what is the resulting finite difference equation?
  3. If the RK2 method is used, what is the resulting finite difference equation?
  4. If the RK4 method is used, what is the resulting finite difference equation?
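
The first and third items can be sketched directly in code, writing the equation as $y' = f(t,y)$ with $f(t,y) = y/(1+t) - y^3$ (helper names are ours):

```python
# Difference equations for the Bernoulli equation y' + y**3 = y/(1+t).
def f(t, y):
    return y / (1 + t) - y ** 3

def euler_step(t, y, h):
    # forward Euler: y_{k+1} = y_k + h*f(t_k, y_k)
    return y + h * f(t, y)

def rk2_step(t, y, h):
    # RK2 (Heun): y_{k+1} = y_k + (h/2)*(k1 + k2)
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + h / 2 * (k1 + k2)
```

The trapezoid version, by contrast, leaves $y_{k+1}$ on both sides ($y_{k+1} = y_k + \frac{h}{2}[f(t_k,y_k)+f(t_{k+1},y_{k+1})]$) and requires a nonlinear solve at each step.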

Consider the equation

$\displaystyle y'(t)+y^2(t)=\sin(y(t)\, t).$

  1. If the Forward Euler method is used to solve this equation with a given time step $\delta t$, what is the resulting finite difference equation (i.e. the equation expressing $y_{k+1}$ through $y_k$)?
  2. If the trapezoid method is used to solve this equation with the same time step, what is the resulting finite difference equation?
  3. If the RK2 method is used, what is the resulting finite difference equation?
  4. Answer the following questions:
    1. Which of these three methods will finish fastest with the same $\delta t$?
    2. Which of these methods will produce the most accurate result?
    3. Which of these methods will be the most stable?
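
The explicit difference equations asked for above can be sketched as code, with $f(t,y)=\sin(yt) - y^2$ (helper names are ours):

```python
import math

# Explicit difference equations for y' = sin(y*t) - y**2, step dt.
def f(t, y):
    return math.sin(y * t) - y * y

def euler_step(t, y, dt):
    # forward Euler: y_{k+1} = y_k + dt*f(t_k, y_k)
    return y + dt * f(t, y)

def rk2_step(t, y, dt):
    # RK2 (Heun): two evaluations of f per step
    k1 = f(t, y)
    k2 = f(t + dt, y + dt * k1)
    return y + dt / 2 * (k1 + k2)
```

Forward Euler costs one evaluation of $f$ per step and RK2 two, while the implicit trapezoid rule requires a nonlinear solve per step but has the largest stability region of the three.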

  5. Consider the fourth order RK method:

    $\displaystyle y_{n+1}$ $\displaystyle =$ $\displaystyle y_n + \frac{1}{6}(K_1+2K_2 + 2K_3 + K_4),$ (24)
    $\displaystyle x_{n+1}$ $\displaystyle =$ $\displaystyle x_n + h,$ (25)
    $\displaystyle K_1$ $\displaystyle =$ $\displaystyle h f(x_n, y_n),$ (26)
    $\displaystyle K_2$ $\displaystyle =$ $\displaystyle h f(x_n + \frac{1}{2}h, y_n+ \frac{1}{2}K_1),$ (27)
    $\displaystyle K_3$ $\displaystyle =$ $\displaystyle h f(x_n+\frac{1}{2}h, y_n+\frac{1}{2}K_2),$ (28)
    $\displaystyle K_4$ $\displaystyle =$ $\displaystyle h f(x_n+h, y_n+K_3).$ (29)

    Apply this RK4 method for solving the equation

    $\displaystyle y'(x) = 4 x^3,$

    with initial condition

    $\displaystyle y(x=0)=0,$

    and show that the RK4 method is indeed at least fourth order accurate. This can be done, for example, by showing that if at step $k$ the numerical solution is

    $\displaystyle y_k = x_k^4,$

    then at step $k+1$ the numerical solution is equal to

    $\displaystyle y_{k+1} = (x_k+h)^4,$

    which agrees with the exact analytical solution

    $\displaystyle y(x)=x^4.$
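
    Since $f$ here depends only on $x$, the RK4 update reduces to Simpson's rule, which integrates cubics exactly; a quick numerical check (helper names are ours):

```python
# RK4 on y' = 4*x**3: with K2 = K3, the update is Simpson's rule
#   y + (h/6)*(f(x) + 4*f(x + h/2) + f(x + h)),
# which is exact for cubics, so x**4 is carried to (x + h)**4.
def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: 4 * x ** 3
x, h = 0.7, 0.3
y_next = rk4_step(f, x, x ** 4, h)
# y_next equals (x + h)**4 = 1.0 up to rounding
```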

  6. Consider the fourth order RK method:

    $\displaystyle y_{n+1}$ $\displaystyle =$ $\displaystyle y_n + \frac{1}{6}(K_1+2K_2 + 2K_3 + K_4),$ (30)
    $\displaystyle t_{n+1}$ $\displaystyle =$ $\displaystyle t_n + h,$ (31)
    $\displaystyle K_1$ $\displaystyle =$ $\displaystyle h f(t_n, y_n),$ (32)
    $\displaystyle K_2$ $\displaystyle =$ $\displaystyle h f(t_n + \frac{1}{2}h, y_n+ \frac{1}{2}K_1),$ (33)
    $\displaystyle K_3$ $\displaystyle =$ $\displaystyle h f(t_n+\frac{1}{2}h, y_n+\frac{1}{2}K_2),$ (34)
    $\displaystyle K_4$ $\displaystyle =$ $\displaystyle h f(t_n+h, y_n+K_3).$ (35)

    Apply this RK4 method for solving the equation

    $\displaystyle y'(t) = y(t),$

    and show that this method is indeed fourth order accurate. This can be done, for example, by computing an amplification factor and comparing it to the analytic value $e^h$.
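
    A sketch of that comparison (function name is ours): one RK4 step on $y'=y$ multiplies $y$ by a factor that turns out to be the degree-4 Taylor polynomial of $e^h$.

```python
import math

# One RK4 step for y' = y, starting from y = 1, yields the factor
#   G(h) = 1 + h + h**2/2 + h**3/6 + h**4/24,
# i.e. e^h truncated after the h^4 term, so the local error is O(h^5).
def rk4_growth(h):
    k1 = h
    k2 = h * (1 + k1 / 2)
    k3 = h * (1 + k2 / 2)
    k4 = h * (1 + k3)
    return 1 + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# For h = 0.1 the gap from e^h is about 8.5e-8, of the size h^5/120.
gap = abs(rk4_growth(0.1) - math.exp(0.1))
```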

    Chapter EIGHT

  7. Find the best line (in a linear least-square sense) going through the points

    $\displaystyle (-3, 3); ( -1, 2); ( 0, 1) , (1, -1), ( 3, -4).$
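
    A sketch of the solution via the $2\times 2$ normal equations for the line $y = a + b x$ (variable names are ours):

```python
# Least-squares line y = a + b*x through the five given points.
pts = [(-3, 3), (-1, 2), (0, 1), (1, -1), (3, -4)]
n = len(pts)
Sx = sum(x for x, _ in pts)
Sy = sum(y for _, y in pts)
Sxx = sum(x * x for x, _ in pts)
Sxy = sum(x * y for x, y in pts)
# Normal equations: [n Sx; Sx Sxx] (a, b)^T = (Sy, Sxy)^T
det = n * Sxx - Sx * Sx
a = (Sy * Sxx - Sx * Sxy) / det   # intercept: 0.2
b = (n * Sxy - Sx * Sy) / det     # slope:   -1.2
# best line: y = 0.2 - 1.2*x
```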

    (Sauer, p. 207)

  8. Set up the linear least squares problem for fitting the model
    $f(t,\mathbf{x})=x_{1}t+x_{2}e^{-t}$ to the four data points $(1,2)$, $(2,3)$, $(3,5)$, $(4,3)$.
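
    The setup (not the solution) can be sketched as follows; each row of the design matrix evaluates the two basis functions $t$ and $e^{-t}$ at one data point (variable names are ours):

```python
import math

# Overdetermined system A x ~= b for f(t, x) = x1*t + x2*exp(-t).
data = [(1, 2), (2, 3), (3, 5), (4, 3)]
A = [[t, math.exp(-t)] for t, _ in data]   # one row [t_i, exp(-t_i)] per point
b = [y for _, y in data]
# A is 4x2 and b has 4 entries, so A x = b is overdetermined; the
# least-squares solution would come from the normal equations (A^T A) x = A^T b.
```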

  9. Set up and solve the linear least squares system

    $\displaystyle A x \simeq b$

    for the fitting the model function

    $\displaystyle f(t,x) = x_1 t + x_2 e^t, \ \ x=(x_1,x_2)^T$

    to the three data points

    $\displaystyle (1,2), \ (2,3), \ (3,5)$
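
    A sketch of the setup and solution via the $2\times 2$ normal equations $(A^T A)x = A^T b$ (variable names are ours):

```python
import math

# Setting up and solving A x ~= b for f(t, x) = x1*t + x2*exp(t).
data = [(1, 2), (2, 3), (3, 5)]
A = [[t, math.exp(t)] for t, _ in data]   # rows [t_i, exp(t_i)]
rhs = [y for _, y in data]

# Normal equations (A^T A) x = A^T b, solved by Cramer's rule.
ata = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
atb = [sum(r[i] * y for r, y in zip(A, rhs)) for i in range(2)]
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
x1 = (atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det
x2 = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det
```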

  10. (a) Suppose you would like to fit the data points

    $\displaystyle (-1,0),\ (0,1),\ {\rm and}\ (1,2)$

    by the fitting function

    $\displaystyle y(x) = a \cos(x) + b x^2.$

    Find $a$ and $b$ by setting up a linear (overdetermined) least square problem and solving it.

    (b) Also calculate the residual.
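
    A sketch of both parts via the normal equations (variable names are ours); note that the rows for $x=-1$ and $x=1$ are identical, which is what makes the residual nonzero:

```python
import math

# Least-squares fit of y(x) = a*cos(x) + b*x**2 to (-1,0), (0,1), (1,2).
data = [(-1, 0), (0, 1), (1, 2)]
A = [[math.cos(x), x * x] for x, _ in data]
rhs = [y for _, y in data]

# 2x2 normal equations (A^T A) c = A^T b.
ata = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
atb = [sum(r[i] * y for r, y in zip(A, rhs)) for i in range(2)]
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
a = (atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det   # a = 1
b = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det   # b = 1 - cos(1)

# (b) residual r = A*c - rhs = (1, 0, -1), so ||r||_2 = sqrt(2)
res = [row[0] * a + row[1] * b - y for row, y in zip(A, rhs)]
rnorm = math.sqrt(sum(ri * ri for ri in res))
```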

  11. Suppose you measure $x$ as a function of $t$ and you get the following:

    $t$ $x(t)$
    0 2
    1 1
    2 -2
    3 -1

    Suppose you would like to fit this data by

    $\displaystyle x(t) = a \sin \left(\frac{\pi t}{2}\right)
+b \cos \left(\frac{\pi t}{2}\right)
$

    Find $a$ and $b$ by setting up a linear (overdetermined) least square problem and solving it. Also calculate the residual.
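
    A sketch of the setup and solution (variable names are ours); the two basis columns happen to be orthogonal here, and the fit turns out to be exact:

```python
import math

# Fit x(t) = a*sin(pi*t/2) + b*cos(pi*t/2) to the table
# t = 0, 1, 2, 3 with x = 2, 1, -2, -1.
data = [(0, 2), (1, 1), (2, -2), (3, -1)]
A = [[math.sin(math.pi * t / 2), math.cos(math.pi * t / 2)] for t, _ in data]
rhs = [x for _, x in data]

# 2x2 normal equations (A^T A) c = A^T b.
ata = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
atb = [sum(r[i] * x for r, x in zip(A, rhs)) for i in range(2)]
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
a = (atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det   # a = 1
b = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det   # b = 2

# The four data points lie exactly on the fit, so the residual vanishes.
res = [row[0] * a + row[1] * b - x for row, x in zip(A, rhs)]
```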

  12. Consider the data

    $\displaystyle (1,2), \ (2, 3), \ (3, 4) $

    Approximate this data by a constant, i.e. find $x$ such that

    $\displaystyle y(t) = x, $

    is a good fit for this data.

    1. Use the least square method. As we have shown in class, the least square method minimizes the square of the 2-norm of the residual, i.e. $\vert\vert r\vert\vert^2_2$, where $r= A x - b .$
    2. Now minimize the fourth power of the 4-norm of the residual.
    3. Compare the results and explain the difference.
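
    A sketch of both minimizations (helper names are ours). The 2-norm fit is just the mean; for the 4-norm we find the root of the derivative $4\sum_i (x-y_i)^3$ by bisection, since that derivative is increasing in $x$:

```python
# Fitting the constant y(t) = x to the data values 2, 3, 4.
ys = [2.0, 3.0, 4.0]

# 2-norm: minimizing sum (x - y_i)^2 gives the mean.
x2 = sum(ys) / len(ys)                  # = 3.0

# 4-norm: minimize sum (x - y_i)^4 via the monotone derivative.
def dphi(x):
    return sum((x - y) ** 3 for y in ys)

lo, hi = min(ys), max(ys)
for _ in range(60):
    mid = (lo + hi) / 2
    if dphi(mid) < 0:
        lo = mid
    else:
        hi = mid
x4 = (lo + hi) / 2                      # also 3.0 here

# The two answers coincide because this data is symmetric about 3;
# for skewed data the 4-norm fit weights large deviations more heavily.
```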

  13. Suppose that an experiment produced the following data:

    $\displaystyle (1,3), (2,2).$

    You are to fit this data by using the linear fit

    $\displaystyle y(t) = x \cdot t.$

    (a) Calculate the value of $x$ which minimizes the square of the second norm $\vert\vert r\vert\vert^2_2$ of the residual $r=A x - b.$

    (b) Calculate $r$ and $A x$. Sketch ${\rm Span}(A)$, $A x$, $b$ and $r$. Verify that $r\perp A x$. Is it so on your graph?

    (c) Extra credit (5 points): Obtain an equation for $x$ that minimizes $\vert\vert r\vert\vert^4_4$ instead of the traditional $\vert\vert r\vert\vert^2_2$. Do not solve the resulting equation. What are the advantages and disadvantages of using $\vert\vert r\vert\vert^4_4$ instead of $\vert\vert r\vert\vert^2_2$?
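
    Parts (a) and (b) can be sketched numerically (variable names are ours); with a single unknown, the normal equation collapses to a scalar formula:

```python
# Fit y(t) = x*t to (1, 3), (2, 2): A = [1, 2]^T, b = [3, 2]^T, and the
# single normal equation gives x = (A^T b)/(A^T A) = 7/5 = 1.4.
A = [1.0, 2.0]
b = [3.0, 2.0]
x = sum(ai * bi for ai, bi in zip(A, b)) / sum(ai * ai for ai in A)
Ax = [ai * x for ai in A]                       # (1.4, 2.8)
r = [axi - bi for axi, bi in zip(Ax, b)]        # (-1.6, 0.8)
dot = sum(ri * axi for ri, axi in zip(r, Ax))   # 0: r is orthogonal to Ax
```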


  15. Let $A$ be the $5\times 3 $ rectangular identity matrix,

    $\displaystyle A= \left(\begin{array}{ccc}
1 & 0 &0 \\
0& 1& 0\\
0& 0& 1\\
0& 0& 0\\
0& 0& 0
\end{array}\right).$

    Let $b$ the vector

    $\displaystyle b= \left(\begin{array}{c}
b_1 \\
b_2 \\
b_3 \\
b_4 \\
b_5 \end{array}\right).$

    Find the least squares solution of

    $\displaystyle A x \simeq b,$

    and calculate the residual and its 2-norm.
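
    Because $A$ simply copies the first three components and zeroes the rest, the least-squares solution is $x=(b_1,b_2,b_3)^T$ and the residual lives entirely in the last two components. A sketch with sample values for $b$ (the numbers are ours, for illustration):

```python
import math

# For the 5x3 rectangular identity A, the least-squares solution of
# A x ~= b copies the first three entries of b; the residual is
# r = A x - b = (0, 0, 0, -b4, -b5), with ||r||_2 = sqrt(b4^2 + b5^2).
b = [7.0, -2.0, 4.0, 1.0, 3.0]        # sample values for b1..b5
x = b[:3]                              # least-squares solution (b1, b2, b3)
Ax = x + [0.0, 0.0]
r = [axi - bi for axi, bi in zip(Ax, b)]
rnorm = math.sqrt(sum(ri * ri for ri in r))   # sqrt(b4^2 + b5^2)
```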

  16. Let $A$ be the $m\times n $ rectangular identity matrix

    $\displaystyle A=(a_{ij}),\ \ i = 1..m, \ \ j=1..n, \ \
a_{ij}=\begin{cases} 1 & i=j, \\
0 & i\ne j. \end{cases}$     (36)

    Furthermore, let

    $\displaystyle b=[b_1, b_2, \dots, b_m]^{T}.$

    Find the least squares solution of

    $\displaystyle A x \simeq b,$

    and calculate the residual and its 2-norm.