Improved Euler Method

As we saw, in the case $f(x,y) = f(x)$ the Euler method corresponds to a Riemann sum approximation for an integral, using the values at the left endpoints:

\begin{displaymath}y_n - y_0 = \int_{x_0}^{x_n} f(t)\, dt \approx h \sum_{j=0}^{n-1} f(x_j) \end{displaymath}

A better method of numerical integration would be the Trapezoid Rule:

\begin{displaymath}\int_{x_0}^{x_n} f(t)\, dt \approx h \sum_{j=0}^{n-1} (f(x_j) +
f(x_{j+1}))/2 \end{displaymath}
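
To compare the two rules numerically, here is a short Python sketch (the test integral $\int_0^1 e^t\,dt$ and the step counts are arbitrary choices, not part of the notes); each time $h$ is halved, the left-endpoint error should drop by about a factor of 2 and the trapezoid error by about a factor of 4.

\begin{verbatim}
import math

def left_sum(f, a, b, n):
    """Left-endpoint Riemann sum with n steps of size h = (b - a)/n."""
    h = (b - a) / n
    return h * sum(f(a + j * h) for j in range(n))

def trapezoid(f, a, b, n):
    """Trapezoid Rule with n steps of size h = (b - a)/n."""
    h = (b - a) / n
    return h * sum((f(a + j * h) + f(a + (j + 1) * h)) / 2 for j in range(n))

# Test integral (an arbitrary choice): integral of e^t over [0, 1] = e - 1.
exact = math.e - 1
for n in (10, 20, 40):
    err_left = abs(left_sum(math.exp, 0.0, 1.0, n) - exact)
    err_trap = abs(trapezoid(math.exp, 0.0, 1.0, n) - exact)
    print(f"n = {n:3d}  left error = {err_left:.6e}  trapezoid error = {err_trap:.6e}")
# Halving h roughly halves the left-endpoint error (global O(h))
# and quarters the trapezoid error (global O(h^2)).
\end{verbatim}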

The Trapezoid Rule would correspond to the iteration formula $y_{j+1} = y_j + h (f(x_j) + f(x_{j+1}))/2 $. As you may have seen in Math 101, it has local error $O(h^3)$ and global error $O(h^2)$, while the Euler method (or the corresponding left-endpoint Riemann sum) has local error $O(h^2)$ and global error $O(h)$. The obvious generalization to the case where $f$ can depend on both $x$ and $y$ is

\begin{displaymath}
y_{j+1} = y_j + \frac{h}{2} \left(f(x_j,y_j) + f(x_{j+1},y_{j+1})\right) \end{displaymath}

The trouble with this is that $y_{j+1}$, the very quantity we are trying to compute, appears on both sides of the equation. The remedy is to replace the $y_{j+1}$ on the right side by its Euler approximation:

\begin{displaymath}
y_{j+1} = y_j + \frac{h}{2} \left(f(x_j,y_j) + f(x_{j+1},
y_j + h f(x_j,y_j))\right) \end{displaymath}

This is the iteration formula for the Improved Euler Method, also known as Heun's method. It looks a bit complicated, but in practice we compute it in three steps (see the sketch after the list):
  1. $m_1 = f(x_j, y_j)$
  2. $m_2 = f(x_{j+1}, y_j + h m_1)$
  3. $y_{j+1} = y_j + h (m_1 + m_2)/2 $
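
Here is a minimal Python sketch of these three steps (the name improved_euler is ours, just for illustration); applied to the example treated next, it reproduces the Heun column of the table below.

\begin{verbatim}
def improved_euler(f, x0, y0, h, n):
    """Heun's method: advance y' = f(x, y) from (x0, y0) by n steps of size h."""
    x, y = x0, y0
    values = [(x, y)]
    for _ in range(n):
        m1 = f(x, y)               # slope at the left endpoint
        m2 = f(x + h, y + h * m1)  # slope at the Euler prediction
        y = y + h * (m1 + m2) / 2  # average the two slopes
        x = x + h
        values.append((x, y))
    return values

# The example used below: y' = 2(y^2 + 1)/(x^2 + 4), y(0) = 1, h = 0.1.
f = lambda x, y: 2 * (y**2 + 1) / (x**2 + 4)
for x, y in improved_euler(f, 0.0, 1.0, 0.1, 10):
    print(f"{x:.1f}  {y:.8f}")
\end{verbatim}
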
Let's try the same example we used for the Euler method: $ y' = 2 (y^2 + 1)/(x^2+4)$, $y(0) = 1$, with step size $h = .1$. With $x_0 = 0$, $y_0 = 1$, we have $m_1 = f(0,1) = 1$, $m_2 = f(.1, 1.1) = 1.102244389$, and $y_1 = 1 + .1 (m_1+m_2)/2 = 1.105112219$. The true solution has $\phi(.1) = 1.105263158$, so the error is $.000150939$, compared to $.005263158$ for the Euler method. Here is a table of the results of the first 10 steps for the Improved Euler and Euler methods with $h = .1$, together with their respective errors:


\begin{displaymath}
\begin{array}{c|c|c|c|c|c}
x_i & \mbox{Heun } y_i & \mbox{Euler } y_i & \phi(x_i) & \mbox{Heun error} & \mbox{Euler error} \\
\hline
0.0 & 1.00000000 & 1.00000000 & 1.00000000 & 0.0 & 0.0 \\
0.1 & 1.10511222 & 1.10000000 & 1.10526316 & 0.00015094 & 0.00526316 \\
0.2 & 1.22185235 & 1.21022444 & 1.22222222 & 0.00036987 & 0.01199778 \\
0.3 & 1.35225607 & 1.33223648 & 1.35294118 & 0.00068510 & 0.02070470 \\
0.4 & 1.49886227 & 1.46792616 & 1.50000000 & 0.00113773 & 0.03207384 \\
0.5 & 1.66487828 & 1.61959959 & 1.66666667 & 0.00178838 & 0.04706708 \\
0.6 & 1.85441478 & 1.79009854 & 1.85714286 & 0.00272808 & 0.06704432 \\
0.7 & 2.07282683 & 1.98296335 & 2.07692308 & 0.00409625 & 0.09395973 \\
0.8 & 2.32722149 & 2.20265794 & 2.33333333 & 0.00611184 & 0.13067539 \\
0.9 & 2.62723508 & 2.45488648 & 2.63636364 & 0.00912856 & 0.18147716 \\
1.0 & 2.98626232 & 2.74704729 & 3.00000000 & 0.01373768 & 0.25295271
\end{array}
\end{displaymath}

Step size 0.1


Clearly, in this example the Improved Euler method is much more accurate than the Euler method: about 18 times more accurate at $x=1$. And since the order of the method is higher, Improved Euler's relative advantage should be even greater at a smaller step size. Here is the table for $h = .05$.


\begin{displaymath}
\begin{array}{c|c|c|c|c|c}
x_i & \mbox{Heun } y_i & \mbox{Euler } y_i & \phi(x_i) & \mbox{Heun error} & \mbox{Euler error} \\
\hline
0.0 & 1.00000000 & 1.00000000 & 1.00000000 & 0.0 & 0.0 \\
0.1 & 1.10522508 & 1.10252967 & 1.10526316 & 0.00003808 & 0.00273349 \\
0.2 & 1.22212855 & 1.21596496 & 1.22222222 & 0.00009367 & 0.00625726 \\
0.3 & 1.35276701 & 1.34209198 & 1.35294118 & 0.00017417 & 0.01084920 \\
0.4 & 1.49970962 & 1.48310373 & 1.50000000 & 0.00029038 & 0.01689627 \\
0.5 & 1.66620837 & 1.64172213 & 1.66666667 & 0.00045830 & 0.02494454 \\
0.6 & 1.85644079 & 1.82136643 & 1.85714286 & 0.00070207 & 0.03577643 \\
0.7 & 2.07586420 & 2.02638978 & 2.07692308 & 0.00105887 & 0.05053330 \\
0.8 & 2.33174590 & 2.26241822 & 2.33333333 & 0.00158743 & 0.07091511 \\
0.9 & 2.63398036 & 2.53684738 & 2.63636364 & 0.00238328 & 0.09951625 \\
1.0 & 2.99639263 & 2.85958887 & 3.00000000 & 0.00360737 & 0.14041113
\end{array}
\end{displaymath}

Step size 0.05


As we saw before, in the Euler method the errors for $h = .05$ are about $1/2$ the errors at the same points for $h = .1$. In the Improved Euler method, the $1/2$ becomes $1/4$ (the actual ratios range from $.252$ to $.263$). This supports the idea that Improved Euler's global error is $O(h^2)$.

For further support, we look at Improved Euler's error at $x=1$ as a function of the step size $h$, using 14 different values of $h$: $1/10$, $1/20$, $1/30$, $1/40$, $1/50$, $1/60$, $1/70$, $1/80$, $1/90$, $1/100$, $1/120$, $1/160$, $1/250$, $1/500$. We plot the error divided by $h^2$: if the error is approximately proportional to $h^2$, this quotient should be approximately constant.
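
The computation behind the plot is easy to reproduce. The Python sketch below (not the DIFF program mentioned below) reruns Improved Euler for each of the fourteen step sizes and prints the error at $x = 1$ divided by $h^2$; it measures the error against the closed-form solution $\phi(x) = (2+x)/(2-x)$, which matches the $\phi(x_i)$ column in the tables.

\begin{verbatim}
def heun_error_at_1(n):
    """Error of the Improved Euler method at x = 1 using n steps on [0, 1]."""
    h = 1.0 / n
    x, y = 0.0, 1.0
    f = lambda x, y: 2 * (y**2 + 1) / (x**2 + 4)
    for _ in range(n):
        m1 = f(x, y)
        m2 = f(x + h, y + h * m1)
        y += h * (m1 + m2) / 2
        x += h
    # Exact solution of this initial value problem: phi(x) = (2 + x)/(2 - x),
    # so phi(1) = 3 (consistent with the phi(x_i) column in the tables).
    return abs(3.0 - y)

for n in (10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 160, 250, 500):
    h = 1.0 / n
    print(f"h = 1/{n}:  error/h^2 = {heun_error_at_1(n) / h**2:.6f}")
\end{verbatim}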

Now suppose we wanted the error at $x=1$ to be less than $10^{-10}$. For the Euler method, as we saw, we'd need more than 28 billion steps. For Improved Euler, on the other hand, with error approximately proportional to $h^2$, we would need

\begin{displaymath}h \approx \left(\frac{10^{-10}}{.00360737}\right)^{1/2}\times .05
= 8.3248 \times 10^{-6} \end{displaymath}

and $n = 1/h \approx 120\,123$. To provide a bit of a safety margin, we might take $n = 125\,000$ or $130\,000$. This is much more practical than 28 billion. I tried this in DIFF on a slow 386DX: it turns out that for $n = 125\,000$ the error is $1.001033 \times 10^{-10}$ (calculated in 96 seconds), while for $n = 130\,000$ it is $7.951551 \times 10^{-11}$.
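
The step-size estimate is just the $O(h^2)$ scaling solved for $h$; as a quick check of the arithmetic in Python:

\begin{verbatim}
import math

# Extrapolate from the observed error 0.00360737 at h = 0.05: if error ~ C h^2,
# the h giving error 1e-10 is h = 0.05 * sqrt(1e-10 / 0.00360737).
h_needed = 0.05 * math.sqrt(1e-10 / 0.00360737)
print(h_needed, round(1 / h_needed))   # about 8.3248e-06 and 120123 steps
\end{verbatim}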

If the global error is $O(h^2)$, presumably the local error is $O(h^3)$. Let's see this in the case of the differential equation $y' = y$. The Improved Euler calculation goes like this:

\begin{displaymath}m_1 = f(x_j,y_j) = y_j \end{displaymath}


\begin{displaymath}m_2 = f(x_{j+1},y_j + h m_1) = (1 + h) y_j \end{displaymath}


\begin{displaymath}y_{j+1} = y_j + \frac{h}{2}(y_j + (1+h) y_j) = (1 + h + h^2/2) y_j \end{displaymath}

Since the true solution has $\psi_j(x_{j+1}) = \exp(h) y_j$ and the Taylor series says

\begin{displaymath}\exp(h) = 1 + h + \frac{h^2}{2} + \frac{h^3}{6} + O(h^4)\end{displaymath}

we get local error $y_j h^3/6 + O(h^4) = O(h^3)$.
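
A quick numerical check of this expansion, with arbitrary values of $h$ (the ratio should approach $1/6 \approx 0.16667$):

\begin{verbatim}
import math

# One Improved Euler step for y' = y starting from y = 1 gives 1 + h + h^2/2,
# while the true solution advances by a factor of exp(h).
for h in (0.1, 0.05, 0.025, 0.0125):
    err = math.exp(h) - (1 + h + h**2 / 2)
    print(f"h = {h:<7} error/h^3 = {err / h**3:.5f}")
\end{verbatim}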



Robert
2002-01-28