Error Analysis of the Euler Method

As before, we are considering the first-order initial value problem

\begin{displaymath}
\frac{dy}{dx} = f(x,y),\ \ \ y(x_0) = y_0
\end{displaymath}

and approximating its solution using Euler's method with a fixed step size $h$. I will ignore roundoff error and consider only the discretization error. For step-by-step methods such as Euler's for solving ODEs, we want to distinguish between two types of discretization error: the global error and the local error. The global error at a certain value of $x$ (assumed to be $x_n = x_0 + n h$) is just what we would ordinarily call the error: the difference between the true value $\phi(x_n)$ and the approximation $y_n$. The local error at $x_n$ is, roughly speaking, the error introduced in the $n$th step of the process. That is, it is the difference between $y_n$ and $\psi_{n-1}(x_n)$, where $\psi_{n-1}(x)$ is the solution of the differential equation satisfying $y(x_{n-1}) = y_{n-1}$. Thus if $y_{n-1}$ were exactly correct (equal to $\phi(x_{n-1})$), the global error at $x_n$ would equal this local error. But since in general $y_{n-1}$ is not correct (as a result of earlier local errors), the global and local errors differ. In the picture below, $y = \phi(x)$ is the black curve, the curves $y = \psi_j(x)$ are in red, and the local error at each stage of the process is a blue vertical line.
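To fix notation before analyzing the errors, here is a minimal sketch of the method itself in Python (the function name `euler` and its interface are my own, not from the text):

```python
def euler(f, x0, y0, h, n):
    """Approximate the solution of y' = f(x, y), y(x0) = y0,
    taking n Euler steps of size h; returns the list of points (x_j, y_j)."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * f(x0, y0)   # one Euler step: follow the tangent line
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return list(zip(xs, ys))
```

The global error at $x_n$ is then $\phi(x_n)$ minus the last $y$-value this returns; the local error at each step is what the analysis below estimates.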

First we consider the local error at $x_n$:

\begin{displaymath}e_n = \psi_{n-1}(x_n) - y_n \end{displaymath}

Now $y_n = y_{n-1} + h f(x_{n-1},y_{n-1}) = \psi_{n-1}(x_{n-1}) + h
\psi'_{n-1}(x_{n-1})$. According to Taylor's Theorem, for any twice-differentiable function $g$

\begin{displaymath}g(x+h) = g(x) + h g'(x) + \frac{h^2}{2} g''(\xi)\end{displaymath}

for some $\xi$ between $x$ and $x+h$. Taking $g = \psi_{n-1}$, $x =
x_{n-1}$ and $x+h = x_n$ we find

\begin{displaymath}e_n = \frac{h^2}{2} \psi''_{n-1}(\xi) \end{displaymath}

If there is some constant $K$ such that we can be sure that $\vert\psi''_{n-1}(\xi)\vert < K$, then we can say

\begin{displaymath}e_n = O(h^2) \end{displaymath}

Such a $K$ does exist (assuming $f$ has continuous derivatives in some rectangle containing the true and approximate solutions): for any solution of the differential equation $y' = f(x,y)$, we can differentiate once more to get

\begin{displaymath}y'' = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\, y' = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\, f(x,y) \end{displaymath}

which is a continuous function of $x$ and $y$ and therefore can't get too big in our rectangle.
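The $O(h^2)$ estimate for the local error can be checked numerically. Using the example $y' = y$ that appears later in the text, one Euler step taken from an exact point has local error $\exp(h) - (1 + h)$, and the ratio of this error to $h^2$ should settle near $\psi''(\xi)/2 = 1/2$ (the helper name `local_error` is mine):

```python
import math

def local_error(h):
    """One Euler step for y' = y starting from the exact point (0, 1).
    The local error is exp(h) - (1 + h), which should behave like h^2/2."""
    y1 = 1.0 + h * 1.0          # Euler step: y0 + h * f(x0, y0)
    return math.exp(h) - y1

for h in (0.1, 0.05, 0.025):
    print(h, local_error(h) / h**2)   # ratio tends to 1/2 as h shrinks
```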

Now, what about the global error? It's tempting to say that the global error at $x = x_n$ is the sum of all the local errors $e_j$ for $j$ from 1 to $n$. Since each $e_j = O(h^2)$ and there are $n = (x - x_0)/h$ of them, the global error should be $(x-x_0)/h \times O(h^2) = O(h)$. Unfortunately, it's not quite true that the global error is the sum of the local errors: the global error at $x$ is the sum of the differences $\psi_{j-1}(x) - \psi_{j}(x)$, but $e_j = \psi_{j-1}(x_j) - \psi_j(x_j)$. Between $x_j$ and $x$, $\psi_{j-1} - \psi_j$ might grow or shrink. Fortunately, we can control the amount of growing that might take place, and the result is that it grows by at most some constant factor (again, this is in a rectangle where $f$ has continuous derivatives and which contains the true and approximate solutions).

Let's look at a simple example: $y' = y$, $y(0) = 1$. This is so simple that we can find an explicit formula for $y_j$. We have

\begin{displaymath}y_{j+1} = y_j + h y_j = (1+h) y_j \end{displaymath}

Thus at each stage $y$ is multiplied by $1+h$. Since we start out with $y_0 = 1$, we have

\begin{displaymath}y_j = (1+h)^j \end{displaymath}

The actual solution is of course $\phi(x) = \exp(x)$. The global error at $x$ with step size $h$, where $x = n h$, is

\begin{displaymath}E(h) = \exp(x) - y_n = \exp(x) - (1+h)^n = \exp(x) - (1+h)^{x/h}
\end{displaymath}
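Since we have this closed form, we can watch the global error shrink as $h$ does. A quick check (at $x = 1$, say) shows $E(h)/h$ approaching a constant, which is exactly the statement $E(h) = O(h)$ and no better:

```python
import math

# Global error of Euler's method for y' = y, y(0) = 1, at x = 1,
# using the closed form y_n = (1 + h)^n with n = x / h.
x = 1.0
for n in (10, 20, 40, 80):
    h = x / n
    E = math.exp(x) - (1 + h)**n
    print(h, E, E / h)   # E/h settles near a constant, so E = O(h)
```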

Since $\psi_{j-1}(x) = y_{j-1} \exp(x - x_{j-1})$, the local error in step $j$ is $e_j = y_{j-1} (\exp(h) - (1+h))$.

The local error is $O(h^2)$ because (from Taylor series) $\exp(h) = 1 + h + O(h^2)$. The global error is $O(h)$: in fact, on the O and Order page, we used the example $Q(h) = ( 1 + x h)^{1/h}$, which we saw had error $O(h)$. The Euler approximation is just $Q(h/x)$, so it too has error $O(h)$.

Another special case: suppose $f(x,y) = f(x)$ is just a function of $x$. The true solution is

\begin{displaymath}\phi(x) = y_0 + \int_{x_0}^x f(t)\, dt \end{displaymath}

For the Euler method we have $y_{j+1} = y_{j} + h f(x_j)$, so that

\begin{displaymath}y_n = y_0 + h \sum_{j=0}^{n-1} f(x_j) \end{displaymath}

This is just a Riemann sum for the integral: for each interval $[x_j, x_{j+1}]$ we approximate the area under the graph of $f$ by a rectangle of height $f(x_j)$. As you may have seen in calculus, the error in this approximation on each interval is at most $K h^2/2$ if $\vert f'(x)\vert \le K$ on the interval, so the global error at $x = x_0 + n h$ is at most $n K h^2/2 = K (x - x_0) h/2$.
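This special case is easy to test: a left-endpoint Riemann sum against an integral we know. A sketch (the function name `euler_quadrature` and the choice $f(x) = \cos x$ on $[0, \pi/2]$ are mine):

```python
import math

def euler_quadrature(f, x0, x, n):
    """Euler's method for y' = f(x) with y0 = 0 reduces to a left-endpoint
    Riemann sum: y_n = h * sum of f at the left endpoints."""
    h = (x - x0) / n
    return h * sum(f(x0 + j * h) for j in range(n))

# f(x) = cos(x) on [0, pi/2]; the exact integral is sin(pi/2) = 1
for n in (10, 100, 1000):
    err = 1.0 - euler_quadrature(math.cos, 0.0, math.pi / 2, n)
    print(n, err)   # |err| shrinks like h, i.e. like 1/n
```

Doubling $n$ (halving $h$) halves the error, consistent with the $K (x - x_0) h / 2$ bound above.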

Robert
2002-01-28