Automatic Error and Step Size Control

Automatic Error Control

It would be more convenient for users of a differential equation solver if, instead of choosing the stepsize directly, they chose the accuracy they wished to achieve. The software would then choose the stepsize to meet that goal. After all, the user is more interested in the quality of the results than in the details of the computation. Richardson extrapolation could be used to make this work. The problem would be solved with one stepsize h, then with h/2, and the global error estimated. If this were too large, the process would be repeated with another stepsize. Knowledge of the order in h of the global error for the method could then be used to choose the new stepsize, as follows.

Suppose with stepsize $\displaystyle h_0$ we get estimated global error $\displaystyle E_0$, while we wish to attain global error at most $\epsilon$, and the global error is known to be $\displaystyle O(h^p)$. If the error were of the form $\displaystyle A h^p$ for some constant A, then $\displaystyle A = E_0 h_0^{-p}$, and we should take stepsize $\displaystyle h \le h_0 (\epsilon/E_0)^{1/p}$ to get $\displaystyle A h^p \le \epsilon$. Since the formula for the error is only approximate, to play it safe we should probably take h a bit smaller than this, say by a factor of 0.9 or so.
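This stepsize rule is simple enough to sketch in a few lines of code (the function name and safety-factor default are ours, not from any particular solver):

```python
def choose_stepsize(h0, E0, eps, p, safety=0.9):
    """Given an error estimate E0 obtained with stepsize h0 for a method
    whose global error is O(h^p), return a stepsize expected to bring
    the error below eps, with a safety factor applied."""
    return safety * h0 * (eps / E0) ** (1.0 / p)

# Example with the numbers used below (fourth-order method, p = 4):
h = choose_stepsize(h0=0.05, E0=5.38e-7, eps=1e-10, p=4)
```

With these inputs the rule suggests a stepsize of about 0.0053, consistent with the worked example in the next paragraph.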

In the example we've been following, Fourth-Order Runge-Kutta with step size h = 0.1 gave the result Q(.1) = 2.999991377 for y(1), while with step size h = 0.05 we had Q(.05) = 2.999999452. Richardson extrapolation with p = 4 gives us $Q_R = (2^4 Q(.05) - Q(.1))/(2^4 - 1) = 2.999999990$ and estimates the error in Q(.05) as $Q_R - Q(.05) = 5.38 \times 10^{-7}$. If we want error at most $10^{-10}$, we should take $h \le .05 \, (10^{-10}/(5.38 \times 10^{-7}))^{1/4} = .00584$. Allowing a safety factor of about 0.9, we might try $h = 1/190 \approx .00526$.
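The extrapolation arithmetic above can be checked directly (variable names here are ours):

```python
# Verifying the Richardson extrapolation in the worked example.
# Q_h and Q_h2 are the RK4 results quoted in the text; p = 4 for RK4.
Q_h  = 2.999991377   # Q(.1)
Q_h2 = 2.999999452   # Q(.05)
p = 4

Q_R = (2**p * Q_h2 - Q_h) / (2**p - 1)   # extrapolated value, about 2.999999990
err_est = Q_R - Q_h2                     # estimated error in Q(.05), about 5.38e-7
```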

Note that we did a very similar calculation before. The difference is that this time we are using Richardson extrapolation, rather than our knowledge of the exact solution, to estimate the error.


Adaptive Stepsize Control

The scheme above is not the one usually implemented, however. Once the stepsize is being chosen automatically, the possibility arises of using different stepsizes in different places. In some regions, where it is easy to keep the local errors small, a large stepsize can be used, while in more difficult regions the stepsize should be smaller. This can sometimes reduce the number of steps by a factor of 10 or even 100. Remember also that reducing the number of steps tends to reduce roundoff error, so this may be useful even when time is not a major consideration.

The ``Adaptive Runge-Kutta'' method of the Differential Equations Calculator is the ODEINT algorithm from "Numerical Recipes in Pascal" by W.H. Press, B.P. Flannery, S.A. Teukolsky and W.T. Vetterling, Cambridge U. Press, Cambridge 1989 (with a few minor changes). The algorithm uses fourth-order Runge-Kutta together with step doubling and Richardson extrapolation to obtain a fifth-order method. The step size is adjusted to achieve a given local error tolerance. This error tolerance is for the relative error, i.e. if the ``Error tolerance'' is given as p on the main menu, we want local errors to be at most p |y| (except that this has to be adjusted if y is 0 or nearly 0).

Let $y_1$ be the result of one fourth-order Runge-Kutta step of step size h starting at (x,y), and $y_2$ the result of two fourth-order Runge-Kutta steps of size h/2 starting from the same point. Then we take $\displaystyle y_R(x + h) = (16 y_2 - y_1)/15$ and use $|y_2 - y_1|$ as an estimate of the local error. If this is larger than desired, we adjust h (in a similar way to that above) and repeat the current step until the local error estimate is small enough. Then the next step is taken, starting with the stepsize that worked for this step. On the other hand, if the local error estimate is smaller than needed, we adjust the initial stepsize for the next step.
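A minimal sketch of one adaptive step in this style follows. It is our own simplified illustration, not the ODEINT routine itself: error control here is absolute rather than relative, the stepsize adjustment uses the same power rule as before (with exponent 1/5, since the local error of a fourth-order step is $O(h^5)$), and all names are illustrative.

```python
def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h/2, y + h/2 * k1)
    k3 = f(x + h/2, y + h/2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def adaptive_step(f, x, y, h, tol, safety=0.9):
    """Advance one step by step doubling: compare one step of size h with
    two steps of size h/2, shrink h until the local error estimate is
    below tol, and return (x_new, y_new, h_used, h_next)."""
    while True:
        y1 = rk4_step(f, x, y, h)                  # one step of size h
        y_mid = rk4_step(f, x, y, h/2)
        y2 = rk4_step(f, x + h/2, y_mid, h/2)      # two steps of size h/2
        err = abs(y2 - y1)                         # local error estimate
        yR = (16*y2 - y1) / 15                     # Richardson extrapolation
        if err <= tol:
            # Step accepted; suggest a stepsize for the next step.
            h_next = safety * h * (tol / err) ** 0.2 if err > 0 else 2*h
            return x + h, yR, h, h_next
        h *= safety * (tol / err) ** 0.2           # shrink and retry
```

For example, on y' = y starting from y(0) = 1 with an initial h = 0.1 and tol = 1e-8, the first trial step is rejected and retried with a smaller h before being accepted.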




Robert Israel
2002-02-07