
17.3 Richardson Extrapolation and the Bulirsch-Stoer Method

17.3.1 Modified Midpoint Method

The modified midpoint method advances a vector of dependent variables y(x) from a point x to a point x + H by a sequence of n substeps, each of size h,

    h = H/n    (17.3.1)

In principle, one could use the modified midpoint method in its own right as an ODE integrator. In practice, the method finds its most important application as a part of the more powerful Bulirsch-Stoer technique (§17.3.2). The number of right-hand-side evaluations required by the modified midpoint method is n + 1.

The formulas for the method are

    z_0 = y(x)
    z_1 = z_0 + h f(x, z_0)
    z_{m+1} = z_{m-1} + 2h f(x + mh, z_m),    m = 1, 2, ..., n - 1
    y(x + H) ≈ y_n ≡ (1/2) [z_n + z_{n-1} + h f(x + H, z_n)]    (17.3.2)

Here the z's are intermediate approximations that march along in steps of h, while y_n is the final approximation to y(x + H). The method is basically a centered difference or midpoint method (compare equation 17.1.2), except at the first and last points. Those points give the qualifier "modified."

The modified midpoint method is a second-order method, like (17.1.2), but with the advantage of requiring (asymptotically for large n) only one derivative evaluation per step h instead of the two required by second-order Runge-Kutta.

The usefulness of the modified midpoint method to the Bulirsch-Stoer technique (§17.3) derives from a deep result about equations (17.3.2), due to Gragg. It turns out that the error of (17.3.2), expressed as a power series in h, the stepsize, contains only even powers of h,

    y_n - y(x + H) = \sum_{i=1}^{\infty} \alpha_i h^{2i}    (17.3.3)

where H is held constant but h changes by varying n in (17.3.1).

The importance of this even power series is that, if we play our usual tricks of combining steps to knock out higher-order error terms, we can gain two orders at a time! For example, suppose n is even, and let y_{n/2} denote the result of applying (17.3.1) and (17.3.2) with half as many steps, n → n/2. Then the estimate

    y(x + H) ≈ (4 y_n - y_{n/2}) / 3    (17.3.4)

is fourth-order accurate, the same as fourth-order Runge-Kutta, but requires only about 1.5 derivative evaluations per step h instead of Runge-Kutta's four evaluations.

Don't be too anxious to implement (17.3.4), since we will soon do even better.

Now would be a good time to look back at the routine qsimp in §4.2, and especially to compare equation (4.2.4) with equation (17.3.4) above.

You will see that the transition in §4 to the idea of Richardson extrapolation, as embodied in Romberg integration of §4.3, is exactly analogous to the transition in going from this section to the next one. A routine that implements the modified midpoint method will be given as part of the implementation of StepperBS, in the dy member function.

17.3.2 The Bulirsch-Stoer Method

Consider attempting to cross the interval H using the modified midpoint method with increasing values of n, the number of substeps. Bulirsch and Stoer originally proposed the sequence

    n = 2, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, ...,    n_j = 2 n_{j-2}, ...    (17.3.5)

More recent work by Deuflhard [2,3] suggests that the sequence

    n = 2, 4, 6, 8, 10, 12, 14, ...,    n_j = 2(j + 1), ...    (17.3.6)

is usually more efficient. For each step, we do not know in advance how far up this sequence we will go. After each successive n is tried, a polynomial extrapolation is attempted. That extrapolation gives both extrapolated values and error estimates. If the errors are not satisfactory, we go higher in n. If they are satisfactory, we go on to the next step and begin anew with n = 2.

Of course there must be some upper limit, beyond which we conclude that there is some obstacle in our path in the interval H, so that we must reduce H rather than just subdivide it more finely. Moreover, precision loss sets in if we choose too fine a subdivision. In the implementations below, the maximum number of n's to be tried is called KMAXX.

We usually take this equal to 8; the eighth value of the sequence (17.3.6) is 16, so this is the maximum number of subdivisions of H that we allow.
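As a small check on this bookkeeping, the sequence (17.3.6) can be generated directly. The free-function form below is an illustrative sketch; an actual implementation may simply tabulate or compute the values in place.

```cpp
#include <vector>

// The Deuflhard substep sequence (17.3.6), n_j = 2(j + 1),
// for j = 0, 1, ..., kmaxx - 1.
std::vector<int> deuflhard_sequence(int kmaxx) {
    std::vector<int> n(kmaxx);
    for (int j = 0; j < kmaxx; j++)
        n[j] = 2 * (j + 1);   // 2, 4, 6, 8, ...
    return n;
}
```

With kmaxx = 8 the last entry is 16, matching the maximum subdivision of H quoted in the text.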

We enforce error control, as in the Runge-Kutta method, by monitoring internal consistency and adapting the stepsize to match a prescribed bound on the local truncation error. Each new result from the sequence of modified midpoint integrations allows a tableau like that in equation (3.2.2) to be extended by one additional set of diagonals. Write the tableau as a lower triangular matrix:

    T_00
    T_10  T_11
    T_20  T_21  T_22
    ...