Approximate Solution of Differential Equations

The approximate solution of a differential equation is the obtaining of analytic expressions (formulas) or numerical values that approximate the desired solution to some degree of accuracy.

An approximate solution to a differential equation in the form of an analytic expression can be found by the method of series (power series, trigonometric series, and so on), the method of small parameters, the method of successive approximations, the Ritz and Galerkin methods, and the Chaplygin method. Each of these methods defines one or more infinite processes that under certain conditions can be used to obtain an exact solution to a problem. Termination of the process after a finite number of steps yields an approximate solution.

If a solution is represented by means of an infinite series, a finite portion of the series can be taken as the approximate solution. For example, suppose we wish to find a solution of the differential equation yʹ = f(x, y) that satisfies the initial condition y(x0) = y0, and suppose that f(x, y) is an analytic function of x and y in some neighborhood of the point (x0, y0). The solution can then be sought in the form of a power series

y(x) = y0 + A1(x − x0) + A2(x − x0)^2 + … + Ak(x − x0)^k + …

The coefficients Ak of the series can be found either from the formulas

Ak = y^(k)(x0)/k!,   k = 1, 2, …

in which the derivatives y^(k)(x0) are obtained by successive differentiation of the equation, or by means of the method of undetermined coefficients. The series method permits a solution to be found only for small values of the quantity x − x0.
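By way of illustration, the following minimal sketch carries out the successive differentiation for an assumed example, yʹ = x + y with y(0) = 1 (both the choice of equation and the use of the sympy library are illustrative assumptions, not part of the method's description):

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # Assumed test equation: y' = f(x, y) with f(x, y) = x + y and y(0) = 1.
    f = x + y(x)
    x0, y0, n_terms = 0, 1, 5

    # derivs[k] is an expression for y^(k)(x); at each differentiation the symbol
    # y'(x) is replaced by f(x, y), since y' = f(x, y) along the solution.
    derivs = [y(x), f]
    for k in range(2, n_terms):
        d = sp.diff(derivs[-1], x).subs(sp.Derivative(y(x), x), f)
        derivs.append(sp.expand(d))

    # A_k = y^(k)(x0) / k!
    coeffs = [d.subs({y(x): y0, x: x0}) / sp.factorial(k) for k, d in enumerate(derivs)]
    series = sum(c * (x - x0)**k for k, c in enumerate(coeffs))
    print(sp.expand(series))   # 1 + x + x**2 + x**3/3 + x**4/12

These five terms agree with the expansion of the exact solution 2e^x − x − 1 of this test problem.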

Cases are often encountered—for example, in the study of periodic motions in celestial mechanics and in the theory of oscillations—where an equation consists of two kinds of terms: principal and secondary, the secondary terms being characterized by the presence of small parameters, or small constant factors. If the secondary terms are dropped, an equation that admits of an exact solution is usually obtained. The solution of the original equation can then be sought in the form of a series whose first term is the solution of the equation without the secondary terms and whose remaining terms are arranged according to the powers of the small parameters. Since the equations for the coefficients of the powers of the small parameters are linear, they are rather easy to solve. Initial values are sometimes used as small parameters—for example, in the study of oscillations about an equilibrium position. The method of small parameters was used by L. Euler and P. Laplace in solving problems of perturbed motion in celestial mechanics. The theoretical foundation for the method was provided by A. M. Liapunov and J. H. Poincaré.
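The following minimal sketch carries out such an expansion for an assumed model equation with a small parameter μ, namely yʹ = −y + μy² with y(0) = a, and solves the resulting linear equations with the sympy library (both the equation and the library are illustrative assumptions):

    import sympy as sp

    x, mu, a = sp.symbols('x mu a')
    y0 = sp.Function('y0')
    y1 = sp.Function('y1')

    # Assumed model equation:  y' = -y + mu*y**2,  y(0) = a.
    # Seek y = y0(x) + mu*y1(x) + ...; equating powers of mu gives linear equations.

    # Order mu**0:  y0' = -y0,           y0(0) = a
    sol0 = sp.dsolve(sp.Eq(y0(x).diff(x), -y0(x)), y0(x), ics={y0(0): a})

    # Order mu**1:  y1' = -y1 + y0**2,   y1(0) = 0
    sol1 = sp.dsolve(sp.Eq(y1(x).diff(x), -y1(x) + sol0.rhs**2), y1(x), ics={y1(0): 0})

    # First two terms of the small-parameter expansion.
    approx = sol0.rhs + mu * sol1.rhs
    print(sp.simplify(approx))   # a*exp(-x) + a**2*mu*(exp(-x) - exp(-2*x)), up to rearrangement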

Numerical methods are methods that yield approximate values of the desired solution for certain values of the argument, that is, a table of approximate values of the solution, by making use of known values of the solution at one or more points. Examples of such methods are the Euler method, the Runge-Kutta method, and a number of difference methods.

These methods can be illustrated with the equation

yʹ = f(x, y)

with the initial condition y(x0) = y0. Let the exact solution of this equation be represented in a certain neighborhood of the point x0 by a power series in h = x − x0. A basic measure of the accuracy of a formula for the approximate solution is the requirement that the first k terms of the expansion in powers of h of the approximate solution coincide with the first k terms of the expansion in powers of h of the exact solution.

The Euler method is based on the use of the series method to calculate approximate values of the solution y(x) at the points x1, x2, …, xn of a fixed closed interval [x0, b]. Thus, to compute y(x1), where x1 = x0 + h and h = (b − x0)/n, the value y(x1) is represented by a finite number of terms of the power series in h = x1 − x0. For example, keeping only the first two terms of the series, we obtain the following formula for computing y(xk):

y(xk) = y(xk−1) + hf(xk−1, yk−1),   xk = x0 + kh,   k = 1, 2, …, n

In this method, the Euler method, the integral curve is thus replaced on each interval [xk, xk+1] by a line segment. The error of the method is proportional to h².
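A minimal sketch of the Euler computation follows (the test problem yʹ = x + y with y(0) = 1 is an assumed illustration):

    import math

    def euler(f, x0, y0, b, n):
        """Tabulate an approximate solution of y' = f(x, y) on [x0, b] by the Euler method."""
        h = (b - x0) / n
        xs, ys = [x0], [y0]
        for k in range(1, n + 1):
            # y(x_k) ~ y(x_{k-1}) + h*f(x_{k-1}, y_{k-1}),  x_k = x0 + k*h
            ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
            xs.append(x0 + k * h)
        return xs, ys

    # Assumed test problem: y' = x + y, y(0) = 1; exact solution 2*e**x - x - 1.
    f = lambda x, y: x + y
    xs, ys = euler(f, 0.0, 1.0, 1.0, 100)
    print(ys[-1], 2 * math.e - 2)   # approximate vs. exact value at x = 1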

In the Runge-Kutta method, instead of computing derivatives we form a combination of values of f(x, y) at several points that reproduces, to a certain accuracy, the first few terms of the power series for the exact solution of the equation. For example, the right-hand side of the Runge formula

y(x0 + h) ≈ y0 + (1/6)(k1 + 2k2 + 2k3 + k4)

where

k1 = hf(x0, y0)
k2 = hf(x0 + h/2, y0 + k1/2)
k3 = hf(x0 + h/2, y0 + k2/2)
k4 = hf(x0 + h, y0 + k3)

gives the first five terms of the power series of the exact solution with accuracy up to terms of order h⁵.
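The Runge formula translates directly into a computation; a minimal sketch with a constant step follows (the same assumed test problem yʹ = x + y, y(0) = 1 is used):

    import math

    def runge_kutta_step(f, x, y, h):
        """One step of the Runge (fourth-order Runge-Kutta) formula."""
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

    # Assumed test problem y' = x + y, y(0) = 1, integrated from x = 0 to x = 1.
    f = lambda x, y: x + y
    x, y, h = 0.0, 1.0, 0.1
    while x < 1.0 - 1e-12:
        y = runge_kutta_step(f, x, y, h)
        x += h
    print(y, 2 * math.e - 2)   # approximate vs. exact value 2*e - 2 at x = 1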

In difference formulas, already computed values of the right-hand side are used several times. The solution is sought as a linear combination of the values y(xi), the quantities ηi, and the differences Δηj, Δ²ηj, …, where ηj = hf(xj, yj), Δηj = ηj+1 − ηj, and, in general, Δ^i ηj = Δ^(i−1) ηj+1 − Δ^(i−1) ηj. An example of a difference formula is the Adams extrapolation formula. If differences up to the third order are used, the Adams formula

yk = yk−1 + ηk−1 + (1/2)Δηk−2 + (5/12)Δ²ηk−3 + (3/8)Δ³ηk−4

gives the solution y(x) at the point xk with accuracy up to terms of order h⁴.
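A minimal sketch of this Adams formula follows; it is algebraically equivalent to yk = yk−1 + (1/24)(55ηk−1 − 59ηk−2 + 37ηk−3 − 9ηk−4). The test problem and the use of starting values taken from its exact solution are assumptions made to keep the sketch short; in practice the starting values are obtained by, for example, the Runge-Kutta formula.

    import math

    def adams(f, x0, b, n, starting_ys):
        """Adams extrapolation formula with differences of eta_j = h*f(x_j, y_j)
        up to the third order; starting_ys must hold the values y_0, ..., y_3."""
        h = (b - x0) / n
        xs = [x0 + k * h for k in range(n + 1)]
        ys = list(starting_ys)
        eta = [h * f(xs[k], ys[k]) for k in range(4)]
        for k in range(4, n + 1):
            e = eta[-4:]                                 # eta_{k-4}, ..., eta_{k-1}
            d1 = [e[i + 1] - e[i] for i in range(3)]     # first differences
            d2 = [d1[i + 1] - d1[i] for i in range(2)]   # second differences
            d3 = d2[1] - d2[0]                           # third difference
            # y_k = y_{k-1} + eta_{k-1} + (1/2)*Delta eta_{k-2}
            #       + (5/12)*Delta^2 eta_{k-3} + (3/8)*Delta^3 eta_{k-4}
            ys.append(ys[-1] + e[3] + d1[2] / 2 + 5 * d2[1] / 12 + 3 * d3 / 8)
            eta.append(h * f(xs[k], ys[-1]))
        return xs, ys

    # Assumed test problem y' = x + y, y(0) = 1, exact solution 2*e**x - x - 1;
    # the four starting values are taken from the exact solution only for brevity.
    f = lambda x, y: x + y
    exact = lambda x: 2 * math.exp(x) - x - 1
    n = 20
    h = 1.0 / n
    xs, ys = adams(f, 0.0, 1.0, n, [exact(k * h) for k in range(4)])
    print(ys[-1], exact(1.0))   # approximate vs. exact value at x = 1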

Formulas for the numerical integration of second-order equations can be obtained by applying the Adams formula twice. The Norwegian mathematician F. Størmer obtained the formula

Δ²yn−1 = h²(yʺn + (1/12)Δ²yʺn−2 + (1/12)Δ³yʺn−3 + …)

which is especially convenient for solving equations of the form yʺ = f(x, y). With this formula, Δ²yn−1 is found, and then yn+1 = yn + Δyn−1 + Δ²yn−1. With yn+1 known, yʺn+1 = f(xn+1, yn+1) is computed, the differences are formed, and the process is continued.
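A minimal sketch that keeps only the leading term of the Størmer formula, Δ²yn−1 = h²yʺn, follows (the test equation yʺ = −y with y(0) = 0, yʹ(0) = 1, whose solution is sin x, and the choice of the second starting value are assumptions made for illustration):

    import math

    def stormer(f, x0, y0, y1, h, n):
        """March y'' = f(x, y), keeping only the leading term of the Stormer formula:
        Delta^2 y_{n-1} = h**2 * f(x_n, y_n), i.e. y_{n+1} = 2*y_n - y_{n-1} + h**2*f(x_n, y_n)."""
        ys = [y0, y1]
        for k in range(1, n):
            x_k = x0 + k * h
            ys.append(2 * ys[-1] - ys[-2] + h * h * f(x_k, ys[-1]))
        return ys

    # Assumed test equation y'' = -y with y(0) = 0, y'(0) = 1; exact solution sin(x).
    f = lambda x, y: -y
    h, n = 0.01, 100
    ys = stormer(f, 0.0, 0.0, math.sin(h), h, n)   # second starting value from the exact solution
    print(ys[-1], math.sin(n * h))                 # approximate vs. exact value at x = 1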

The numerical methods mentioned above can be extended to systems of differential equations.

The importance of numerical methods for solving differential equations has grown considerably with the advent of computers.

In addition to analytic and numerical methods, graphic methods are also used for the approximate solution of differential equations. In the simplest of these, the set of directions, or direction field, determined by the differential equation is constructed; that is, at a number of points the directions of the tangents to the integral curves passing through those points are drawn. A curve having these directions as tangents is then drawn.
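A minimal sketch of the direction-field construction (the equation yʹ = x − y and the use of the numpy and matplotlib libraries are assumed for the illustration):

    import numpy as np
    import matplotlib.pyplot as plt

    # Assumed example equation: y' = f(x, y) = x - y.
    f = lambda x, y: x - y

    # At each grid point draw a short segment whose slope is f(x, y).
    X, Y = np.meshgrid(np.linspace(-2, 2, 21), np.linspace(-2, 2, 21))
    S = f(X, Y)
    L = np.sqrt(1 + S**2)   # normalize so all segments have equal length
    plt.quiver(X, Y, 1 / L, S / L, angles='xy', pivot='mid')
    plt.xlabel('x')
    plt.ylabel('y')
    plt.title("Direction field of y' = x - y")
    plt.show()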
