a type of function encountered in various branches of mathematics. Consider a matrix of order n, that is, a square array of n² elements, for example, numbers or functions:

$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \cdots & \cdots & \cdots & \cdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \qquad (1)$$
Each element of the matrix is specified by two indices. The element $a_{ij}$ belongs to the ith row and the jth column. The determinant of the matrix (1) is a polynomial in the entries $a_{ij}$:
$$\sum \pm a_{1\alpha} a_{2\beta} \cdots a_{n\gamma}$$
In this formula, α, β, …, γ is an arbitrary permutation of the numbers 1, 2, …, n. The plus or minus sign is used according to whether the permutation α, β, …, γ is even or odd. (A permutation is even if it contains an even number of inversions, that is, cases in which a larger number precedes a smaller number; otherwise, the permutation is odd. Thus, for example, the permutation 51243 is odd, since it contains five inversions, namely, 51, 52, 54, 53, 43.) The summation extends over all permutations α, β, …, γ of the numbers 1, 2, …, n. Thus only one element from any row and any column enters each product. The number of distinct permutations of n symbols is n! = 1 · 2 · 3 · … · n; therefore, a determinant contains n! terms, of which ½n! have a plus sign and ½n! have a minus sign. The number n is called the order of the determinant. The determinant of the matrix (1) is written as

$$\begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \cdots & \cdots & \cdots & \cdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} \qquad (3)$$
or, briefly, $|a_{ik}|$. For determinants of order 2 and 3, we have the formulas

$$\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21},$$

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{12}a_{21}a_{33} - a_{11}a_{23}a_{32}.$$
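The permutation definition can be made concrete in a few lines of code. The following sketch (an illustration added here, not part of the article; the name leibniz_det and the sample matrix are arbitrary choices) forms each of the n! signed products directly:

```python
from itertools import permutations

def leibniz_det(a):
    """Determinant from the permutation definition: a sum over all n!
    permutations of one element per row and column, signed by parity."""
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        # Count inversions to decide whether the permutation is even or odd.
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1 if inversions % 2 else 1
        product = 1
        for row, col in enumerate(perm):
            product *= a[row][col]
        total += sign * product
    return total

# The third-order formula above gives
# 1*5*9 + 2*6*7 + 3*4*8 - 3*5*7 - 2*4*9 - 1*6*8 = 0.
print(leibniz_det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # -> 0
```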
Determinants of order 2 and 3 admit a simple geometric interpretation:

$$\begin{vmatrix} x_1 & y_1 \\ x_2 & y_2 \end{vmatrix}$$

is (except possibly for sign) the area of the parallelogram constructed on the vectors $a_1 = (x_1, y_1)$ and $a_2 = (x_2, y_2)$, and

$$\begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{vmatrix}$$

is (except possibly for sign) the volume of the parallelepiped constructed on the vectors $a_1 = (x_1, y_1, z_1)$, $a_2 = (x_2, y_2, z_2)$, and $a_3 = (x_3, y_3, z_3)$. In each case it is assumed that the coordinate system is rectangular.
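As a concrete numerical illustration (the vectors are chosen here only for the example): for $a_1 = (3, 1)$ and $a_2 = (1, 2)$,

$$\begin{vmatrix} 3 & 1 \\ 1 & 2 \end{vmatrix} = 3 \cdot 2 - 1 \cdot 1 = 5,$$

so the parallelogram constructed on these two vectors has area 5.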
The theory of determinants arose in connection with the problem of solving a system of first-degree algebraic equations (linear equations). In the most important case, when the number of equations equals the number of unknowns, such a system may be written in the form

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1, \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2, \\ &\cdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n. \end{aligned} \qquad (4)$$
This system has a unique solution if the determinant $|a_{ik}|$ of the coefficients of the unknowns is different from zero; in that case, the unknown $x_m$ (m = 1, 2, …, n) is a fraction whose denominator is the determinant $|a_{ik}|$ and whose numerator is the determinant obtained from $|a_{ik}|$ by replacing the elements of the mth column, that is, the coefficients of $x_m$, by the numbers $b_1, b_2, \dots, b_n$. Thus, in the case of a system of two equations in two unknowns

$$a_{11}x_1 + a_{12}x_2 = b_1, \qquad a_{21}x_1 + a_{22}x_2 = b_2,$$

the solution is given by the formulas

$$x_1 = \frac{\begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix}}{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}}, \qquad x_2 = \frac{\begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix}}{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}}.$$
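These formulas translate directly into code. The sketch below (illustrative only; the function name solve_2x2 and the sample system are assumptions made for the example) evaluates the three second-order determinants involved:

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Cramer's rule for a11*x1 + a12*x2 = b1, a21*x1 + a22*x2 = b2."""
    d = a11 * a22 - a12 * a21          # determinant of the coefficients
    if d == 0:
        raise ValueError("determinant is zero: no unique solution")
    x1 = (b1 * a22 - a12 * b2) / d     # b's substituted into the 1st column
    x2 = (a11 * b2 - b1 * a21) / d     # b's substituted into the 2nd column
    return x1, x2

# 2*x1 + 1*x2 = 5 and 1*x1 + 3*x2 = 10 has the solution x1 = 1, x2 = 3.
print(solve_2x2(2, 1, 1, 3, 5, 10))    # -> (1.0, 3.0)
```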
If $b_1 = b_2 = \dots = b_n = 0$, then (4) is called homogeneous. A homogeneous system has nonzero solutions only if $|a_{ik}| = 0$. The connection between the theory of determinants and the theory of linear equations enables us to apply the theory of determinants to the solution of a large number of problems in analytic geometry. Many formulas of analytic geometry can conveniently be written using determinants; for example, the equation of the plane passing through the points with coordinates $(x_1, y_1, z_1)$, $(x_2, y_2, z_2)$, and $(x_3, y_3, z_3)$ can be written in the form

$$\begin{vmatrix} x - x_1 & y - y_1 & z - z_1 \\ x_2 - x_1 & y_2 - y_1 & z_2 - z_1 \\ x_3 - x_1 & y_3 - y_1 & z_3 - z_1 \end{vmatrix} = 0.$$
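As a quick check of this equation with specific points (chosen purely for illustration), the plane through (1, 0, 0), (0, 1, 0), and (0, 0, 1) comes out as

$$\begin{vmatrix} x - 1 & y & z \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{vmatrix} = (x - 1) + y + z = 0,$$

that is, x + y + z = 1, which indeed passes through all three points.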
Determinants have a number of important properties, some of which facilitate their computation. Below we list the simplest of these properties.
(1) A determinant does not change if its rows and columns are interchanged:
(2) A determinant changes sign if two of its rows or two of its columns are interchanged; thus, for example,
(3) A determinant is equal to zero if the elements in two of its rows or columns are proportional; thus, for example,
(4) A factor common to all the elements of a row or column of a determinant can be placed outside the determinant; thus, for example,
(5) If each element in some column (row) of a determinant is a sum of two terms, then the determinant is the sum of two determinants, in one of which the corresponding column (row) consists of the first terms and in the other the corresponding column (row) consists of the second terms, while the remaining columns (rows) are the same as in the original determinant; thus for example,
(6) A determinant does not change if the elements of one row (column) are multiplied by an arbitrary constant and added to the elements of another row (column); thus, for example,
(7) A determinant can be expanded by the elements of any row or any column. The expansion of the determinant (3) by the elements of the ith row has the form

$$a_{i1}A_{i1} + a_{i2}A_{i2} + \cdots + a_{in}A_{in}.$$
The coefficient $A_{ik}$ of $a_{ik}$ is called the cofactor of $a_{ik}$. The cofactor $A_{ik} = (-1)^{i+k} D_{ik}$, where $D_{ik}$ is the minor associated with the element $a_{ik}$, that is, the determinant of order n − 1 obtained from the original determinant by crossing out the ith row and the kth column. For example, the expansion of a determinant of order 3 by the elements of the second column is given by

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = -a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{22}\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} - a_{32}\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix}.$$
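The expansion rule leads at once to a recursive procedure for computing determinants of any order. The following sketch (an illustration, with the assumed name cofactor_det and an arbitrary test matrix) expands along the first row and, as a by-product, confirms property (2), that interchanging two rows changes the sign:

```python
def cofactor_det(a):
    """Determinant by expansion along the first row: the sum of a[0][k]
    times its cofactor, (-1)**k times the minor with row 0 and column k removed."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for k in range(n):
        minor = [row[:k] + row[k + 1:] for row in a[1:]]
        total += (-1) ** k * a[0][k] * cofactor_det(minor)
    return total

m = [[2, 0, 1],
     [3, 5, 4],
     [1, 2, 6]]
print(cofactor_det(m))              # -> 45
swapped = [m[1], m[0], m[2]]        # interchange the first two rows
print(cofactor_det(swapped))        # -> -45, illustrating property (2)
```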
Expansion of a determinant of order n by a row or column reduces computation of the determinant to the computation of n determinants of order n − 1. Thus the computation of a determinant of order 5, say, reduces to the computation of five determinants of order 4; the computation of each of these determinants of order 4 can, in turn, be reduced to the computation of four determinants of order 3 (the formula for the computation of a determinant of order 3 has been given above). However, except for the simplest cases, this method of computing determinants is practical only for determinants of relatively low order. For the computation of determinants of high order, a number of more convenient methods have been developed (approximately n³ operations must be performed to compute a determinant of order n).
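One widely used method of roughly n³ operations is Gaussian elimination, named here as an example of the faster methods alluded to above rather than as the article's own prescription. A minimal sketch:

```python
def det_by_elimination(a):
    """Determinant in roughly n**3 operations: reduce to upper-triangular
    form by row operations (properties 2 and 6), then multiply the diagonal."""
    a = [row[:] for row in a]               # work on a copy
    n = len(a)
    sign = 1
    for col in range(n):
        # Pick the largest available pivot in this column for stability.
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if a[pivot][col] == 0:
            return 0.0                      # no nonzero pivot: determinant is zero
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign                    # a row interchange flips the sign (property 2)
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]   # property (6): value unchanged
    result = sign
    for i in range(n):
        result *= a[i][i]
    return result

print(det_by_elimination([[2, 0, 1], [3, 5, 4], [1, 2, 6]]))  # -> 45.0 (up to rounding)
```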
An important rule is the rule for multiplying two determinants of order n. The product of two determinants of order n may be expressed as a single determinant of order n in which the element belonging to the i th row and the k th column is obtained by first multiplying each element in the i th row of the first factor by the corresponding element in the k th column of the second factor and then summing all these products. In other words, the product of the determinants of two matrices is the determinant of the product of these matrices.
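A quick numerical check of this rule (a sketch for illustration; numpy and the two sample matrices are assumptions of the example, not part of the article):

```python
import numpy as np

a = np.array([[2.0, 1.0], [0.0, 3.0]])
b = np.array([[1.0, 4.0], [2.0, 5.0]])

# Element (i, k) of the product matrix is the row-by-column sum described above.
product = a @ b

print(np.linalg.det(a) * np.linalg.det(b))   # 6 * (-3) = -18
print(np.linalg.det(product))                # also -18 (up to rounding)
```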
Determinants have been systematically used in mathematical analysis ever since the second quarter of the 19th century, when the German mathematician K. Jacobi studied determinants in which the elements are not numbers but functions of one or more variables. The most interesting of these determinants is the Jacobian

$$\frac{D(f_1, \dots, f_n)}{D(x_1, \dots, x_n)} = \begin{vmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \cdots & \cdots & \cdots \\ \dfrac{\partial f_n}{\partial x_1} & \cdots & \dfrac{\partial f_n}{\partial x_n} \end{vmatrix}.$$
The Jacobian gives the local value of the factor by which volumes are altered by the change of variables
$$y_1 = f_1(x_1, \dots, x_n), \quad y_2 = f_2(x_1, \dots, x_n), \quad \dots, \quad y_n = f_n(x_1, \dots, x_n).$$
The vanishing of this determinant in a certain region is a necessary and sufficient condition for the functional dependence of the functions $f_1(x_1, \dots, x_n), f_2(x_1, \dots, x_n), \dots, f_n(x_1, \dots, x_n)$ in that region.
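For example (a sketch using sympy; the library and the particular functions are chosen here for illustration), the change to polar coordinates x = r cos φ, y = r sin φ scales areas by the factor r, while a functionally dependent pair has an identically vanishing Jacobian:

```python
import sympy as sp

r, phi, x1, x2 = sp.symbols('r phi x1 x2')

# Change to polar coordinates: the Jacobian determinant is r,
# the local factor by which areas are scaled.
polar = sp.Matrix([r * sp.cos(phi), r * sp.sin(phi)])
print(sp.simplify(polar.jacobian([r, phi]).det()))      # -> r

# A functionally dependent pair: the second function depends on the first
# alone, so the Jacobian determinant vanishes identically.
dependent = sp.Matrix([x1 + x2, (x1 + x2) ** 2])
print(sp.simplify(dependent.jacobian([x1, x2]).det()))  # -> 0
```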
The theory of determinants of infinite order was developed in the second half of the 19th century. Infinite determinants are expressions of the type

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} & \cdots \\ a_{21} & a_{22} & a_{23} & \cdots \\ a_{31} & a_{32} & a_{33} & \cdots \\ \cdots & \cdots & \cdots & \cdots \end{vmatrix} \qquad (5)$$

(one-sided infinite determinant) or

$$\begin{vmatrix} \cdots & \cdots & \cdots & \cdots & \cdots \\ \cdots & a_{-1,-1} & a_{-1,0} & a_{-1,1} & \cdots \\ \cdots & a_{0,-1} & a_{0,0} & a_{0,1} & \cdots \\ \cdots & a_{1,-1} & a_{1,0} & a_{1,1} & \cdots \\ \cdots & \cdots & \cdots & \cdots & \cdots \end{vmatrix}$$

(two-sided infinite determinant). The infinite determinant (5) is the limit of the determinant

$$\begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \cdots & \cdots & \cdots & \cdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}$$
as n → ∞. If this limit exists, then the determinant is called convergent; otherwise, it is divergent. The study of a two-sided infinite determinant may sometimes be reduced to the study of a certain one-sided infinite determinant.
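Convergence can be observed numerically. In the sketch below the infinite matrix (with entries 1 + 1/i² on the diagonal, 1/2^i just off the diagonal, and zeros elsewhere) is an arbitrary example chosen for illustration; the determinants of its leading sections settle toward a limit as n grows:

```python
import numpy as np

def section_determinant(n):
    """Determinant of the leading n-by-n section of an infinite matrix
    with 1 + 1/i**2 on the diagonal and 1/2**i just off the diagonal."""
    a = np.zeros((n, n))
    for i in range(n):
        a[i, i] = 1.0 + 1.0 / (i + 1) ** 2
        if i + 1 < n:
            a[i, i + 1] = a[i + 1, i] = 0.5 ** (i + 1)
    return np.linalg.det(a)

for n in (5, 10, 20, 40):
    print(n, section_determinant(n))   # the values stabilize as n grows
```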
The theory of determinants of finite order was created mainly in the second half of the 18th century and the first half of the 19th century by the Swiss mathematician G. Cramer, the French mathematicians A. Vandermonde, P. Laplace, and A. Cauchy, and the German mathematicians K. Gauss and Jacobi. The term “determinant” was proposed by Gauss, and the modern notation is due to the British mathematician A. Cayley.
REFERENCES
See references under LINEAR ALGEBRA and .