In vector calculus, the Jacobian matrix (/dʒəˈkoʊbiən/,[1][2][3] /dʒɪ-, jɪ-/) of a vector-valued function of several variables is the matrix of all its first-order partial derivatives. If this matrix is square, that is, if the number of variables equals the number of components of function values, then its determinant is called the Jacobian determinant. Both the matrix and (if applicable) the determinant are often referred to simply as the Jacobian.[4] They are named after Carl Gustav Jacob Jacobi.
The Jacobian matrix is the natural generalization to vector-valued functions of several variables of the derivative and the differential of a function of a single variable. This generalization includes generalizations of the inverse function theorem and the implicit function theorem, where the non-nullity of the derivative is replaced by the non-nullity of the Jacobian determinant, and the multiplicative inverse of the derivative is replaced by the inverse of the Jacobian matrix.
The Jacobian determinant is fundamentally used for changes of variables in multiple integrals.
Let $f : \mathbb{R}^n \to \mathbb{R}^m$ be a function such that each of its first-order partial derivatives exists on $\mathbb{R}^n$. This function takes a point $\mathbf{x} \in \mathbb{R}^n$ as input and produces the vector $f(\mathbf{x}) \in \mathbb{R}^m$ as output. Then the Jacobian matrix of f, denoted $\mathbf{J}_f$, is the $m \times n$ matrix whose $(i,j)$ entry is $\partial f_i / \partial x_j$; explicitly,
$$\mathbf{J}_f = \begin{bmatrix} \dfrac{\partial f}{\partial x_1} & \cdots & \dfrac{\partial f}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \nabla^{\mathsf T} f_1 \\ \vdots \\ \nabla^{\mathsf T} f_m \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{bmatrix},$$
where $\nabla^{\mathsf T} f_i$ is the transpose (row vector) of the gradient of the $i$-th component.
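Since each entry is just a partial derivative, the Jacobian matrix can be approximated numerically by central differences. A minimal sketch in Python (the helper `numerical_jacobian` and the sample function are illustrative choices, not from the article):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Approximate the m-by-n Jacobian of f at x by central differences.

    Entry (i, j) estimates the partial derivative of the i-th component
    of f with respect to the j-th variable.
    """
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        J[:, j] = (np.asarray(f(x + step)) - np.asarray(f(x - step))) / (2 * eps)
    return J

# Sample map from R^2 to R^2: f(x, y) = (x^2 y, 5x + sin y)
f = lambda v: np.array([v[0]**2 * v[1], 5 * v[0] + np.sin(v[1])])
J = numerical_jacobian(f, [1.0, 2.0])
# Analytic Jacobian at (1, 2): [[2xy, x^2], [5, cos y]]
expected = np.array([[4.0, 1.0], [5.0, np.cos(2.0)]])
assert np.allclose(J, expected, atol=1e-4)
```

The finite-difference estimate agrees with the hand-computed partial derivatives to within the discretization error.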
The Jacobian matrix, whose entries are functions of $\mathbf{x}$, is denoted in various ways; other common notations include $Df$, $\nabla f$, and $\dfrac{\partial(f_1, \ldots, f_m)}{\partial(x_1, \ldots, x_n)}$.[5][6] Some authors define the Jacobian as the transpose of the form given above.
The Jacobian matrix represents the differential of f at every point where f is differentiable. In detail, if h is a displacement vector represented by a column matrix, the matrix product J(x) ⋅ h is another displacement vector, that is the best linear approximation of the change of f in a neighborhood of x, if f(x) is differentiable at x.[a] This means that the function that maps y to f(x) + J(x) ⋅ (y − x) is the best linear approximation of f(y) for all points y close to x. The linear map h → J(x) ⋅ h is known as the derivative or the differential of f at x.
When $m = n$, the Jacobian matrix is square, so its determinant is a well-defined function of x, known as the Jacobian determinant of f. It carries important information about the local behavior of f. In particular, the function f has a differentiable inverse function in a neighborhood of a point x if and only if the Jacobian determinant is nonzero at x (see inverse function theorem for an explanation of this and Jacobian conjecture for a related problem of global invertibility). The Jacobian determinant also appears when changing the variables in multiple integrals (see substitution rule for multiple variables).
When $m = 1$, that is when $f : \mathbb{R}^n \to \mathbb{R}$ is a scalar-valued function, the Jacobian matrix reduces to the row vector $\nabla^{\mathsf T} f$; this row vector of all first-order partial derivatives of $f$ is the transpose of the gradient of $f$, i.e. $\mathbf{J}_f = \nabla^{\mathsf T} f$. Specializing further, when $m = n = 1$, that is when $f : \mathbb{R} \to \mathbb{R}$ is a scalar-valued function of a single variable, the Jacobian matrix has a single entry; this entry is the derivative of the function $f$.
These concepts are named after the mathematician Carl Gustav Jacob Jacobi (1804–1851).
The Jacobian of a vector-valued function in several variables generalizes the gradient of a scalar-valued function in several variables, which in turn generalizes the derivative of a scalar-valued function of a single variable. In other words, the Jacobian matrix of a scalar-valued function of several variables is (the transpose of) its gradient, and the gradient of a scalar-valued function of a single variable is its derivative.
At each point where a function is differentiable, its Jacobian matrix can also be thought of as describing the amount of "stretching", "rotating" or "transforming" that the function imposes locally near that point. For example, if (x′, y′) = f(x, y) is used to smoothly transform an image, the Jacobian matrix J_f(x, y) describes how the image in the neighborhood of (x, y) is transformed.
If a function is differentiable at a point, its differential is given in coordinates by the Jacobian matrix. However, a function does not need to be differentiable for its Jacobian matrix to be defined, since only its first-order partial derivatives are required to exist.
If f is differentiable at a point p in $\mathbb{R}^n$, then its differential is represented by $\mathbf{J}_f(\mathbf{p})$. In this case, the linear transformation represented by $\mathbf{J}_f(\mathbf{p})$ is the best linear approximation of f near the point p, in the sense that
$$f(\mathbf{x}) = f(\mathbf{p}) + \mathbf{J}_f(\mathbf{p})\,(\mathbf{x} - \mathbf{p}) + o(\|\mathbf{x} - \mathbf{p}\|) \quad (\mathbf{x} \to \mathbf{p}),$$
where $o(\|\mathbf{x} - \mathbf{p}\|)$ is a quantity that approaches zero much faster than the distance between x and p does as x approaches p. This approximation specializes to the approximation of a scalar function of a single variable by its Taylor polynomial of degree one, namely
$$f(x) = f(p) + f'(p)(x - p) + o(x - p) \quad (x \to p).$$
In this sense, the Jacobian may be regarded as a kind of "first-order derivative" of a vector-valued function of several variables. In particular, this means that the gradient of a scalar-valued function of several variables may also be regarded as its "first-order derivative".
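That the approximation error is o(‖x − p‖) can be checked numerically: as the displacement shrinks, the error of the linear approximation divided by the size of the displacement tends to zero. A small sketch, where the map f, its hand-computed Jacobian, and the point p are arbitrary illustrative choices:

```python
import numpy as np

def f(v):
    x, y = v
    return np.array([np.exp(x) * y, x * y**2])

def Jf(v):
    # Hand-computed Jacobian of f
    x, y = v
    return np.array([[np.exp(x) * y, np.exp(x)],
                     [y**2, 2 * x * y]])

p = np.array([0.5, 1.5])
ratios = []
for t in [1e-1, 1e-2, 1e-3]:
    h = t * np.array([1.0, -2.0])   # displacement shrinking toward 0
    err = np.linalg.norm(f(p + h) - f(p) - Jf(p) @ h)
    ratios.append(err / np.linalg.norm(h))
# err / ||h|| shrinks with ||h||, confirming err = o(||h||)
assert ratios[0] > ratios[1] > ratios[2]
```

For a smooth map the error is in fact O(‖h‖²), so each tenfold reduction of the displacement reduces the ratio by roughly a factor of ten.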
Composable differentiable functions $f : \mathbb{R}^n \to \mathbb{R}^m$ and $g : \mathbb{R}^m \to \mathbb{R}^k$ satisfy the chain rule, namely $\mathbf{J}_{g \circ f}(\mathbf{x}) = \mathbf{J}_g(f(\mathbf{x}))\, \mathbf{J}_f(\mathbf{x})$ for x in $\mathbb{R}^n$.
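The chain rule can be verified numerically by comparing the product of the two hand-computed Jacobians with a finite-difference Jacobian of the composition. A sketch with illustrative maps f and g:

```python
import numpy as np

def f(v):  # f : R^2 -> R^2
    x, y = v
    return np.array([x * y, x + y])

def Jf(v):
    x, y = v
    return np.array([[y, x], [1.0, 1.0]])

def g(u):  # g : R^2 -> R^2
    a, b = u
    return np.array([np.sin(a), a * b])

def Jg(u):
    a, b = u
    return np.array([[np.cos(a), 0.0], [b, a]])

x = np.array([0.7, -1.2])
# Chain rule: J_{g∘f}(x) = J_g(f(x)) @ J_f(x)
chain = Jg(f(x)) @ Jf(x)

# Cross-check against a central-difference Jacobian of g∘f
eps = 1e-6
num = np.zeros((2, 2))
for j in range(2):
    e = np.zeros(2); e[j] = eps
    num[:, j] = (g(f(x + e)) - g(f(x - e))) / (2 * eps)
assert np.allclose(chain, num, atol=1e-5)
```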
The Jacobian of the gradient of a scalar function of several variables has a special name: the Hessian matrix, which in a sense is the "second derivative" of the function in question.

If m = n, then f is a function from $\mathbb{R}^n$ to itself and the Jacobian matrix is a square matrix. We can then form its determinant, known as the Jacobian determinant. The Jacobian determinant is sometimes simply referred to as "the Jacobian".
The Jacobian determinant at a given point gives important information about the behavior of f near that point. For instance, the continuously differentiable function f is invertible near a point p ∈ $\mathbb{R}^n$ if the Jacobian determinant at p is non-zero. This is the inverse function theorem. Furthermore, if the Jacobian determinant at p is positive, then f preserves orientation near p; if it is negative, f reverses orientation. The absolute value of the Jacobian determinant at p gives us the factor by which the function f expands or shrinks volumes near p; this is why it occurs in the general substitution rule.
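The volume-scaling role of the Jacobian determinant can be illustrated by mapping a tiny square through a function and comparing areas. The map f below is an arbitrary illustrative choice with det J = 2x:

```python
import numpy as np

def f(v):
    x, y = v
    return np.array([x**2, x + y])

def detJ(v):
    x, y = v
    # J = [[2x, 0], [1, 1]], so det J = 2x
    return 2 * x

p = np.array([1.5, 0.0])
h = 1e-4
# The image of a tiny square at p is approximately a parallelogram
# with edge vectors f(p + h e1) - f(p) and f(p + h e2) - f(p).
u = f(p + np.array([h, 0.0])) - f(p)
v = f(p + np.array([0.0, h])) - f(p)
area_image = abs(u[0] * v[1] - u[1] * v[0])   # 2D cross product
area_square = h * h
# Local area-scaling factor matches |det J(p)| = 3
assert abs(area_image / area_square - abs(detJ(p))) < 1e-3
```

Here the unsigned cross product gives the image area; keeping its sign would also reveal whether the map preserves or reverses orientation at p.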
The Jacobian determinant is used when making a change of variables when evaluating a multiple integral of a function over a region within its domain. To accommodate the change of coordinates, the magnitude of the Jacobian determinant arises as a multiplicative factor within the integral. This is because the n-dimensional dV element is in general a parallelepiped in the new coordinate system, and the n-volume of a parallelepiped is the absolute value of the determinant of its edge vectors.
The Jacobian can also be used to determine the stability of equilibria for systems of differential equations by approximating behavior near an equilibrium point.
According to the inverse function theorem, the matrix inverse of the Jacobian matrix of an invertible function $f : \mathbb{R}^n \to \mathbb{R}^n$ is the Jacobian matrix of the inverse function. That is, the Jacobian matrix of the inverse function at a point p is
$$\mathbf{J}_{f^{-1}}(\mathbf{p}) = \left[\mathbf{J}_f\!\left(f^{-1}(\mathbf{p})\right)\right]^{-1},$$
and the Jacobian determinant is
$$\det \mathbf{J}_{f^{-1}}(\mathbf{p}) = \frac{1}{\det \mathbf{J}_f\!\left(f^{-1}(\mathbf{p})\right)}.$$
If the Jacobian is continuous and nonsingular at the point p in $\mathbb{R}^n$, then f is invertible when restricted to some neighbourhood of p. In other words, if the Jacobian determinant is not zero at a point, then the function is locally invertible near this point.
The (unproved) Jacobian conjecture is related to global invertibility in the case of a polynomial function, that is, a function defined by n polynomials in n variables. It asserts that, if the Jacobian determinant is a non-zero constant (or, equivalently, that it does not have any complex zero), then the function is invertible and its inverse is a polynomial function.
If $f : \mathbb{R}^n \to \mathbb{R}^m$ is a differentiable function, a critical point of f is a point where the rank of the Jacobian matrix is not maximal. This means that the rank at the critical point is lower than the rank at some neighbouring point. In other words, let k be the maximal dimension of the open balls contained in the image of f; then a point is critical if all minors of rank k of f are zero.
In the case where m = n = k, a point is critical if the Jacobian determinant is zero.
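A rank computation makes the critical-point criterion concrete. A sketch assuming the simple illustrative map f(x, y) = (x², y), whose Jacobian loses rank exactly where x = 0:

```python
import numpy as np

def J(v):
    x, y = v
    # Jacobian of f(x, y) = (x**2, y)
    return np.array([[2 * x, 0.0],
                     [0.0, 1.0]])

# At a regular point the Jacobian has full rank (det = 2x != 0) ...
assert np.linalg.matrix_rank(J([1.0, 0.0])) == 2
# ... while on the line x = 0 the rank drops, so every such point is critical.
assert np.linalg.matrix_rank(J([0.0, 0.0])) == 1
```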
Consider a function $f : \mathbb{R}^2 \to \mathbb{R}^3$, with $(x, y) \mapsto (f_1(x, y), f_2(x, y), f_3(x, y))$, given for instance by
$$f\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) = \begin{bmatrix} x^2 y \\ 5x + \sin y \\ y^2 \end{bmatrix}.$$
The Jacobian matrix of f is
$$\mathbf{J}_f(x, y) = \begin{bmatrix} 2xy & x^2 \\ 5 & \cos y \\ 0 & 2y \end{bmatrix},$$
a 3 × 2 matrix with one row per component of f and one column per variable.
The transformation from polar coordinates (r, φ) to Cartesian coordinates (x, y) is given by the function $F : \mathbb{R}^+ \times [0, 2\pi) \to \mathbb{R}^2$ with components
$$x = r \cos\varphi, \qquad y = r \sin\varphi.$$
The Jacobian matrix is
$$\mathbf{J}_F(r, \varphi) = \begin{bmatrix} \cos\varphi & -r\sin\varphi \\ \sin\varphi & r\cos\varphi \end{bmatrix},$$
and the Jacobian determinant is equal to r. This can be used to transform integrals between the two coordinate systems:
$$\iint_{F(A)} f(x, y)\, dx\, dy = \iint_A f(r\cos\varphi, r\sin\varphi)\, r\, dr\, d\varphi.$$
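The factor r can be checked by computing the area of the unit disk in polar coordinates with a simple midpoint-rule quadrature (the grid size is an arbitrary choice):

```python
import numpy as np

# Midpoint-rule evaluation of the double integral of r dr dφ over
# r in [0, 1], φ in [0, 2π): the factor r is the Jacobian determinant
# of the polar map, and the integral is the area of the unit disk, π.
n = 400
dr, dphi = 1.0 / n, 2 * np.pi / n
r_mid = (np.arange(n) + 0.5) * dr        # midpoints of the r-cells
area = np.sum(r_mid) * dr * n * dphi     # integrand independent of φ
assert abs(area - np.pi) < 1e-9
```

The midpoint rule is exact here because the integrand is linear in r and constant in φ, so the result matches π to floating-point accuracy.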
The transformation from spherical coordinates (ρ, φ, θ)[7] to Cartesian coordinates (x, y, z) is given by the function $F : \mathbb{R}^+ \times [0, \pi) \times [0, 2\pi) \to \mathbb{R}^3$ with components
$$x = \rho \sin\varphi \cos\theta, \qquad y = \rho \sin\varphi \sin\theta, \qquad z = \rho \cos\varphi.$$
The Jacobian matrix for this coordinate change is
$$\mathbf{J}_F(\rho, \varphi, \theta) = \begin{bmatrix} \sin\varphi\cos\theta & \rho\cos\varphi\cos\theta & -\rho\sin\varphi\sin\theta \\ \sin\varphi\sin\theta & \rho\cos\varphi\sin\theta & \rho\sin\varphi\cos\theta \\ \cos\varphi & -\rho\sin\varphi & 0 \end{bmatrix}.$$
The determinant is $\rho^2 \sin\varphi$. Since dV = dx dy dz is the volume for a rectangular differential volume element (because the volume of a rectangular prism is the product of its sides), we can interpret dV = ρ² sin φ dρ dφ dθ as the volume of the spherical differential volume element. Unlike the volume of a rectangular differential volume element, this volume is not constant, but varies with the coordinates ρ and φ. It can be used to transform integrals between the two coordinate systems:
$$\iiint_{F(U)} f(x, y, z)\, dx\, dy\, dz = \iiint_U f(\rho\sin\varphi\cos\theta,\ \rho\sin\varphi\sin\theta,\ \rho\cos\varphi)\, \rho^2 \sin\varphi\, d\rho\, d\varphi\, d\theta.$$
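The factor ρ² sin φ can likewise be checked by recovering the volume of the unit ball, 4π/3, with a midpoint-rule quadrature (grid size chosen arbitrarily):

```python
import numpy as np

# Midpoint-rule evaluation of the triple integral of ρ² sin φ dρ dφ dθ
# over ρ in [0, 1], φ in [0, π], θ in [0, 2π): the integrand is the
# Jacobian determinant, and the result is the unit-ball volume 4π/3.
n = 200
drho, dphi = 1.0 / n, np.pi / n
rho_mid = (np.arange(n) + 0.5) * drho
phi_mid = (np.arange(n) + 0.5) * dphi
# The integrand factors, and the θ integral contributes 2π exactly.
vol = (np.sum(rho_mid**2) * drho) * (np.sum(np.sin(phi_mid)) * dphi) * (2 * np.pi)
assert abs(vol - 4 * np.pi / 3) < 1e-3
```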
The Jacobian matrix of the function $F : \mathbb{R}^3 \to \mathbb{R}^4$ with components
$$y_1 = x_1, \qquad y_2 = 5x_3, \qquad y_3 = 4x_2^2 - 2x_3, \qquad y_4 = x_3 \sin x_1$$
is
$$\mathbf{J}_F(x_1, x_2, x_3) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 5 \\ 0 & 8x_2 & -2 \\ x_3\cos x_1 & 0 & \sin x_1 \end{bmatrix}.$$
This example shows that the Jacobian matrix need not be a square matrix.
The Jacobian determinant of the function $F : \mathbb{R}^3 \to \mathbb{R}^3$ with components
$$F\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = \begin{bmatrix} 5x_2 \\ 4x_1^2 - 2\sin(x_2 x_3) \\ x_2 x_3 \end{bmatrix}$$
is
$$\det \begin{bmatrix} 0 & 5 & 0 \\ 8x_1 & -2x_3\cos(x_2 x_3) & -2x_2\cos(x_2 x_3) \\ 0 & x_3 & x_2 \end{bmatrix} = -5 \det \begin{bmatrix} 8x_1 & -2x_2\cos(x_2 x_3) \\ 0 & x_2 \end{bmatrix} = -40 x_1 x_2.$$
From this we see that F reverses orientation near those points where x₁ and x₂ have the same sign; the function is locally invertible everywhere except near points where x₁ = 0 or x₂ = 0. Intuitively, if one starts with a tiny object around the point (1, 2, 3) and applies F to that object, one will get a resulting object with approximately 40 × 1 × 2 = 80 times the volume of the original one, with orientation reversed.
Consider a dynamical system of the form $\dot{\mathbf{x}} = F(\mathbf{x})$, where $\dot{\mathbf{x}}$ is the (component-wise) derivative of $\mathbf{x}$ with respect to the evolution parameter $t$ (time), and $F : \mathbb{R}^n \to \mathbb{R}^n$ is differentiable. If $F(\mathbf{x}_0) = 0$, then $\mathbf{x}_0$ is a stationary point (also called a steady state). By the Hartman–Grobman theorem, the behavior of the system near a stationary point is related to the eigenvalues of $\mathbf{J}_F(\mathbf{x}_0)$, the Jacobian of F at the stationary point.[8] Specifically, if the eigenvalues all have real parts that are negative, then the system is stable near the stationary point. If any eigenvalue has a real part that is positive, then the point is unstable. If the largest real part of the eigenvalues is zero, the Jacobian matrix does not allow for an evaluation of the stability.[9]
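The eigenvalue test is easy to carry out numerically. A sketch using a damped pendulum as an illustrative system (the damping coefficient 0.5 is an arbitrary choice):

```python
import numpy as np

# Damped pendulum: x' = y, y' = -sin(x) - 0.5*y; (0, 0) is a stationary point.
def jacobian_at_origin():
    # J = [[dx'/dx, dx'/dy], [dy'/dx, dy'/dy]] evaluated at (0, 0)
    return np.array([[0.0, 1.0],
                     [-np.cos(0.0), -0.5]])

eigs = np.linalg.eigvals(jacobian_at_origin())
# All eigenvalues have negative real part, so the equilibrium is stable.
assert np.all(eigs.real < 0)
```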
A square system of coupled nonlinear equations can be solved iteratively by Newton's method. This method uses the Jacobian matrix of the system of equations.
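A minimal Newton iteration for a 2 × 2 system, where the system, its hand-computed Jacobian, and the initial guess are all illustrative choices:

```python
import numpy as np

def F(v):
    # System: x^2 + y^2 = 4 and x*y = 1, written as F(v) = 0
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def J(v):
    # Jacobian of F
    x, y = v
    return np.array([[2 * x, 2 * y],
                     [y, x]])

v = np.array([2.0, 0.5])                 # initial guess
for _ in range(20):
    v = v - np.linalg.solve(J(v), F(v))  # Newton step: solve J(v) dv = F(v)
assert np.allclose(F(v), 0.0, atol=1e-10)
```

Each step solves a linear system with the Jacobian rather than forming its inverse, which is the usual numerically preferred formulation.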
The Jacobian serves as a linearized design matrix in statistical regression and curve fitting; see non-linear least squares. The Jacobian is also used in random matrices, moments, local sensitivity and statistical diagnostics.[10][11]