Differential equation containing derivatives with respect to only one variable
The trajectory of a projectile launched from a cannon follows a curve determined by an ordinary differential equation that is derived from Newton's second law.
In mathematics, an ordinary differential equation (ODE) is a differential equation (DE) dependent on only a single independent variable. As with any other DE, its unknowns consist of one or more functions, and it involves the derivatives of those functions.[1] The term "ordinary" is used in contrast with partial differential equations (PDEs), which may involve more than one independent variable,[2] and, less commonly, in contrast with stochastic differential equations (SDEs), where the progression is random.[3]
A linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is, an equation of the form
$$a_0(x)y + a_1(x)y' + a_2(x)y'' + \cdots + a_n(x)y^{(n)} + b(x) = 0,$$
where $a_0(x), \ldots, a_n(x)$ and $b(x)$ are arbitrary differentiable functions that do not need to be linear, and $y', \ldots, y^{(n)}$ are the successive derivatives of the unknown function $y$ of the variable $x$.[4]
Among ordinary differential equations, linear differential equations play a prominent role for several reasons. Most elementary and special functions that are encountered in physics and applied mathematics are solutions of linear differential equations (see Holonomic function). When physical phenomena are modeled with non-linear equations, they are generally approximated by linear differential equations for an easier solution. The few non-linear ODEs that can be solved explicitly are generally solved by transforming the equation into an equivalent linear ODE (see, for example, the Riccati equation).[5]
Ordinary differential equations (ODEs) arise in many contexts of mathematics and social and natural sciences. Mathematical descriptions of change use differentials and derivatives. Various differentials, derivatives, and functions become related via equations, such that a differential equation is a result that describes dynamically changing phenomena, evolution, and variation. Often, quantities are defined as the rate of change of other quantities (for example, derivatives of displacement with respect to time), or gradients of quantities, which is how they enter differential equations.[7]
A simple example is Newton's second law of motion: the relationship between the displacement $x$ and the time $t$ of an object under the force $F$ is given by the differential equation
$$m \frac{d^2 x(t)}{dt^2} = F(x(t)),$$
which constrains the motion of a particle of constant mass $m$. In general, $F$ is a function of the position $x(t)$ of the particle at time $t$. The unknown function $x(t)$ appears on both sides of the differential equation, and is indicated in the notation $F(x(t))$.[9][10][11][12]
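To make this concrete, the second-order equation can be integrated numerically once a specific force is chosen. The following is a minimal sketch (not part of the cited sources) that assumes a linear restoring force $F(x) = -kx$, rewrites the equation as a first-order system in the state $(x, v)$, and solves it with SciPy's solve_ivp; the mass, stiffness, and initial conditions are arbitrary illustrative values.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 4.0  # assumed mass and stiffness, chosen for illustration only

def rhs(t, state):
    """m x'' = F(x) with F(x) = -k x, rewritten as x' = v, v' = F(x)/m."""
    x, v = state
    return [v, -k * x / m]

sol = solve_ivp(rhs, t_span=(0.0, 10.0), y0=[1.0, 0.0],
                rtol=1e-9, atol=1e-9, dense_output=True)

# For these values the closed-form solution is x(t) = cos(2 t); compare.
for t in np.linspace(0.0, 10.0, 5):
    print(f"t = {t:4.1f}   numeric x = {sol.sol(t)[0]: .4f}   exact x = {np.cos(2 * t): .4f}")
```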
A number of coupled differential equations form a system of equations. If $\mathbf{y}$ is a vector whose elements are functions, $\mathbf{y}(x) = [y_1(x), y_2(x), \ldots, y_m(x)]$, and $\mathbf{F}$ is a vector-valued function of $\mathbf{y}$ and its derivatives, then
$$\mathbf{y}^{(n)} = \mathbf{F}\left(x, \mathbf{y}, \mathbf{y}', \ldots, \mathbf{y}^{(n-1)}\right)$$
is an explicit system of ordinary differential equations of order $n$ and dimension $m$. In column vector form:
$$\begin{pmatrix} y_1^{(n)} \\ y_2^{(n)} \\ \vdots \\ y_m^{(n)} \end{pmatrix} = \begin{pmatrix} F_1\left(x, \mathbf{y}, \mathbf{y}', \ldots, \mathbf{y}^{(n-1)}\right) \\ F_2\left(x, \mathbf{y}, \mathbf{y}', \ldots, \mathbf{y}^{(n-1)}\right) \\ \vdots \\ F_m\left(x, \mathbf{y}, \mathbf{y}', \ldots, \mathbf{y}^{(n-1)}\right) \end{pmatrix}.$$
These are not necessarily linear. The implicit analogue is:
$$\mathbf{F}\left(x, \mathbf{y}, \mathbf{y}', \ldots, \mathbf{y}^{(n)}\right) = \boldsymbol{0},$$
where $\boldsymbol{0}$ is the zero vector.
For a system of the form $\mathbf{F}\left(x, \mathbf{y}, \mathbf{y}'\right) = \boldsymbol{0}$, some sources also require that the Jacobian matrix $\partial \mathbf{F}(x, \mathbf{y}, \mathbf{y}') / \partial \mathbf{y}'$ be non-singular in order to call this an implicit ODE [system]; an implicit ODE system satisfying this Jacobian non-singularity condition can be transformed into an explicit ODE system. In the same sources, implicit ODE systems with a singular Jacobian are termed differential algebraic equations (DAEs). This distinction is not merely one of terminology; DAEs have fundamentally different characteristics and are generally more involved to solve than (nonsingular) ODE systems.[19][20][21] Presumably for additional derivatives, the Hessian matrix and so forth are also assumed non-singular according to this scheme,[citation needed] although note that any ODE of order greater than one can be (and usually is) rewritten as a system of ODEs of first order,[22] which makes the Jacobian singularity criterion sufficient for this taxonomy to be comprehensive at all orders.
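As an illustration of the last point, an explicit higher-order equation can be reduced to first order by introducing the lower-order derivatives as new unknowns. For a single second-order equation $y'' = f(x, y, y')$, setting $u_1 = y$ and $u_2 = y'$ gives the equivalent first-order system
$$u_1' = u_2, \qquad u_2' = f(x, u_1, u_2),$$
and the same substitution applied to each component turns an order-$n$, dimension-$m$ system into a first-order system of dimension $nm$.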
The behavior of a system of ODEs can be visualized through the use of a phase portrait.
Given a differential equation
$$F\left(x, y, y', \ldots, y^{(n)}\right) = 0,$$
a function $u \colon I \subset \mathbb{R} \to \mathbb{R}$, where $I$ is an interval, is called a solution or integral curve for $F$, if $u$ is $n$-times differentiable on $I$, and
$$F\left(x, u, u', \ldots, u^{(n)}\right) = 0 \quad \text{for } x \in I.$$
Given two solutions $u \colon J \subset \mathbb{R} \to \mathbb{R}$ and $v \colon I \subset \mathbb{R} \to \mathbb{R}$, $u$ is called an extension of $v$ if $I \subset J$ and
$$u|_I = v.$$
A solution that has no extension is called a maximal solution. A solution defined on all of $\mathbb{R}$ is called a global solution.
A general solution of an $n$th-order equation is a solution containing $n$ arbitrary independent constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to fulfill set initial conditions or boundary conditions.[23] A singular solution is a solution that cannot be obtained by assigning definite values to the arbitrary constants in the general solution.[24]
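A classical illustration (a standard example, not tied to the cited references) is the Clairaut equation $y = x y' + (y')^2$. Its general solution is the one-parameter family of lines $y = Cx + C^2$, and each choice of the constant $C$ gives a particular solution. The envelope of this family, $y = -x^2/4$, also satisfies the equation but cannot be obtained from the general solution by any choice of $C$; it is a singular solution.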
In the context of linear ODE, the terminology particular solution can also refer to any solution of the ODE (not necessarily satisfying the initial conditions), which is then added to the homogeneous solution (a general solution of the homogeneous ODE), which then forms a general solution of the original ODE. This is the terminology used in the guessing method section in this article, and is frequently used when discussing the method of undetermined coefficients and variation of parameters.
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration,[25] meaning that, by its own dynamics, the system reaches the value zero at an ending time and stays at zero forever after. These finite-duration solutions cannot be analytic functions on the whole real line, and because they are non-Lipschitz functions at their ending time, they are not covered by the uniqueness theorem for solutions of Lipschitz differential equations.
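A simple example of this behaviour (an illustration of the general statement, not drawn from the cited reference) is the equation $y' = -\operatorname{sgn}(y)\sqrt{|y|}$. Starting from $y(0) = y_0 > 0$, the solution $y(t) = \tfrac{1}{4}\left(2\sqrt{y_0} - t\right)^2$ for $t \le 2\sqrt{y_0}$, and $y(t) = 0$ afterwards, reaches zero at the finite time $t = 2\sqrt{y_0}$ and remains there; the right-hand side fails to be Lipschitz at $y = 0$, which is why uniqueness does not force the solution to stay away from zero.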
The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century has it received special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (from 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.
The primitive attempt in dealing with differential equations had in view a reduction to quadratures, that is, expressing the solutions in terms of known functions and their integrals. While this is possible for linear equations with constant coefficients, it appeared in the 19th century that this is generally impossible in other cases. Hence, analysts began to study, in their own right, the functions that are solutions of differential equations, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter, the real question was no longer whether a solution is possible by quadratures, but whether a given differential equation suffices for the definition of a function, and, if so, what are the characteristic properties of such functions.
Two memoirs by Fuchs[26] inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869. His method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve that remains unchanged under a rational transformation, Clebsch proposed to classify the transcendent functions defined by differential equations according to the invariant properties of the corresponding surfaces under rational one-to-one transformations.
From 1870, Sophus Lie's work put the theory of differential equations on a better foundation. He showed that the integration theories of the older mathematicians can, using Lie groups, be referred to a common source, and that ordinary differential equations that admit the same infinitesimal transformations present comparable integration difficulties. He also emphasized the subject of transformations of contact.
The value of Lie's group theory of differential equations has been established, namely: (1) it unifies the many ad hoc methods known for solving differential equations, and (2) it provides powerful new ways to find solutions. The theory has applications to both ordinary and partial differential equations.[27]
A general solution approach uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras, and differential geometry are used to understand the structure of linear and non-linear (partial) differential equations, to generate integrable equations, to find their Lax pairs, recursion operators, and Bäcklund transforms, and finally to find exact analytic solutions to the DE.
Symmetry methods have been applied to differential equations that arise in mathematics, physics, engineering, and other disciplines.
Sturm–Liouville theory is a theory of a special type of second-order linear ordinary differential equation. Its solutions are based on eigenvalues and corresponding eigenfunctions of linear operators defined via second-order homogeneous linear equations. The problems are identified as Sturm–Liouville problems (SLP) and are named after J. C. F. Sturm and J. Liouville, who studied them in the mid-1800s. SLPs have an infinite number of eigenvalues, and the corresponding eigenfunctions form a complete, orthogonal set, which makes orthogonal expansions possible. This is a key idea in applied mathematics, physics, and engineering.[28] SLPs are also useful in the analysis of certain partial differential equations.
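In its standard form (a well-known formulation, included here for orientation), a Sturm–Liouville problem asks for the values of $\lambda$ for which
$$-\frac{d}{dx}\!\left(p(x)\frac{dy}{dx}\right) + q(x)\,y = \lambda\, w(x)\, y$$
has non-trivial solutions on an interval subject to prescribed boundary conditions. The simplest case, $y'' + \lambda y = 0$ with $y(0) = y(\pi) = 0$, has eigenvalues $\lambda_n = n^2$ and eigenfunctions $y_n(x) = \sin(nx)$ for $n = 1, 2, 3, \ldots$, which form the orthogonal family underlying Fourier sine series.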
There are several theorems that establish existence and uniqueness of solutions to initial value problems involving ODEs both locally and globally. The two main theorems are the Peano existence theorem, which assumes only continuity of the right-hand side and guarantees local existence, and the Picard–Lindelöf theorem, which additionally assumes Lipschitz continuity and guarantees local existence and uniqueness.
In their basic form both of these theorems only guarantee local results, though the latter can be extended to give a global result, for example, if the conditions of Grönwall's inequality are met.
Also, uniqueness theorems like the Lipschitz one above do not apply to DAE systems, which may have multiple solutions stemming from their (non-linear) algebraic part alone.[29]
The theorem can be stated simply as follows.[30] For the equation and initial value problem
$$y' = F(x, y), \qquad y(x_0) = y_0,$$
if $F$ and $\partial F / \partial y$ are continuous in a closed rectangle
$$R = [x_0 - a, x_0 + a] \times [y_0 - b, y_0 + b]$$
in the $x$–$y$ plane, where $a$ and $b$ are real (symbolically: $a, b \in \mathbb{R}$) and $\times$ denotes the Cartesian product, square brackets denote closed intervals, then there is an interval
$$I = [x_0 - h, x_0 + h] \subset [x_0 - a, x_0 + a]$$
for some $h \in \mathbb{R}$ where the solution to the above equation and initial value problem can be found. That is, there is a solution and it is unique. Since there is no restriction on $F$ to be linear, this applies to non-linear equations that take the form $F(x, y)$, and it can also be applied to systems of equations.
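The standard proof of the Picard–Lindelöf theorem is constructive, based on successive approximations, and the iteration can be carried out symbolically. The following minimal sketch (the helper function and the choice of example are illustrative, not from the article) uses SymPy to compute the first few Picard iterates for $y' = y$, $y(0) = 1$; they are the partial sums of the Taylor series of $e^x$, the unique solution.

```python
import sympy as sp

x = sp.symbols('x')

def picard_iterates(f, x0, y0, n):
    """Return the first n Picard iterates for y' = f(x, y), y(x0) = y0."""
    t = sp.symbols('t')
    phi = sp.Integer(y0)                 # phi_0(x) = y0
    iterates = [phi]
    for _ in range(n):
        # phi_{k+1}(x) = y0 + integral from x0 to x of f(t, phi_k(t)) dt
        integrand = f(t, phi.subs(x, t))
        phi = y0 + sp.integrate(integrand, (t, x0, x))
        iterates.append(sp.expand(phi))
    return iterates

# y' = y, y(0) = 1: prints 1, 1 + x, 1 + x + x**2/2, ...
for phi in picard_iterates(lambda t, y: y, 0, 1, 4):
    print(phi)
```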
When the hypotheses of the Picard–Lindelöf theorem are satisfied, then local existence and uniqueness can be extended to a global result. More precisely:[31]
For each initial condition $(x_0, y_0)$ there exists a unique maximum (possibly infinite) open interval
$$I_{\max} = (x_-, x_+), \qquad x_\pm \in \mathbb{R} \cup \{\pm\infty\}, \qquad x_0 \in I_{\max},$$
such that any solution that satisfies this initial condition is a restriction of the solution that satisfies this initial condition with domain $I_{\max}$.
In the case that $x_\pm \ne \pm\infty$, there are exactly two possibilities:
explosion in finite time: $\limsup_{x \to x_\pm} \|y(x)\| \to \infty$,
leaves domain of definition: $\lim_{x \to x_\pm} y(x) \in \partial \bar{\Omega}$,
where $\Omega$ is the open set in which $F$ is defined, and $\partial \bar{\Omega}$ is its boundary.
Note that the maximum domain of the solution
is always an interval (to have uniqueness),
may be smaller than $\mathbb{R}$,
may depend on the specific choice of $(x_0, y_0)$.
Example:
$$y' = y^2.$$
This means that $F(x, y) = y^2$, which is $C^1$ and therefore locally Lipschitz continuous, satisfying the Picard–Lindelöf theorem.
Even in such a simple setting, the maximum domain of solution cannot be all of $\mathbb{R}$, since the solution is
$$y(x) = \frac{y_0}{(x_0 - x)\, y_0 + 1},$$
which has maximum domain:
$$\begin{cases} \mathbb{R} & \text{if } y_0 = 0, \\[2pt] \left(-\infty,\; x_0 + \tfrac{1}{y_0}\right) & \text{if } y_0 > 0, \\[2pt] \left(x_0 + \tfrac{1}{y_0},\; +\infty\right) & \text{if } y_0 < 0. \end{cases}$$
This shows clearly that the maximum interval may depend on the initial conditions. The domain of $y$ could be taken as being $\mathbb{R} \setminus \left\{x_0 + \tfrac{1}{y_0}\right\}$, but this would lead to a domain that is not an interval, so that the side opposite to the initial condition would be disconnected from the initial condition, and therefore not uniquely determined by it.
The maximum domain is not $\mathbb{R}$ because
$$\limsup_{x \to x_\pm} \|y(x)\| \to \infty,$$
which is one of the two possible cases according to the above theorem.
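The blow-up can also be observed numerically. The following minimal sketch (an illustration with arbitrary tolerances) integrates $y' = y^2$ with $y(0) = 1$ up to just before the blow-up time $x = 1$ and compares the result with the exact solution $y(x) = 1/(1 - x)$, whose values grow without bound as $x \to 1^-$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# y' = y^2 with y(0) = 1; exact solution y(x) = 1/(1 - x), maximal domain (-inf, 1).
sol = solve_ivp(lambda x, y: y**2, t_span=(0.0, 0.99), y0=[1.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

for x in (0.0, 0.5, 0.9, 0.99):
    print(f"x = {x:4.2f}   numeric y = {sol.sol(x)[0]:10.4f}   exact y = {1.0 / (1.0 - x):10.4f}")
```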
Some differential equations have solutions that can be written in an exact and closed form. Several important classes are given here.
In the table below, $P(x)$, $Q(x)$, $P(y)$, $Q(y)$, and $M(x, y)$, $N(x, y)$ are any integrable functions of $x$ or $y$; $b$ and $c$ are real given constants; and $C_1, C_2, \ldots$ are arbitrary constants (complex in general). The differential equations are in their equivalent and alternative forms that lead to the solution through integration.
In the integral solutions, $\lambda$ and $\varepsilon$ are dummy variables of integration (the continuum analogues of indices in summation), and the notation $\int^{x} F(\lambda)\, d\lambda$ just means to integrate $F(\lambda)$ with respect to $\lambda$, then after the integration substitute $\lambda = x$, without adding constants (explicitly stated).
Since the $\alpha_j$ are the solutions of the polynomial of degree $n$, $\prod_{j=1}^{n} (\alpha - \alpha_j) = 0$, then:
for all $\alpha_j$ mutually different, $y = \sum_{j=1}^{n} C_j e^{\alpha_j x}$;
for each root $\alpha_j$ repeated $k_j$ times, $y = \sum_{j=1}^{n} \left(\sum_{\ell=1}^{k_j} C_{j,\ell}\, x^{\ell - 1}\right) e^{\alpha_j x}$;
for some $\alpha_j$ complex, setting $\alpha_j = \chi_j + i \gamma_j$ and using Euler's formula allows some terms in the previous results to be written in the form $C_j e^{\chi_j x} \cos(\gamma_j x + \varphi_j)$, where $\varphi_j$ is an arbitrary constant (phase shift).
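As a concrete instance of the constant-coefficient case, the following short sketch (the particular equation $y'' - 3y' + 2y = 0$ is an arbitrary example, not from the article) finds the characteristic roots numerically and cross-checks the resulting general solution $y = C_1 e^{x} + C_2 e^{2x}$ against SymPy's general-purpose solver.

```python
import numpy as np
import sympy as sp

# Characteristic polynomial of y'' - 3 y' + 2 y = 0 is r^2 - 3 r + 2 = 0.
print(np.roots([1, -3, 2]))          # -> [2. 1.], so y = C1*exp(x) + C2*exp(2*x)

# Cross-check with SymPy's ODE solver.
x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x, 2) - 3 * y(x).diff(x) + 2 * y(x), 0)
print(sp.dsolve(ode, y(x)))          # same general solution, written equivalently
```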
When all other methods for solving an ODE fail, or in the cases where we have some intuition about what the solution to a DE might look like, it is sometimes possible to solve a DE simply by guessing the solution and verifying that it is correct. To use this method, we guess a solution to the differential equation and then plug it into the differential equation to check whether it satisfies the equation. If it does, then we have a particular solution to the DE; otherwise, we start over again and try another guess. For instance, we could guess that the solution to a DE has the form $y = A e^{\alpha t}$, since this is a very common solution that physically behaves in a sinusoidal way.
In the case of a first order ODE that is non-homogeneous, we need to first find a solution to the homogeneous portion of the DE, otherwise known as the associated homogeneous equation, and then find a solution to the entire non-homogeneous equation by guessing. Finally, we add both of these solutions together to obtain the general solution to the ODE, that is:
$$y = y_h + y_p.$$
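A minimal sketch of this procedure (the equation $y' + y = 2$ and the variable names are illustrative choices, not from the article) uses SymPy to verify a constant guess for the particular solution and assemble the general solution:

```python
import sympy as sp

x, A, C = sp.symbols('x A C')

# Illustrative non-homogeneous ODE: y' + y = 2.
# Step 1: the associated homogeneous equation y' + y = 0 has solution y_h = C*exp(-x).
y_h = C * sp.exp(-x)

# Step 2: guess a constant particular solution y_p = A and substitute it into the ODE.
y_p = A
residual = sp.diff(y_p, x) + y_p - 2   # must vanish for the guess to be a solution
A_val = sp.solve(residual, A)[0]       # gives A = 2, so the guess works

# Step 3: the general solution is y = y_h + y_p.
print(y_h + y_p.subs(A, A_val))        # -> C*exp(-x) + 2
```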
^ Vardia T. Haimo (1985). "Finite Time Differential Equations". 1985 24th IEEE Conference on Decision and Control. pp. 1729–1733. doi:10.1109/CDC.1985.268832. S2CID 45426376.
^ a b Mathematical Handbook of Formulas and Tables (3rd edition), S. Lipschutz, M. R. Spiegel, J. Liu, Schaum's Outline Series, 2009, ISBN 978-0-07-154855-7
^ Further Elementary Analysis, R. Porter, G. Bell & Sons (London), 1978, ISBN 0-7135-1594-5
^ a b Mathematical methods for physics and engineering, K. F. Riley, M. P. Hobson, S. J. Bence, Cambridge University Press, 2010, ISBN 978-0-521-86153-3
Polyanin, A. D. and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations (2nd edition), Chapman & Hall/CRC Press, Boca Raton, 2003. ISBN 1-58488-297-2
Ascher, Uri; Petzold, Linda (1998), Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, ISBN 978-1-61197-139-2
A. D. Polyanin, V. F. Zaitsev, and A. Moussiaux, Handbook of First Order Partial Differential Equations, Taylor & Francis, London, 2002. ISBN 0-415-27267-X
D. Zwillinger,Handbook of Differential Equations (3rd edition), Academic Press, Boston, 1997.
Modeling with ODEs using Scilab: a tutorial on how to model a physical system described by ODEs using the Scilab standard programming language, by the Openeering team.