In mathematics, physics, engineering and systems theory, a dynamical system is a description of how a system evolves in time. The observables of the system are expressed as numbers and recorded over time.[1]
For example, an astronomer can experimentally record the positions of the planets as they move across the sky, and this record can be considered a sufficiently complete description of a dynamical system. In the case of the planets there is also enough knowledge to codify this information as a set of differential equations with initial conditions, as a map from the present state to a future state in a predefined state space with a time parameter t, or as an orbit in phase space.[2]
The concept of a dynamical system has its origins in Newtonian mechanics and, more precisely, in celestial mechanics. There, as in other natural sciences and engineering disciplines, there is a need to predict the evolution of the system, but also to pose other questions such as stability, qualitative or long-term behaviour, dependence on parameters, and the existence of periodic, stochastic or chaotic behaviour.[9] The relation between one state and another is either explicit, such as a function of the parameter t predicting the position and velocity of a particle, or implicit, such as a differential equation, difference equation or other time scale. Sometimes it may not be possible to give such a description: there may be no differential equation predicting a stock price, or it may be impossible to build one, yet stock prices can still be treated as a dynamical system on the basis of experimental data changing over time.[10]
If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit.
Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems.[13] Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system.[14]
For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:
The systems studied may only be known approximately: the parameters of the system may not be known precisely, or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their equivalence changes with the different notions of stability.[15]
The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, of properties that do not change under coordinate changes. Linear dynamical systems and systems that have two numbers describing a state are examples of dynamical systems where the possible classes of orbits are understood.[16]
The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the transition to turbulence of a fluid.[17]
The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of chaos.[18]
Gallery of examples (figure captions):
Three body problem: approximate trajectories of three identical bodies located at the vertices of a scalene triangle and having zero initial velocities.
Arnold cat map: picture showing how the linear map stretches the unit square and how its pieces are rearranged when the modulo operation is performed; the lines with the arrows show the direction of the contracting and expanding eigenspaces.
Baker's map: example of a measure that is invariant under the action of the (unrotated) baker's map, an invariant measure; applying the baker's map to this image always results in exactly the same image.
Billiards: a particle moving inside the Bunimovich stadium, a well-known chaotic billiard.
The recursive application of a complex quadratic polynomial as a map of the complex plane gives a dynamical system; here a dynamical plane with a Julia set and a critical orbit.
A two-dimensional Poincaré section of the forced Duffing equation.
Many people regard the French mathematician Henri Poincaré as the founder of dynamical systems.[19] Poincaré published two now-classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotics, and so on). These works included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state.[20]
Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system.[21]
The Smale horseshoe map f is the composition of three geometrical transformations.
Stephen Smale made significant advances as well. His first contribution was the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others.
From a mathematical perspective, in the most general case the state space X is treated as a generic set in the sense of abstract algebra. Time is modelled by a semi-group (i.e. only associativity is required), for which there is most often a natural choice of identity element, typically attached to the origin of the chosen reference frame; this semi-group can be intuitively interpreted as the time coordinate t.[26] Time indeed has an addition operation and an origin, the identity, like a group. The action of the semi-group on X is a family of maps from X to itself, parametrized by the time t, and this is intuitively the time evolution.[27]
Time can also be generalized as a generic set of continuous parameters; for example, the control parameters of a robotic arm can form a manifold such as a torus. There is no need for time to have a direction, to be smooth, or even to have any meaning resembling the intuition of time; in fact it can be generalized to even more general algebraic objects.[33]
Time is typically considered an external parameter, as in classical and quantum mechanics; this is usually called the time-domain representation, and it goes hand in hand with the Hamiltonian mechanics formulation. This is not always the case: general relativity, for example, is frame independent,[34] and gravity has an influence on time too, while in quantum electrodynamics the Lagrangian mechanics formulation is more common,[35] where time and space are on the same footing. In both cases the literature still speaks of dynamical systems.
Time can also be a discrete parameter. When time is generalized to the multi-dimensional case, i.e. to a general set of control or external parameters, this space can be interpreted as a lattice, i.e. as the discrete points of a manifold or the ticks of a stock price.[36] Discrete time events can therefore be counted by integers, for example like the measurements of the positions of the planets in the sky, but this can be very different from the intuition of time as a clock with equispaced events. One of the typical tasks is to extract some mathematical model from the data.[37]
The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state.[38][39] However, some systems are not deterministic: they may allow multiple future states (i.e. the maps are generalized into multivalued functions and are not uniquely defined everywhere) and the system can be subject to a bifurcation. Last but not least there may be chaotic systems (i.e. deterministic but not predictable) or quantum systems (i.e. deterministic until they are measured).
Some systems are also stochastic, either in the input parameters, such as an oscillator with a random force, or in the initial conditions, or in the predicted variables, as in a stochastic differential equation. In that case random events also affect the evolution of the state variables, including stochastic jump processes, which are not continuous; a prototypical example of a stochastic dynamical system is the evolution of stock prices.[40]
In the most general sense, a dynamical system is a tuple (T, X, Φ), where T is a monoid written additively, X is a non-empty set and Φ : U ⊆ T × X → X is a function satisfying

Φ(0, x) = x,
Φ(t2, Φ(t1, x)) = Φ(t2 + t1, x),

for t1, t2 and t2 + t1 in I(x), where we have defined the set I(x) := {t ∈ T : (t, x) ∈ U} for any x in X.
In particular, in the case that U = T × X, we have for every x in X that I(x) = T, and thus that Φ defines a monoid action of T on X.
The function Φ(t, x) is called the evolution function of the dynamical system: it associates to every point x in the set X a unique image, depending on the variable t, called the evolution parameter. X is called phase space or state space, while the variable x represents an initial state of the system.
We often write

Φx(t) ≡ Φ(t, x)

if we take one of the variables as constant. The function

Φx : T → X

is called the flow through x and its graph is called the trajectory through x. The set

γx ≡ {Φ(t, x) : t ∈ I(x)}

is called the orbit through x. The orbit through x is the image of the flow through x. A subset S of the state space X is called Φ-invariant if for all x in S and all t in T

Φ(t, x) ∈ S.

Thus, in particular, if S is Φ-invariant, I(x) = T for all x in S. That is, the flow through x must be defined for all time for every element of S.
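As a concrete illustration (a minimal sketch; the map x → x² and the set [0, 1] are assumptions chosen for the example, not taken from the text), the notions of evolution function, orbit and Φ-invariant set can be checked numerically for a discrete-time system:

```python
# Minimal sketch: orbit and invariance for the discrete-time system x -> x**2.
# The set S = [0, 1] is Phi-invariant: every orbit starting in S stays in S.

def phi(n, x):
    """Evolution function Phi(n, x): apply the map n times to the initial state x."""
    for _ in range(n):
        x = x * x
    return x

def orbit(x, n_steps):
    """The orbit through x: the points Phi(n, x) for n = 0, 1, ..., n_steps."""
    return [phi(n, x) for n in range(n_steps + 1)]

x0 = 0.9
print(orbit(x0, 5))                                   # points of the orbit through 0.9
print(all(0.0 <= y <= 1.0 for y in orbit(x0, 50)))    # the orbit stays in the invariant set [0, 1]
```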
In the geometrical definition, a dynamical system is the tuple ⟨T, M, f⟩. T is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. M is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. f is an evolution rule t → f^t (with t ∈ T) such that f^t is a diffeomorphism of the manifold to itself. So, f is a "smooth" mapping of the time domain T into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism, for every time t in the domain T.
A real dynamical system, real-time dynamical system, continuous time dynamical system, or flow is a tuple (T, M, Φ) with T an open interval in the real numbers R, M a manifold locally diffeomorphic to a Banach space, and Φ a continuous function. If Φ is continuously differentiable the system is called a differentiable dynamical system. If the manifold M is locally diffeomorphic to R^n, the dynamical system is finite-dimensional; if not, the dynamical system is infinite-dimensional. This does not assume a symplectic structure. When T is taken to be the reals, the dynamical system is called global or a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow.
A cellular automaton is a tuple (T, M, Φ), with T a lattice such as the integers or a higher-dimensional integer grid, M a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such cellular automata are dynamical systems. The lattice in M represents the "space" lattice, while the one in T represents the "time" lattice.
Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems is defined over multiple independent variables; these are therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing.
Given a global dynamical system (R, X, Φ) on a locally compact and Hausdorff topological space X, it is often useful to study the continuous extension Φ* of Φ to the one-point compactification X* of X. Although the differential structure of the original system is lost, compactness arguments can then be used to analyze the new system (R, X*, Φ*).
A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet (T, (X, Σ, μ), Φ). Here, T is a monoid (usually the non-negative integers), X is a set, and (X, Σ, μ) is a probability space, meaning that Σ is a sigma-algebra on X and μ is a finite measure on (X, Σ). A map Φ : X → X is said to be Σ-measurable if and only if, for every σ in Σ, one has Φ⁻¹(σ) ∈ Σ. A map Φ is said to preserve the measure if and only if, for every σ in Σ, one has μ(Φ⁻¹(σ)) = μ(σ). Combining the above, a map Φ is said to be a measure-preserving transformation of X if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. The triplet (T, (X, Σ, μ), Φ), for such a Φ, is then defined to be a dynamical system.
The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates Φⁿ for every integer n are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated.
The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the Krylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance.
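The averaging idea behind this construction can be illustrated numerically (a rough sketch, under the assumption that a long finite orbit already approximates the limiting average): the empirical measure of a long trajectory of the logistic map x → 4x(1 − x) is accumulated into a histogram, giving an approximation of an invariant measure.

```python
import numpy as np

# Sketch: approximate an invariant measure by time-averaging point masses
# along one long trajectory of the logistic map x -> 4 x (1 - x).
def logistic(x):
    return 4.0 * x * (1.0 - x)

n_steps, n_transient = 200_000, 1_000
x = 0.123
for _ in range(n_transient):          # discard a transient
    x = logistic(x)

samples = np.empty(n_steps)
for i in range(n_steps):              # accumulate the orbit
    samples[i] = x
    x = logistic(x)

# Histogram of the orbit = empirical (approximately invariant) measure on [0, 1].
density, edges = np.histogram(samples, bins=100, range=(0.0, 1.0), density=True)
print(density[:5])    # for this map it approaches the density 1 / (pi * sqrt(x (1 - x)))
```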
Some systems have a natural measure, such as the Liouville measure in Hamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic dissipative systems the choice of invariant measure is technically more challenging. The measure needs to be supported on the attractor, but attractors have zero Lebesgue measure and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution.
For hyperbolic dynamical systems, the Sinai–Ruelle–Bowen measures appear to be the natural choice. They are constructed on the geometrical structure of stable and unstable manifolds of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems.
The concept of evolution in time is central to the theory of dynamical systems, as seen in the previous sections: the basic reason is that the starting motivation of the theory was the study of the time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamical system. For example, consider an initial value problem such as the following:

ẋ = v(t, x),
x(0) = x0.
v : T × M → TM is a vector field in R^n or C^n and represents the change of velocity induced by the known forces acting on the given material point in the phase space M. The change is not a vector in the phase space M, but is instead in the tangent space TM.
There is no need for higher order derivatives in the equation, nor for the parameter t in v(t, x), because these can be eliminated by considering systems of higher dimensions.
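For instance, the second-order equation ẍ = −x becomes the first-order system ẋ1 = x2, ẋ2 = −x1 in the variables (x1, x2) = (x, ẋ), and a time-dependent field v(t, x) can likewise be made autonomous by adjoining the extra coordinate xn+1 = t with ẋn+1 = 1.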
Depending on the properties of this vector field, the mechanical system is called:
autonomous, when v(t, x) = v(x);
homogeneous, when v(t, 0) = 0 for all t.
The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above:

x(t) = Φ(t, x0).
The dynamical system is then (T,M, Φ).
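As an illustrative sketch (not from the original text; the pendulum vector field and all numerical values are assumptions chosen for the example), the evolution function Φ(t, x0) of such a flow can be approximated numerically with SciPy's solve_ivp:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: numerical approximation of the evolution function Phi(t, x0)
# for the pendulum vector field v(theta, omega) = (omega, -sin(theta)).
def v(t, x):
    theta, omega = x
    return [omega, -np.sin(theta)]

def phi(t, x0):
    """Approximate Phi(t, x0) by integrating the initial value problem."""
    sol = solve_ivp(v, (0.0, t), x0, dense_output=True, rtol=1e-9, atol=1e-12)
    return sol.sol(t)

x0 = [0.5, 0.0]
print(phi(1.0, x0))                                   # state after one time unit
# Group property of the flow (up to integration error): Phi(s+t, x0) ~ Phi(s, Phi(t, x0)).
print(np.allclose(phi(1.5, x0), phi(0.5, phi(1.0, x0)), atol=1e-6))
```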
Some formal manipulation of the system of differential equations shown above gives a more general form of equations a dynamical system must satisfy:

G(x(t), ẋ(t)) = 0,

where G is a functional from the set of evolution functions to the field of the complex numbers.
This equation is useful when modeling mechanical systems with complicated constraints.
Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds (those that are locally Banach spaces), in which case the differential equations are partial differential equations.
A Computational Fluid Dynamics mesh is an example of a discretization of a dynamical system, typically in both space and time, for computational purposes.
A dynamical system is discrete when time, space, or both are discrete. Typically, for both space and time, there is a finite or countable set of points, together with bounded maps and operators, that can be manipulated on a computer given some general assumptions on the boundaries.
In the general context of mathematics, it is possible to define a dynamical system as a general discrete map,[44] as in the formal definition. A generic sequence is already, per se, a discrete dynamical system.[45] Recursion and iteration of maps are another such case.[46] A prototype of this is the logistic map.[47]
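A minimal sketch of such an iterated map (the parameter value r = 3.9 and the initial state are arbitrary illustrative choices):

```python
# Sketch: the logistic map x_{n+1} = r * x_n * (1 - x_n) as a discrete dynamical system.
def logistic_orbit(x0, r, n_steps):
    orbit = [x0]
    for _ in range(n_steps):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

print(logistic_orbit(0.2, 3.9, 10))   # first few points of the orbit through 0.2
```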
From an empirical perspective, all dynamical systems derived from temporal data are discrete. Gauss, for example, proved that from the measurement of three positions and times of Ceres in the sky it is possible to fully determine the orbit, and therefore to compute any possible position and velocity of the asteroid in the past or the future, fully characterizing the dynamical system.[48] A typical task with experimental data is to derive a mathematical model.[49]
More generally, this can be cast as a generic discrete map from an n-dimensional manifold to itself:

xₖ₊₁ = f(xₖ), with f : M → M.
In the context of Hamiltonian flows,[54] motion itself can be considered a canonical transformation (i.e. ultimately a map), and therefore a discrete set of such transformations over a discrete set of time intervals is again a characterization of the full discrete dynamical system.
An example of this is a weather forecast of the Earth where the data points are separated in space from each other. The system can be put on a lattice, and formulas can be used to compute certain values, like the discretization of the Navier–Stokes equations.
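A much simpler stand-in for such a lattice discretization (an illustrative sketch, not the Navier–Stokes equations themselves) is an explicit finite-difference update for the one-dimensional heat equation on a periodic grid:

```python
import numpy as np

# Sketch: explicit finite-difference update for u_t = u_xx on a periodic 1-D lattice,
# as a toy stand-in for the discretization of a PDE such as Navier-Stokes.
n, dx = 100, 1.0 / 100
dt = 0.4 * dx**2                      # small step for stability of the explicit scheme
u = np.sin(2 * np.pi * np.linspace(0.0, 1.0, n, endpoint=False))

def step(u):
    """One discrete time step: the second difference approximates u_xx."""
    return u + dt * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

for _ in range(500):
    u = step(u)
print(u.max())                        # the initial profile decays, as expected for diffusion
```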
Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t).
For a flow, the vector field v(x) is an affine function of the position in the phase space, that is,

v(x) = A x + b,

with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity). The case b ≠ 0 with A = 0 is just a straight line in the direction of b:

x(t) = x0 + b t.
When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there. For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0,

x(t) = exp(A t) x0.
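A brief sketch (assuming NumPy and SciPy are available; the matrix A below is an arbitrary illustrative choice) of computing x(t) = exp(A t) x0 with the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Sketch: linear flow dx/dt = A x solved with the matrix exponential, x(t) = expm(A t) @ x0.
A = np.array([[0.0, 1.0],
              [-1.0, -0.1]])          # damped oscillator: eigenvalues with negative real part
x0 = np.array([1.0, 0.0])

def flow(t, x0):
    return expm(A * t) @ x0

print(flow(1.0, x0))                  # state after one time unit
print(np.linalg.eigvals(A))           # eigenvalues decide convergence to the origin
```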
When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the eigenvectors of A it is possible to determine whether an initial point will converge to or diverge from the equilibrium point at the origin.
The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior.
A discrete-time affine dynamical system has the form of a matrix difference equation

xₙ₊₁ = A xₙ + b,

with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)⁻¹ b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system Aⁿ x0. The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.
As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u1 is an eigenvector of A, with a real eigenvalue smaller than one, then the straight line given by the points along α u1, with α ∈ R, is an invariant curve of the map. Points on this straight line run into the fixed point.
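A brief sketch (the matrix A is again an arbitrary illustrative choice) showing how iterates of x → A x contract towards the origin when all eigenvalues have modulus less than one:

```python
import numpy as np

# Sketch: iterates of the linear map x -> A x; the eigenvalues of A
# determine whether orbits converge to the fixed point at the origin.
A = np.array([[0.5, 0.2],
              [0.0, 0.8]])            # both eigenvalues (0.5 and 0.8) are inside the unit circle
x = np.array([1.0, 1.0])

for n in range(30):
    x = A @ x                          # one step of the map
print(x)                               # close to the origin after 30 iterations
print(np.abs(np.linalg.eigvals(A)))    # moduli of the eigenvalues
```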
The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible.
A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem.
The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches.
In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit into the analysis of a map. Pick a point x0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points form a Poincaré section S(γ, x0) of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the time it takes x0.
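For a periodically forced system such as the Duffing equation pictured above, a particularly convenient Poincaré section is the stroboscopic one, which samples the flow once per forcing period. The sketch below (parameter values are illustrative assumptions, not taken from the text) computes iterates of such a Poincaré map numerically:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: stroboscopic Poincare section of the forced Duffing equation
#   x'' + d x' + a x + b x**3 = g cos(w t),
# sampled once per forcing period T = 2*pi/w.  Parameter values are illustrative.
d, a, b, g, w = 0.3, -1.0, 1.0, 0.37, 1.2

def duffing(t, y):
    x, v = y
    return [v, -d * v - a * x - b * x**3 + g * np.cos(w * t)]

T = 2.0 * np.pi / w
y = np.array([1.0, 0.0])
section = []
for n in range(300):                                   # one point per forcing period
    sol = solve_ivp(duffing, (n * T, (n + 1) * T), y, rtol=1e-9, atol=1e-12)
    y = sol.y[:, -1]
    section.append(y.copy())
section = np.array(section)                            # iterates of the Poincare map
print(section[-5:])
```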
The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x²), so a change of coordinates h can only be expected to simplify F to its linear part

h⁻¹ ∘ F ∘ h(x) = J · x.
This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If λ1, ..., λν are the eigenvalues of J, they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form λi − Σ (multiples of other eigenvalues) occur in the denominator of the terms for the function h, the non-resonant condition is also known as the small divisor problem.
The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not in the unit circle, the dynamics near the fixed point x0 of F is called hyperbolic, and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic.
In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic.
When the evolution map Φt (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation.
Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems.
The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on bifurcation theory.
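This eigenvalue criterion can be checked directly on a one-parameter family of maps. The sketch below uses the logistic family Fμ(x) = μ x (1 − x) as an assumed illustration (not an example taken from the text), tracking the single eigenvalue of DFμ at the non-trivial fixed point as its modulus crosses the unit circle:

```python
import numpy as np

# Sketch: for the logistic family F_mu(x) = mu * x * (1 - x) the non-trivial fixed
# point is x* = 1 - 1/mu, and the single eigenvalue of DF_mu at x* is F_mu'(x*) = 2 - mu.
# A bifurcation (here a period doubling) occurs where |eigenvalue| crosses 1, i.e. at mu = 3.
for mu in np.arange(2.5, 3.51, 0.25):
    x_star = 1.0 - 1.0 / mu
    eigenvalue = mu * (1.0 - 2.0 * x_star)      # derivative of F_mu at the fixed point
    print(f"mu = {mu:4.2f}  eigenvalue = {eigenvalue:+.2f}  stable: {abs(eigenvalue) < 1}")
```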
In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points Φt(A) and invariance of the phase space means that

vol(A) = vol(Φt(A)).

In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure.
In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution.
For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: assume the phase space has a finite Liouville volume and let F be a phase-space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms.
One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω).
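The statement can be illustrated numerically on a simple measure-preserving example (an irrational rotation of the circle, used here only as a stand-in for a typical energy shell): the fraction of time an orbit spends in a region A approaches vol(A)/vol(Ω).

```python
import numpy as np

# Sketch: the irrational rotation x -> x + alpha (mod 1) preserves length on the circle.
# The fraction of time an orbit spends in A = [0, 0.25) tends to vol(A)/vol(Omega) = 0.25,
# in line with the ergodic hypothesis.
alpha = np.sqrt(2.0) - 1.0
x, hits, n_steps = 0.1, 0, 100_000
for _ in range(n_steps):
    x = (x + alpha) % 1.0
    if x < 0.25:
        hits += 1
print(hits / n_steps)     # close to 0.25
```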
The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that to each point of the phase space associates a number (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function Φt. This introduces an operator Ut, the transfer operator,

(Ut a)(x) = a(Φ−t(x)).

By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φt. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φt gets mapped into an infinite-dimensional linear problem involving U.
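A finite-dimensional approximation of this composition operator acting on observables can be estimated from orbit data. The sketch below uses a least-squares fit over a small dictionary of monomial observables for the logistic map, in the spirit of extended dynamic mode decomposition (a technique not described in the text above; the map, dictionary and parameter values are illustrative assumptions):

```python
import numpy as np

# Sketch: finite-dimensional approximation of the operator that advances observables
# of the logistic map, fitted by least squares on monomials 1, x, ..., x^d
# (the extended dynamic mode decomposition idea; one possible choice among many).
r, d, n = 3.9, 6, 5_000
x = np.empty(n)
x[0] = 0.3
for i in range(n - 1):
    x[i + 1] = r * x[i] * (1.0 - x[i])

Psi_now = np.vander(x[:-1], d + 1, increasing=True)     # observables evaluated at x_k
Psi_next = np.vander(x[1:], d + 1, increasing=True)     # the same observables at x_{k+1}

# Least-squares matrix K with Psi(x_{k+1}) ~ Psi(x_k) @ K: a finite section of the operator.
K, *_ = np.linalg.lstsq(Psi_now, Psi_next, rcond=None)
print(np.round(np.abs(np.linalg.eigvals(K)), 3))         # approximate operator spectrum
```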
The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems.
Simple nonlinear dynamical systems, including piecewise linear systems, can exhibit strongly unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent spaces perpendicular to an orbit can be decomposed into a combination of two parts: one with the points that converge towards the orbit (the stable manifold) and another of the points that diverge from the orbit (the unstable manifold).
This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather on answering questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?"
The chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex, even chaotic, behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The Pomeau–Manneville scenario of the logistic map and the Fermi–Pasta–Ulam–Tsingou problem arose with just second-degree polynomials; the horseshoe map is piecewise linear.
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration,[56] meaning that in these solutions the system reaches the value zero at some time, called the ending time, and then stays there forever after. This can occur only when system trajectories are not uniquely determined forwards and backwards in time by the dynamics; thus solutions of finite duration imply a form of "backwards-in-time unpredictability" closely related to the forwards-in-time unpredictability of chaos. This behavior cannot happen for Lipschitz continuous differential equations, by the proof of the Picard–Lindelöf theorem. These solutions are non-Lipschitz functions at their ending times and cannot be analytic functions on the whole real line.
As an example, the equation

y′(t) = −sgn(y(t)) √|y(t)|,  y(0) = 1,

admits the finite-duration solution

y(t) = (1/4) (1 − t/2 + |1 − t/2|)²,

which is zero for t ≥ 2 and is not Lipschitz continuous at its ending time t = 2.
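A quick numerical check of this closed-form solution (a sketch that simply compares both sides of the equation on a grid, avoiding the non-Lipschitz ending time):

```python
import numpy as np

# Sketch: verify that y(t) = (1/4) * (1 - t/2 + |1 - t/2|)**2 satisfies
# y' = -sgn(y) * sqrt(|y|) away from the ending time t = 2, and vanishes afterwards.
def y(t):
    return 0.25 * (1.0 - t / 2.0 + np.abs(1.0 - t / 2.0)) ** 2

t = np.linspace(0.0, 3.0, 601)
h = 1e-6
dy_dt = (y(t + h) - y(t - h)) / (2.0 * h)                    # numerical derivative
rhs = -np.sign(y(t)) * np.sqrt(np.abs(y(t)))
print(np.max(np.abs(dy_dt - rhs)[np.abs(t - 2.0) > 0.01]))   # small away from t = 2
print(y(2.5), y(3.0))                                        # identically zero after t = 2
```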
Melby, Paul; Weber, Nicholas; Hübler, Alfred (September 2005). "Dynamics of self-adjusting systems with noise". Chaos: An Interdisciplinary Journal of Nonlinear Science. 15 (3): 033902. Bibcode:2005Chaos..15c3902M. doi:10.1063/1.1953147. PMID 16252993.
One of the first to have the intuition of numerical computation for weather forecasting was Richardson, who imagined a large group of human computers carrying out the calculations.
Schultz, David M.; Lynch, Peter (April 2022). "100 Years of L. F. Richardson's Weather Prediction by Numerical Process". Monthly Weather Review. 150 (4): 693–695. doi:10.1175/MWR-D-22-0068.1.
Holmes, Philip (September 1990). "Poincaré, celestial mechanics, dynamical-systems theory and 'chaos'". Physics Reports. 193 (3): 137–163. doi:10.1016/0370-1573(90)90012-Q.
Rega, Giuseppe (2020). "Tribute to Ali H. Nayfeh (1933–2017)". IUTAM Symposium on Exploiting Nonlinear Dynamics for Engineering Systems. IUTAM Bookseries. Vol. 37. pp. 1–13. doi:10.1007/978-3-030-23692-2_1. ISBN 978-3-030-23691-5.
Sip, Viktor; Breyton, Martin; Petkoski, Spase; Jirsa, Viktor (2025). "Dynamical system reconstruction from partial observations using stochastic dynamics". arXiv:2510.01089 [cs.LG].
Moore, Samuel A.; Mann, Brian P.; Chen, Boyuan (17 December 2025). "Automated global analysis of experimental dynamics through low-dimensional linear embeddings". npj Complexity. 2 (1): 36. doi:10.1038/s44260-025-00062-y.
Vardia T. Haimo (1985). "Finite Time Differential Equations". 1985 24th IEEE Conference on Decision and Control. pp. 1729–1733. doi:10.1109/CDC.1985.268832.
Encyclopaedia of Mathematical Sciences (ISSN 0938-0396) has a sub-series on dynamical systems with reviews of current research.
Christian Bonatti; Lorenzo J. Díaz; Marcelo Viana (2005). Dynamics Beyond Uniform Hyperbolicity: A Global Geometric and Probabilistic Perspective. Springer. ISBN 978-3-540-22066-4.
Tim Bedford, Michael Keane and Caroline Series, eds. (1991). Ergodic theory, symbolic dynamics and hyperbolic spaces. Oxford University Press. ISBN 978-0-19-853390-0.
David D. Nolte (2015). Introduction to Modern Dynamics: Chaos, Networks, Space and Time. Oxford University Press. ISBN 978-0199657032.
Julien Clinton Sprott (2003). Chaos and time-series analysis. Oxford University Press. ISBN 978-0-19-850839-7.
Steven H. Strogatz (1994). Nonlinear dynamics and chaos: with applications to physics, biology, chemistry and engineering. Addison Wesley. ISBN 978-0-201-54344-5.