Linear differential equation

From Wikipedia, the free encyclopedia
Differential equation that is linear with respect to the unknown function
This article is about linear differential equations with one independent variable. For similar equations with two or more independent variables, see Partial differential equation § Linear equations of second order.

In mathematics, a linear differential equation is a differential equation that is linear in the unknown function and its derivatives, so it can be written in the form {\displaystyle a_{0}(x)y+a_{1}(x)y'+a_{2}(x)y''+\cdots +a_{n}(x)y^{(n)}=b(x)} where a0(x), ..., an(x) and b(x) are arbitrary differentiable functions that do not need to be linear, and y′, ..., y^(n) are the successive derivatives of an unknown function y of the variable x.

Such an equation is an ordinary differential equation (ODE). A linear differential equation may also be a linear partial differential equation (PDE), if the unknown function depends on several variables, and the derivatives that appear in the equation are partial derivatives.

Types of solution

A linear differential equation or a system of linear differential equations such that the associated homogeneous equations have constant coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is also true for a linear equation of order one, with non-constant coefficients. An equation of order two or higher with non-constant coefficients cannot, in general, be solved by quadrature. For order two, Kovacic's algorithm allows deciding whether there are solutions in terms of integrals, and computing them if any exist.

The solutions of homogeneous linear differential equations with polynomial coefficients are called holonomic functions. This class of functions is stable under sums, products, differentiation, and integration, and contains many usual functions and special functions such as the exponential function, logarithm, sine, cosine, inverse trigonometric functions, the error function, Bessel functions and hypergeometric functions. Representing them by their defining differential equation and initial conditions makes it possible to carry out algorithmically most operations of calculus on these functions, such as computation of antiderivatives, limits, asymptotic expansion, and numerical evaluation to any precision, with a certified error bound.

Basic terminology

The highest order of derivation that appears in a (linear) differential equation is the order of the equation. The term b(x), which does not depend on the unknown function and its derivatives, is sometimes called the constant term of the equation (by analogy with algebraic equations), even when this term is a non-constant function. If the constant term is the zero function, then the differential equation is said to be homogeneous, as it is a homogeneous polynomial in the unknown function and its derivatives. The equation obtained by replacing, in a linear differential equation, the constant term by the zero function is the associated homogeneous equation. A differential equation has constant coefficients if only constant functions appear as coefficients in the associated homogeneous equation.

A solution of a differential equation is a function that satisfies the equation. The solutions of a homogeneous linear differential equation form a vector space. In the ordinary case, this vector space has a finite dimension, equal to the order of the equation. All solutions of a linear differential equation are found by adding to a particular solution any solution of the associated homogeneous equation.

Linear differential operator

Main article: Differential operator

A basic differential operator of order i is a mapping that maps any differentiable function to its ith derivative, or, in the case of several variables, to one of its partial derivatives of order i. It is commonly denoted {\displaystyle {\frac {d^{i}}{dx^{i}}}} in the case of univariate functions, and {\displaystyle {\frac {\partial ^{i_{1}+\cdots +i_{n}}}{\partial x_{1}^{i_{1}}\cdots \partial x_{n}^{i_{n}}}}} in the case of functions of n variables. The basic differential operators include the derivative of order 0, which is the identity mapping.

A linear differential operator (abbreviated, in this article, as linear operator or, simply, operator) is a linear combination of basic differential operators, with differentiable functions as coefficients. In the univariate case, a linear operator has thus the form[1] {\displaystyle a_{0}(x)+a_{1}(x){\frac {d}{dx}}+\cdots +a_{n}(x){\frac {d^{n}}{dx^{n}}},} where a0(x), ..., an(x) are differentiable functions, and the nonnegative integer n is the order of the operator (if an(x) is not the zero function).

Let L be a linear differential operator. The application of L to a function f is usually denoted Lf or Lf(x), if one needs to specify the variable (this must not be confused with a multiplication). A linear differential operator is a linear operator, since it maps sums to sums and the product by a scalar to the product by the same scalar.

As the sum of two linear operators is a linear operator, as is the product (on the left) of a linear operator by a differentiable function, the linear differential operators form a vector space over the real numbers or the complex numbers (depending on the nature of the functions that are considered). They also form a free module over the ring of differentiable functions.

The language of operators allows a compact writing for differential equations: if {\displaystyle L=a_{0}(x)+a_{1}(x){\frac {d}{dx}}+\cdots +a_{n}(x){\frac {d^{n}}{dx^{n}}}} is a linear differential operator, then the equation {\displaystyle a_{0}(x)y+a_{1}(x)y'+a_{2}(x)y''+\cdots +a_{n}(x)y^{(n)}=b(x)} may be rewritten {\displaystyle Ly=b(x).}
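
As a small illustration, the operator notation can be mirrored in code. The following sketch (names and the example operator are chosen here, not taken from the article) represents L by its coefficient list (a0, ..., an) and applies it with SymPy, which is assumed to be available.

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    def apply_operator(coeffs, f):
        """Apply L = a_0 + a_1*d/dx + ... + a_n*d^n/dx^n to the expression f."""
        return sum(a * sp.diff(f, x, i) for i, a in enumerate(coeffs))

    # L with a_0 = x^2, a_1 = 0, a_2 = 1, so that Ly = y'' + x^2*y
    L = [x**2, 0, 1]
    print(apply_operator(L, y(x)))       # Derivative(y(x), (x, 2)) + x**2*y(x)
    print(apply_operator(L, sp.exp(x)))  # x**2*exp(x) + exp(x)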

There may be several variants to this notation; in particular, the variable of differentiation may appear explicitly or not in y and the right-hand side of the equation, such as Ly(x) = b(x) or Ly = b.

The kernel of a linear differential operator is its kernel as a linear mapping, that is the vector space of the solutions of the (homogeneous) differential equation Ly = 0.

In the case of an ordinary differential operator of order n, Carathéodory's existence theorem implies that, under very mild conditions, the kernel of L is a vector space of dimension n, and that the solutions of the equation Ly(x) = b(x) have the form {\displaystyle S_{0}(x)+c_{1}S_{1}(x)+\cdots +c_{n}S_{n}(x),} where c1, ..., cn are arbitrary numbers. Typically, the hypotheses of Carathéodory's theorem are satisfied in an interval I, if the functions b, a0, ..., an are continuous in I, and there is a positive real number k such that |an(x)| > k for every x in I.

Homogeneous equation with constant coefficients

A homogeneous linear differential equation has constant coefficients if it has the form {\displaystyle a_{0}y+a_{1}y'+a_{2}y''+\cdots +a_{n}y^{(n)}=0} where a0, ..., an are (real or complex) numbers. In other words, it has constant coefficients if it is defined by a linear operator with constant coefficients.

The study of these differential equations with constant coefficients dates back to Leonhard Euler, who introduced the exponential function {\displaystyle e^{x}}, which is the unique solution of the equation {\displaystyle f'=f} such that {\displaystyle f(0)=1}. It follows that the nth derivative of {\displaystyle e^{cx}} is {\displaystyle c^{n}e^{cx}}, and this allows solving homogeneous linear differential equations rather easily.

Let {\displaystyle a_{0}y+a_{1}y'+a_{2}y''+\cdots +a_{n}y^{(n)}=0} be a homogeneous linear differential equation with constant coefficients (that is, a0, ..., an are real or complex numbers).

Searching for solutions of this equation that have the form eαx is equivalent to searching for the constants α such that {\displaystyle a_{0}e^{\alpha x}+a_{1}\alpha e^{\alpha x}+a_{2}\alpha ^{2}e^{\alpha x}+\cdots +a_{n}\alpha ^{n}e^{\alpha x}=0.} Factoring out eαx (which is never zero) shows that α must be a root of the characteristic polynomial {\displaystyle a_{0}+a_{1}t+a_{2}t^{2}+\cdots +a_{n}t^{n}} of the differential equation, which is the left-hand side of the characteristic equation {\displaystyle a_{0}+a_{1}t+a_{2}t^{2}+\cdots +a_{n}t^{n}=0.}

When these roots are all distinct, one has n distinct solutions that are not necessarily real, even if the coefficients of the equation are real. These solutions can be shown to be linearly independent, by considering the Vandermonde determinant of the values of these solutions at x = 0, ..., n – 1. Together they form a basis of the vector space of solutions of the differential equation (that is, the kernel of the differential operator).

Example

The equation {\displaystyle y''''-2y'''+2y''-2y'+y=0} has the characteristic equation {\displaystyle z^{4}-2z^{3}+2z^{2}-2z+1=0.} This has zeros i, −i, and 1 (multiplicity 2). The solution basis is thus {\displaystyle e^{ix},\;e^{-ix},\;e^{x},\;xe^{x}.} A real basis of solutions is thus {\displaystyle \cos x,\;\sin x,\;e^{x},\;xe^{x}.}
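
A quick numerical check of this example, assuming NumPy is available: the roots of the characteristic polynomial can be computed directly.

    import numpy as np

    # numpy.roots takes the coefficients from the highest degree downwards:
    # z^4 - 2z^3 + 2z^2 - 2z + 1
    roots = np.roots([1, -2, 2, -2, 1])
    print(roots)   # approximately 1 (twice), i, and -i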

In the case where the characteristic polynomial has only simple roots, the preceding provides a complete basis of the solution vector space. In the case of multiple roots, more linearly independent solutions are needed for having a basis. These have the form {\displaystyle x^{k}e^{\alpha x},} where k is a nonnegative integer, α is a root of the characteristic polynomial of multiplicity m, and k < m. For proving that these functions are solutions, one may remark that if α is a root of the characteristic polynomial of multiplicity m, the characteristic polynomial may be factored as P(t)(t − α)^m. Thus, applying the differential operator of the equation is equivalent to applying first m times the operator {\textstyle {\frac {d}{dx}}-\alpha }, and then the operator that has P as characteristic polynomial. By the exponential shift theorem, {\displaystyle \left({\frac {d}{dx}}-\alpha \right)\left(x^{k}e^{\alpha x}\right)=kx^{k-1}e^{\alpha x},}

and thus one gets zero after k + 1 applications of {\textstyle {\frac {d}{dx}}-\alpha }.

As, by the fundamental theorem of algebra, the sum of the multiplicities of the roots of a polynomial equals the degree of the polynomial, the number of the above solutions equals the order of the differential equation, and these solutions form a basis of the vector space of the solutions.

In the common case where the coefficients of the equation are real, it is generally more convenient to have a basis of the solutions consisting of real-valued functions. Such a basis may be obtained from the preceding basis by remarking that, if a + ib is a root of the characteristic polynomial, then a − ib is also a root, of the same multiplicity. Thus a real basis is obtained by using Euler's formula, and replacing {\displaystyle x^{k}e^{(a+ib)x}} and {\displaystyle x^{k}e^{(a-ib)x}} by {\displaystyle x^{k}e^{ax}\cos(bx)} and {\displaystyle x^{k}e^{ax}\sin(bx)}.

Second-order case

A homogeneous linear differential equation of the second order may be written {\displaystyle y''+ay'+by=0,} and its characteristic polynomial is {\displaystyle r^{2}+ar+b.}

If a and b are real, there are three cases for the solutions, depending on the discriminant D = a^2 − 4b. In all three cases, the general solution depends on two arbitrary constants c1 and c2.
  • If D > 0, the characteristic polynomial has two distinct real roots α and β, and the general solution is {\displaystyle c_{1}e^{\alpha x}+c_{2}e^{\beta x}.}
  • If D = 0, the characteristic polynomial has a double root −a/2, and the general solution is {\displaystyle (c_{1}+c_{2}x)e^{-ax/2}.}
  • If D < 0, the characteristic polynomial has two complex conjugate roots −a/2 ± iω, where {\textstyle \omega ={\frac {\sqrt {4b-a^{2}}}{2}}}, and the general solution is {\displaystyle e^{-ax/2}\left(c_{1}\cos(\omega x)+c_{2}\sin(\omega x)\right).}

For finding the solution y(x) satisfying y(0) = d1 and y′(0) = d2, one equates the values of the above general solution at 0 and of its derivative there to d1 and d2, respectively. This results in a system of two linear equations in the two unknowns c1 and c2. Solving this system gives the solution of the so-called Cauchy problem, in which the values at 0 of the solution of the differential equation and of its derivative are specified.
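
Such a Cauchy problem can be checked with a computer algebra system. The following sketch uses SymPy (assumed available) on a hypothetical example, y'' − 3y' + 2y = 0 with y(0) = 0 and y′(0) = 1, whose characteristic roots are 1 and 2.

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    ode = sp.Eq(y(x).diff(x, 2) - 3*y(x).diff(x) + 2*y(x), 0)

    # The initial conditions determine the two constants c1 and c2.
    sol = sp.dsolve(ode, y(x), ics={y(0): 0, y(x).diff(x).subs(x, 0): 1})
    print(sol)   # expected: Eq(y(x), -exp(x) + exp(2*x))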

Non-homogeneous equation with constant coefficients

A non-homogeneous equation of order n with constant coefficients may be written {\displaystyle y^{(n)}(x)+a_{1}y^{(n-1)}(x)+\cdots +a_{n-1}y'(x)+a_{n}y(x)=f(x),} where a1, ..., an are real or complex numbers, f is a given function of x, and y is the unknown function (for the sake of simplicity, "(x)" will be omitted in the following).

There are several methods for solving such an equation. The best method depends on the nature of the function f that makes the equation non-homogeneous. If f is a linear combination of exponential and sinusoidal functions, then the exponential response formula may be used. If, more generally, f is a linear combination of functions of the form x^n e^{ax}, x^n cos(ax), and x^n sin(ax), where n is a nonnegative integer and a is a constant (which need not be the same in each term), then the method of undetermined coefficients may be used. Still more generally, the annihilator method applies when f satisfies a homogeneous linear differential equation, typically when f is a holonomic function.
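
As a quick illustration of this situation (a hypothetical example, not from the article), SymPy's dsolve, assumed to be available, handles a right-hand side of the form x e^x:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # y'' + y = x*exp(x); the particular part should be equivalent to (x - 1)*exp(x)/2
    ode = sp.Eq(y(x).diff(x, 2) + y(x), x*sp.exp(x))
    print(sp.dsolve(ode, y(x)))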

The most general method is the variation of constants, which is presented here.

The general solution of the associated homogeneous equation {\displaystyle y^{(n)}+a_{1}y^{(n-1)}+\cdots +a_{n-1}y'+a_{n}y=0} is {\displaystyle y=u_{1}y_{1}+\cdots +u_{n}y_{n},} where (y1, ..., yn) is a basis of the vector space of the solutions and u1, ..., un are arbitrary constants. The method of variation of constants takes its name from the following idea. Instead of considering u1, ..., un as constants, they can be considered as unknown functions that have to be determined for making y a solution of the non-homogeneous equation. For this purpose, one adds the constraints {\displaystyle {\begin{aligned}0&=u'_{1}y_{1}+u'_{2}y_{2}+\cdots +u'_{n}y_{n}\\0&=u'_{1}y'_{1}+u'_{2}y'_{2}+\cdots +u'_{n}y'_{n}\\&\;\;\vdots \\0&=u'_{1}y_{1}^{(n-2)}+u'_{2}y_{2}^{(n-2)}+\cdots +u'_{n}y_{n}^{(n-2)},\end{aligned}}} which imply (by the product rule and induction) {\displaystyle y^{(i)}=u_{1}y_{1}^{(i)}+\cdots +u_{n}y_{n}^{(i)}} for i = 1, ..., n – 1, and {\displaystyle y^{(n)}=u_{1}y_{1}^{(n)}+\cdots +u_{n}y_{n}^{(n)}+u'_{1}y_{1}^{(n-1)}+u'_{2}y_{2}^{(n-1)}+\cdots +u'_{n}y_{n}^{(n-1)}.}

Replacing, in the original equation, y and its derivatives by these expressions, and using the fact that y1, ..., yn are solutions of the original homogeneous equation, one gets {\displaystyle f=u'_{1}y_{1}^{(n-1)}+\cdots +u'_{n}y_{n}^{(n-1)}.}

This equation and the above ones with 0 as left-hand side form a system of n linear equations in u′1, ..., u′n whose coefficients are known functions (f, the yi, and their derivatives). This system can be solved by any method of linear algebra. The computation of antiderivatives gives u1, ..., un, and then y = u1y1 + ⋯ + unyn.

As antiderivatives are defined up to the addition of a constant, one finds again that the general solution of the non-homogeneous equation is the sum of an arbitrary solution and the general solution of the associated homogeneous equation.
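
A sketch of this procedure on a concrete second-order example (the equation y'' + y = tan x is chosen here and is not from the article), using SymPy, which is assumed to be available:

    import sympy as sp

    x = sp.symbols('x')
    f = sp.tan(x)                    # non-homogeneous term
    y1, y2 = sp.cos(x), sp.sin(x)    # basis of solutions of y'' + y = 0

    # Linear system for u1', u2':  u1'*y1 + u2'*y2 = 0,  u1'*y1' + u2'*y2' = f
    u1p, u2p = sp.symbols('u1p u2p')
    system = [sp.Eq(u1p*y1 + u2p*y2, 0),
              sp.Eq(u1p*sp.diff(y1, x) + u2p*sp.diff(y2, x), f)]
    sol = sp.solve(system, (u1p, u2p))

    # Antiderivatives give u1, u2, and a particular solution u1*y1 + u2*y2.
    u1 = sp.integrate(sol[u1p], x)
    u2 = sp.integrate(sol[u2p], x)
    y_p = sp.simplify(u1*y1 + u2*y2)

    # The residual should simplify to 0.
    print(sp.simplify(sp.diff(y_p, x, 2) + y_p - f))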

First-order equation with variable coefficients

The general form of a linear ordinary differential equation of order 1, after dividing out the coefficient of y′(x), is: {\displaystyle y'(x)=f(x)y(x)+g(x).}

If the equation is homogeneous, i.e. g(x) = 0, one may rewrite and integrate: {\displaystyle {\frac {y'}{y}}=f,\qquad \log y=k+F,} where k is an arbitrary constant of integration and {\displaystyle F=\textstyle \int f\,dx} is any antiderivative of f. Thus, the general solution of the homogeneous equation is {\displaystyle y=ce^{F},} where c = e^k is an arbitrary constant.

For the general non-homogeneous equation, it is useful to multiply both sides of the equation by the reciprocal e^{−F} of a solution of the homogeneous equation.[2] This gives {\displaystyle y'e^{-F}-yfe^{-F}=ge^{-F}.} As {\displaystyle -fe^{-F}={\tfrac {d}{dx}}\left(e^{-F}\right),} the product rule allows rewriting the equation as {\displaystyle {\frac {d}{dx}}\left(ye^{-F}\right)=ge^{-F}.} Thus, the general solution is {\displaystyle y=ce^{F}+e^{F}\int ge^{-F}dx,} where c is a constant of integration, and F is any antiderivative of f (changing the antiderivative amounts to changing the constant of integration).
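
This closed-form formula translates directly into a small symbolic routine. The sketch below (function name and example chosen here; SymPy assumed available) returns the general solution of y′ = f(x)y + g(x).

    import sympy as sp

    x, c = sp.symbols('x c')

    def solve_first_order_linear(f, g):
        """General solution y = c*e^F + e^F * integral(g*e^(-F)) with F = integral(f)."""
        F = sp.integrate(f, x)
        return c*sp.exp(F) + sp.exp(F)*sp.integrate(g*sp.exp(-F), x)

    # Hypothetical example: y' = y + x, whose general solution is c*e^x - x - 1.
    y = solve_first_order_linear(1, x)
    print(sp.simplify(y))                        # c*exp(x) - x - 1
    print(sp.simplify(sp.diff(y, x) - y - x))    # residual, should be 0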

Example

Solving the equation {\displaystyle y'(x)+{\frac {y(x)}{x}}=3x.} The associated homogeneous equation {\displaystyle y'(x)+{\frac {y(x)}{x}}=0} gives {\displaystyle {\frac {y'}{y}}=-{\frac {1}{x}},} that is {\displaystyle y={\frac {c}{x}}.}

Dividing the original equation by one of these solutions gives {\displaystyle xy'+y=3x^{2}.} That is {\displaystyle (xy)'=3x^{2},} {\displaystyle xy=x^{3}+c,} and {\displaystyle y(x)=x^{2}+c/x.} For the initial condition {\displaystyle y(1)=\alpha ,} one gets the particular solution {\displaystyle y(x)=x^{2}+{\frac {\alpha -1}{x}}.}
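
This worked example can be double-checked with SymPy (assumed available); the call below should reproduce the particular solution x^2 + (α − 1)/x.

    import sympy as sp

    x, alpha = sp.symbols('x alpha')
    y = sp.Function('y')

    ode = sp.Eq(y(x).diff(x) + y(x)/x, 3*x)
    sol = sp.dsolve(ode, y(x), ics={y(1): alpha})
    print(sp.simplify(sol.rhs))   # expected: x**2 + (alpha - 1)/x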

System of linear differential equations

Main article: Matrix differential equation
See also: system of differential equations

A system of linear differential equations consists of several linear differential equations that involve several unknown functions. In general one restricts the study to systems such that the number of unknown functions equals the number of equations.

An arbitrary linear ordinary differential equation and a system of such equations can be converted into a first-order system of linear differential equations by adding variables for all but the highest-order derivatives. That is, if {\displaystyle y',y'',\ldots ,y^{(k)}} appear in an equation, one may replace them by new unknown functions {\displaystyle y_{1},\ldots ,y_{k}} that must satisfy the equations {\displaystyle y'=y_{1}} and {\displaystyle y_{i}'=y_{i+1},} for i = 1, ..., k – 1.
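
For example (a hypothetical case, not from the article), y'' + y = 0 becomes the first-order system y′ = y1, y1′ = −y, which a standard first-order solver such as scipy.integrate.solve_ivp (assumed available) can then integrate.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(x, state):
        y, y1 = state          # state = (y, y')
        return [y1, -y]        # (y', y'') with y'' = -y

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)
    xs = np.linspace(0.0, 10.0, 5)
    print(sol.sol(xs)[0])      # first component, approximately cos(xs)
    print(np.cos(xs))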

A linear system of the first order, which has n unknown functions and n differential equations, may normally be solved for the derivatives of the unknown functions. If this is not the case, it is a differential-algebraic system, and this is a different theory. Therefore, the systems that are considered here have the form {\displaystyle {\begin{aligned}y_{1}'(x)&=b_{1}(x)+a_{1,1}(x)y_{1}+\cdots +a_{1,n}(x)y_{n}\\[1ex]&\;\;\vdots \\[1ex]y_{n}'(x)&=b_{n}(x)+a_{n,1}(x)y_{1}+\cdots +a_{n,n}(x)y_{n},\end{aligned}}} where the {\displaystyle b_{i}} and the {\displaystyle a_{i,j}} are functions of x. In matrix notation, this system may be written (omitting "(x)") {\displaystyle \mathbf {y} '=A\mathbf {y} +\mathbf {b} .}

The solving method is similar to that of a single first-order linear differential equation, but with complications stemming from noncommutativity of matrix multiplication.

Let {\displaystyle \mathbf {u} '=A\mathbf {u} } be the homogeneous equation associated to the above matrix equation. Its solutions form a vector space of dimension n, and are therefore the columns of a square matrix of functions {\displaystyle U(x)}, whose determinant is not the zero function. If n = 1, or A is a matrix of constants, or, more generally, if A commutes with its antiderivative {\displaystyle \textstyle B=\int Adx}, then one may choose U equal to the exponential of B. In fact, in these cases, one has {\displaystyle {\frac {d}{dx}}\exp(B)=A\exp(B).} In the general case there is no closed-form solution for the homogeneous equation, and one has to use either a numerical method, or an approximation method such as the Magnus expansion.

Knowing the matrix U, the general solution of the non-homogeneous equation is {\displaystyle \mathbf {y} (x)=U(x)\mathbf {y_{0}} +U(x)\int U^{-1}(x)\mathbf {b} (x)\,dx,} where the column matrix {\displaystyle \mathbf {y_{0}} } is an arbitrary constant of integration.

If initial conditions are given as {\displaystyle \mathbf {y} (x_{0})=\mathbf {y} _{0},} the solution that satisfies these initial conditions is {\displaystyle \mathbf {y} (x)=U(x)U^{-1}(x_{0})\mathbf {y_{0}} +U(x)\int _{x_{0}}^{x}U^{-1}(t)\mathbf {b} (t)\,dt.}
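
In the constant-coefficient case one may take U(x) = exp(xA), so the homogeneous solution with y(0) = y0 is exp(xA)·y0. A minimal numerical sketch (hypothetical data; scipy.linalg.expm assumed available):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])    # y' = A*y encodes y'' = -y as a system
    y0 = np.array([1.0, 0.0])      # y(0) = 1, y'(0) = 0

    def y(x):
        return expm(x*A) @ y0      # U(x) @ y0 with U(x) = exp(xA)

    print(y(np.pi/3))              # approximately [cos(pi/3), -sin(pi/3)]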

Higher order with variable coefficients

A linear ordinary equation of order one with variable coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is not the case for order at least two. This is the main result of Picard–Vessiot theory, which was initiated by Émile Picard and Ernest Vessiot, and whose recent developments are called differential Galois theory.

The impossibility of solving by quadrature can be compared with the Abel–Ruffini theorem, which states that an algebraic equation of degree at least five cannot, in general, be solved by radicals. This analogy extends to the proof methods and motivates the denomination of differential Galois theory.

Similarly to the algebraic case, the theory allows deciding which equations may be solved by quadrature, and if possible solving them. However, for both theories, the necessary computations are extremely difficult, even with the most powerful computers.

Nevertheless, the case of order two with rational coefficients has been completely solved by Kovacic's algorithm.

Cauchy–Euler equation

Cauchy–Euler equations are examples of equations of any order, with variable coefficients, that can be solved explicitly. These are the equations of the form {\displaystyle x^{n}y^{(n)}(x)+a_{n-1}x^{n-1}y^{(n-1)}(x)+\cdots +a_{0}y(x)=0,} where {\displaystyle a_{0},\ldots ,a_{n-1}} are constant coefficients.
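
For illustration (a minimal worked case, not taken from the article): substituting a trial solution {\displaystyle y=x^{r}} into {\displaystyle x^{2}y''-2xy'+2y=0} gives {\displaystyle r(r-1)-2r+2=(r-1)(r-2)=0,} so r = 1 or r = 2, and the general solution for x > 0 is {\displaystyle y=c_{1}x+c_{2}x^{2}.}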

Holonomic functions

Main article: holonomic function

A holonomic function, also called a D-finite function, is a function that is a solution of a homogeneous linear differential equation with polynomial coefficients.

Most functions that are commonly considered in mathematics are holonomic or quotients of holonomic functions. In fact, holonomic functions include polynomials, algebraic functions, the logarithm, the exponential function, sine, cosine, hyperbolic sine, hyperbolic cosine, inverse trigonometric and inverse hyperbolic functions, and many special functions such as Bessel functions and hypergeometric functions.

Holonomic functions have several closure properties; in particular, sums, products, derivatives and integrals of holonomic functions are holonomic. Moreover, these closure properties are effective, in the sense that there are algorithms for computing the differential equation of the result of any of these operations, knowing the differential equations of the input.[3]

The usefulness of the concept of holonomic functions results from Zeilberger's theorem, which follows.[3]

A holonomic sequence is a sequence of numbers that may be generated by a recurrence relation with polynomial coefficients. The coefficients of the Taylor series at a point of a holonomic function form a holonomic sequence. Conversely, if the sequence of the coefficients of a power series is holonomic, then the series defines a holonomic function (even if the radius of convergence is zero). There are efficient algorithms for both conversions, that is, for computing the recurrence relation from the differential equation, and vice versa.[3]
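
A minimal sketch of the differential-equation-to-recurrence correspondence (the example is chosen here, not taken from the cited algorithms): for y'' + y = 0 with y(0) = 1 and y′(0) = 0, that is cos x, writing y = Σ c_n x^n gives the holonomic recurrence (n + 1)(n + 2) c_{n+2} + c_n = 0.

    import math

    def cos_taylor_coeffs(count):
        """Taylor coefficients of cos at 0, generated from the recurrence."""
        c = [0.0] * count
        c[0], c[1] = 1.0, 0.0                     # initial conditions y(0), y'(0)
        for n in range(count - 2):
            c[n + 2] = -c[n] / ((n + 1) * (n + 2))
        return c

    print(cos_taylor_coeffs(8))
    # compare with the direct formula (-1)^k / (2k)! for the even-index coefficients
    print([(-1)**k / math.factorial(2*k) for k in range(4)])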

It follows that, if one represents (in a computer) holonomic functions by their defining differential equations and initial conditions, most calculus operations can be done automatically on these functions, such as derivatives, indefinite and definite integrals, fast computation of Taylor series (thanks to the recurrence relation on its coefficients), evaluation to a high precision with a certified bound of the approximation error, limits, localization of singularities, asymptotic behavior at infinity and near singularities, proof of identities, etc.[4]

References

  1. ^ Gershenfeld 1999, p. 9
  2. ^ Motivation: In analogy to the completing-the-square technique, we write the equation as y′ − fy = g, and try to modify the left side so it becomes a derivative. Specifically, we seek an "integrating factor" h = h(x) such that multiplying by it makes the left side equal to the derivative of hy, namely hy′ − hfy = (hy)′. This means h′ = −hf, so that h = e^{−∫f dx} = e^{−F}, as in the text.
  3. ^ a b c Zeilberger, Doron. "A holonomic systems approach to special functions identities". Journal of Computational and Applied Mathematics 32.3 (1990): 321–368.
  4. ^ Benoit, A., Chyzak, F., Darrasse, A., Gerhold, S., Mezzarobba, M., & Salvy, B. (2010, September). "The dynamic dictionary of mathematical functions (DDMF)". In International Congress on Mathematical Software (pp. 35–41). Springer, Berlin, Heidelberg.
  • Birkhoff, Garrett & Rota, Gian-Carlo (1978), Ordinary Differential Equations, New York: John Wiley and Sons, Inc., ISBN 0-471-07411-X
  • Gershenfeld, Neil (1999), The Nature of Mathematical Modeling, Cambridge, UK: Cambridge University Press, ISBN 978-0-521-57095-4
  • Robinson, James C. (2004), An Introduction to Ordinary Differential Equations, Cambridge, UK: Cambridge University Press, ISBN 0-521-82650-0
