Separation of variables

Technique for solving differential equations

In mathematics, separation of variables (also known as the Fourier method) is any of several methods for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation.

Ordinary differential equations (ODE)


A differential equation for the unknown function f(x) is separable if it can be written in the form

\[ \frac{d}{dx}f(x) = g(x)\,h(f(x)) \]

where g and h are given functions. This is perhaps more transparent when written using y = f(x) as:

\[ \frac{dy}{dx} = g(x)\,h(y). \]

So now, as long as h(y) ≠ 0, we can rearrange terms to obtain:

\[ \frac{dy}{h(y)} = g(x)\,dx, \]

where the two variables x and y have been separated. Note that dx (and dy) can be viewed, at a simple level, as just a convenient notation, which provides a handy mnemonic aid for assisting with manipulations. A formal definition of dx as a differential (infinitesimal) is somewhat advanced.

Alternative notation


Those who dislike Leibniz's notation may prefer to write this as

\[ \frac{1}{h(y)}\frac{dy}{dx} = g(x), \]

but that fails to make it quite as obvious why this is called "separation of variables". Integrating both sides of the equation with respect to x, we have

\[ \int \frac{1}{h(y)}\frac{dy}{dx}\,dx = \int g(x)\,dx, \tag{A1} \]

or equivalently,

\[ \int \frac{1}{h(y)}\,dy = \int g(x)\,dx \]

because of the substitution rule for integrals.

If one can evaluate the two integrals, one can find a solution to the differential equation. Observe that this process effectively allows us to treat the derivative dy/dx as a fraction which can be separated. This allows us to solve separable differential equations more conveniently, as demonstrated in the example below.

(Note that we do not need to use two constants of integration in equation (A1), as in

\[ \int \frac{1}{h(y)}\,dy + C_{1} = \int g(x)\,dx + C_{2}, \]

because a single constant C = C_2 − C_1 is equivalent.)
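As a concrete illustration of this recipe, the same manipulation can be carried out symbolically. The following minimal sketch assumes the SymPy library and applies its separable-equation routine to dy/dx = xy, i.e. g(x) = x and h(y) = y:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx = g(x) * h(y) with g(x) = x and h(y) = y
ode = sp.Eq(y(x).diff(x), x * y(x))

# The 'separable' hint applies exactly the manipulation described above:
# dy/h(y) = g(x) dx, integrate both sides, keep a single constant C1.
solution = sp.dsolve(ode, y(x), hint='separable')
print(solution)   # expected: Eq(y(x), C1*exp(x**2/2))
```

Here the single constant C1 plays the role of C = C_2 − C_1 above.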

Example


Population growth is often modeled by the "logistic" differential equation

\[ \frac{dP}{dt} = kP\left(1 - \frac{P}{K}\right) \]

where P is the population as a function of time t, k is the rate of growth, and K is the carrying capacity of the environment. Separation of variables now leads to

\[ \int \frac{dP}{P\left(1 - P/K\right)} = \int k\,dt \]

which is readily integrated using partial fractions on the left side, yielding

\[ P(t) = \frac{K}{1 + Ae^{-kt}} \]

where A is the constant of integration. We can find A in terms of the initial population P(0) = P_0 by setting t = 0. Noting that e^0 = 1, we get

\[ A = \frac{K - P_{0}}{P_{0}}. \]
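The closed form can be checked by substituting it back into the logistic equation. A brief sketch, assuming SymPy, verifies both the differential equation and the initial value P(0) = P_0:

```python
import sympy as sp

t, k, K, P0 = sp.symbols('t k K P_0', positive=True)

A = (K - P0) / P0                    # constant of integration from P(0) = P_0
P = K / (1 + A * sp.exp(-k * t))     # candidate solution P(t)

# Residual of dP/dt = k P (1 - P/K) should simplify to zero
residual = sp.simplify(sp.diff(P, t) - k * P * (1 - P / K))
print(residual)                      # 0
print(sp.simplify(P.subs(t, 0)))     # P_0
```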

Generalization of separable ODEs to the nth order


Much like one can speak of a separable first-order ODE, one can speak of a separable second-order, third-order or nth-order ODE. Consider the separable first-order ODE:

\[ \frac{dy}{dx} = f(y)g(x) \]

The derivative can alternatively be written the following way to underscore that it is an operator working on the unknown function, y:

\[ \frac{dy}{dx} = \frac{d}{dx}(y) \]

Thus, when one separates variables for first-order equations, one in fact moves the dx denominator of the operator to the side with the x variable, and the d(y) is left on the side with the y variable. The second-derivative operator, by analogy, breaks down as follows:

\[ \frac{d^{2}y}{dx^{2}} = \frac{d}{dx}\left(\frac{dy}{dx}\right) = \frac{d}{dx}\left(\frac{d}{dx}(y)\right) \]

The third-, fourth- and nth-derivative operators break down in the same way. Thus, much like a first-order separable ODE is reducible to the form

\[ \frac{dy}{dx} = f(y)g(x) \]

a separable second-order ODE is reducible to the form

\[ \frac{d^{2}y}{dx^{2}} = f\left(y'\right)g(x) \]

and an nth-order separable ODE is reducible to

\[ \frac{d^{n}y}{dx^{n}} = f\!\left(y^{(n-1)}\right)g(x) \]

Example


Consider the simple nonlinear second-order differential equation
\[ y'' = (y')^{2}. \]
This equation is an equation only of y'' and y', meaning it is reducible to the general form described above and is, therefore, separable. Since it is a second-order separable equation, collect all x variables on one side and all y' variables on the other to get
\[ \frac{d(y')}{(y')^{2}} = dx. \]
Now, integrate the right side with respect to x and the left with respect to y':
\[ \int \frac{d(y')}{(y')^{2}} = \int dx. \]
This gives
\[ -\frac{1}{y'} = x + C_{1}, \]
which simplifies to
\[ y' = -\frac{1}{x + C_{1}}. \]
This is now a simple integral problem that gives the final answer
\[ y = C_{2} - \ln|x + C_{1}|. \]
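The final answer can likewise be checked by direct substitution. A short sketch, assuming SymPy and taking x + C_1 > 0 so that the absolute value can be dropped, confirms that y = C_2 − ln(x + C_1) satisfies y'' = (y')²:

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2', positive=True)

y = C2 - sp.log(x + C1)      # candidate solution (x + C1 > 0 assumed)

# y'' - (y')**2 should vanish identically
residual = sp.simplify(sp.diff(y, x, 2) - sp.diff(y, x)**2)
print(residual)              # 0
```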

Partial differential equations

See also: Separable partial differential equation

The method of separation of variables is also used to solve a wide range of linear partial differential equations with boundary and initial conditions, such as the heat equation, wave equation, Laplace equation, Helmholtz equation and biharmonic equation.

The analytical method of separation of variables for solving partial differential equations has also been generalized into a computational method of decomposition in invariant structures that can be used to solve systems of partial differential equations.[1]

Example: homogeneous case


Consider the one-dimensional heat equation. The equation is

\[ \frac{\partial u}{\partial t} - \alpha\frac{\partial^{2}u}{\partial x^{2}} = 0 \tag{1} \]

The variable u denotes temperature. The boundary condition is homogeneous, that is

\[ u\big|_{x=0} = u\big|_{x=L} = 0 \tag{2} \]

Let us attempt to find a nontrivial solution satisfying the boundary conditions but with the following property: u is a product in which the dependence of u on x and t is separated, that is:

\[ u(x,t) = X(x)T(t). \tag{3} \]

Substituting u back into equation (1) and using the product rule,

\[ \frac{T'(t)}{\alpha T(t)} = \frac{X''(x)}{X(x)} = -\lambda, \tag{4} \]

where λ must be constant since the right hand side depends only on x and the left hand side only on t. Thus:

\[ T'(t) = -\lambda\alpha T(t), \tag{5} \]

and

\[ X''(x) = -\lambda X(x). \tag{6} \]

Here λ is the eigenvalue for both differential operators, and T(t) and X(x) are corresponding eigenfunctions.

We will now show that solutions X(x) for values of λ ≤ 0 cannot occur:

Suppose that λ < 0. Then there exist real numbers B, C such that

\[ X(x) = Be^{\sqrt{-\lambda}\,x} + Ce^{-\sqrt{-\lambda}\,x}. \]

From (2) we get

\[ X(0) = 0 = X(L), \tag{7} \]

and therefore B = 0 = C, which implies u is identically 0.

Suppose that λ = 0. Then there exist real numbers B, C such that

\[ X(x) = Bx + C. \]

From (7) we conclude, in the same manner as in the case λ < 0, that u is identically 0.

Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that

\[ T(t) = Ae^{-\lambda\alpha t}, \]

and

\[ X(x) = B\sin({\sqrt{\lambda}}\,x) + C\cos({\sqrt{\lambda}}\,x). \]

From (7) we get C = 0 and that, for some positive integer n,

\[ \sqrt{\lambda} = n\frac{\pi}{L}. \]

This solves the heat equation in the special case that the dependence of u has the special form of (3).

In general, the sum of solutions to (1) which satisfy the boundary conditions (2) also satisfies (1) and (2). Hence a complete solution can be given as

\[ u(x,t) = \sum_{n=1}^{\infty} D_{n} \sin\frac{n\pi x}{L} \exp\left(-\frac{n^{2}\pi^{2}\alpha t}{L^{2}}\right), \]

where D_n are coefficients determined by the initial condition.

Given the initial condition

\[ u\big|_{t=0} = f(x), \tag{8} \]

we can get

\[ f(x) = \sum_{n=1}^{\infty} D_{n} \sin\frac{n\pi x}{L}. \]

This is the Fourier sine series expansion of f(x), which is amenable to Fourier analysis. Multiplying both sides with sin(nπx/L) and integrating over [0, L] results in

\[ D_{n} = \frac{2}{L}\int_{0}^{L} f(x)\sin\frac{n\pi x}{L}\,dx. \]

This method requires that the eigenfunctions X, here {sin(nπx/L) : n = 1, 2, …}, are orthogonal and complete. In general this is guaranteed by Sturm–Liouville theory.
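Numerically, the construction above amounts to computing the sine-series coefficients D_n from the initial data and summing the exponentially damped modes. The following sketch is only illustrative; it assumes NumPy, α = 1, L = 1, the sample initial profile f(x) = x(L − x), and a truncation of the series after N terms:

```python
import numpy as np

alpha, L, N = 1.0, 1.0, 50                 # diffusivity, rod length, number of modes
x = np.linspace(0.0, L, 201)
f = x * (L - x)                            # example initial condition f(x)

# D_n = (2/L) * integral_0^L f(x) sin(n pi x / L) dx  (trapezoidal quadrature)
n = np.arange(1, N + 1)
modes = np.sin(np.outer(n, np.pi * x / L))             # shape (N, len(x))
D = (2.0 / L) * np.trapz(f * modes, x, axis=1)

def u(t):
    """Truncated series solution u(x, t) of the homogeneous heat equation."""
    decay = np.exp(-(n * np.pi / L) ** 2 * alpha * t)   # exp(-n^2 pi^2 alpha t / L^2)
    return (D * decay) @ modes

print(np.max(np.abs(u(0.0) - f)))          # small: the series reproduces f at t = 0
print(u(0.1)[::50])                        # the profile decays toward zero for t > 0
```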

Example: nonhomogeneous case


Suppose the equation is nonhomogeneous,

\[ \frac{\partial u}{\partial t} - \alpha\frac{\partial^{2}u}{\partial x^{2}} = h(x,t) \tag{8'} \]

with the boundary condition the same as (2) and the initial condition the same as (8).

Expand h(x, t), u(x, t) and f(x) into

\[ h(x,t) = \sum_{n=1}^{\infty} h_{n}(t)\sin\frac{n\pi x}{L}, \tag{9} \]
\[ u(x,t) = \sum_{n=1}^{\infty} u_{n}(t)\sin\frac{n\pi x}{L}, \tag{10} \]
\[ f(x) = \sum_{n=1}^{\infty} b_{n}\sin\frac{n\pi x}{L}, \tag{11} \]

where h_n(t) and b_n can be calculated by integration, while u_n(t) is to be determined.

Substituting (9) and (10) into (8') and using the orthogonality of the sine functions, we get

\[ u_{n}'(t) + \alpha\frac{n^{2}\pi^{2}}{L^{2}}u_{n}(t) = h_{n}(t), \]

which form a sequence of linear differential equations that can be readily solved with, for instance, the Laplace transform or an integrating factor. Finally, we obtain

\[ u_{n}(t) = e^{-\alpha\frac{n^{2}\pi^{2}}{L^{2}}t}\left(b_{n} + \int_{0}^{t} h_{n}(s)\,e^{\alpha\frac{n^{2}\pi^{2}}{L^{2}}s}\,ds\right). \]
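For a concrete forcing term, this formula can be evaluated directly by numerical quadrature. The following sketch assumes NumPy, α = 1, L = 1, and a hypothetical forcing coefficient h_n(t) = e^(−t) chosen only for illustration:

```python
import numpy as np

alpha, L = 1.0, 1.0

def u_n(t, n, b_n, h_n, steps=400):
    """u_n(t) = exp(-mu t) * (b_n + integral_0^t h_n(s) exp(mu s) ds), mu = alpha n^2 pi^2 / L^2."""
    mu = alpha * (n * np.pi / L) ** 2
    s = np.linspace(0.0, t, steps)
    integral = np.trapz(h_n(s) * np.exp(mu * s), s)
    return np.exp(-mu * t) * (b_n + integral)

# Example: first mode, b_1 = 1, forcing coefficient h_1(t) = exp(-t) (an assumption)
print(u_n(0.5, n=1, b_n=1.0, h_n=lambda s: np.exp(-s)))
```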

If the boundary condition is nonhomogeneous, then the expansions (9) and (10) are no longer valid. One has to find a function v that satisfies the boundary condition only, and subtract it from u. The function u − v then satisfies the homogeneous boundary condition, and can be solved with the above method.

Example: mixed derivatives


For some equations involving mixed derivatives, the equation does not separate as easily as the heat equation did in the first example above, but nonetheless separation of variables may still be applied. Consider the two-dimensional biharmonic equation

\[ \frac{\partial^{4}u}{\partial x^{4}} + 2\frac{\partial^{4}u}{\partial x^{2}\partial y^{2}} + \frac{\partial^{4}u}{\partial y^{4}} = 0. \]

Proceeding in the usual manner, we look for solutions of the form

\[ u(x,y) = X(x)Y(y) \]

and we obtain the equation

\[ \frac{X^{(4)}(x)}{X(x)} + 2\frac{X''(x)}{X(x)}\frac{Y''(y)}{Y(y)} + \frac{Y^{(4)}(y)}{Y(y)} = 0. \]

Writing this equation in the form

\[ E(x) + F(x)G(y) + H(y) = 0, \]

we see that taking the derivative with respect to x gives E'(x) + F'(x)G(y) = 0, which means G(y) = const. or F'(x) = 0; likewise, taking the derivative with respect to y leads to F(x)G'(y) + H'(y) = 0 and thus F(x) = const. or G'(y) = 0. Hence either F(x) or G(y) must be a constant, say −λ. This further implies that either −E(x) = F(x)G(y) + H(y) or −H(y) = E(x) + F(x)G(y) is constant. Returning to the equations for X and Y, we have two cases

\[ \begin{aligned} X''(x) &= -\lambda_{1}X(x) \\ X^{(4)}(x) &= \mu_{1}X(x) \\ Y^{(4)}(y) - 2\lambda_{1}Y''(y) &= -\mu_{1}Y(y) \end{aligned} \]

and

\[ \begin{aligned} Y''(y) &= -\lambda_{2}Y(y) \\ Y^{(4)}(y) &= \mu_{2}Y(y) \\ X^{(4)}(x) - 2\lambda_{2}X''(x) &= -\mu_{2}X(x) \end{aligned} \]

which can each be solved by considering the separate cases λ_i < 0, λ_i = 0 and λ_i > 0, and noting that μ_i = λ_i².
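One family of separated solutions can be written down and checked explicitly. In the first case, taking λ_1 = k² > 0 gives X(x) = sin(kx) and μ_1 = k⁴, so Y must satisfy Y'''' − 2k²Y'' + k⁴Y = 0, which is solved for instance by Y(y) = (A + By)e^(ky). A short verification of this particular solution, assuming SymPy:

```python
import sympy as sp

x, y, k, A, B = sp.symbols('x y k A B', positive=True)

# Separated candidate: X(x) = sin(k x), Y(y) = (A + B y) e^{k y}
u = sp.sin(k * x) * (A + B * y) * sp.exp(k * y)

# Biharmonic operator applied to u should vanish identically
biharmonic = sp.diff(u, x, 4) + 2 * sp.diff(u, x, 2, y, 2) + sp.diff(u, y, 4)
print(sp.simplify(biharmonic))   # 0
```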

Curvilinear coordinates


In orthogonal curvilinear coordinates, separation of variables can still be used, but some details differ from the Cartesian case. For instance, a regularity or periodicity condition may determine the eigenvalues in place of boundary conditions; see spherical harmonics for an example.

Applicability


Partial differential equations


For many PDEs, such as the wave equation, Helmholtz equation and Schrödinger equation, the applicability of separation of variables is a result of the spectral theorem. In some cases, separation of variables may not be possible. Separation of variables may be possible in some coordinate systems but not others,[2] and which coordinate systems allow for separation depends on the symmetry properties of the equation.[3] Below is an outline of an argument demonstrating the applicability of the method to certain linear equations, although the precise method may differ in individual cases (for instance in the biharmonic equation above).

Consider an initial boundary value problem for a function u(x, t) on D = {(x, t) : x ∈ [0, l], t ≥ 0} in two variables:

\[ (Tu)(x,t) = (Su)(x,t) \]

where T is a differential operator with respect to x and S is a differential operator with respect to t, with boundary data:

\[ (Tu)(0,t) = (Tu)(l,t) = 0 \quad \text{for } t \geq 0 \]
\[ (Su)(x,0) = h(x) \quad \text{for } 0 \leq x \leq l \]

where h is a known function.

We look for solutions of the form u(x, t) = f(x)g(t). Dividing the PDE through by f(x)g(t) gives

\[ \frac{Tf}{f} = \frac{Sg}{g} \]

The left hand side depends only on x and the right hand side only on t, so both must be equal to a constant K, which gives two ordinary differential equations

\[ Tf = Kf, \qquad Sg = Kg \]

which we can recognize as eigenvalue problems for the operators T and S. If T is a compact, self-adjoint operator on the space L²[0, l] along with the relevant boundary conditions, then by the spectral theorem there exists a basis for L²[0, l] consisting of eigenfunctions for T. Let the spectrum of T be E and let f_λ be an eigenfunction with eigenvalue λ ∈ E. Then for any function which at each time t is square-integrable with respect to x, we can write this function as a linear combination of the f_λ. In particular, we know the solution u can be written as

\[ u(x,t) = \sum_{\lambda\in E} c_{\lambda}(t) f_{\lambda}(x) \]

for some functions c_λ(t). In the separation of variables, these functions are given by solutions to Sg = Kg.

Hence, the spectral theorem ensures that the separation of variables will (when it is possible) find all the solutions.

For many differential operators, such as d²/dx², we can show that they are self-adjoint by integration by parts. While these operators may not be compact, their inverses (when they exist) may be, as in the case of the wave equation, and these inverses have the same eigenfunctions and eigenvalues as the original operator (with the possible exception of zero).[4]
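The operator d²/dx² with Dirichlet boundary conditions illustrates this: its eigenfunctions are exactly the sines used in the heat-equation example, and the same structure is visible in a finite-difference discretization. The following sketch assumes NumPy and an arbitrarily chosen grid:

```python
import numpy as np

l, m = 1.0, 200                          # interval length, number of interior points
h = l / (m + 1)
x = np.linspace(h, l - h, m)

# Second-difference approximation of d^2/dx^2 with u(0) = u(l) = 0
D2 = (np.diag(np.full(m - 1, 1.0), -1)
      - 2.0 * np.eye(m)
      + np.diag(np.full(m - 1, 1.0), 1)) / h**2

# The matrix is symmetric (self-adjoint), so eigh applies; its eigenvalues are negative
vals, vecs = np.linalg.eigh(D2)
mode1 = vecs[:, -1]                      # eigenvector for the eigenvalue closest to 0
mode1 /= np.max(np.abs(mode1))

target = np.sin(np.pi * x / l)
print(np.max(np.abs(np.abs(mode1) - target)))   # tiny: matches sin(pi x / l)
print(vals[-1], -(np.pi / l) ** 2)              # approximately -pi^2 / l^2
```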

Matrices


The matrix form of the separation of variables is the Kronecker sum.

As an example we consider the 2D discrete Laplacian on a regular grid:

\[ L = \mathbf{D_{xx}} \oplus \mathbf{D_{yy}} = \mathbf{D_{xx}} \otimes \mathbf{I} + \mathbf{I} \otimes \mathbf{D_{yy}}, \]

where D_xx and D_yy are the 1D discrete Laplacians in the x- and y-directions, respectively, and I are the identities of appropriate sizes. See the main article Kronecker sum of discrete Laplacians for details.
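In code, the Kronecker sum is two Kronecker products, and the separation of variables is visible directly in the spectrum: every eigenvalue of L is a sum of one eigenvalue of D_xx and one of D_yy. A minimal sketch, assuming NumPy and small grid sizes chosen only for illustration:

```python
import numpy as np

def laplacian_1d(n):
    """Standard 1D discrete Laplacian (Dirichlet) on n interior grid points."""
    return (np.diag(np.full(n - 1, 1.0), -1)
            - 2.0 * np.eye(n)
            + np.diag(np.full(n - 1, 1.0), 1))

nx, ny = 4, 3
Dxx, Dyy = laplacian_1d(nx), laplacian_1d(ny)

# Kronecker sum: L = Dxx (+) Dyy = Dxx x I + I x Dyy
L = np.kron(Dxx, np.eye(ny)) + np.kron(np.eye(nx), Dyy)

# Eigenvalues of L are exactly all sums lambda_i + mu_j of the 1D eigenvalues
lx, ly = np.linalg.eigvalsh(Dxx), np.linalg.eigvalsh(Dyy)
sums = np.sort((lx[:, None] + ly[None, :]).ravel())
print(np.allclose(np.sort(np.linalg.eigvalsh(L)), sums))   # True
```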

Software


Some mathematical programs are able to do separation of variables: Xcas,[5] among others.


Notes

  1. ^ Miroshnikov, Victor A. (15 December 2017). Harmonic Wave Systems: Partial Differential Equations of the Helmholtz Decomposition. Scientific Research Publishing, Inc. USA. ISBN 9781618964069.
  2. ^ John Renze, Eric W. Weisstein, Separation of variables.
  3. ^ Willard Miller (1984). Symmetry and Separation of Variables. Cambridge University Press.
  4. ^ David Benson (2007). Music: A Mathematical Offering. Cambridge University Press, Appendix W.
  5. ^ "Symbolic algebra and Mathematics with Xcas" (PDF).
