State-space representation

From Wikipedia, the free encyclopedia
Mathematical model of a system in control engineering
Not to be confused with quantum state space or configuration space (physics).

In control engineering and system identification, a state-space representation is a mathematical model of a physical system that uses state variables to track how inputs shape system behavior over time through first-order differential equations or difference equations. These state variables change based on their current values and inputs, while outputs depend on the states and sometimes the inputs too. The state space (also called the time-domain approach and equivalent to phase space in certain dynamical systems) is a geometric space where the axes are these state variables, and the system's state is represented by a state vector.

For linear, time-invariant, and finite-dimensional systems, the equations can be written in matrix form,[1][2] offering a compact alternative to the frequency domain's Laplace transforms for multiple-input and multiple-output (MIMO) systems. Unlike the frequency-domain approach, the state-space representation is not limited to systems with linear components and zero initial conditions. This approach turns systems theory into an algebraic framework, making it possible to use Kronecker structures for efficient analysis.

State-space models are applied in fields such as economics,[3] statistics,[4] computer science, electrical engineering,[5] and neuroscience.[6] In econometrics, for example, state-space models can be used to decompose a time series into trend and cycle, compose individual indicators into a composite index,[7] identify turning points of the business cycle, and estimate GDP using latent and unobserved time series.[8][9] Many applications rely on the Kalman filter or a state observer to produce estimates of the current unknown state variables using their previous observations.[10][11]

State variables


The internal state variables are the smallest possible subset of system variables that can represent the entire state of the system at any given time.[12] The minimum number of state variables required to represent a given system, $n$, is usually equal to the order of the system's defining differential equation, but not necessarily. If the system is represented in transfer function form, the minimum number of state variables is equal to the order of the transfer function's denominator after it has been reduced to a proper fraction. It is important to understand that converting a state-space realization to a transfer function form may lose some internal information about the system, and may provide a description of a system which is stable even though the state-space realization is unstable at certain points. In electric circuits, the number of state variables is often, though not always, the same as the number of energy storage elements in the circuit, such as capacitors and inductors. The state variables defined must be linearly independent, i.e., no state variable can be written as a linear combination of the other state variables.

Linear systems

Block diagram representation of the linear state-space equations

The most general state-space representation of a linear system with $p$ inputs, $q$ outputs and $n$ state variables is written in the following form:[13]

$$\dot{\mathbf{x}}(t) = \mathbf{A}(t)\mathbf{x}(t) + \mathbf{B}(t)\mathbf{u}(t)$$
$$\mathbf{y}(t) = \mathbf{C}(t)\mathbf{x}(t) + \mathbf{D}(t)\mathbf{u}(t)$$

where:

  • $\mathbf{x}(t) \in \mathbb{R}^{n}$ is the state vector;
  • $\mathbf{y}(t) \in \mathbb{R}^{q}$ is the output vector;
  • $\mathbf{u}(t) \in \mathbb{R}^{p}$ is the input (or control) vector;
  • $\mathbf{A}(t)$ is the $n \times n$ state (or system) matrix;
  • $\mathbf{B}(t)$ is the $n \times p$ input matrix;
  • $\mathbf{C}(t)$ is the $q \times n$ output matrix;
  • $\mathbf{D}(t)$ is the $q \times p$ feedthrough (or feedforward) matrix.

In this general formulation, all matrices are allowed to be time-variant (i.e., their elements can depend on time); however, in the common LTI case, the matrices are time-invariant. The time variable $t$ can be continuous (e.g. $t \in \mathbb{R}$) or discrete (e.g. $t \in \mathbb{Z}$). In the latter case, the time variable $k$ is usually used instead of $t$. Hybrid systems allow for time domains that have both continuous and discrete parts. Depending on the assumptions made, the state-space model representation can assume the following forms:

Continuous time-invariant:
$$\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}\mathbf{u}(t)$$
$$\mathbf{y}(t) = \mathbf{C}\mathbf{x}(t) + \mathbf{D}\mathbf{u}(t)$$

Continuous time-variant:
$$\dot{\mathbf{x}}(t) = \mathbf{A}(t)\mathbf{x}(t) + \mathbf{B}(t)\mathbf{u}(t)$$
$$\mathbf{y}(t) = \mathbf{C}(t)\mathbf{x}(t) + \mathbf{D}(t)\mathbf{u}(t)$$

Explicit discrete time-invariant:
$$\mathbf{x}(k+1) = \mathbf{A}\mathbf{x}(k) + \mathbf{B}\mathbf{u}(k)$$
$$\mathbf{y}(k) = \mathbf{C}\mathbf{x}(k) + \mathbf{D}\mathbf{u}(k)$$

Explicit discrete time-variant:
$$\mathbf{x}(k+1) = \mathbf{A}(k)\mathbf{x}(k) + \mathbf{B}(k)\mathbf{u}(k)$$
$$\mathbf{y}(k) = \mathbf{C}(k)\mathbf{x}(k) + \mathbf{D}(k)\mathbf{u}(k)$$

Laplace domain of continuous time-invariant:
$$s\mathbf{X}(s) - \mathbf{x}(0) = \mathbf{A}\mathbf{X}(s) + \mathbf{B}\mathbf{U}(s)$$
$$\mathbf{Y}(s) = \mathbf{C}\mathbf{X}(s) + \mathbf{D}\mathbf{U}(s)$$

Z-domain of discrete time-invariant:
$$z\mathbf{X}(z) - z\mathbf{x}(0) = \mathbf{A}\mathbf{X}(z) + \mathbf{B}\mathbf{U}(z)$$
$$\mathbf{Y}(z) = \mathbf{C}\mathbf{X}(z) + \mathbf{D}\mathbf{U}(z)$$
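
As a minimal sketch of the explicit discrete time-invariant form above, the following Python/NumPy snippet iterates the state-update and output equations for a two-state, single-input, single-output model; the matrix values are illustrative assumptions rather than any particular system.

```python
import numpy as np

# Illustrative discrete-time LTI model (arbitrary stable example values).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

x = np.zeros((2, 1))      # initial state x(0) = 0
u = np.ones((1, 1))       # constant unit input u(k) = 1

for k in range(5):
    y = C @ x + D @ u     # output equation  y(k) = C x(k) + D u(k)
    x = A @ x + B @ u     # state update     x(k+1) = A x(k) + B u(k)
    print(k, y[0, 0])
```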

Example: continuous-time LTI case


Stability and natural response characteristics of a continuous-time LTI system (i.e., linear with matrices that are constant with respect to time) can be studied from the eigenvalues of the matrix $\mathbf{A}$. The stability of a time-invariant state-space model can be determined by looking at the system's transfer function in factored form. It will then look something like this:

$$\mathbf{G}(s) = k\,\frac{(s - z_1)(s - z_2)(s - z_3)}{(s - p_1)(s - p_2)(s - p_3)(s - p_4)}.$$

The denominator of the transfer function is equal to the characteristic polynomial found by taking the determinant of $s\mathbf{I} - \mathbf{A}$,

$$\lambda(s) = \left|s\mathbf{I} - \mathbf{A}\right|.$$

The roots of this polynomial (the eigenvalues) are the system transfer function's poles (i.e., the singularities where the transfer function's magnitude is unbounded). These poles can be used to analyze whether the system is asymptotically stable or marginally stable. An alternative approach to determining stability, which does not involve calculating eigenvalues, is to analyze the system's Lyapunov stability.

The zeros found in the numerator of $\mathbf{G}(s)$ can similarly be used to determine whether the system is minimum phase.

The system may still be input–output stable (see BIBO stable) even though it is not internally stable. This may be the case if unstable poles are canceled out by zeros (i.e., if those singularities in the transfer function are removable).
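
A minimal NumPy sketch of this eigenvalue test, using an illustrative $2 \times 2$ matrix $\mathbf{A}$ (the values are assumptions chosen for the example): a continuous-time LTI model is asymptotically stable when every eigenvalue of $\mathbf{A}$ has a strictly negative real part.

```python
import numpy as np

# Illustrative state matrix; its characteristic polynomial is s^2 + 3s + 2,
# so the eigenvalues (poles) are -1 and -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

poles = np.linalg.eigvals(A)
print(poles)                         # eigenvalues of A = poles of G(s)
print(bool(np.all(poles.real < 0)))  # True -> asymptotically stable
```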

Controllability

Main article: Controllability

The state controllability condition implies that it is possible, by admissible inputs, to steer the states from any initial value to any final value within some finite time window. A continuous time-invariant linear state-space model is controllable if and only if

$$\operatorname{rank}\begin{bmatrix}\mathbf{B} & \mathbf{A}\mathbf{B} & \mathbf{A}^{2}\mathbf{B} & \cdots & \mathbf{A}^{n-1}\mathbf{B}\end{bmatrix} = n,$$

where rank is the number of linearly independent rows in a matrix, and where $n$ is the number of state variables.
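
The rank test can be carried out numerically; the sketch below builds the controllability matrix $[\mathbf{B}\ \mathbf{A}\mathbf{B}\ \cdots\ \mathbf{A}^{n-1}\mathbf{B}]$ block by block and checks its rank (the $\mathbf{A}$ and $\mathbf{B}$ values are illustrative assumptions).

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack the blocks [B, AB, A^2 B, ..., A^(n-1) B] side by side."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

ctrb = controllability_matrix(A, B)
print(np.linalg.matrix_rank(ctrb) == A.shape[0])  # True -> controllable
```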

Observability

Main article: Observability

Observability is a measure of how well the internal states of a system can be inferred from knowledge of its external outputs. The observability and controllability of a system are mathematical duals (i.e., just as controllability ensures that an input is available that brings any initial state to any desired final state, observability ensures that knowing an output trajectory provides enough information to determine the initial state of the system).

A continuous time-invariant linear state-space model is observable if and only if

$$\operatorname{rank}\begin{bmatrix}\mathbf{C}\\ \mathbf{C}\mathbf{A}\\ \vdots\\ \mathbf{C}\mathbf{A}^{n-1}\end{bmatrix} = n.$$
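
The dual rank test can be sketched the same way, stacking $\mathbf{C}, \mathbf{C}\mathbf{A}, \ldots, \mathbf{C}\mathbf{A}^{n-1}$ row block by row block (again with illustrative matrices).

```python
import numpy as np

def observability_matrix(A, C):
    """Stack the blocks [C; CA; CA^2; ...; CA^(n-1)] on top of each other."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

obsv = observability_matrix(A, C)
print(np.linalg.matrix_rank(obsv) == A.shape[0])  # True -> observable
```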

Transfer function


The "transfer function" of a continuous time-invariant linear state-space model can be derived in the following way:

First, taking the Laplace transform of

$$\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}\mathbf{u}(t)$$

yields

$$s\mathbf{X}(s) - \mathbf{x}(0) = \mathbf{A}\mathbf{X}(s) + \mathbf{B}\mathbf{U}(s).$$

Next, we solve for $\mathbf{X}(s)$, giving

$$(s\mathbf{I} - \mathbf{A})\mathbf{X}(s) = \mathbf{x}(0) + \mathbf{B}\mathbf{U}(s)$$

and thus

$$\mathbf{X}(s) = (s\mathbf{I} - \mathbf{A})^{-1}\mathbf{x}(0) + (s\mathbf{I} - \mathbf{A})^{-1}\mathbf{B}\mathbf{U}(s).$$

Substituting for $\mathbf{X}(s)$ in the output equation

$$\mathbf{Y}(s) = \mathbf{C}\mathbf{X}(s) + \mathbf{D}\mathbf{U}(s)$$

gives

$$\mathbf{Y}(s) = \mathbf{C}\left((s\mathbf{I} - \mathbf{A})^{-1}\mathbf{x}(0) + (s\mathbf{I} - \mathbf{A})^{-1}\mathbf{B}\mathbf{U}(s)\right) + \mathbf{D}\mathbf{U}(s).$$

Assuming zero initial conditions $\mathbf{x}(0) = \mathbf{0}$ and a single-input single-output (SISO) system, the transfer function is defined as the ratio of output to input, $G(s) = Y(s)/U(s)$. For a multiple-input multiple-output (MIMO) system, however, this ratio is not defined. Therefore, assuming zero initial conditions, the transfer function matrix is derived from

$$\mathbf{Y}(s) = \mathbf{G}(s)\mathbf{U}(s)$$

using the method of equating coefficients, which yields

$$\mathbf{G}(s) = \mathbf{C}(s\mathbf{I} - \mathbf{A})^{-1}\mathbf{B} + \mathbf{D}.$$

Consequently, $\mathbf{G}(s)$ is a matrix of dimension $q \times p$ that contains a transfer function for each input–output combination. Due to the simplicity of this matrix notation, the state-space representation is commonly used for multiple-input, multiple-output systems. The Rosenbrock system matrix provides a bridge between the state-space representation and its transfer function.
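
As a minimal sketch, the formula $\mathbf{G}(s) = \mathbf{C}(s\mathbf{I} - \mathbf{A})^{-1}\mathbf{B} + \mathbf{D}$ can be evaluated directly at any complex frequency; the matrices below are illustrative assumptions for a SISO model.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def G(s):
    """Evaluate G(s) = C (sI - A)^-1 B + D at a complex frequency s."""
    n = A.shape[0]
    return C @ np.linalg.inv(s * np.eye(n) - A) @ B + D

print(G(1j))   # frequency response at omega = 1 rad/s
```

For LTI models, scipy.signal.ss2tf performs the same conversion once and for all, returning the numerator and denominator coefficients of the transfer function.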

Canonical realizations

Main article: Realization (systems)

Any given transfer function which is strictly proper can easily be converted into state space by the following approach (this example is for a 4-dimensional, single-input, single-output system):

Given a transfer function, expand it to reveal all coefficients in both the numerator and denominator. This should result in the following form:

$$\mathbf{G}(s) = \frac{n_1 s^3 + n_2 s^2 + n_3 s + n_4}{s^4 + d_1 s^3 + d_2 s^2 + d_3 s + d_4}.$$

The coefficients can now be inserted directly into the state-space model by the following approach:

$$\dot{\mathbf{x}}(t) = \begin{bmatrix}0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -d_4 & -d_3 & -d_2 & -d_1\end{bmatrix}\mathbf{x}(t) + \begin{bmatrix}0\\ 0\\ 0\\ 1\end{bmatrix}\mathbf{u}(t)$$

$$\mathbf{y}(t) = \begin{bmatrix}n_4 & n_3 & n_2 & n_1\end{bmatrix}\mathbf{x}(t).$$

This state-space realization is called controllable canonical form because the resulting model is guaranteed to be controllable (i.e., because the control enters a chain of integrators, it has the ability to move every state).

The transfer function coefficients can also be used to construct another type of canonical form:

$$\dot{\mathbf{x}}(t) = \begin{bmatrix}0 & 0 & 0 & -d_4\\ 1 & 0 & 0 & -d_3\\ 0 & 1 & 0 & -d_2\\ 0 & 0 & 1 & -d_1\end{bmatrix}\mathbf{x}(t) + \begin{bmatrix}n_4\\ n_3\\ n_2\\ n_1\end{bmatrix}\mathbf{u}(t)$$

$$\mathbf{y}(t) = \begin{bmatrix}0 & 0 & 0 & 1\end{bmatrix}\mathbf{x}(t).$$

This state-space realization is called observable canonical form because the resulting model is guaranteed to be observable (i.e., because the output exits from a chain of integrators, every state has an effect on the output).
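
The two canonical forms can be written down mechanically from the coefficients; the sketch below does so for illustrative values of $n_i$ and $d_i$ and uses the fact that the observable form is the dual (transpose) of the controllable form.

```python
import numpy as np

# Illustrative coefficients of a strictly proper transfer function
#   G(s) = (n1 s^3 + n2 s^2 + n3 s + n4) / (s^4 + d1 s^3 + d2 s^2 + d3 s + d4).
n1, n2, n3, n4 = 1.0, 2.0, 3.0, 4.0
d1, d2, d3, d4 = 10.0, 35.0, 50.0, 24.0   # denominator (s+1)(s+2)(s+3)(s+4)

# Controllable canonical form, matching the matrices above.
A_c = np.array([[0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0],
                [-d4, -d3, -d2, -d1]])
B_c = np.array([[0.0], [0.0], [0.0], [1.0]])
C_c = np.array([[n4, n3, n2, n1]])

# Observable canonical form is the dual: A_o = A_c^T, B_o = C_c^T, C_o = B_c^T.
A_o, B_o, C_o = A_c.T, C_c.T, B_c.T

# The eigenvalues of A_c are the poles of G(s): here -1, -2, -3 and -4.
print(np.sort(np.linalg.eigvals(A_c)))
```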

Proper transfer functions


Transfer functions which are only proper (and not strictly proper) can also be realised quite easily. The trick here is to separate the transfer function into two parts: a strictly proper part and a constant,

$$\mathbf{G}(s) = \mathbf{G}_{\mathrm{SP}}(s) + \mathbf{G}(\infty).$$

The strictly proper transfer function can then be transformed into a canonical state-space realization using the techniques shown above. The state-space realization of the constant is trivially $\mathbf{y}(t) = \mathbf{G}(\infty)\mathbf{u}(t)$. Together this gives a state-space realization with matrices $A$, $B$ and $C$ determined by the strictly proper part, and matrix $D$ determined by the constant.

For example,

$$\mathbf{G}(s) = \frac{s^2 + 3s + 3}{s^2 + 2s + 1} = \frac{s + 2}{s^2 + 2s + 1} + 1,$$

which yields the following controllable realization:

$$\dot{\mathbf{x}}(t) = \begin{bmatrix}-2 & -1\\ 1 & 0\end{bmatrix}\mathbf{x}(t) + \begin{bmatrix}1\\ 0\end{bmatrix}\mathbf{u}(t)$$

$$\mathbf{y}(t) = \begin{bmatrix}1 & 2\end{bmatrix}\mathbf{x}(t) + \begin{bmatrix}1\end{bmatrix}\mathbf{u}(t)$$

Notice how the output also depends directly on the input. This is due to the $\mathbf{G}(\infty)$ constant in the transfer function.
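
The split into a constant and a strictly proper remainder is just polynomial division of the numerator by the denominator; a minimal sketch for the example above, using NumPy's polynomial division:

```python
import numpy as np

# G(s) = (s^2 + 3s + 3) / (s^2 + 2s + 1), as in the example above.
num = np.array([1.0, 3.0, 3.0])
den = np.array([1.0, 2.0, 1.0])

# Polynomial long division: G(s) = quotient + remainder/den.
quotient, remainder = np.polydiv(num, den)
print(quotient)    # [1.]     -> G(inf) = 1, i.e. the D matrix
print(remainder)   # [1. 2.]  -> strictly proper part (s + 2)/(s^2 + 2s + 1)
```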

Feedback

Typical state-space model with feedback

A common method for feedback is to multiply the output by a matrix $K$ and set this as the input to the system: $\mathbf{u}(t) = K\mathbf{y}(t)$. Since the values of $K$ are unrestricted, they can easily be negated for negative feedback. The presence of a negative sign (the common notation) is merely notational, and its absence has no impact on the end results.

$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t)$$
$$\mathbf{y}(t) = C\mathbf{x}(t) + D\mathbf{u}(t)$$

becomes

$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + BK\mathbf{y}(t)$$
$$\mathbf{y}(t) = C\mathbf{x}(t) + DK\mathbf{y}(t)$$

solving the output equation for $\mathbf{y}(t)$ and substituting into the state equation results in

$$\dot{\mathbf{x}}(t) = \left(A + BK\left(I - DK\right)^{-1}C\right)\mathbf{x}(t)$$
$$\mathbf{y}(t) = \left(I - DK\right)^{-1}C\mathbf{x}(t)$$

The advantage of this is that the eigenvalues of $A$ can be controlled by setting $K$ appropriately through eigendecomposition of $A + BK\left(I - DK\right)^{-1}C$. This assumes that the closed-loop system is controllable or that the unstable eigenvalues of $A$ can be made stable through appropriate choice of $K$.
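
A minimal numerical sketch of this closed-loop expression, with illustrative matrices and an illustrative gain $K$ (the open-loop model below has one unstable eigenvalue, which the feedback moves into the left half-plane):

```python
import numpy as np

# Illustrative open-loop model with an unstable eigenvalue at +1.
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
K = np.array([[-10.0]])            # output-feedback gain, u(t) = K y(t)

I = np.eye(D.shape[0])
A_cl = A + B @ K @ np.linalg.inv(I - D @ K) @ C

print(np.linalg.eigvals(A))        # open-loop eigenvalues: 1 and -2
print(np.linalg.eigvals(A_cl))     # closed-loop eigenvalues, all in the left half-plane
```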

Example


For a strictly proper system, $D$ equals zero. Another fairly common situation is when all states are outputs, i.e. $\mathbf{y} = \mathbf{x}$, which yields $C = I$, the identity matrix. This would then result in the simpler equations

$$\dot{\mathbf{x}}(t) = \left(A + BK\right)\mathbf{x}(t)$$
$$\mathbf{y}(t) = \mathbf{x}(t)$$

This reduces the necessary eigendecomposition to just $A + BK$.
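
In this full-state-feedback case, the gain $K$ can be chosen to place the eigenvalues of $A + BK$ at desired locations. The sketch below uses SciPy's pole-placement routine, which follows the $u = -Kx$ sign convention, so its gain is negated to match the $u = +Ky$ convention used here; the matrices and pole locations are illustrative.

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])

# place_poles returns a gain K with eig(A - B K) at the requested locations;
# with u = +K y (= +K x here), the corresponding gain is -K.
placed = place_poles(A, B, np.array([-2.0, -3.0]))
K = -placed.gain_matrix

print(np.linalg.eigvals(A + B @ K))   # approximately [-2, -3]
```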

Feedback with setpoint (reference) input

Output feedback with set point

In addition to feedback, an input, $\mathbf{r}(t)$, can be added such that $\mathbf{u}(t) = -K\mathbf{y}(t) + \mathbf{r}(t)$.

$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t)$$
$$\mathbf{y}(t) = C\mathbf{x}(t) + D\mathbf{u}(t)$$

becomes

$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) - BK\mathbf{y}(t) + B\mathbf{r}(t)$$
$$\mathbf{y}(t) = C\mathbf{x}(t) - DK\mathbf{y}(t) + D\mathbf{r}(t)$$

solving the output equation for $\mathbf{y}(t)$ and substituting into the state equation results in

$$\dot{\mathbf{x}}(t) = \left(A - BK\left(I + DK\right)^{-1}C\right)\mathbf{x}(t) + B\left(I - K\left(I + DK\right)^{-1}D\right)\mathbf{r}(t)$$
$$\mathbf{y}(t) = \left(I + DK\right)^{-1}C\mathbf{x}(t) + \left(I + DK\right)^{-1}D\mathbf{r}(t)$$

One fairly common simplification of this system is removing $D$, which reduces the equations to

$$\dot{\mathbf{x}}(t) = \left(A - BKC\right)\mathbf{x}(t) + B\mathbf{r}(t)$$
$$\mathbf{y}(t) = C\mathbf{x}(t)$$
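
A minimal sketch of this simplified closed loop, with illustrative plant matrices, gain and constant reference: the closed-loop eigenvalues come from $A - BKC$, and the steady state for a constant $\mathbf{r}$ follows from setting $\dot{\mathbf{x}} = 0$.

```python
import numpy as np

# Illustrative plant, output-feedback gain K and constant reference r.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[5.0]])
r = np.array([[1.0]])

A_cl = A - B @ K @ C                   # closed-loop dynamics with D removed

print(np.linalg.eigvals(A_cl))         # closed-loop eigenvalues
x_ss = -np.linalg.inv(A_cl) @ B @ r    # steady state: 0 = A_cl x_ss + B r
print(C @ x_ss)                        # steady-state output for the constant r
```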

Moving object example


A classical linear system is that of one-dimensional movement of an object (e.g., a cart). Newton's laws of motion for an object moving horizontally on a plane and attached to a wall with a spring give

$$m\ddot{y}(t) = u(t) - b\dot{y}(t) - ky(t)$$

where:

  • $y(t)$ is the position of the object, $\dot{y}(t)$ its velocity, and $\ddot{y}(t)$ its acceleration;
  • $u(t)$ is the applied force;
  • $b$ is the viscous friction coefficient;
  • $k$ is the spring constant;
  • $m$ is the mass of the object.

The state equation would then become

$$\begin{bmatrix}\dot{x}_1(t)\\ \dot{x}_2(t)\end{bmatrix} = \begin{bmatrix}0 & 1\\ -\frac{k}{m} & -\frac{b}{m}\end{bmatrix}\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix} + \begin{bmatrix}0\\ \frac{1}{m}\end{bmatrix}u(t)$$

$$y(t) = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix}$$

where:

  • $x_1(t)$ represents the position of the object;
  • $x_2(t) = \dot{x}_1(t)$ is its velocity;
  • $y(t) = x_1(t)$ is the measured output, the position.

The controllability test is then

$$\begin{bmatrix}B & AB\end{bmatrix} = \begin{bmatrix}\begin{bmatrix}0\\ \frac{1}{m}\end{bmatrix} & \begin{bmatrix}0 & 1\\ -\frac{k}{m} & -\frac{b}{m}\end{bmatrix}\begin{bmatrix}0\\ \frac{1}{m}\end{bmatrix}\end{bmatrix} = \begin{bmatrix}0 & \frac{1}{m}\\ \frac{1}{m} & -\frac{b}{m^{2}}\end{bmatrix}$$

which has full rank for all $b$ and $m$. This means that if the initial state of the system is known ($y(t)$ and $\dot{y}(t)$), and if $b$ and $m$ are constants, then there is a force $u$ that can move the cart into any other position in the system.

The observability test is then

$$\begin{bmatrix}C\\ CA\end{bmatrix} = \begin{bmatrix}\begin{bmatrix}1 & 0\end{bmatrix}\\ \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}0 & 1\\ -\frac{k}{m} & -\frac{b}{m}\end{bmatrix}\end{bmatrix} = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}$$

which also has full rank. Therefore, this system is both controllable and observable.
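
The two rank tests above can be reproduced numerically for any particular choice of parameters; the values of $m$, $k$ and $b$ below are illustrative assumptions.

```python
import numpy as np

m, k, b = 1.0, 2.0, 0.5                 # illustrative mass, spring constant, friction

A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])

ctrb = np.hstack([B, A @ B])            # [B  AB]
obsv = np.vstack([C, C @ A])            # [C; CA]

print(np.linalg.matrix_rank(ctrb))      # 2 -> controllable
print(np.linalg.matrix_rank(obsv))      # 2 -> observable
```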

Nonlinear systems


The more general form of a state-space model can be written as two functions.

$$\dot{\mathbf{x}}(t) = \mathbf{f}(t, x(t), u(t))$$
$$\mathbf{y}(t) = \mathbf{h}(t, x(t), u(t))$$

The first is the state equation and the latter is the output equation. If the function $f(\cdot,\cdot,\cdot)$ is a linear combination of states and inputs, then the equations can be written in matrix notation as above. The $u(t)$ argument to the functions can be dropped if the system is unforced (i.e., it has no inputs).

Pendulum example


A classic nonlinear system is a simple unforced pendulum:

$$m\ell^{2}\ddot{\theta}(t) = -m\ell g\sin\theta(t) - k\ell\dot{\theta}(t)$$

where:

  • $\theta(t)$ is the angle of the pendulum measured from its rest position;
  • $m$ is the mass of the pendulum (the rod's mass is taken to be zero);
  • $\ell$ is the length of the pendulum rod;
  • $g$ is the gravitational acceleration;
  • $k$ is the friction coefficient at the pivot.

The state equations are then

$$\dot{x}_1(t) = x_2(t)$$
$$\dot{x}_2(t) = -\frac{g}{\ell}\sin x_1(t) - \frac{k}{m\ell}x_2(t)$$

where:

  • $x_1(t) = \theta(t)$ is the angle of the pendulum;
  • $x_2(t) = \dot{\theta}(t)$ is its angular velocity.

Instead, the state equation can be written in the general form

$$\dot{\mathbf{x}}(t) = \begin{bmatrix}\dot{x}_1(t)\\ \dot{x}_2(t)\end{bmatrix} = \mathbf{f}(t, x(t)) = \begin{bmatrix}x_2(t)\\ -\frac{g}{\ell}\sin x_1(t) - \frac{k}{m\ell}x_2(t)\end{bmatrix}.$$

The equilibrium/stationary points of a system are those where $\dot{x} = 0$, so the equilibrium points of the pendulum are those that satisfy

$$\begin{bmatrix}x_1\\ x_2\end{bmatrix} = \begin{bmatrix}n\pi\\ 0\end{bmatrix}$$

for integers $n$.
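
Since the model is nonlinear, its trajectories are usually obtained by numerical integration rather than in closed form. A minimal sketch using SciPy's general-purpose ODE solver, with illustrative parameter values: the friction term makes the angle decay toward the stable equilibrium $x_1 = 0$.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, ell, m, k = 9.81, 1.0, 1.0, 0.2      # illustrative parameter values

def pendulum(t, x):
    """Nonlinear state equations: x1 = theta, x2 = dtheta/dt."""
    x1, x2 = x
    return [x2, -(g / ell) * np.sin(x1) - (k / (m * ell)) * x2]

# Release from 45 degrees at rest and integrate for 10 seconds.
sol = solve_ivp(pendulum, (0.0, 10.0), [np.pi / 4, 0.0], max_step=0.01)
print(sol.y[0, -1])                     # final angle, close to the equilibrium 0
```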


References

  1. Katalin M. Hangos; R. Lakner & M. Gerzson (2001). Intelligent Control Systems: An Introduction with Examples. Springer. p. 254. ISBN 978-1-4020-0134-5.
  2. Katalin M. Hangos; József Bokor & Gábor Szederkényi (2004). Analysis and Control of Nonlinear Process Systems. Springer. p. 25. ISBN 978-1-85233-600-4.
  3. Stock, J. H.; Watson, M. W. (2016). "Dynamic Factor Models, Factor-Augmented Vector Autoregressions, and Structural Vector Autoregressions in Macroeconomics". Handbook of Macroeconomics, vol. 2. Elsevier. pp. 415–525. doi:10.1016/bs.hesmac.2016.04.002. ISBN 978-0-444-59487-7.
  4. Durbin, James; Koopman, Siem Jan (2012). Time Series Analysis by State Space Methods. Oxford University Press. ISBN 978-0-19-964117-8. OCLC 794591362.
  5. Roesser, R. (1975). "A discrete state-space model for linear image processing". IEEE Transactions on Automatic Control. 20 (1): 1–10. doi:10.1109/tac.1975.1100844. ISSN 0018-9286.
  6. Smith, Anne C.; Brown, Emery N. (2003). "Estimating a State-Space Model from Point Process Observations". Neural Computation. 15 (5): 965–991. doi:10.1162/089976603765202622. ISSN 0899-7667. PMID 12803953. S2CID 10020032.
  7. Stock, James H.; Watson, Mark W. (1989). "New Indexes of Coincident and Leading Economic Indicators". In NBER Macroeconomics Annual 1989, Volume 4, pp. 351–409. National Bureau of Economic Research, Inc.
  8. Bańbura, Marta; Modugno, Michele (2012). "Maximum Likelihood Estimation of Factor Models on Datasets with Arbitrary Pattern of Missing Data". Journal of Applied Econometrics. 29 (1): 133–160. doi:10.1002/jae.2306. hdl:10419/153623. ISSN 0883-7252. S2CID 14231301.
  9. "State-Space Models with Markov Switching and Gibbs-Sampling". In State-Space Models with Regime Switching. The MIT Press. 2017. pp. 237–274. doi:10.7551/mitpress/6444.003.0013. ISBN 978-0-262-27711-2.
  10. Kalman, R. E. (1960). "A New Approach to Linear Filtering and Prediction Problems". Journal of Basic Engineering. 82 (1): 35–45. doi:10.1115/1.3662552. ISSN 0021-9223. S2CID 259115248.
  11. Harvey, Andrew C. (1990). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge University Press. doi:10.1017/CBO9781107049994.
  12. Nise, Norman S. (2010). Control Systems Engineering (6th ed.). John Wiley & Sons, Inc. ISBN 978-0-470-54756-4.
  13. Brogan, William L. (1974). Modern Control Theory (1st ed.). Quantum Publishers, Inc. p. 172.

Further reading

On the applications of state-space models in econometrics
  • Durbin, J.; Koopman, S. (2001). Time Series Analysis by State Space Methods. Oxford, UK: Oxford University Press. ISBN 978-0-19-852354-3.
