Recursive Bayesian estimation

From Wikipedia, the free encyclopedia
Process for estimating a probability density function
This article is about the Bayes filter, a general probabilistic approach. For the spam filter with a similar name, see Naive Bayes spam filtering.

In probability theory, statistics, and machine learning, recursive Bayesian estimation, also known as a Bayes filter, is a general probabilistic approach for estimating an unknown probability density function (PDF) recursively over time using incoming measurements and a mathematical process model. The process relies heavily upon mathematical concepts and models that are theorized within a study of prior and posterior probabilities known as Bayesian statistics.

In robotics

A Bayes filter is an algorithm used in computer science for calculating the probabilities of multiple beliefs to allow a robot to infer its position and orientation. Essentially, Bayes filters allow robots to continuously update their most likely position within a coordinate system, based on the most recently acquired sensor data. The algorithm is recursive and consists of two parts: prediction and innovation (the measurement update). If the variables are normally distributed and the transitions are linear, the Bayes filter becomes equal to the Kalman filter.

In a simple example, a robot moving throughout a grid may have several different sensors that provide it with information about its surroundings. The robot may begin with certainty that it is at position (0,0). However, as it moves further and further from its original position, the robot becomes increasingly uncertain about its position; using a Bayes filter, a probability can be assigned to the robot's belief about its current position, and that probability can be continuously updated from additional sensor information.
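The grid example above can be sketched in one dimension. The corridor layout, sensor accuracies, and motion probabilities below are illustrative assumptions, not values from the article; the point is only to show the alternation of measurement updates and motion predictions on a discrete belief:

```python
import numpy as np

# Hypothetical 5-cell circular corridor; cells 0 and 1 have doors.
doors = np.array([1, 1, 0, 0, 0])

def update(belief, sensed_door):
    """Measurement update: multiply the belief by the likelihood, renormalize."""
    if sensed_door:
        likelihood = np.where(doors == 1, 0.9, 0.2)  # assumed p(z=door | x)
    else:
        likelihood = np.where(doors == 1, 0.1, 0.8)  # assumed p(z=no door | x)
    posterior = likelihood * belief
    return posterior / posterior.sum()

def predict(belief):
    """Motion update: move one cell right with p=0.8, stay put with p=0.2."""
    return 0.8 * np.roll(belief, 1) + 0.2 * belief

belief = np.full(5, 0.2)       # uniform prior: position completely unknown
belief = update(belief, True)  # robot senses a door
belief = predict(belief)       # robot moves one cell to the right
belief = update(belief, True)  # senses a door again
```

After this sequence the probability mass concentrates on cell 1, the only cell consistent with seeing a door, moving right, and seeing a door again.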

Model

The measurements z are the manifestations of a hidden Markov model (HMM), which means the true state x is assumed to be an unobserved Markov process. The following figure presents a Bayesian network of an HMM.

[Figure: Bayesian network of a hidden Markov model]

Because of the Markov assumption, the probability of the current true state given the immediately previous one is conditionally independent of the other earlier states.

$p(\mathbf{x}_{k}\mid \mathbf{x}_{k-1},\mathbf{x}_{k-2},\dots ,\mathbf{x}_{0})=p(\mathbf{x}_{k}\mid \mathbf{x}_{k-1})$

Similarly, the measurement at the k-th timestep depends only upon the current state, and so is conditionally independent of all other states given the current state.

$p(\mathbf{z}_{k}\mid \mathbf{x}_{k},\mathbf{x}_{k-1},\dots ,\mathbf{x}_{0})=p(\mathbf{z}_{k}\mid \mathbf{x}_{k})$

Using these assumptions, the probability distribution over all states of the HMM can be written simply as

$p(\mathbf{x}_{0},\dots ,\mathbf{x}_{k},\mathbf{z}_{1},\dots ,\mathbf{z}_{k})=p(\mathbf{x}_{0})\prod _{i=1}^{k}p(\mathbf{z}_{i}\mid \mathbf{x}_{i})\,p(\mathbf{x}_{i}\mid \mathbf{x}_{i-1}).$
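In the discrete case, this factorisation turns the joint probability of one particular trajectory into a plain product of a prior, emission terms, and transition terms. A minimal sketch with two hidden states; all numbers are illustrative assumptions:

```python
# Joint probability p(x_0, ..., x_k, z_1, ..., z_k)
# = p(x_0) * prod_{i=1..k} p(z_i | x_i) * p(x_i | x_{i-1})
p_x0 = [0.5, 0.5]                  # prior p(x_0)
T = [[0.9, 0.2],                   # T[i][j] = p(x_k = i | x_{k-1} = j)
     [0.1, 0.8]]
E = [[0.7, 0.4],                   # E[z][x] = p(z_k = z | x_k = x)
     [0.3, 0.6]]

xs = [0, 0, 1]                     # a candidate state trajectory x_0, x_1, x_2
zs = [1, 0]                        # observed measurements z_1, z_2

p = p_x0[xs[0]]
for i in range(1, len(xs)):
    p *= E[zs[i - 1]][xs[i]] * T[xs[i]][xs[i - 1]]
# p = 0.5 * (0.3 * 0.9) * (0.4 * 0.1)
```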

However, when using the Kalman filter to estimate the state x, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current timestep. (This is achieved by marginalising out the previous states and dividing by the probability of the measurement set.)

This leads to the predict and update steps of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (k-1)-th timestep to the k-th and the probability distribution associated with the previous state, over all possible $\mathbf{x}_{k-1}$.

$p(\mathbf{x}_{k}\mid \mathbf{z}_{1:k-1})=\int p(\mathbf{x}_{k}\mid \mathbf{x}_{k-1})\,p(\mathbf{x}_{k-1}\mid \mathbf{z}_{1:k-1})\,d\mathbf{x}_{k-1}$
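For a finite state space the integral in the predict step becomes a sum over the previous states, i.e. a matrix–vector product. A sketch with an assumed 3-state transition model (the numbers are illustrative; each column of T sums to one):

```python
import numpy as np

# T[i, j] = p(x_k = i | x_{k-1} = j), an assumed transition model
T = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.7, 0.3],
              [0.0, 0.2, 0.7]])

prev_posterior = np.array([0.6, 0.3, 0.1])  # p(x_{k-1} | z_{1:k-1})

# Discrete analogue of the predict integral: sum over all x_{k-1}
predicted = T @ prev_posterior              # p(x_k | z_{1:k-1})
```

Because T's columns are probability distributions, the predicted belief automatically remains normalized.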

The probability distribution of the update step is proportional to the product of the measurement likelihood and the predicted state.

$p(\mathbf{x}_{k}\mid \mathbf{z}_{1:k})={\frac {p(\mathbf{z}_{k}\mid \mathbf{x}_{k})\,p(\mathbf{x}_{k}\mid \mathbf{z}_{1:k-1})}{p(\mathbf{z}_{k}\mid \mathbf{z}_{1:k-1})}}\propto p(\mathbf{z}_{k}\mid \mathbf{x}_{k})\,p(\mathbf{x}_{k}\mid \mathbf{z}_{1:k-1})$

The denominator

$p(\mathbf{z}_{k}\mid \mathbf{z}_{1:k-1})=\int p(\mathbf{z}_{k}\mid \mathbf{x}_{k})\,p(\mathbf{x}_{k}\mid \mathbf{z}_{1:k-1})\,d\mathbf{x}_{k}$

is constant relative to x, so it can always be replaced by a normalizing coefficient α, which can usually be ignored in practice. The numerator can be calculated and then simply normalized, since its integral must be unity.
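In the discrete case the update step, including the normalizing constant α = 1/p(z_k | z_{1:k-1}), reduces to an elementwise product followed by a renormalisation. A sketch with assumed numbers:

```python
import numpy as np

predicted = np.array([0.5, 0.3, 0.2])   # p(x_k | z_{1:k-1}) from the predict step
likelihood = np.array([0.7, 0.1, 0.4])  # p(z_k | x_k) for the observed z_k (illustrative)

numerator = likelihood * predicted      # p(z_k | x_k) * p(x_k | z_{1:k-1})
evidence = numerator.sum()              # the denominator p(z_k | z_{1:k-1}): sum over x_k
posterior = numerator / evidence        # equivalently alpha * numerator, alpha = 1/evidence
```

Dividing by `evidence` and "calculating the numerator then normalizing" are the same operation, which is why the denominator can be ignored until the final normalization.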

Applications

Sequential Bayesian filtering

Sequential Bayesian filtering is the extension of Bayesian estimation to the case in which the observed value changes over time. It is a method to estimate the real value of an observed variable that evolves in time.

There are several variations:

filtering: estimating the current value given past and current observations;
smoothing: estimating past values given past and current observations; and
prediction: estimating a probable future value given past and current observations.

The notion of sequential Bayesian filtering is extensively used in control and robotics.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Recursive_Bayesian_estimation&oldid=1254365058"