In probability theory and statistics, the Bernoulli distribution, named after Swiss mathematician Jacob Bernoulli,[1] is the discrete probability distribution of a random variable which takes the value 1 with probability $p$ and the value 0 with probability $q = 1 - p$. Less formally, it can be thought of as a model for the set of possible outcomes of any single experiment that asks a yes–no question. Such questions lead to outcomes that are Boolean-valued: a single bit whose value is success/yes/true/one with probability $p$ and failure/no/false/zero with probability $q$. It can be used to represent a (possibly biased) coin toss where 1 and 0 would represent "heads" and "tails", respectively, and $p$ would be the probability of the coin landing on heads (or vice versa, where 1 would represent tails and $p$ would be the probability of tails). In particular, unfair coins would have $p \neq 1/2$.
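As an illustration, a Bernoulli variable can be sampled by comparing a uniform random number with $p$. The following is a minimal Python sketch; the helper name `bernoulli_trial` is illustrative and not taken from any particular library:

```python
import random

def bernoulli_trial(p, rng=random):
    """Return 1 ("heads"/success) with probability p, else 0 ("tails"/failure)."""
    return 1 if rng.random() < p else 0

# Simulate 10,000 tosses of an unfair coin with p = 0.7; the empirical
# frequency of 1s should be close to 0.7.
p = 0.7
samples = [bernoulli_trial(p) for _ in range(10_000)]
print(sum(samples) / len(samples))
```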
The Bernoulli distribution is a special case of the binomial distribution where a single trial is conducted (so $n$ would be 1 for such a binomial distribution). It is also a special case of the two-point distribution, for which the possible outcomes need not be 0 and 1.[2]
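If SciPy is available, this relationship can be checked directly, since the Bernoulli probability mass function should agree with the binomial one when $n = 1$ (a quick sketch, not part of the original text):

```python
from scipy.stats import bernoulli, binom

p = 0.3
for k in (0, 1):
    # Bernoulli(p) pmf vs. Binomial(n=1, p) pmf at k = 0 and k = 1.
    print(k, bernoulli.pmf(k, p), binom.pmf(k, 1, p))
```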
The kurtosis goes to infinity for high and low values of $p$, but for $p = 1/2$ the two-point distributions, including the Bernoulli distribution, have a lower excess kurtosis, namely −2, than any other probability distribution.
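Using the central moments listed below, the excess kurtosis has the closed form $\mu_4/\mu_2^2 - 3 = (1 - 6pq)/(pq)$. A short numeric sketch (illustrative only) shows the divergence near the endpoints and the minimum of −2 at $p = 1/2$:

```python
def excess_kurtosis(p):
    q = 1.0 - p
    # Excess kurtosis of Bernoulli(p): (1 - 6*p*q) / (p*q).
    return (1 - 6 * p * q) / (p * q)

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(p, excess_kurtosis(p))
# Values grow without bound as p approaches 0 or 1 and equal -2 at p = 0.5.
```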
The skewness is $\frac{q - p}{\sqrt{pq}} = \frac{1 - 2p}{\sqrt{pq}}$. When we take the standardized Bernoulli distributed random variable $\frac{X - \mathrm{E}[X]}{\sqrt{\operatorname{Var}[X]}}$ we find that this random variable attains $\frac{q}{\sqrt{pq}}$ with probability $p$ and attains $-\frac{p}{\sqrt{pq}}$ with probability $q$. Thus we get

$$\gamma_1 = \mathrm{E}\left[\left(\frac{X - \mathrm{E}[X]}{\sqrt{\operatorname{Var}[X]}}\right)^{3}\right] = p \cdot \left(\frac{q}{\sqrt{pq}}\right)^{3} + q \cdot \left(-\frac{p}{\sqrt{pq}}\right)^{3} = \frac{1}{\sqrt{pq}^{\,3}}\left(pq^{3} - qp^{3}\right) = \frac{pq}{\sqrt{pq}^{\,3}}(q - p) = \frac{q - p}{\sqrt{pq}}.$$
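The derivation above can be verified numerically by averaging the cube of the standardized variable over the two outcomes (a sketch; the function names are illustrative):

```python
def skewness_closed_form(p):
    q = 1.0 - p
    # Closed form derived above: (q - p) / sqrt(p*q).
    return (q - p) / (p * q) ** 0.5

def skewness_by_expectation(p):
    q = 1.0 - p
    sigma = (p * q) ** 0.5
    # E[((X - p) / sigma)^3] over the outcomes X = 1 (prob p) and X = 0 (prob q).
    return p * ((1 - p) / sigma) ** 3 + q * ((0 - p) / sigma) ** 3

p = 0.3
print(skewness_closed_form(p), skewness_by_expectation(p))  # both ≈ 0.8729
```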
The central moment of order $k$ is given by

$$\mu_k = \mathrm{E}\left[(X - p)^k\right] = (1 - p)(-p)^k + p(1 - p)^k = q(-p)^k + pq^k.$$

The first six central moments are

$$\begin{aligned}
\mu_1 &= 0,\\
\mu_2 &= pq,\\
\mu_3 &= pq(1 - 2p),\\
\mu_4 &= pq(1 - 3pq),\\
\mu_5 &= pq(1 - 2p)(1 - 2pq),\\
\mu_6 &= pq(1 - 5pq(1 - pq)).
\end{aligned}$$

The higher central moments can be expressed more compactly in terms of $\mu_2$ and $\mu_3$:

$$\begin{aligned}
\mu_4 &= \mu_2(1 - 3\mu_2),\\
\mu_5 &= \mu_3(1 - 2\mu_2),\\
\mu_6 &= \mu_2(1 - 5\mu_2(1 - \mu_2)).
\end{aligned}$$

The first six cumulants are

$$\begin{aligned}
\kappa_1 &= p,\\
\kappa_2 &= \mu_2,\\
\kappa_3 &= \mu_3,\\
\kappa_4 &= \mu_2(1 - 6\mu_2),\\
\kappa_5 &= \mu_3(1 - 12\mu_2),\\
\kappa_6 &= \mu_2(1 - 30\mu_2(1 - 4\mu_2)).
\end{aligned}$$
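Since the distribution has only two outcomes, each central moment is a two-term sum, which makes the compact expressions easy to check numerically (a sketch, not part of the original text):

```python
def central_moment(p, k):
    q = 1.0 - p
    # mu_k = E[(X - p)^k] over the outcomes X = 1 (prob p) and X = 0 (prob q).
    return p * (1 - p) ** k + q * (0 - p) ** k

p = 0.3
mu2 = central_moment(p, 2)
mu3 = central_moment(p, 3)

# Compare the direct moments with the compact expressions in mu2 and mu3.
print(central_moment(p, 4), mu2 * (1 - 3 * mu2))
print(central_moment(p, 5), mu3 * (1 - 2 * mu2))
print(central_moment(p, 6), mu2 * (1 - 5 * mu2 * (1 - mu2)))
```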
Entropy is a measure of uncertainty or randomness in a probability distribution. For a Bernoulli random variable $X$ with success probability $p$ and failure probability $q = 1 - p$, the entropy is defined as:

$$H(X) = -q \ln q - p \ln p.$$
The entropy is maximized when $p = 1/2$, indicating the highest level of uncertainty when both outcomes are equally likely. The entropy is zero when $p = 0$ or $p = 1$, where one outcome is certain.
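This behaviour is easy to see by evaluating the entropy (in nats) across a few values of $p$; the sketch below is illustrative and uses only the formula above:

```python
import math

def bernoulli_entropy(p):
    """Entropy H(X) = -p*ln(p) - q*ln(q) in nats; zero at the deterministic endpoints."""
    if p in (0.0, 1.0):
        return 0.0
    q = 1.0 - p
    return -p * math.log(p) - q * math.log(q)

for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(p, bernoulli_entropy(p))
# The maximum value ln(2) ≈ 0.693 occurs at p = 0.5; the endpoints give 0.
```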
Fisher information measures the amount of information that an observable random variable $X$ carries about an unknown parameter $p$ upon which the probability of $X$ depends. For the Bernoulli distribution, the Fisher information with respect to the parameter $p$ is given by:

$$I(p) = \frac{1}{pq} = \frac{1}{p(1 - p)}.$$
Proof:
The likelihood function for a Bernoulli random variable $X$ is:

$$L(p; X) = p^X (1 - p)^{1 - X}.$$

This represents the probability of observing $X$ given the parameter $p$.
The log-likelihood function is:

$$\ln L(p; X) = X \ln p + (1 - X) \ln(1 - p).$$
The score function (the first derivative of the log-likelihood with respect to $p$) is:

$$\frac{\partial}{\partial p} \ln L(p; X) = \frac{X}{p} - \frac{1 - X}{1 - p}.$$
The second derivative of the log-likelihood function is:

$$\frac{\partial^2}{\partial p^2} \ln L(p; X) = -\frac{X}{p^2} - \frac{1 - X}{(1 - p)^2}.$$
Fisher information is calculated as the negative expected value of the second derivative of the log-likelihood:

$$I(p) = -\mathrm{E}\left[\frac{\partial^2}{\partial p^2} \ln L(p; X)\right] = \frac{\mathrm{E}[X]}{p^2} + \frac{1 - \mathrm{E}[X]}{(1 - p)^2} = \frac{p}{p^2} + \frac{1 - p}{(1 - p)^2} = \frac{1}{p} + \frac{1}{1 - p} = \frac{1}{p(1 - p)}.$$
It is minimized when $p = \tfrac{1}{2}$ (where $I(p) = 4$), the point of maximum uncertainty in the outcome, and it grows without bound as $p$ approaches 0 or 1, since the variance bound $1/I(p) = p(1 - p)$ on estimating the parameter shrinks toward zero there.
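The closed form can be confirmed by taking the expectation of the second derivative exactly over the two outcomes (a sketch under the formulas above; function names are illustrative):

```python
def fisher_information(p):
    # Closed form derived above: I(p) = 1 / (p * (1 - p)).
    return 1.0 / (p * (1.0 - p))

def fisher_information_by_expectation(p):
    q = 1.0 - p
    # Negative expected second derivative of the log-likelihood,
    # averaging over the outcomes X = 1 (prob p) and X = 0 (prob q).
    d2_at_1 = -1.0 / p ** 2          # second derivative when X = 1
    d2_at_0 = -1.0 / (1.0 - p) ** 2  # second derivative when X = 0
    return -(p * d2_at_1 + q * d2_at_0)

for p in (0.1, 0.5, 0.9):
    print(p, fisher_information(p), fisher_information_by_expectation(p))
# Both columns agree; the value is 4 at p = 0.5 and increases toward the endpoints.
```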
^ Dekking, Frederik; Kraaikamp, Cornelis; Lopuhaä, Hendrik; Meester, Ludolf (9 October 2010). A Modern Introduction to Probability and Statistics (1st ed.). Springer London. pp. 43–48. ISBN 9781849969529.