In statistics, a probit model is a type of regression where the dependent variable can take only two values, for example married or not married. The word is a portmanteau, coming from probability + unit.[1] The purpose of the model is to estimate the probability that an observation with particular characteristics will fall into a specific one of the categories; moreover, classifying observations based on their predicted probabilities is a type of binary classification model.
A probit model is a popular specification for a binary response model. As such it treats the same set of problems as does logistic regression, using similar techniques. When viewed in the generalized linear model framework, the probit model employs a probit link function.[2] It is most often estimated using the maximum likelihood procedure,[3] such an estimation being called a probit regression.
Suppose a response variable Y is binary, that is, it can have only two possible outcomes, which we will denote as 1 and 0. For example, Y may represent presence/absence of a certain condition, success/failure of some device, answer yes/no on a survey, etc. We also have a vector of regressors X, which are assumed to influence the outcome Y. Specifically, we assume that the model takes the form

$$\Pr(Y = 1 \mid X) = \Phi(X^\top \beta),$$

where Pr denotes probability and $\Phi$ is the cumulative distribution function (CDF) of the standard normal distribution. The parameters β are typically estimated by maximum likelihood.
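As a concrete illustration, the model can be simulated and fit by maximum likelihood in R with glm() and a probit link. This is a minimal sketch; the coefficient values (-0.2, 0.9) are made up for the example.

```r
set.seed(1)
n <- 500
x <- rnorm(n)
y <- rbinom(n, 1, pnorm(-0.2 + 0.9 * x))    # P(Y = 1 | x) = Phi(-0.2 + 0.9 x)
fit <- glm(y ~ x, family = binomial(link = "probit"))
coef(fit)                                   # ML estimates, near (-0.2, 0.9)
p_hat <- predict(fit, type = "response")    # fitted probabilities Phi(x'beta-hat)
```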
It is possible to motivate the probit model as a latent variable model. Suppose there exists an auxiliary random variable

$$Y^* = X^\top \beta + \varepsilon,$$

where ε ~ N(0, 1). Then Y can be viewed as an indicator for whether this latent variable is positive:

$$Y = \begin{cases} 1 & Y^* > 0 \\ 0 & \text{otherwise.} \end{cases}$$
The use of the standard normal distribution causes no loss of generality compared with the use of a normal distribution with an arbitrary mean and standard deviation, because adding a fixed amount to the mean can be compensated by subtracting the same amount from the intercept, and multiplying the standard deviation by a fixed amount can be compensated by multiplying the weights by the same amount.
To see that the two models are equivalent, note that

$$\Pr(Y = 1 \mid X) = \Pr(Y^* > 0) = \Pr(X^\top \beta + \varepsilon > 0) = \Pr(\varepsilon > -X^\top \beta) = \Pr(\varepsilon < X^\top \beta) = \Phi(X^\top \beta),$$

where the second-to-last equality uses the symmetry of the normal distribution.
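This equivalence can also be checked numerically: simulate the latent-variable model at a fixed regressor value and compare the frequency of y = 1 with Φ(x⊤β). The values of β and x below are purely illustrative.

```r
set.seed(42)
beta  <- c(0.4, -0.8)         # (intercept, slope), chosen for illustration
x     <- 1.5                  # a fixed regressor value
ystar <- beta[1] + beta[2] * x + rnorm(1e6)   # y* = x'beta + eps, eps ~ N(0, 1)
y     <- as.numeric(ystar > 0)                # y = 1[y* > 0]
mean(y)                       # simulated P(y = 1 | x)
pnorm(beta[1] + beta[2] * x)  # Phi(x'beta): agrees up to simulation error
```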
Suppose a data set $\{y_i, x_i\}_{i=1}^n$ contains n independent statistical units corresponding to the model above.
For a single observation, conditional on the vector of inputs of that observation, we have

$$\Pr(y_i = 1 \mid x_i) = \Phi(x_i^\top \beta), \qquad \Pr(y_i = 0 \mid x_i) = 1 - \Phi(x_i^\top \beta),$$

where $x_i$ is a vector of inputs and $\beta$ is a vector of coefficients.
The likelihood of a single observation $(y_i, x_i)$ is then

$$\mathcal{L}(\beta; y_i, x_i) = \Phi(x_i^\top \beta)^{y_i} \left[1 - \Phi(x_i^\top \beta)\right]^{1 - y_i}.$$

In fact, if $y_i = 1$, then $\mathcal{L} = \Phi(x_i^\top \beta)$, and if $y_i = 0$, then $\mathcal{L} = 1 - \Phi(x_i^\top \beta)$.
Since the observations are independent and identically distributed, the likelihood of the entire sample, or the joint likelihood, will be equal to the product of the likelihoods of the single observations:

$$\mathcal{L}(\beta; Y, X) = \prod_{i=1}^n \Phi(x_i^\top \beta)^{y_i} \left[1 - \Phi(x_i^\top \beta)\right]^{1 - y_i}.$$
The joint log-likelihood function is thus

$$\ln \mathcal{L}(\beta; Y, X) = \sum_{i=1}^n \left( y_i \ln \Phi(x_i^\top \beta) + (1 - y_i) \ln\left(1 - \Phi(x_i^\top \beta)\right) \right).$$
The estimator $\hat\beta$ which maximizes this function will be consistent, asymptotically normal and efficient provided that $\operatorname{E}[X X^\top]$ exists and is not singular. It can be shown that this log-likelihood function is globally concave in $\beta$, and therefore standard numerical algorithms for optimization will converge rapidly to the unique maximum.
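The maximization can be sketched directly by coding the joint log-likelihood above and handing it to a numerical optimizer; glm() does essentially this internally. The data below are simulated for illustration.

```r
set.seed(2)
n <- 1000
X <- cbind(1, rnorm(n), rnorm(n))         # design matrix with intercept column
beta_true <- c(0.5, 1.0, -0.7)
y <- rbinom(n, 1, pnorm(X %*% beta_true))

negloglik <- function(beta, y, X) {
  eta <- X %*% beta
  # -sum of y*log Phi(x'b) + (1-y)*log(1 - Phi(x'b)),
  # evaluated on the log scale for numerical stability
  -sum(y * pnorm(eta, log.p = TRUE) + (1 - y) * pnorm(-eta, log.p = TRUE))
}
fit_ml <- optim(rep(0, 3), negloglik, y = y, X = X, method = "BFGS")
fit_ml$par   # close to beta_true; the objective is globally concave in beta
```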
The asymptotic distribution of $\hat\beta$ is given by

$$\sqrt{n}\,(\hat\beta - \beta) \ \xrightarrow{d}\ \mathcal{N}(0, \Omega^{-1}),$$

where

$$\Omega = \operatorname{E}\!\left[\frac{\varphi^2(X^\top \beta)}{\Phi(X^\top \beta)\left(1 - \Phi(X^\top \beta)\right)}\, X X^\top\right], \qquad \hat\Omega = \frac{1}{n} \sum_{i=1}^n \frac{\varphi^2(x_i^\top \hat\beta)}{\Phi(x_i^\top \hat\beta)\left(1 - \Phi(x_i^\top \hat\beta)\right)}\, x_i x_i^\top,$$

and $\varphi = \Phi'$ is the probability density function (PDF) of the standard normal distribution.
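Continuing the simulated example above, standard errors can be sketched from the plug-in estimate $\hat\Omega$, with $\operatorname{Var}(\hat\beta) \approx \hat\Omega^{-1}/n$:

```r
b   <- fit_ml$par
eta <- as.vector(X %*% b)
w   <- dnorm(eta)^2 / (pnorm(eta) * (1 - pnorm(eta)))  # per-observation weight
Omega_hat <- crossprod(X * sqrt(w)) / n                # (1/n) sum_i w_i x_i x_i'
se <- sqrt(diag(solve(Omega_hat)) / n)                 # asymptotic standard errors
```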
Semi-parametric and non-parametric maximum likelihood methods for probit-type and other related models are also available.[4]
This method can be applied only when there are many observations of the response variable having the same value of the vector of regressors (such a situation may be referred to as "many observations per cell"). More specifically, the model can be formulated as follows.
Suppose that among the n observations there are only T distinct values of the regressors, which can be denoted as $x_{(1)}, \ldots, x_{(T)}$. Let $n_t$ be the number of observations with $x_i = x_{(t)}$, and $r_t$ the number of such observations with $y_i = 1$. We assume that there are indeed "many" observations per each "cell": for each $t$, $\lim_{n \to \infty} n_t / n = c_t > 0$.
Denote

$$\hat{p}_t = \frac{r_t}{n_t}, \qquad \hat\sigma_t^2 = \frac{1}{n_t} \cdot \frac{\hat{p}_t (1 - \hat{p}_t)}{\varphi^2\!\left(\Phi^{-1}(\hat{p}_t)\right)}.$$

Then Berkson's minimum chi-square estimator is a generalized least squares estimator in a regression of $\Phi^{-1}(\hat{p}_t)$ on $x_{(t)}$ with weights $\hat\sigma_t^{-2}$:

$$\hat\beta = \left( \sum_{t=1}^T \hat\sigma_t^{-2}\, x_{(t)} x_{(t)}^\top \right)^{-1} \sum_{t=1}^T \hat\sigma_t^{-2}\, x_{(t)}\, \Phi^{-1}(\hat{p}_t).$$
It can be shown that this estimator is consistent (as n → ∞ and T fixed), asymptotically normal and efficient.[citation needed] Its advantage is the presence of a closed-form formula for the estimator. However, it is only meaningful to carry out this analysis when individual observations are not available, only their aggregated counts $r_t$, $n_t$, and $x_{(t)}$ (for example in the analysis of voting behavior).
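The estimator amounts to a weighted least squares fit, as in the following sketch with simulated grouped data (cell values and coefficients are made up for illustration):

```r
set.seed(3)
T_cells <- 20
x_t <- seq(0.5, 4, length.out = T_cells)    # distinct regressor values
n_t <- rep(500, T_cells)                    # many observations per cell
beta_true <- c(-1, 0.8)
r_t <- rbinom(T_cells, n_t, pnorm(beta_true[1] + beta_true[2] * x_t))
p_t <- r_t / n_t                            # cell frequencies p-hat_t
q_t <- qnorm(p_t)                           # Phi^{-1}(p-hat_t)
sigma2_t <- p_t * (1 - p_t) / (n_t * dnorm(q_t)^2)
fit_mcs <- lm(q_t ~ x_t, weights = 1 / sigma2_t)   # GLS regression
coef(fit_mcs)                               # close to beta_true
```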
Gibbs sampling of a probit model is possible with the introduction of normally distributed latent variables z, which are observed as 1 if positive and 0 otherwise. This approach was introduced in Albert and Chib (1993),[5] which demonstrated how Gibbs sampling could be applied to binary and polychotomous response models within a Bayesian framework. Under a multivariate normal prior distribution over the weights, the model can be described as

$$\beta \sim \mathcal{N}(b_0, B_0),$$
$$z_i \mid x_i, \beta \sim \mathcal{N}(x_i^\top \beta, 1),$$
$$y_i = \begin{cases} 1 & z_i > 0 \\ 0 & \text{otherwise.} \end{cases}$$
From this, Albert and Chib (1993)[5] derive the following full conditional distributions in the Gibbs sampling algorithm:

$$B = (B_0^{-1} + X^\top X)^{-1},$$
$$\beta \mid z \sim \mathcal{N}\!\left(B (B_0^{-1} b_0 + X^\top z),\; B\right),$$
$$z_i \mid y_i = 0, x_i, \beta \sim \mathcal{N}(x_i^\top \beta, 1)\,[z_i < 0],$$
$$z_i \mid y_i = 1, x_i, \beta \sim \mathcal{N}(x_i^\top \beta, 1)\,[z_i > 0].$$
The result for $\beta$ is given in the article on Bayesian linear regression, although specified with different notation, while the conditional posterior distributions of the latent variables follow a truncated normal distribution within the given ranges. The notation $[z_i < 0]$ is the Iverson bracket, sometimes written $\mathbb{1}(z_i < 0)$ or similar. Thus, knowledge of the observed outcomes serves to restrict the support of the latent variables.
Sampling of the weights β given the latent vector z from the multinormal distribution is standard. For sampling the latent variables from the truncated normal posterior distributions, one can take advantage of the inverse-CDF method, implemented in the following vectorized R function:
```r
zbinprobit <- function(y, X, beta, n) {
  meanv <- X %*% beta
  u     <- runif(n)                              # uniform(0,1) random variates
  cd    <- pnorm(-meanv)                         # cumulative normal CDF
  pu    <- (u * cd) * (1 - 2 * y) + (u + cd) * y
  cpui  <- qnorm(pu)                             # inverse normal CDF
  z     <- meanv + cpui                          # latent vector
  return(z)
}
```
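A usage sketch follows, alternating the standard multivariate normal draw for β with the truncated-normal draws from zbinprobit(). The data, the weak N(0, 100 I) prior, and the coefficients (-0.5, 1.2) are all illustrative assumptions.

```r
set.seed(4)
n <- 500; K <- 2
X <- cbind(1, rnorm(n))
y <- rbinom(n, 1, pnorm(X %*% c(-0.5, 1.2)))

b0 <- rep(0, K); B0_inv <- diag(K) * 0.01   # prior mean and precision
B  <- solve(B0_inv + crossprod(X))          # posterior covariance of beta | z
L  <- t(chol(B))                            # for multivariate normal draws
n_iter <- 2000
draws  <- matrix(NA, n_iter, K)
beta   <- rep(0, K)
for (s in 1:n_iter) {
  z    <- zbinprobit(y, X, beta, n)              # latent variables | beta
  m    <- B %*% (B0_inv %*% b0 + crossprod(X, z))
  beta <- as.vector(m + L %*% rnorm(K))          # beta | z ~ N(m, B)
  draws[s, ] <- beta
}
colMeans(draws[-(1:500), ])   # posterior means after burn-in, near (-0.5, 1.2)
```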
The suitability of an estimated binary model can be evaluated by counting the number of true observations equal to 1, and the number equal to 0, for which the model assigns a correct predicted classification, treating any estimated probability above 1/2 as a prediction of 1 (and any below 1/2 as a prediction of 0). See Logistic regression § Model for details.
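As a short sketch, reusing p_hat and y from the glm() example earlier in the article:

```r
pred <- as.numeric(p_hat > 0.5)          # classify by the 1/2 threshold
table(observed = y, predicted = pred)    # correct counts on the diagonal
mean(pred == y)                          # overall fraction correctly predicted
```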
Consider the latent variable model formulation of the probit model. When the variance of $\varepsilon$ conditional on $x$ is not constant but depends on $x$, the heteroscedasticity issue arises. For example, suppose $y^* = \beta_0 + \beta_1 x_1 + \varepsilon$ and $\varepsilon \mid x \sim \mathcal{N}(0, x_1^2)$, where $x_1$ is a continuous positive explanatory variable. Under heteroscedasticity, the probit estimator for $\beta$ is usually inconsistent, and most of the tests about the coefficients are invalid. More importantly, the estimator for $P(y = 1 \mid x)$ becomes inconsistent, too. To deal with this problem, the original model needs to be transformed to be homoscedastic. For instance, in the same example, $1[y^* > 0]$ can be rewritten as $1[y^{**} > 0]$, where $y^{**} = \beta_0 / x_1 + \beta_1 + \varepsilon / x_1$ and $\varepsilon / x_1 \mid x \sim \mathcal{N}(0, 1)$. Therefore, $P(y = 1 \mid x) = \Phi(\beta_0 / x_1 + \beta_1)$, and running probit of $y$ on $(1/x_1, 1)$ generates a consistent estimator for this conditional probability.
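Continuing this example, a short simulation (with the illustrative values β₀ = 1 and β₁ = -0.5) shows that a probit of y on 1/x₁ and a constant recovers the coefficients after the transformation:

```r
set.seed(5)
n  <- 2000
x1 <- runif(n, 0.5, 3)                                 # continuous positive regressor
y  <- as.numeric(1.0 - 0.5 * x1 + x1 * rnorm(n) > 0)   # eps | x ~ N(0, x1^2)
fit_het <- glm(y ~ I(1 / x1), family = binomial(link = "probit"))
coef(fit_het)   # intercept ~ beta1 = -0.5, coefficient on 1/x1 ~ beta0 = 1
```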
When the assumption that $\varepsilon$ is normally distributed fails to hold, a functional form misspecification issue arises: if the model is still estimated as a probit model, the estimators of the coefficients $\beta$ are inconsistent. For instance, if $\varepsilon$ follows a logistic distribution in the true model but the model is estimated by probit, the estimates will generally be smaller than the true values. However, the inconsistency of the coefficient estimates is practically irrelevant, because the estimates for the partial effects, $\partial P(y = 1 \mid x) / \partial x_j$, will be close to the estimates given by the true logit model.[6]
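A small simulation can illustrate this point (a sketch with made-up coefficients; the true model here is a logit):

```r
set.seed(6)
n <- 5000
x <- rnorm(n)
y <- rbinom(n, 1, plogis(0.5 + 1.0 * x))   # true model has logistic errors
fit_p <- glm(y ~ x, family = binomial(link = "probit"))
fit_l <- glm(y ~ x, family = binomial(link = "logit"))
coef(fit_p) / coef(fit_l)                  # roughly 0.55-0.6, not 1
# Average partial effects of x are nonetheless close:
mean(dnorm(predict(fit_p)) * coef(fit_p)["x"])
mean(dlogis(predict(fit_l)) * coef(fit_l)["x"])
```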
To avoid the issue of distribution misspecification, one may adopt a general distributional assumption for the error term, such that many different types of distribution can be included in the model. The cost is heavier computation and lower accuracy due to the increased number of parameters.[7] In most cases in practice where the distribution form is misspecified, the estimators for the coefficients are inconsistent, but estimators for the conditional probability and the partial effects are still very good.[citation needed]
One can also take semi-parametric or non-parametric approaches, e.g., via local-likelihood or nonparametric quasi-likelihood methods, which avoid assumptions on a parametric form for the index function and are robust to the choice of the link function (e.g., probit or logit).[4]
The probit model is usually credited to Chester Bliss, who coined the term "probit" in 1934,[8] and to John Gaddum (1933), who systematized earlier work.[9] However, the basic model dates to the Weber–Fechner law by Gustav Fechner, published in Fechner (1860), and was repeatedly rediscovered until the 1930s; see Finney (1971, Chapter 3.6) and Aitchison & Brown (1957, Chapter 1.2).[9]
A fast method for computing maximum likelihood estimates for the probit model was proposed by Ronald Fisher as an appendix to Bliss' work in 1935.[10]