In machine learning, a linear classifier makes a classification decision for each object based on a linear combination of its features. Such classifiers work well for practical problems such as document classification, and more generally for problems with many variables (features), reaching accuracy levels comparable to non-linear classifiers while taking less time to train and use.[1]
If the input feature vector to the classifier is a real vector $\vec{x}$, then the output score is

$$y = f(\vec{w} \cdot \vec{x}) = f\!\left(\sum_j w_j x_j\right),$$

where $\vec{w}$ is a real vector of weights and $f$ is a function that converts the dot product of the two vectors into the desired output. (In other words, $\vec{w}$ is a one-form or linear functional mapping $\vec{x}$ onto $\mathbb{R}$.) The weight vector $\vec{w}$ is learned from a set of labeled training samples. Often $f$ is a threshold function, which maps all values of $\vec{w} \cdot \vec{x}$ above a certain threshold to the first class and all other values to the second class; e.g.,

$$f(\vec{x}) = \begin{cases} 1 & \text{if } \vec{w}^{\mathsf{T}} \vec{x} > \theta, \\ 0 & \text{otherwise.} \end{cases}$$

The superscript $\mathsf{T}$ indicates the transpose and $\theta$ is a scalar threshold. A more complex $f$ might give the probability that an item belongs to a certain class.
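As a concrete illustration, the following is a minimal Python sketch of the thresholded score above; the weights, threshold, and feature values are arbitrary placeholders chosen only for the example.

```python
def predict(w, x, theta):
    """Linear threshold classifier: returns 1 if w . x > theta, else 0."""
    score = sum(wj * xj for wj, xj in zip(w, x))  # dot product w . x
    return 1 if score > theta else 0

# Hypothetical weights, threshold, and inputs, for illustration only.
w = [0.4, -0.2, 0.7]
theta = 0.5
print(predict(w, [1.0, 2.0, 0.5], theta))  # score = 0.4 - 0.4 + 0.35 = 0.35 -> 0
print(predict(w, [2.0, 0.0, 1.0], theta))  # score = 0.8 + 0.0 + 0.70 = 1.50 -> 1
```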
For a two-class classification problem, one can visualize the operation of a linear classifier as splitting a high-dimensional input space with a hyperplane: all points on one side of the hyperplane are classified as "yes", while the others are classified as "no".
A linear classifier is often used in situations where the speed of classification is an issue, since it is often the fastest classifier, especially when $\vec{x}$ is sparse. Also, linear classifiers often work very well when the number of dimensions in $\vec{x}$ is large, as in document classification, where each element in $\vec{x}$ is typically the number of occurrences of a word in a document (see document-term matrix). In such cases, the classifier should be well-regularized.
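To see why sparsity helps, consider a bag-of-words document stored as a mapping from words to counts: the score only involves the document's nonzero entries. A minimal sketch (the words and weight values are hypothetical):

```python
# Sparse document: only nonzero word counts are stored.
doc = {"learning": 3, "linear": 2, "hyperplane": 1}

# Per-word weights learned by some linear classifier (hypothetical values).
weights = {"learning": 0.8, "linear": 0.5, "spam": -2.0}

# The score touches only the document's nonzero features, so classification
# costs O(distinct words in the document) rather than O(vocabulary size).
score = sum(count * weights.get(word, 0.0) for word, count in doc.items())
print(score)  # 3*0.8 + 2*0.5 + 1*0.0 = 3.4
```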
There are two broad classes of methods for determining the parameters of a linear classifier $\vec{w}$. They can be generative and discriminative models.[2][3] Methods of the former model the joint probability distribution, whereas methods of the latter model the conditional density functions $P(\text{class} \mid \vec{x})$. Examples of such algorithms include:

- Linear discriminant analysis (LDA), which assumes Gaussian class-conditional density models
- The naive Bayes classifier with multinomial or multivariate Bernoulli event models
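As an illustration of the generative route, here is a minimal NumPy sketch of two-class LDA under its Gaussian, shared-covariance assumption; the data are random placeholders, and the equal-prior threshold is one common choice.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder two-class data: rows are samples, columns are features.
X0 = rng.normal(loc=0.0, size=(50, 3))   # class 0
X1 = rng.normal(loc=1.0, size=(50, 3))   # class 1

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
# Shared (pooled) covariance estimate, as the LDA model assumes.
S = (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)) / 2

# The Gaussian model reduces to a linear classifier: w = S^{-1} (mu1 - mu0),
# thresholded halfway between the projected class means (equal priors).
w = np.linalg.solve(S, mu1 - mu0)
theta = w @ (mu0 + mu1) / 2

x_new = rng.normal(loc=1.0, size=3)
print("class 1" if w @ x_new > theta else "class 0")
```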
The second set of methods includes discriminative models, which attempt to maximize the quality of the output on a training set. Additional terms in the training cost function can easily perform regularization of the final model. Examples of discriminative training of linear classifiers include:

- Logistic regression, a maximum-likelihood estimate of $\vec{w}$ assuming that the observed training set was generated by a binomial model that depends on the output of the classifier
- The perceptron, an algorithm that attempts to fix all errors encountered in the training set
- The support vector machine (SVM), an algorithm that maximizes the margin between the decision hyperplane and the examples in the training set
Note: Despite its name, LDA does not belong to the class of discriminative models in this taxonomy. However, its name makes sense when we compare LDA to the other main linear dimensionality reduction algorithm: principal components analysis (PCA). LDA is a supervised learning algorithm that utilizes the labels of the data, while PCA is an unsupervised learning algorithm that ignores the labels. To summarize, the name is a historical artifact.[5]
Discriminative training often yields higher accuracy than modeling the class-conditional density functions[citation needed]. However, handling missing data is often easier with such density models[citation needed].
All of the linear classifier algorithms listed above can be converted into non-linear algorithms operating on a different input space, using the kernel trick.
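As one concrete instance, the perceptron can be kernelized by writing its weight vector as a combination of training examples, so that training and prediction only ever need inner products, which a kernel then replaces. A minimal sketch, using an RBF kernel and a toy XOR-style dataset that no linear classifier in the original space can separate:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Radial basis function kernel: k(a, b) = exp(-gamma * ||a - b||^2).
    return np.exp(-gamma * np.sum((a - b) ** 2))

def train_kernel_perceptron(X, y, epochs=20):
    """Kernel perceptron: the weight vector is implicit, represented by
    per-example mistake counts alpha; only kernel evaluations are used."""
    n = len(X)
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            s = sum(alpha[j] * y[j] * rbf_kernel(X[j], X[i]) for j in range(n))
            if y[i] * s <= 0:      # mistake: add this example to the expansion
                alpha[i] += 1
    return alpha

def predict(X, y, alpha, x_new):
    s = sum(alpha[j] * y[j] * rbf_kernel(X[j], x_new) for j in range(len(X)))
    return 1 if s > 0 else -1

# XOR-like toy data, labels in {-1, +1}.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, -1])
alpha = train_kernel_perceptron(X, y)
print([predict(X, y, alpha, x) for x in X])  # [-1, 1, 1, -1]
```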
Discriminative training of linear classifiers usually proceeds in a supervised way, by means of an optimization algorithm that is given a training set with desired outputs and a loss function that measures the discrepancy between the classifier's outputs and the desired outputs. Thus, the learning algorithm solves an optimization problem of the form[1]
$$\underset{\vec{w}}{\arg\min} \;\; R(\vec{w}) + C \sum_{i=1}^{N} L(y_i, \vec{w}^{\mathsf{T}} \vec{x}_i)$$

where

- $\vec{w}$ is a vector of classifier parameters,
- $L(y_i, \vec{w}^{\mathsf{T}} \vec{x}_i)$ is a loss function that measures the discrepancy between the classifier's prediction and the true output $y_i$ for the $i$-th training example,
- $R(\vec{w})$ is a regularization function that prevents the parameters from getting too large (causing overfitting), and
- $C$ is a scalar constant (set by the user of the learning algorithm) that controls the balance between the regularization and the loss function.
Popular loss functions include the hinge loss (for linear SVMs) and the log loss (for linear logistic regression). If the regularization function $R$ is convex, then the above is a convex problem.[1] Many algorithms exist for solving such problems; popular ones for linear classification include (stochastic) gradient descent, L-BFGS, coordinate descent and Newton methods.
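As a minimal sketch of this optimization view, the following trains a linear logistic-regression classifier by stochastic gradient descent on the L2-regularized log loss; the learning rate, regularization strength, and data are placeholder choices, not tuned values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_sgd(X, y, lam=0.01, lr=0.1, epochs=100):
    """Minimize (lam/2)||w||^2 + log loss with plain SGD.
    Labels y are 0/1; the model is P(y=1|x) = sigmoid(w . x)."""
    n, d = X.shape
    w = np.zeros(d)
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for i in rng.permutation(n):
            p = sigmoid(w @ X[i])
            grad = (p - y[i]) * X[i] + lam * w  # per-example gradient
            w -= lr * grad
    return w

# Placeholder, roughly separable two-class data for illustration.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
w = train_logistic_sgd(X, y)
preds = (sigmoid(X @ w) > 0.5).astype(int)
print("training accuracy:", (preds == y).mean())
```

Swapping the gradient of the log loss for a subgradient of the hinge loss in the same loop would yield a linear SVM trained the same way, which is why the loss-plus-regularizer formulation above unifies these methods.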