When classification is performed by a computer, statistical methods are normally used to develop the algorithm.
Often, the individual observations are analyzed into a set of quantifiable properties, known variously as explanatory variables or features. These properties may variously be categorical (e.g. "A", "B", "AB" or "O", for blood type), ordinal (e.g. "large", "medium" or "small"), integer-valued (e.g. the number of occurrences of a particular word in an email) or real-valued (e.g. a measurement of blood pressure). Other classifiers work by comparing observations to previous observations by means of a similarity or distance function.
An algorithm that implements classification, especially in a concrete implementation, is known as a classifier. The term "classifier" sometimes also refers to the mathematical function, implemented by a classification algorithm, that maps input data to a category.
Terminology across fields is quite varied. In statistics, where classification is often done with logistic regression or a similar procedure, the properties of observations are termed explanatory variables (or independent variables, regressors, etc.), and the categories to be predicted are known as outcomes, which are considered to be possible values of the dependent variable. In machine learning, the observations are often known as instances, the explanatory variables are termed features (grouped into a feature vector), and the possible categories to be predicted are classes. Other fields may use different terminology: e.g. in community ecology, the term "classification" normally refers to cluster analysis.
Classification and clustering are examples of the more general problem of pattern recognition, which is the assignment of some sort of output value to a given input value. Other examples are regression, which assigns a real-valued output to each input; sequence labeling, which assigns a class to each member of a sequence of values (for example, part of speech tagging, which assigns a part of speech to each word in an input sentence); parsing, which assigns a parse tree to an input sentence, describing the syntactic structure of the sentence; etc.
A common subclass of classification is probabilistic classification. Algorithms of this nature use statistical inference to find the best class for a given instance. Unlike other algorithms, which simply output a "best" class, probabilistic algorithms output a probability of the instance being a member of each of the possible classes. The best class is normally then selected as the one with the highest probability. Such algorithms have advantages over non-probabilistic classifiers: they can output a confidence value associated with their choice, they can correspondingly abstain when that confidence is too low, and their probability outputs can be incorporated into larger tasks in a way that limits the propagation of errors.
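As a minimal sketch of the idea, assuming per-class scores produced by some unspecified model, the scores can be converted to class probabilities with a softmax and the highest-probability class selected (all class labels and score values here are illustrative, not from any particular method):

```python
import math

def softmax(scores):
    """Map raw scores to probabilities that sum to 1."""
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

classes = ["A", "B", "AB", "O"]          # hypothetical class labels
scores = [2.0, 0.5, -1.0, 1.2]           # hypothetical per-class scores

probs = softmax(scores)
best = classes[probs.index(max(probs))]
print(dict(zip(classes, probs)))         # a probability for every class
print("predicted:", best)                # the class with the highest probability
```

Note that, unlike a hard classifier, the full probability vector is retained, so a caller could abstain when the top probability falls below some threshold.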
Early work on statistical classification was undertaken by Fisher,[1][2] in the context of two-group problems, leading to Fisher's linear discriminant function as the rule for assigning a group to a new observation.[3] This early work assumed that data-values within each of the two groups had a multivariate normal distribution. The extension of this same context to more than two groups has also been considered with a restriction imposed that the classification rule should be linear.[3][4] Later work for the multivariate normal distribution allowed the classifier to be nonlinear:[5] several classification rules can be derived based on different adjustments of the Mahalanobis distance, with a new observation being assigned to the group whose centre has the lowest adjusted distance from the observation.
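The distance-based rule can be sketched as follows, assuming each group is summarized by a mean vector and a shared covariance matrix (the unadjusted case; all numbers are illustrative):

```python
import numpy as np

group_means = {"g1": np.array([0.0, 0.0]),
               "g2": np.array([3.0, 3.0])}
cov = np.array([[1.0, 0.3],
                [0.3, 1.0]])             # assumed common covariance matrix
cov_inv = np.linalg.inv(cov)

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance from x to a group centre."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

x_new = np.array([1.0, 0.5])             # a new observation
distances = {g: mahalanobis(x_new, m, cov_inv) for g, m in group_means.items()}
print(distances)
print("assigned to:", min(distances, key=distances.get))  # nearest group centre
```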
Unlike frequentist procedures, Bayesian classification procedures provide a natural way of taking into account any available information about the relative sizes of the different groups within the overall population.[6] Bayesian procedures tend to be computationally expensive and, in the days before Markov chain Monte Carlo computations were developed, approximations for Bayesian clustering rules were devised.[7]
Some Bayesian procedures involve the calculation of group-membership probabilities: these provide a more informative outcome than a simple attribution of a single group-label to each new observation.
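A minimal sketch of such a calculation, assuming two groups with univariate normal likelihoods, where the priors encode the relative group sizes mentioned above (all parameter values are illustrative):

```python
import math

def normal_pdf(x, mean, sd):
    """Density of a univariate normal distribution."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

priors = {"g1": 0.8, "g2": 0.2}                # assumed relative group sizes
params = {"g1": (0.0, 1.0), "g2": (2.0, 1.0)}  # assumed (mean, sd) per group

x = 1.5                                        # a new observation
unnorm = {g: priors[g] * normal_pdf(x, *params[g]) for g in priors}
total = sum(unnorm.values())
posteriors = {g: p / total for g, p in unnorm.items()}
print(posteriors)   # group-membership probabilities, not just a single label
```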
Classification can be thought of as two separate problems: binary classification and multiclass classification. In binary classification, a better understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes.[8] Since many classification methods have been developed specifically for binary classification, multiclass classification often requires the combined use of multiple binary classifiers.
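One common way to combine binary classifiers is the one-vs-rest scheme, sketched here under the assumption of one binary scorer per class, with the prediction going to the class whose scorer is most confident (the weight values are illustrative):

```python
def one_vs_rest_predict(x, binary_scorers):
    """binary_scorers maps each class to a function scoring 'x is in this class'."""
    scores = {c: f(x) for c, f in binary_scorers.items()}
    return max(scores, key=scores.get)

# Hypothetical linear binary scorers for three classes.
scorers = {
    "red":   lambda x: 2.0 * x[0] - 1.0 * x[1],
    "green": lambda x: -0.5 * x[0] + 1.5 * x[1],
    "blue":  lambda x: 0.2 * x[0] + 0.2 * x[1],
}
print(one_vs_rest_predict((1.0, 2.0), scorers))  # -> "green" for these weights
```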
Most algorithms describe an individual instance whose category is to be predicted using a feature vector of individual, measurable properties of the instance. Each property is termed a feature, also known in statistics as an explanatory variable (or independent variable, although features may or may not be statistically independent). Features may variously be binary (e.g. "on" or "off"); categorical (e.g. "A", "B", "AB" or "O", for blood type); ordinal (e.g. "large", "medium" or "small"); integer-valued (e.g. the number of occurrences of a particular word in an email); or real-valued (e.g. a measurement of blood pressure). If the instance is an image, the feature values might correspond to the pixels of an image; if the instance is a piece of text, the feature values might be occurrence frequencies of different words. Some algorithms work only in terms of discrete data and require that real-valued or integer-valued data be discretized into groups (e.g. less than 5, between 5 and 10, or greater than 10).
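As a sketch of how a raw instance with mixed-type properties might become a feature vector, including discretization of a count into the groups just mentioned (the field names and values are illustrative assumptions):

```python
def discretize(count):
    """Map an integer count into the groups <5, 5-10, >10."""
    if count < 5:
        return "low"
    elif count <= 10:
        return "medium"
    return "high"

raw = {"blood_type": "AB",        # categorical
       "size": "medium",          # ordinal
       "word_count": 7,           # integer-valued
       "blood_pressure": 118.0}   # real-valued

features = (raw["blood_type"], raw["size"],
            discretize(raw["word_count"]), raw["blood_pressure"])
print(features)                   # ('AB', 'medium', 'medium', 118.0)
```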
A large number of algorithms for classification can be phrased in terms of a linear function that assigns a score to each possible category k by combining the feature vector of an instance with a vector of weights, using a dot product. The predicted category is the one with the highest score. This type of score function is known as a linear predictor function and has the following general form:

score(Xi, k) = βk · Xi,

where Xi is the feature vector for instance i, βk is the vector of weights corresponding to category k, and score(Xi, k) is the score associated with assigning instance i to category k. In discrete choice theory, where instances represent people and categories represent choices, the score is considered the utility associated with person i choosing category k.
Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted.
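The scoring rule itself is a direct computation; a minimal sketch, with illustrative feature values and weights (nothing here is a trained model):

```python
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

x_i = [1.0, 0.5, -2.0]                   # feature vector Xi
betas = {                                # weight vector βk per category k
    "cat1": [0.4, -0.1, 0.3],
    "cat2": [-0.2, 0.8, 0.1],
    "cat3": [0.1, 0.1, -0.5],
}
scores = {k: dot(x_i, b) for k, b in betas.items()}   # score(Xi, k) = βk · Xi
print(scores)
print("predicted:", max(scores, key=scores.get))      # highest-scoring category
```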
Examples of such algorithms include logistic regression and multinomial logistic regression, probit regression, the perceptron algorithm, support vector machines, and linear discriminant analysis.
Since no single form of classification is appropriate for all data sets, a large toolkit of classification algorithms has been developed. The most commonly used include artificial neural networks, boosting, decision trees and random forests, kernel estimation methods such as k-nearest neighbors, learning vector quantization, linear classifiers (e.g. logistic regression and the naive Bayes classifier), quadratic classifiers, and support vector machines.[9]
Choices between different possible algorithms are frequently made on the basis of quantitative evaluation of accuracy.
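Such an evaluation typically compares predicted labels against held-out true labels; a minimal sketch computing overall accuracy and a confusion matrix (both label sequences here are illustrative):

```python
from collections import Counter

y_true = ["A", "B", "A", "O", "B", "A"]  # hypothetical held-out labels
y_pred = ["A", "B", "O", "O", "A", "A"]  # hypothetical classifier output

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
confusion = Counter(zip(y_true, y_pred))  # (true, predicted) -> count
print("accuracy:", round(accuracy, 3))
for (t, p), n in sorted(confusion.items()):
    print(f"true={t} pred={p}: {n}")
```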
Classification has many applications. In some of these, it is employed as a data mining procedure, while in others more detailed statistical modeling is undertaken.