Classification rule

From Wikipedia, the free encyclopedia

Given a population whose members each belong to one of a number of different sets or classes, a classification rule or classifier is a procedure by which the elements of the population set are each predicted to belong to one of the classes.[1] A perfect classification is one for which every element in the population is assigned to the class it really belongs to. The Bayes classifier is the classifier which assigns classes optimally based on the known attributes (i.e. features or regressors) of the elements to be classified.

A special kind of classification rule is binary classification, for problems in which there are only two classes.

Testing classification rules


Given a data set consisting of pairs x and y, where x denotes an element of the population and y the class it belongs to, a classification rule h(x) is a function that assigns each element x to a predicted class ŷ = h(x). A binary classification is such that the label y can take only one of two values.

The true labels y_i can be known but will not necessarily match their approximations ŷ_i = h(x_i). In a binary classification, the elements that are not correctly classified are named false positives and false negatives.

Some classification rules are static functions; others can be computer programs. A computer classifier can learn, or it can implement static classification rules. For observations outside the training data set, the true labels y_j are unknown, so a prime target of the classification procedure is that the approximation ŷ_j = h(x_j) ≈ y_j holds as well as possible, where the quality of this approximation is judged on the basis of the statistical or probabilistic properties of the overall population from which future observations will be drawn.

Given a classification rule, a classification test is the result of applying the rule to a finite sample of the initial data set.
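The definitions above can be made concrete with a short sketch. The threshold rule and the sample below are hypothetical illustrations, not from the article:

```python
# A classification rule h(x): a hypothetical binary rule that predicts
# class 1 when a single numeric feature x exceeds a threshold.
def h(x, threshold=0.5):
    return 1 if x > threshold else 0

# A classification test: apply the rule to a finite labelled sample
# and compare each prediction h(x_i) with the true label y_i.
sample = [(0.9, 1), (0.2, 0), (0.7, 0), (0.4, 1)]  # (x_i, y_i) pairs

predictions = [h(x) for x, _ in sample]
errors = sum(1 for x, y in sample if h(x) != y)

print(predictions)  # [1, 0, 1, 0]
print(errors)       # 2
```

Here h(0.7) = 1 while the true label is 0 (a false positive), and h(0.4) = 0 while the true label is 1 (a false negative).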

Binary and multiclass classification


Classification can be thought of as two separate problems: binary classification and multiclass classification. In binary classification, a better understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes.[2] Since many classification methods have been developed specifically for binary classification, multiclass classification often requires the combined use of multiple binary classifiers. An important point is that in many practical binary classification problems, the two groups are not symmetric: rather than overall accuracy, the relative proportion of different types of errors is of interest. For example, in medical testing, a false positive (detecting a disease when it is not present) is considered differently from a false negative (not detecting a disease when it is present). In multiclass classifications, the classes may be considered symmetrically (all errors are equivalent), or asymmetrically, which is considerably more complicated.

Binary classification methods include probit regression and logistic regression. Multiclass classification methods include multinomial probit and multinomial logit.

Confusion matrix and classifiers

[Figure: confusion matrix diagram. The left and right halves respectively contain instances that do and do not have the condition; the oval contains instances classified (predicted) as positive. Green and red respectively mark correctly (true) and wrongly (false) classified instances. TP = true positive; TN = true negative; FP = false positive (type I error); FN = false negative (type II error); TPR = true positive rate; FPR = false positive rate; PPV = positive predictive value; NPV = negative predictive value.]

When the classification function is not perfect, false results will appear. In the example in the image to the right, there are 20 dots on the left side of the line (the side classified as true), but only 8 of those 20 are actually true. Similarly, of the 16 dots on the right side of the line (the side classified as false), 4 are actually true and were therefore inaccurately marked as false. Using the dot locations, we can build a confusion matrix to express the values, using four metrics for the four possible outcomes: true positive (TP), false positive (FP), false negative (FN), and true negative (TN).

Example confusion matrix

                  Predicted true    Predicted false
  Actual true            8                 4
  Actual false          12                12
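The tally in the table can be reproduced programmatically; a minimal Python sketch that rebuilds the same 8/4/12/12 counts from (actual, predicted) label pairs:

```python
# Rebuild the example confusion matrix from (actual, predicted) pairs:
# 8 true positives, 4 false negatives, 12 false positives, 12 true negatives.
pairs = ([("T", "T")] * 8 + [("T", "F")] * 4 +
         [("F", "T")] * 12 + [("F", "F")] * 12)

tp = sum(1 for actual, predicted in pairs if actual == "T" and predicted == "T")
fn = sum(1 for actual, predicted in pairs if actual == "T" and predicted == "F")
fp = sum(1 for actual, predicted in pairs if actual == "F" and predicted == "T")
tn = sum(1 for actual, predicted in pairs if actual == "F" and predicted == "F")

print(tp, fn, fp, tn)  # 8 4 12 12
```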

False positives


False positives result when a test falsely (incorrectly) reports a positive result. For example, a medical test for a disease may return a positive result indicating that the patient has the disease even if the patient does not have the disease. False positive is commonly denoted as the top right (condition negative × test outcome positive) unit in a confusion matrix.

False negatives


On the other hand, false negatives result when a test falsely or incorrectly reports a negative result. For example, a medical test for a disease may return a negative result indicating that the patient does not have the disease even though the patient actually has the disease. False negative is commonly denoted as the bottom left (condition positive × test outcome negative) unit in a confusion matrix.

True positives


True positives result when a test correctly reports a positive result. For example, a medical test for a disease may return a positive result indicating that the patient has the disease, and this is confirmed when further testing shows that the patient does have the disease. True positive is commonly denoted as the top left (condition positive × test outcome positive) unit in a confusion matrix.

True negatives


True negatives result when a test correctly reports a negative result. For example, a medical test for a disease may return a negative result indicating that the patient does not have the disease, and this is confirmed when further testing shows that the patient indeed does not have the disease. True negative is commonly denoted as the bottom right (condition negative × test outcome negative) unit in a confusion matrix.

Application with Bayes' theorem


We can also calculate true positives, false positives, true negatives, and false negatives using Bayes' theorem. Bayes' theorem describes the probability of an event based on prior knowledge of conditions that might be related to the event. The four classifications are expressed using the example below.

  • If a tested patient has the disease, the test returns a positive result 99% of the time, or with a probability of 0.99.
  • If a tested patient does not have the disease, the test returns a positive result 5% of the time, or with a probability of 0.05.
  • Suppose that only 0.1% of the population has that disease, so that a randomly selected patient has a 0.001 prior probability of having the disease.
  • Let A represent the condition in which the patient has the disease.
  • Let ¬A represent the condition in which the patient does not have the disease.
  • Let B represent the evidence of a positive test result.
  • Let ¬B represent the evidence of a negative test result.

In terms of true positive, false positive, false negative, and true negative:

  • False positive is the probability that the patient does not have the disease (¬A) given a positive test result (B), written P(¬A|B).
  • False negative is the probability that the patient has the disease (A) given a negative test result (¬B), written P(A|¬B).
  • True positive is the probability that the patient has the disease (A) given a positive test result (B), written P(A|B).
  • True negative is the probability that the patient does not have the disease (¬A) given a negative test result (¬B), written P(¬A|¬B).

False positives


We can useBayes' theorem to determine the probability that a positive result is in fact a false positive. We find that if a disease is rare, then the majority of positive results may be false positives, even if the test is relatively accurate.

Naively, one might think that only 5% of positive test results are false, but that is quite wrong, as we shall see.

Suppose that only 0.1% of the population has that disease, so that a randomly selected patient has a 0.001 prior probability of having the disease.

We can use Bayes' theorem to calculate the probability that a positive test result is a false positive.

P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|¬A)P(¬A)]
       = (0.99 × 0.001) / (0.99 × 0.001 + 0.05 × 0.999)
       ≈ 0.019.

and hence the probability that a positive result is a false positive is about 1 − 0.019 = 0.981, or 98.1%.

Despite the apparent high accuracy of the test, the incidence of the disease is so low that the vast majority of patients who test positive do not have the disease. Nonetheless, the fraction of patients who test positive who do have the disease (0.019) is 19 times the fraction of people who have not yet taken the test who have the disease (0.001). Thus the test is not useless, and re-testing may improve the reliability of the result.
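The calculation above is easy to verify numerically; a minimal sketch with the stated figures (sensitivity 0.99, false positive rate 0.05, prevalence 0.001):

```python
# Bayes' theorem: P(A|B) = P(B|A)P(A) / (P(B|A)P(A) + P(B|¬A)P(¬A))
p_B_given_A = 0.99     # P(B|A): positive result given disease (sensitivity)
p_B_given_notA = 0.05  # P(B|¬A): positive result given no disease
p_A = 0.001            # P(A): prior probability of disease (prevalence)

p_A_given_B = (p_B_given_A * p_A) / (
    p_B_given_A * p_A + p_B_given_notA * (1 - p_A))
p_false_positive = 1 - p_A_given_B  # P(¬A|B)

print(round(p_A_given_B, 3))       # 0.019
print(round(p_false_positive, 3))  # 0.981
```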

In order to reduce the problem of false positives, a test should be very accurate in reporting a negative result when the patient does not have the disease. If the test reported a negative result in patients without the disease with probability 0.999 (that is, a false positive rate of 0.001), then

P(A|B) = (0.99 × 0.001) / (0.99 × 0.001 + 0.001 × 0.999) ≈ 0.5,

so that 1 − 0.5 = 0.5 now is the probability of a false positive.

False negatives


We can use Bayes' theorem to determine the probability that a negative result is in fact a false negative, using the example from above:

P(A|¬B) = P(¬B|A)P(A) / [P(¬B|A)P(A) + P(¬B|¬A)P(¬A)]
        = (0.01 × 0.001) / (0.01 × 0.001 + 0.95 × 0.999)
        ≈ 0.0000105.

The probability that a negative result is a false negative is about 0.0000105 or 0.00105%. When a disease is rare, false negatives will not be a major problem with the test.

But if 60% of the population had the disease, then the probability of a false negative would be greater. With the above test, the probability of a false negative would be

P(A|¬B) = P(¬B|A)P(A) / [P(¬B|A)P(A) + P(¬B|¬A)P(¬A)]
        = (0.01 × 0.6) / (0.01 × 0.6 + 0.95 × 0.4)
        ≈ 0.0155.

The probability that a negative result is a false negative rises to 0.0155 or 1.55%.
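Both false negative calculations use the same formula and vary only the prevalence; a minimal sketch (the function name is ours; 0.01 and 0.95 are the test characteristics from the example):

```python
# P(A|¬B) = P(¬B|A)P(A) / (P(¬B|A)P(A) + P(¬B|¬A)P(¬A))
def p_false_negative(prevalence, p_neg_given_A=0.01, p_neg_given_notA=0.95):
    numerator = p_neg_given_A * prevalence
    denominator = numerator + p_neg_given_notA * (1 - prevalence)
    return numerator / denominator

print(f"{p_false_negative(0.001):.7f}")  # 0.0000105 (rare disease)
print(f"{p_false_negative(0.6):.4f}")    # 0.0155 (60% prevalence)
```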

True positives


We can use Bayes' theorem to determine the probability that a positive result is in fact a true positive, using the example from above:

  • If a tested patient has the disease, the test returns a positive result 99% of the time, or with a probability of 0.99.
  • If a tested patient does not have the disease, the test returns a positive result 5% of the time, or with a probability of 0.05.
  • Suppose that only 0.1% of the population has that disease, so that a randomly selected patient has a 0.001 prior probability of having the disease.

Let A represent the condition in which the patient has the disease, and B represent the evidence of a positive test result. Then, the probability that the patient actually has the disease given a positive test result is:

P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|¬A)P(¬A)]
       = (0.99 × 0.001) / (0.99 × 0.001 + 0.05 × 0.999)
       ≈ 0.019.

The probability that a positive result is a true positive is about 0.019, or 1.9%.

True negatives


We can also use Bayes' theorem to calculate the probability of a true negative. Using the examples above:

  • If a tested patient has the disease, the test returns a positive result 99% of the time, or with a probability of 0.99.
P(¬A|¬B) = P(¬B|¬A)P(¬A) / [P(¬B|¬A)P(¬A) + P(¬B|A)P(A)]
         = (0.95 × 0.999) / (0.95 × 0.999 + 0.01 × 0.001)
         ≈ 0.99999.

The probability that a negative result is a true negative is about 0.99999, or 99.999%. Since the disease is rare, and the test's positive-given-disease and negative-given-no-disease rates are both high, a negative result is almost certainly a true negative.
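Since a negative result is either a false negative or a true negative, P(¬A|¬B) is also the complement of the false negative probability computed earlier; a minimal sketch:

```python
# P(¬A|¬B) = P(¬B|¬A)P(¬A) / (P(¬B|¬A)P(¬A) + P(¬B|A)P(A))
p_neg_given_notA = 0.95  # P(¬B|¬A): negative result given no disease
p_neg_given_A = 0.01     # P(¬B|A): negative result given disease
p_A = 0.001              # prevalence

numerator = p_neg_given_notA * (1 - p_A)
p_true_negative = numerator / (numerator + p_neg_given_A * p_A)

print(f"{p_true_negative:.5f}")  # 0.99999
```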

Measuring a classifier with sensitivity and specificity


In training a classifier, one may wish to measure its performance using the well-accepted metrics of sensitivity and specificity. It may be instructive to compare the classifier to a random classifier that flips a coin based on the prevalence of a disease. Suppose that the probability a person has the disease is p and the probability that they do not is q = 1 − p. Suppose then that we have a random classifier that guesses that the patient has the disease with that same probability p and guesses that they do not with probability q.

The probability of a true positive is the probability that the patient has the disease times the probability that the random classifier guesses this correctly, or p². With similar reasoning, the probability of a false negative is pq. From the definitions above, the sensitivity of this classifier is p²/(p² + pq) = p. With similar reasoning, we can calculate the specificity as q²/(q² + pq) = q.
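The algebra can be checked numerically; a minimal sketch (the function name is ours) that builds the expected confusion cells for the coin-flipping classifier and recovers sensitivity p and specificity q:

```python
# Random classifier that guesses "disease" with probability p (the prevalence).
def random_classifier_metrics(p):
    q = 1 - p
    tp, fn = p * p, p * q  # patient has the disease: right / wrong guess
    tn, fp = q * q, q * p  # patient does not: right / wrong guess
    sensitivity = tp / (tp + fn)  # = p² / (p² + pq) = p
    specificity = tn / (tn + fp)  # = q² / (q² + pq) = q
    return sensitivity, specificity

sens, spec = random_classifier_metrics(0.3)
print(round(sens, 6), round(spec, 6))  # 0.3 0.7
```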

So, while the measures themselves are independent of disease prevalence, the performance of this random classifier depends on disease prevalence. A trained classifier may have performance like this random classifier's, but with a better-weighted coin (higher sensitivity and specificity), so these measures may be influenced by disease prevalence. An alternative measure of performance is the Matthews correlation coefficient, for which any random classifier will get an average score of 0.

The extension of this concept to non-binary classifications yields the confusion matrix.

References

  1. ^ Mathworld article for statistical test
  2. ^ Har-Peled, S., Roth, D., Zimak, D. (2003) "Constraint Classification for Multiclass Classification and Ranking." In: Becker, B., Thrun, S., Obermayer, K. (Eds.) Advances in Neural Information Processing Systems 15: Proceedings of the 2002 Conference, MIT Press. ISBN 0-262-02550-7
Retrieved from "https://en.wikipedia.org/w/index.php?title=Classification_rule&oldid=1275678652"