Labeled data is a group of samples that have been tagged with one or more labels. Labeling typically takes a set of unlabeled data and augments each piece of it with informative tags. For example, a data label might indicate whether a photo contains a horse or a cow, which words were uttered in an audio recording, what type of action is being performed in a video, what the topic of a news article is, what the overall sentiment of a tweet is, or whether a dot in an X-ray is a tumor.
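As a simple illustration, labeled data can be thought of as samples paired with their tags, while unlabeled data consists of the same samples without the tags. The Python sketch below uses hypothetical file names and labels that are not drawn from any dataset discussed here.

```python
# A minimal illustration of labeled data: each raw sample is paired with a label.
# The file names and label values are hypothetical examples.
labeled_data = [
    {"image_path": "img_001.jpg", "label": "horse"},
    {"image_path": "img_002.jpg", "label": "cow"},
    {"image_path": "img_003.jpg", "label": "horse"},
]

# Unlabeled data is the same set of samples without the informative tag.
unlabeled_data = [{"image_path": d["image_path"]} for d in labeled_data]
```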
Labels can be obtained by having humans make judgments about a given piece of unlabeled data.[1] Labeled data is significantly more expensive to obtain than the raw unlabeled data.
The quality of labeled data directly influences the performance of supervised machine learning models in operation, as these models learn from the provided labels.[2]
In 2006, Fei-Fei Li, the co-director of the Stanford Human-Centered AI Institute, initiated research to improve the artificial intelligence models and algorithms for image recognition by significantly enlarging the training data. The researchers downloaded millions of images from the World Wide Web, and a team of undergraduates started to apply labels for objects to each image. In 2007, Li outsourced the data labeling work to Amazon Mechanical Turk, an online marketplace for digital piece work. The 3.2 million images that were labeled by more than 49,000 workers formed the basis for ImageNet, one of the largest hand-labeled databases for object recognition.[3]
After obtaining a labeled dataset, machine learning models can be applied to the data so that new unlabeled data can be presented to the model and a likely label can be guessed or predicted for that piece of unlabeled data.[4]
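A minimal sketch of this workflow, assuming scikit-learn and synthetic feature vectors and labels (none of which come from the sources cited above):

```python
# Sketch of the supervised workflow: fit a model on labeled data, then predict
# labels for new, unlabeled samples. Feature values and labels are placeholders.
from sklearn.linear_model import LogisticRegression

X_labeled = [[5.1, 3.5], [4.9, 3.0], [6.2, 2.9], [6.7, 3.1]]  # feature vectors
y_labels = ["cow", "cow", "horse", "horse"]                   # human-provided labels

model = LogisticRegression()
model.fit(X_labeled, y_labels)

X_unlabeled = [[6.0, 3.0]]                  # a new, unlabeled sample
predicted_label = model.predict(X_unlabeled)
print(predicted_label)                      # e.g. ['horse']
```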
Algorithmic decision-making is subject to programmer-driven bias as well as data-driven bias. Training data that relies on biased labels will result in prejudices and omissions in a predictive model, even if the machine learning algorithm itself is sound. The labeled data used to train a specific machine learning algorithm needs to be a statistically representative sample so that it does not bias the results.[5] For example, in facial recognition systems, underrepresented groups are often misclassified if the labeled data available for training is not representative of the population. In 2018, a study by Joy Buolamwini and Timnit Gebru demonstrated that two facial analysis datasets used to train facial recognition algorithms, IJB-A and Adience, are composed of 79.6% and 86.2% lighter-skinned people, respectively.[6]
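One way to check representativeness before training is to inspect how often each group appears among the labeled samples. The sketch below uses hypothetical group names and counts chosen only to mirror the imbalance described above; it is not the methodology of the cited study.

```python
# Rough representativeness check: compare how often each group appears in the
# labeled data before training. Group names and counts are hypothetical.
from collections import Counter

annotations = ["lighter"] * 796 + ["darker"] * 204  # hypothetical labels for 1,000 faces

counts = Counter(annotations)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n / total:.1%}")
# A strong skew (e.g. ~80% vs ~20%) signals that the labeled set is not a
# representative sample of the population the model will be applied to.
```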
Human annotators are prone to errors and biases when labeling data, which can lead to inconsistent labels that degrade the quality of the data set and the machine learning model's ability to generalize well.[7]
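Annotator inconsistency is often quantified with an inter-annotator agreement measure such as Cohen's kappa. The sketch below assumes two hypothetical annotators who labeled the same six samples; the labels are illustrative only.

```python
# Quantify annotator inconsistency with Cohen's kappa between two annotators
# who labeled the same samples. The labels below are made up for illustration.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["horse", "cow", "cow", "horse", "horse", "cow"]
annotator_b = ["horse", "cow", "horse", "horse", "cow", "cow"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values well below 1.0 indicate inconsistent labels
```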
Certain fields, such as legal document analysis or medical imaging, require annotators with specialized domain knowledge. Without this expertise, the annotations or labeled data may be inaccurate, negatively impacting the machine learning model's performance in a real-world scenario.[8]