
Socius. 2020 Jan-Dec;6:10.1177/2378023120967171. doi: 10.1177/2378023120967171. Epub 2020 Nov 11.

Diagnosing Gender Bias in Image Recognition Systems

Carsten Schwemmer et al. Socius. 2020 Jan-Dec.

Abstract

Image recognition systems offer the promise to learn from images at scale without requiring expert knowledge. However, past research suggests that machine learning systems often produce biased output. In this article, we evaluate potential gender biases of commercial image recognition platforms using photographs of U.S. members of Congress and a large number of Twitter images posted by these politicians. Our crowdsourced validation shows that commercial image recognition systems can produce labels that are correct and biased at the same time as they selectively report a subset of many possible true labels. We find that images of women received three times more annotations related to physical appearance. Moreover, women in images are recognized at substantially lower rates in comparison with men. We discuss how encoded biases such as these affect the visibility of women, reinforce harmful gender stereotypes, and limit the validity of the insights that can be gathered from such data.

Keywords: bias; computational social science; gender; image recognition; stereotypes.
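
The labeling pipeline the article evaluates can be reproduced with Google Cloud Vision's label-detection endpoint. Below is a minimal sketch using the official google-cloud-vision Python client; the file path, function name, and confidence threshold are illustrative assumptions, not the authors' code.

    # Sketch: request labels for one image from Google Cloud Vision.
    # Assumes the google-cloud-vision package is installed and that
    # GOOGLE_APPLICATION_CREDENTIALS points to a service-account key.
    from google.cloud import vision

    def label_image(path, min_score=0.5):
        """Return (label, confidence) pairs for an image file."""
        client = vision.ImageAnnotatorClient()
        with open(path, "rb") as f:
            image = vision.Image(content=f.read())
        response = client.label_detection(image=image)
        return [(ann.description, ann.score)
                for ann in response.label_annotations
                if ann.score >= min_score]

    # Hypothetical usage:
    # for label, score in label_image("portraits/example.jpg"):
    #     print(f"{label}: {score:.0%}")

Each returned label carries a confidence score of this kind; Figure 2 compares those scores against human agreement.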

Figures

Figure 1.
Example of the information that Google’s Cloud Vision platform can return when asked to label a portrait of former U.S. president Barack H. Obama.
Figure 2.
Relationship between Google Cloud Vision (GCV) confidence and human agreement. Numbers in parentheses denote observations for corresponding confidence score thresholds.
Figure 3.
Accuracy of person detection by Google Cloud Vision (GCV). Percentages shown were determined by comparing the gender of members of Congress depicted in the uniform data (professional photographs) with annotations from the object recognition software.
Figure 4.
Accuracy of person detection by Google Cloud Vision (GCV). Percentages shown were determined by comparing human agreement about the presence of men or women in Twitter images with annotations from the object recognition software.
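
The comparison behind Figures 3 and 4 reduces to a per-gender detection rate: the share of images of men (or of women) for which the system returned a matching person label. A minimal sketch under that reading; the record format and label vocabulary are assumptions, not the article's replication code.

    # Sketch: per-gender detection rate. Each record pairs the known
    # gender of the person depicted with the set of labels returned
    # by the image recognition system.
    from collections import Counter

    def detection_rates(records):
        hits, totals = Counter(), Counter()
        for true_gender, labels in records:
            totals[true_gender] += 1
            if true_gender in labels:  # e.g., the system returned "woman"
                hits[true_gender] += 1
        return {g: hits[g] / totals[g] for g in totals}

    # Hypothetical usage:
    # detection_rates([("man", {"man", "suit"}), ("woman", {"smile"})])
    # -> {"man": 1.0, "woman": 0.0}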
Figure 5.
Two images of U.S. members of Congress with their corresponding labels as assigned by Google Cloud Vision. On the left is Steve Daines, a Republican senator from Montana. On the right is Lucille Roybal-Allard, a Democratic representative from California’s 40th congressional district. Percentages next to labels denote confidence scores of Google Cloud Vision.
Figure 6.
Google Cloud Vision labels applied to control dataset (professional photos). The 25 most gendered labels for men and women were identified with χ2 tests (p ≤ .01). Labels are sorted by absolute frequencies. Bars denote the percentage of images for a certain label by gender.
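
The χ2 procedure named in this and the later label-frequency captions can be sketched as follows; the data layout, variable names, and label vocabulary are assumptions, not the authors' code. For each label, a 2×2 table (images with vs. without the label, by gender) is tested for independence.

    # Sketch: flag gendered labels with chi-square tests of independence.
    # assignments: one (gender, label) pair per image-label assignment;
    # n_men / n_women: total counts of images of men and of women.
    from collections import Counter
    from scipy.stats import chi2_contingency

    def gendered_labels(assignments, n_men, n_women, alpha=0.01, top_n=25):
        counts = Counter(assignments)  # (gender, label) -> frequency
        labels = {label for _, label in counts}
        results = []
        for label in labels:
            men_with = counts[("man", label)]
            women_with = counts[("woman", label)]
            table = [[men_with, n_men - men_with],
                     [women_with, n_women - women_with]]
            chi2, p, _, _ = chi2_contingency(table)
            if p <= alpha:
                results.append((label, men_with + women_with, p))
        # sort by absolute frequency, as in the figures
        results.sort(key=lambda r: r[1], reverse=True)
        return results[:top_n]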
Figure 7.
Predicted label counts for images of men and women. Results are based on the Wikipedia photographs of U.S. members of Congress and negative binomial regressions, controlling for party and ethnicity. Circles describe point estimates, and bars describe 95 percent confidence intervals.
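
A hedged sketch of the regression reported in Figure 7 using statsmodels' negative binomial model; the file and variable names are hypothetical, and the formula is a guess at the specification the caption describes (label count regressed on gender, controlling for party and ethnicity).

    # Sketch: negative binomial regression of per-image label counts.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical per-image data: n_labels (count of predicted labels)
    # plus gender, party, and ethnicity covariates.
    df = pd.read_csv("congress_image_labels.csv")  # hypothetical file

    model = smf.negativebinomial(
        "n_labels ~ gender + party + ethnicity", data=df
    ).fit()
    print(model.params)      # point estimates (circles in Figure 7)
    print(model.conf_int())  # 95 percent confidence intervals (bars)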
Figure 8.
Google Cloud Vision labels applied to found data set (Twitter images). The 25 most gendered labels for men and women were identified using χ2 tests (p ≤ .01). Labels are sorted by absolute frequencies. Bars denote the percentage of images for a certain label by gender.
Figure 9.
Amazon Rekognition labels applied to professional photographs of members of Congress. The 25 most gendered labels for men and women were identified with χ2 tests (p ≤ .01). Bars denote the percentage of images for a certain label by gender.
Figure 10.
Microsoft Azure Computer Vision labels applied to professional photographs of members of Congress. The 25 most gendered labels for men and women were identified with χ2 tests (p ≤ .01). Bars denote the percentage of images for a certain label by gender.
