
Thesis

Gender Shades

Aug. 10, 2017

Buolamwini, J. (2017). Gender Shades: Intersectional Phenotypic and Demographic Evaluation of Face Datasets and Gender Classifiers (Master's thesis, MIT).

Abstract

This thesis (1) characterizes the gender and skin type distribution of IJB-A, a government facial recognition benchmark, and Adience, a gender classification benchmark, (2) outlines an approach for capturing images with more diverse skin types which is then applied to develop the Pilot Parliaments Benchmark (PPB), and (3) uses PPB to assess the classification accuracy of Adience, IBM, Microsoft, and Face++ gender classifiers with respect to gender, skin type, and the intersection of skin type and gender.

The datasets evaluated are overwhelmingly lighter-skinned: 79.6% - 86.2%. IJB-A includes only 24.6% female and 4.4% darker female, and features 59.4% lighter males. By construction, Adience achieves rough gender parity at 52% female but has only 13.8% darker skin. The Parliaments method for creating a more skin-type-balanced benchmark resulted in a dataset that is 44.4% female and 47% darker skin. An evaluation of four gender classifiers revealed a significant gap when comparing gender classification accuracies of females vs. males (9 - 20%) and darker skin vs. lighter skin (10 - 21%). Lighter males were in general the best-classified group, and darker females were the worst-classified group. 37% - 83% of classification errors resulted from the misclassification of darker females. Lighter males contributed the least to overall classification error (.4% - 3%).

For the best performing classifier, darker females were 32 times more likely to be misclassified than lighter males. To increase the accuracy of these systems, more phenotypically diverse datasets need to be developed. Benchmark performance metrics need to be disaggregated not just by gender or skin type but by the intersection of gender and skin type. At a minimum, human-focused computer vision models should report accuracy on four subgroups: darker females, lighter females, darker males, and lighter males.
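The disaggregation the thesis recommends can be sketched in a few lines: rather than a single aggregate accuracy, compute one accuracy per (skin type, gender) subgroup. The field names and record layout below are hypothetical, chosen only to illustrate the bookkeeping; they are not the thesis's actual evaluation code or data.

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Compute gender-classification accuracy per (skin type, gender)
    subgroup: darker females, lighter females, darker males, lighter males.

    Each record is a dict with hypothetical keys:
      'skin'   -- 'darker' or 'lighter'
      'gender' -- 'female' or 'male' (ground-truth subgroup label)
      'true', 'pred' -- the true and predicted gender labels
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        group = (r["skin"], r["gender"])
        total[group] += 1
        correct[group] += int(r["true"] == r["pred"])
    return {g: correct[g] / total[g] for g in total}

# Toy illustration with fabricated labels (not thesis data):
records = [
    {"skin": "darker", "gender": "female", "true": "female", "pred": "male"},
    {"skin": "darker", "gender": "female", "true": "female", "pred": "female"},
    {"skin": "lighter", "gender": "male", "true": "male", "pred": "male"},
]
acc = disaggregated_accuracy(records)
# acc[("darker", "female")] -> 0.5; acc[("lighter", "male")] -> 1.0
```

Reporting all four subgroup accuracies side by side is what exposes gaps, such as the 32-fold difference in misclassification likelihood between darker females and lighter males, that a single aggregate number hides.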

The thesis concludes with a discussion of the implications of misclassification and the importance of building inclusive training sets and benchmarks.
