Herington2020MeasuringFairness
Herington, Jonathan (2020). Measuring Fairness in an Unfair World
Bibliographic info
Herington, J. (2020, February). Measuring fairness in an unfair world. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 286-292).
Retrieved from: Measuring Fairness in an Unfair World | Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
Commentary
Excerpts & Key Quotes
- Page 287:
Unconditional independence is indeed not always useful for fairness, but whether it is useful should depend on the field in which it is applied
The most obvious of these is that independence is sensitive not only to bias introduced by the algorithm, but also to genuine differences in the distribution of the target property. For example, if the rate of malignancy in skin lesions is higher for women than men, then a perfectly accurate algorithm will classify women as higher risk for malignancy than men. Such an algorithm would fail to satisfy measures of independence, but only because it perfectly reflects the unequal distribution of malignancy in the actual world. This problem has motivated two kinds of conditional measures that seek to control for base rate differences.
Comment:
This excerpt gives a striking example of a case where measures of unconditional independence do not work at all. Unconditional independence measures require that classifications are statistically independent of sensitive attributes. I find this particular case of healthcare and predictive systems very thought-provoking. I agree with the author, but I also disagree: for a measure like statistical parity, as considered here, it matters a great deal in what situation we use the measure. For medical conditions, generalization is always risky, I think. I also believe that sensitive attributes in the medical sphere should actually always be indicators for predictive systems, since they influence the characteristics of the body and thus the characteristics of health. This is in contrast to, for example, the legal sphere, where public values include equal treatment.
On the last page the author mentions the same type of thought as I state above.
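To make the measure concrete for myself, here is a minimal sketch of what an unconditional independence (statistical parity) check could look like in code, using the skin-lesion example from the excerpt. The data, the column names (`sex`, `predicted_high_risk`) and the use of pandas are my own assumptions, not anything taken from the paper.

```python
# Minimal sketch (not from the paper): an unconditional independence
# (statistical parity) check for a hypothetical malignancy classifier.
# Data and column names are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "sex":                 ["F", "F", "F", "F", "M", "M", "M", "M"],
    "predicted_high_risk": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Statistical parity compares P(classified high risk | group) across groups,
# ignoring the true base rate of malignancy in each group.
rates = df.groupby("sex")["predicted_high_risk"].mean()
parity_gap = rates.max() - rates.min()

print(rates)
print(f"Statistical parity gap: {parity_gap:.2f}")

# Even a perfectly accurate classifier would show a non-zero gap here whenever
# the true malignancy rate differs between groups, which is the author's point.
```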
- Page 291:
This view avoids the levelling-down objection, since it allows us to accept inequalities so long as we have done as well as we could for the least well-off. Nonetheless, while this kind of prioritarian principle has a long history and is intuitively attractive [26], it may be inappropriate for some kinds of currencies of justice. Consider that we should demand strict equality for certain kinds of outcomes – e.g. voting access. More work is therefore needed to assess the special cases where strict equality is demanded by the particular currency of justice.
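To see for myself why the prioritarian view escapes the levelling-down objection, here is a small numerical sketch. It is entirely my own illustration, not the paper's: I use a simple maximin rule as a stand-in for the prioritarian principle and compare it with a strict-equality gap on two hypothetical outcome profiles.

```python
# Minimal sketch (my own illustration): the levelling-down objection.
# Outcomes are hypothetical "benefit" levels for two groups.

levelled_down = [5, 5]   # equal, but everyone (including the worst-off) has less
unequal       = [9, 6]   # unequal, but the worst-off group is better off

def equality_gap(outcomes):
    """Strict-equality view: a smaller gap counts as better."""
    return max(outcomes) - min(outcomes)

def prioritarian_score(outcomes):
    """Prioritarian view (simplified to maximin): a higher worst-off outcome counts as better."""
    return min(outcomes)

print("equality gap:      ", equality_gap(levelled_down), "vs", equality_gap(unequal))
print("worst-off outcome: ", prioritarian_score(levelled_down), "vs", prioritarian_score(unequal))

# The equality gap prefers (5, 5); the prioritarian score prefers (9, 6),
# because it never recommends making the worst-off worse just to equalise.
```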
The paradox between rectification of injustice and data protection law
- Page 4:
The possibility that companies may underemphasise the role AI plays in their decision making in order to avoid new regulation.
3.3 Rectification of Injustice
Finally, this confluence of unjust circumstances sometimes makes it legitimate for sensitive attributes to cause classifications. The classic example is affirmative action policies that make hiring or admission decisions explicitly based on race in order to compensate for the historical disenfranchisement and exclusion of blacks from education and professional employment. Rectifying historical injustices, in so far as it involves the redistribution of resources, will necessarily require statistical associations between attributes and classifications. More subtly, in the context of injustice we might think unintended associations between sensitive attributes and classifications are legitimate if they prioritize aid to the most vulnerable. Consider a simple algorithm used to predict an elderly population’s need for health support services. It would be unsurprising to find an association between those scores and race. What matters here is the direction of the association (i.e. is it biased towards whites or blacks), and its effect on all things considered equality between races, genders and other sensitive attributes.
Comment:
I wanted to highlight this part because it is interesting to think about rectification of injustice in light of privacy and the GDPR. Under the GDPR's data-minimisation principle, we should collect as little personal data from users as possible. However, rectification methods for fairness require sensitive attributes in order to investigate injustice in algorithms. This is a difficult paradox. On the one hand, you would want to avoid collecting and using sensitive attributes, because you want to be "fair" and not use them as indicators for your model. On the other hand, since historical bias has influenced the target behaviour, we should actually keep track of the sensitive features and build statistical distributions over them, so that we can study how the bias is incorporated and how we could best balance it out.
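As a concrete illustration of this paradox, here is a minimal sketch of the kind of audit Herington's elderly-care example calls for: checking the direction of the association between need scores and race. The data, column names and pandas usage are hypothetical assumptions of mine; the point is that this check cannot be run at all if the sensitive attribute was never collected.

```python
# Minimal sketch (illustrative, not from the paper): an audit that is only
# possible if the sensitive attribute was recorded. Data and column names
# are hypothetical.
import pandas as pd

scores = pd.DataFrame({
    "race":       ["black", "black", "black", "white", "white", "white"],
    "need_score": [0.82,    0.64,    0.71,    0.55,    0.40,    0.62],  # predicted need for support
})

# Direction and size of the association: which group is prioritised on average?
group_means = scores.groupby("race")["need_score"].mean()
print(group_means)
print("Direction of association:", group_means.idxmax(), "is prioritised on average")

# Without collecting `race`, this check, and hence any rectification, is
# impossible; that is the tension with data-minimisation under the GDPR.
```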
What to make fair in this fairness proposition?
- Page 289:
Thinking about fairness in this way allows us to draw on a rich philosophical literature on the nature of distributive justice. In particular, there is a deep debate over the kinds of benefits and burdens we ought to distribute equally. Some people have the intuition that, all other things being equal, we ought to minimize inequality of income, wealth, resources or wellbeing [30]. Others, skeptical that we can eliminate inequalities in wellbeing, want to eliminate inequalities in opportunities or luck [4,26]. Still others want to eliminate inequalities in our status or power as citizens [3]. This debate over the nature of distributive justice is an underutilized resource in the discussion over fair ML (c.f. [8,16]), and suggests a new family of measures under the broad classification of distributive fairness.
Comment:
I am highlighting this excerpt because I find it an interesting point. Whenever we talk about fairness, we think of examples like the ProPublica recidivism case, where there was discrimination against African-American citizens, or fraud prediction, where marginalised groups are the disadvantaged or worst-off group. However, if we want to use fairness measures, we should first think about the type of injustice we want to eliminate, or the type of justice we want to proactively enable. I had not yet thought about fairness in this way. Further on in this paragraph, the author states that, for fairness, we should study the model both before and after implementation to see its effects on the different sensitive groups and how best to handle the fairness issue.
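To picture what studying the model before and after implementation could look like in practice, here is a minimal sketch of my own, with entirely hypothetical numbers and group labels. It tracks a distributive outcome, the gap in mean benefit between groups, before and after deployment, rather than a classification-rate measure.

```python
# Minimal sketch (my own, hypothetical data): comparing a distributive outcome
# across sensitive groups before and after a model is deployed.
import pandas as pd

before = pd.DataFrame({
    "group":   ["A", "A", "B", "B"],
    "benefit": [10.0, 12.0, 8.0, 7.0],   # e.g. resources or services received
})
after = pd.DataFrame({
    "group":   ["A", "A", "B", "B"],
    "benefit": [11.0, 13.0, 7.5, 6.5],
})

def group_gap(df):
    """Gap in mean benefit between the best-off and worst-off group."""
    means = df.groupby("group")["benefit"].mean()
    return means.max() - means.min()

print("gap before deployment:", group_gap(before))
print("gap after deployment: ", group_gap(after))

# If the gap grows after deployment, the system worsens all-things-considered
# inequality even if its classifications look unbiased in isolation.
```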