FazelpourDanks2021AlgorithmicBias
S. Fazelpour and D. Danks
Algorithmic bias: senses, sources, solutions
Bibliographic information
Fazelpour, Sina & Danks, David. (2021). Algorithmic bias: Senses, sources, solutions.
Philosophy Compass. 16. 10.1111/phc3.12760.
Commentary
The first thing that stood out to me is how well the article is organized. It opens with a
disclaimer that related topics such as trust and transparency are set aside so that
algorithmic bias itself can be discussed in adequate depth. I also liked that the authors
frequently return to the student success example, which makes the article more coherent.
As a last note on structure, the sources of bias are described in the order in which they
can arise in the process of building and using an algorithm.
In terms of content, the authors are thorough in describing the various biases and, in my
opinion, put them in a nuanced light. A weakness, however, is that the paper presents
several popular debiasing strategies but then argues that they are difficult to use and
may shift the bias rather than eliminate it. The authors suggest a way forward by focusing
on social epistemology and broadening their scope, but in my opinion these suggestions
are too vague and abstract to function as an answer. The future directions they describe
could use more elaboration.
“Not all statistically (or legally) biased behaviors are ethically or morally problematic, while not all statistically fair or unbiased predictions are ethically or morally acceptable.”
This quote goes to show how contextual bias is. Bias is not inherently harmful; however,
the consequences of using an algorithm that is biased in a certain way can be.
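To make the point concrete, here is a minimal sketch (the groups, decisions, and choice of metric below are my own hypothetical illustration, not the authors' example): a demographic-parity gap quantifies a statistical disparity, but the number by itself cannot tell us whether that disparity is ethically problematic in context.

```python
# Minimal sketch: a statistical bias measure says nothing, by itself,
# about whether the disparity is ethically problematic (hypothetical data).
from typing import Dict, List

def selection_rate(decisions: List[int]) -> float:
    """Fraction of positive decisions (1 = selected/admitted)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group: Dict[str, List[int]]) -> float:
    """Largest difference in selection rates across groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical admission decisions for two groups.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1],  # 4/6 selected
    "group_b": [1, 0, 0, 1, 0, 0],  # 2/6 selected
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.2f}")
# The gap quantifies a statistical disparity; whether it is morally
# objectionable depends on the context in which the algorithm is used,
# which is exactly the paper's point.
```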
“Different values in political settings sometimes imply the same policy. In contrast, different values almost always imply different objective functions for learning a model and so almost never result in the same algorithm.”
This raises the question of whether algorithms should be used for policymaking at all.
The text states that 'even small differences in values lead to underdetermination of
algorithms (and possible biases)'. This is a problem, since we live in a value-pluralistic
society. Is the efficiency gained by using algorithms worth the cost to fair treatment?
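A minimal sketch of the quote's point, assuming a hypothetical dataset and a hypothetical parity penalty of my own (not the authors' formalism): two decision-makers who weigh predictive accuracy against group parity differently are in effect optimizing different objective functions, and training then returns different models.

```python
# Minimal sketch (hypothetical data and weighting): stakeholders who value
# accuracy and group parity differently end up with different objective
# functions, and hence with different trained models.
import numpy as np

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, size=n)            # 0/1 group membership
x = rng.normal(loc=group * 0.8, scale=1.0)    # feature correlated with group
y = (x + rng.normal(scale=0.5, size=n) > 0.5).astype(float)  # outcome

def objective(w, lam):
    """Squared-error loss plus lam * squared gap in mean predicted score."""
    score = w * x
    loss = np.mean((score - y) ** 2)
    gap = score[group == 0].mean() - score[group == 1].mean()
    return loss + lam * gap ** 2

def fit(lam, lr=0.05, steps=500):
    """Fit the single weight w by gradient descent (finite-difference gradient)."""
    w, eps = 0.0, 1e-5
    for _ in range(steps):
        grad = (objective(w + eps, lam) - objective(w - eps, lam)) / (2 * eps)
        w -= lr * grad
    return w

# Different value weightings -> different objectives -> different models.
for lam in (0.0, 1.0, 10.0):
    print(f"lambda={lam:>4}: learned weight w = {fit(lam):.3f}")
```

Even in this toy setting, changing how much weight the parity term gets changes the learned model, which is the sense in which different values 'almost never result in the same algorithm'.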
“These trade-offs are all heightened when algorithms are used repeatedly for multiple decisions, and so we may have to decide whether to allow some short-run ethical harms in order to gain additional knowledge that can enable long-term reduction in ethical harms.”
They make an interesting point here. Because current research focuses on minimizing
immediate errors and biases, a solution that would prove better in the long run might be
missed. I had not considered this argument myself.
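One way to picture this trade-off (my own analogy, not the authors' proposal) is the exploration-exploitation trade-off in sequential decision-making: accepting some short-run suboptimal choices can yield the information needed to reduce errors over many decisions, as in this sketch with hypothetical success rates.

```python
# Analogy sketch (my own illustration, not the authors' proposal):
# an epsilon-greedy policy accepts some short-run suboptimal choices in
# order to gather information that reduces errors over many decisions.
import numpy as np

rng = np.random.default_rng(1)
true_success = np.array([0.4, 0.6])   # hypothetical success rates of two policies

def run(epsilon: float, rounds: int = 5000) -> float:
    """Return the average success rate over repeated decisions."""
    counts = np.ones(2)               # decisions taken per policy (start at 1)
    successes = np.zeros(2)
    total = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon:    # explore: try a policy at random
            arm = rng.integers(0, 2)
        else:                         # exploit: use the current best estimate
            arm = int(np.argmax(successes / counts))
        reward = float(rng.random() < true_success[arm])
        counts[arm] += 1
        successes[arm] += reward
        total += reward
    return total / rounds

for eps in (0.0, 0.1):
    print(f"epsilon={eps}: long-run success rate = {run(eps):.3f}")
```

The purely greedy policy (epsilon = 0) minimizes immediate risk but can lock onto the worse option, while the exploring policy tolerates some short-run errors and ends up with a lower error rate over the whole sequence of decisions.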