Birhane2021AlgorithmicInjustice

Literature note on Abeba Birhane, “Algorithmic injustice: a relational ethics approach.”

Bibliographic info

Abeba Birhane. “Algorithmic injustice: a relational ethics approach.” Patterns 2 (February 2021): 1–9.

Commentary

The overall strength of Birhane’s text lies in its identification of the overarching framework shaping Western thought, and hence AI development in general: ‘rationality.’ Rationality, for Birhane, lies at the root both of algorithmic injustice and of the shortcomings of society’s current means of combating it, i.e. debiasing datasets, mathematizing social problems, and focusing on seemingly apolitical concepts such as ‘fair’ and ‘good.’ For a nine-page paper, Birhane does a great job of proposing an approach with which to substitute, or at least supplement, the framework of rationality: the approach of ‘relationality.’

Before turning to the merits of relationality over rationality, it is fitting to discuss the flaws of rationality that motivate Birhane to write against it. What Birhane calls rationality ultimately goes back to Plato, but is mainly characterized by a trio: Cartesianism and Newtonianism (framing Western thought in general) and Bayesianism (characteristic of data science in particular). The Cartesian-Newtonian worldview reduces the complex world to ‘objective’ descriptions, while at the same time framing knowing as an abstract, rational, and contemplative affair. Birhane calls this framework ‘reductionist’ and claims it foregoes the messiness of social reality, which is irreducible either to billiard-ball models of nature or to metaphysically totalitarian images of what a subject in essence is. The consequence Birhane draws from this is that “The reality of the Western straight white male then masquerades as the invisible background that is taken as the ‘normal,’ ‘standard,’ or ‘universal’ position. Anything outside of it is often cast as ‘dubious’ or an ‘outlier.’” (p. 3) This is a clear problem for data science and AI, because on this analysis data science falsely aspires to work from a morally neutral perspective while actually taking the perspective of cisgender white males. The second problem is related to Bayesianism: Bayes’ theorem has led, according to Birhane, to the belief that accurate inferences about future events can be made by drawing statistical inferences from past data. But this naively reproduces the flaws of the data used: “If you aren’t scrupulous in seeking alternative explanations for your evidence, the evidence will just confirm what you already believe,” Birhane quotes Horgan.
Hence data science and AI prima facie lay claim to a ‘view from nowhere,’ a God’s-eye perspective, while, on closer inspection, it becomes clear that rationality reproduces harmful and discriminatory outcomes and is guilty of algorithmic injustice.

Inspired by Black feminist thought, enactive cognitive science, sub-Saharan philosophy, and Bakhtinian dialogism, Birhane sees the solution to these problems in the adoption, within digital ethics, of relationality, which views existence as an ever-changing web of relations of practical knowledge and action. Recognizing the singularity of individual experience and wisdom (Afro-feminism), the dynamic nature of human behaviour (enactive cognitive science), and the individual’s inheritance of moral value from her community (ubuntu), Birhane arrives at several pillars for a relational digital ethics: (i) within data ethics, the construction of knowledge should center on human relationships; (ii) those most disproportionately impacted by data science and AI should be centered in an endeavor to do them justice; (iii) data-scientific outcomes should be interrogated in light of the origin of their data, rather than treating debiasing as an approximation of a truth inherent in the data; (iv) understanding the world implicitly present in, or malformed by, the data used should be prioritized over predicting future instances; and, finally, (v) data science should be perceived as something that heavily influences and alters society rather than as something decoupled from it.

How does Birhane see relational ethics as the core of points (i) to (v)? Giving primacy to relational existence over individual subjectivity leads to the insight that no human endeavor (data science included) escapes its embeddedness in relationships. As such, claims to truth (points (i) and (iv)), objectivity (point (iii)), and political and moral neutrality (points (ii) and (v)), while plausible under the guise of rationality, begin to lose their charm and merit. The strength of this paper lies in the power of the message of relational ethics and in the concreteness and necessity of the measures proposed to change the way data science is practiced and thought of in society.

One could object, however, that Birhane exploits the concept of rationality, posing it as a straw man to make her view of relationality look better. In the global history of ideas, at least in the West, rationality is not simply a bad phenomenon, and a critical reader will see a fertile future for relationality only alongside rationality, rather than without it. The rationality Birhane objects to can better be framed as the totalitarian enforcement of one product of reason. The totalitarian aspect we can replace with relationality, but the human faculty of reason we cannot do without, and neither is that faculty fundamentally tied to ‘one’ truth, form of experience, or perspective.

Excerpts & Key Quotes

“The growing body of work exposing algorithmic injustice has indeed brought forth increased awareness of these problems, subsequently spurring the development of various techniques and tactics to mitigate bias, discrimination, and harms. However, many of the ‘solutions’ put forward (1) revolve around technical fixes and (2) do not center individuals and communities that are disproportionally impacted. Relational ethics, at its core, is an attempt to unravel our assumptions and presuppositions and to rethink ethics in a broader manner via engaged epistemology in a way that puts the needs and welfare of the most impacted and marginalized at the center.”

Comment:

Where Birhane sees point (2) as unaddressed in society, I think she is wrong. Currently, in European legislation such as the GDPR, and in companies’ and institutions’ principles for fair data science and AI, increasing attention is being given to individuals and communities that are disproportionately impacted by certain technologies. But while she sometimes exaggerates the point, she is not totally wrong: these laws, directives, and principles fail to be fully adequate because they approach citizens in their individuality rather than in their relationality.

“Adopting relational ethics means that we view our understandings, proposed solutions, and definitions of bias, fairness, and ethics as partially open. This partial openness allows for revision and reiteration in accordance with the dynamic development of such challenges. This also means that this work is never done.”

Comment:

What is essential in this passage is the last sentence: the realization that ethics is never finished, that the moral good in decision making is never grasped definitively. This sets relational ethics apart from utilitarianism, [consequentialism] and virtue ethics: there is no set of moral standards I can appeal to again and again to reach a good decision (whether or not we can distinguish these theories). The human condition varies with the human considered, and dynamically through time. This means relational ethics varies alongside it, while attempting to forestall a fall into relativism, a risk which, in my opinion, is more present than Birhane admits.

“A data practice that prioritizes understanding over prediction is one that interrogates prior beliefs instead of using the evidence to confirm such belief and one that seeks alternative explanations by placing the evidence in a social, historical, and cultural context. In doing so, we ask challenging but important questions such as ‘‘to what extent do our initial beliefs originate in stereotypically held intuitions about groups or cultures?’’, ‘‘why are we finding the ‘evidence’ (patterns) that we are finding?’’, and ‘‘how can we leverage data practices in order to gain an in-depth understanding of certain problems as situated in structural inequalities and oppression?’’”

Comment:

While I fully agree with the impetus of this passage, the hypothesized implementation of this change of mindset regarding how to use data science and AI is far-fetched. The current conception of the societal merits of data science and AI consists in prediction, classification, and autonomous decision making, ergo: in prediction rather than understanding. In this lightning-fast society, there is little (corporate) power to be gained in the (scientific) understanding of the past present in data, compared to the glimpses of the future, mined by AI, that feed the digital economic markets. How can this wicked society redirect data practices if they cannot be capitalized on?

“Ethical practice, especially with regard to algorithmic predictions of social outcomes, requires a fundamental rethinking of justice, fairness, and ethics above and beyond technical solutions. Ethics in this regard is not merely a methodology, a tool, or simply a matter of constructing a philosophically coherent theory but a down-to-earth practice that is best viewed as a habit— a practice that alters the way we do data science. Relational ethics is a process that emerges through the re-examination of the nature of existence, knowledge, oppression, and injustice. Algorithmic systems never emerge in a social, historical, and political vacuum, and to divorce them from the contingent background in which they are embedded is erroneous. Relational ethics provides the framework to rethink the nature of data science through a relational understanding of being and knowing.”

Comment:

With this powerful paragraph, Birhane closes the paper. What I find important in this passage is the proposed fusion of ethics and data science into a habitual practice, and the continual realization of the embeddedness of algorithms and their effects in society. Ethics is too often misused (cf. Floridi’s five problematic uses of digital ethics), consigned to abstraction, or regarded as marginal in comparison to legal demands.