Metz2021AfricanReasons
Bibliographic info
Thaddeus Metz. “African Reasons Why AI Should Not Maximize Utility.” In African Values, Ethics, and Technology, edited by B. D. Okyere-Manu, 55-72. Cham: Palgrave Macmillan, 2021.
url: https://doi.org/10.1007/978-3-030-70550-3_4
title: "African Reasons Why AI Should Not Maximize Utility"
description: "Insofar as artificial intelligence is to be used to guide automated systems in their interactions with humans, the dominant view is probably that it would be appropriate to programme them to maximize (expected) utility. According to utilitarianism, which is a..."
Commentary
Metz’s objective in this paper is to point out the shortcomings of utilitarianism as a tool for programming ethical behaviour into AI technology. With these shortcomings laid out, Metz turns to African moral theory, specifically insights from sub-Saharan ethics, and argues that instead of maximizing utility, AI technologies should have regard for human dignity. Dignity, and other distinctly African values, are a blind spot of the Western-value-focused tech industry. This goes beyond the abstract egalitarianism of utilitarianism, for which all human beings have the same value. To begin, he describes what utilitarianism is and how it shapes current societies and the development of AI technology. Utilitarianism is the ethical doctrine that “…for any action to be rational and moral, it must be expected to maximize what is good for human beings (and perhaps animals) and to minimize what is bad for them in the long run” (p. 58). So in the context of AI, such as autonomous weapons or self-driving cars, utilitarianism boils down to a cost-benefit analysis of the options the AI has at hand regarding maximizing human well-being. Each human being is of the same value in this framework for rational choice. How the AI constructs such an analysis is of course pre-programmed or parameterized in a way that will maximize human value (i.e. fewest lives lost, highest potential of positive future value).
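The cost-benefit core described above can be made concrete with a minimal sketch. All names and numbers here are my own illustrative assumptions, not anything from Metz's text: every affected person counts equally as one unit of value, and the machine picks whichever option is expected to preserve the most of it.

```python
# Hypothetical sketch of a utilitarian decision core: each person
# counts as one unit of value, and the option with the highest
# expected number of survivors wins. Toy numbers, not a real system.

def expected_utility(option):
    """Sum of survival probabilities (each life weighted equally as 1)."""
    return sum(option["survival_probs"])

def choose(options):
    """Select the option that maximizes expected utility."""
    return max(options, key=expected_utility)

# A self-driving car choosing between two manoeuvres:
swerve = {"name": "swerve", "survival_probs": [0.9, 0.9, 0.2]}
brake  = {"name": "brake",  "survival_probs": [0.6, 0.6, 0.6]}

best = choose([swerve, brake])
print(best["name"])  # "swerve": expected 2.0 survivors vs 1.8
```

The point of the sketch is how little the framework asks of the programmer: only a single scalar per option, with no room for dignity, group membership, or relational ties.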
This looks egalitarian and fair, and seems to fit programming like a glove, but according to Metz there is a catch: utilitarian AI fails in the light of convincing counterarguments formed by African values. Utilitarianism uses human lives as means to obtain the maximum of value, and here it forgoes the African value of human dignity: human lives, the fact of life itself, may never be degraded to means, and have value regardless of their contribution to the ‘happiness calculus.’ Metz admits that it is cross-culturally controversial how this dignity is grounded, but what is clear, and convincingly so, is that dignity bestows human rights on the individual, which turn into duties for other individuals towards that individual. Utilitarian AI, on the other hand, is restricted to using the fact of life as a means to maximize utility.
The other problems Metz brings up are those of (i) group rights, (ii) family-first problems, and (iii) self-sacrifice. While the shortcoming discussed in the previous paragraph was clearly convincing, I will discuss whether these three problems are as well.
1. The idea that groups have rights is an African value, though controversial in other parts of the world. Utilitarian AI, however, cannot adequately deal with group rights, Metz contends, because it does not distinguish between individuals’ particular values, which are all equal. I think this is a convincing shortcoming, as Metz shows that the utilitarian indifference to groups leaves room for oppression and imperialism by means of AI if the majority of humans benefits from it.
2. Metz holds that on top of the basic value of dignity, other relations between individuals shape their moral value in an ethical deliberation. Take, for example, saving your mother versus a stranger, or saving an elderly person (say, your grandmother) versus a random child. Utilitarianism would see no distinction in the first case, while choosing the child over the grandmother in the second, because the probability of a longer life means a higher utility for human well-being. So here the egalitarianism obstructs the partial duties one has to one’s family. I think this is a convincing ethical problem in itself, but not necessarily for a utilitarian AI, for the only thing that would have to change in its programming is to bestow a higher utility value on the owner’s family members than on strangers. This would not be strictly utilitarian, but, according to some, bestowing a higher value on humans than on animals is controversial as well. So we can concede that what the perfect parameters are for utilitarian AI is also an open question.
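The programming patch suggested above, weighting the owner's family above strangers, can be sketched in a few lines. The weights and field names are purely illustrative assumptions of mine, not a proposal from Metz:

```python
# Hypothetical patch to a utilitarian core: a partiality weight that
# values the owner's relatives above strangers. Weight values are
# arbitrary illustrations, not a worked-out moral position.

FAMILY_WEIGHT = 2.0    # assumed multiplier for the owner's relatives
STRANGER_WEIGHT = 1.0

def weighted_utility(option):
    """Expected survivors, with family members counted double."""
    return sum(
        (FAMILY_WEIGHT if person["is_family"] else STRANGER_WEIGHT)
        * person["p_survival"]
        for person in option["people"]
    )

save_mom = {"people": [{"is_family": True, "p_survival": 0.8}]}
save_stranger = {"people": [{"is_family": False, "p_survival": 0.9}]}

best = max([save_mom, save_stranger], key=weighted_utility)
# Weighted: 1.6 (mom) vs 0.9 (stranger), so the partial AI saves mom,
# whereas a strictly impartial utilitarian would have saved the stranger.
```

This shows why the family-first objection bites less against the programming than against the theory: partiality is a one-parameter change to the calculus, but choosing that parameter is exactly the open ethical question.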
3. Metz argues that utilitarian AI would not allow for self-sacrifice, whereas sometimes it is one’s duty to another person to sacrifice oneself, as substantially helping others is an African value. The argument that it should be an admissible action to sacrifice oneself for the sake of others, even if it means contradicting utilitarianism, is convincing.
I fully agree with Metz’s thesis that maximization of utility leaves a lot to be desired regarding certain moral choices, which would be made devoid of any relational context. I do think, however, that Metz overestimates the extent to which utilitarianism’s quantification of morality can be divorced from AI programming. How would an AI be capable of calculating the right decision if the decisions have (sometimes culturally relative) unquantifiable parameters? For example, in utilitarian AI each human can be quantified as the value one: the fewer lives an option loses, the more it maximizes utility and the more preferable it is. But, taking the perspective of a dignity-based approach, if one times dignity is not less than two times dignity, and mom times dignity is more than stranger times dignity, what kind of assessments or calculations would an AI need to perform to come to a good decision? I may be caricaturing the state of affairs a little, but what I mean to say, and to close with, is that Metz is advocating a new paradigm for moral AI programming without engaging enough with the technical reality of development (apart from the self-sacrifice problem, which could be implemented by creating reachable finite states in which the owner would be sacrificed). Metz does ask the ‘how-questions’ in the first paragraph, but only answers them at a high level of abstraction, which, while valuable, clearly leaves room for further development of the intersection between programming and dignity-based ethics.
Excerpts & Key Quotes
- Page 57:
“Here I aim to make some headway when it comes to heeding these calls [claims that culturally inspired moral theory collapses into relativism]. I do not do so for reasons of relativism. It is not my view that the values that should govern technology in a certain society are necessarily those held by most in that society. I believe that majorities can be mistaken about right and wrong action, as nineteenth-century Americans were in respect of slavery. Instead, I draw on under-considered African ethical perspectives in the thought that any long-standing philosophical tradition probably has some insight into the human condition and has something to teach those outside it. Many of the values I identify as ‘African’ will, upon construing them abstractly, be taken seriously by many moral philosophers, professional ethicists, and the like around the world, especially, but not solely, outside the West.”
Comment:
In this passage, Metz supplies his reliance on African values with some much-needed context. The danger of using culture as a source of ethics is cultural relativism: the view that universal moral values do not exist and that morality boils down to the majority’s traditional opinions about ethics in a certain region of the world. Metz, however, rejects the rule of the majority with regard to ethics. Instead, he emphasizes that the values discussed, while originating from Africa, will, viewed sub specie aeternitatis, be of recognizable moral value for Western thought as well (for example, the parallel Metz draws between the African concept of dignity and Kantian ethics). This means values can be of different (explicit) importance in different cultures, yet not be culturally relative, because of the universal cross-cultural import of the value. Note that this leaves the door open for the construction of an objectivist ethics that trumps all actual cultural ethics.
- Page 62:
“So long as individual persons have a dignity that merits respectful treatment, regardless of what confers that dignity on us, a moral agent will be forbidden from treating persons merely as a means to the greater good. That is, a typical dignity-based ethic will accord human rights to each person, where to have a human right is for others to have a duty not to subordinate or harm a person that should be upheld even if not doing so would promote a marginally greater amount of value in the world.”
Comment:
This passage is very important for understanding how Metz sees the tension between utilitarian and dignity-based approaches to AI. Under the dignity-based approach, human beings have duties towards one another, the central duty being not to treat the lives of others as means. But exactly what that duty forbids is the essence of utilitarianism: reducing the fact of life to a means of maximizing human well-being.
- Page 68:
“Consider how the permissibility of making sacrifices for others, despite a net loss of subjective well-being in the world, could influence the way to programme smart machines. Basically, any time one is in charge of how such a machine is deployed, one should be able to opt to direct it to save others before oneself.”
Comment:
This citation shows the way Metz talks about the relation between ethics and programming. The example of self-sacrifice works because it could be implemented as an if-clause checking whether self-sacrifice is turned on, determined by a Boolean. What I think is problematic about the way Metz talks about the relation between ethics and programming is that he does not dive into how the dignity-based approach could be technically realized, i.e. quantified and algorithmized. My suggestion above would, in effect, be a dignity-based supplement to the utilitarian AI. The problematic question then becomes: is the desired realization a utilitarian main class with components regulating the AI’s behaviour with regard to human dignity, or is the utilitarian core itself problematic? The question becomes more complex once one starts to think about implementing strains of different ethical theories within the same complex AI system.
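The if-clause reading of Metz's self-sacrifice passage can be sketched as follows. The function and field names are my own hypothetical illustration of the idea, not code from the chapter:

```python
# Hypothetical sketch of the self-sacrifice toggle: a single
# owner-set Boolean that makes the machine rank its owner last
# when deciding whom to save first.

def rank_rescue_order(people, owner_id, self_sacrifice=False):
    """Return people in rescue order; with the flag on, the owner goes last."""
    if self_sacrifice:
        # sorted() is stable, so non-owners keep their relative order
        # and the owner (key True > False) sinks to the end.
        return sorted(people, key=lambda p: p == owner_id)
    return list(people)

print(rank_rescue_order(["owner", "a", "b"], "owner", self_sacrifice=True))
# ['a', 'b', 'owner']
```

The triviality of this sketch is exactly the contrast I mean to draw: self-sacrifice reduces to one Boolean, whereas a full dignity-based constraint has no comparably obvious encoding.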