Sparrow2021MoralMachines
Robert Sparrow, "Why machines cannot be moral"
Bibliographic info
Sparrow, R. (2021). Why machines cannot be moral. AI & SOCIETY, 36(3), 685-693.
Commentary
Sparrow introduces the idea that the aim to "build ethics" into machines presupposes a flawed understanding of the nature of ethics. He claims that ethical dilemmas are deeply personal and therefore cannot be approached in an objective manner, which makes ethics unsuitable for being "programmed" into a machine. Overall, I find this a strong and interesting argument with much to be said for it. However, I think Sparrow never reaches a final conclusion or deepens the argument. He offers a few examples in defence of his position, but he does not formulate a more general foundation for his claims.
Excerpts & Key Quotes
Difference between theoretical and ethical dilemmas
- Page 688
When it comes to performing a mathematical calculation or analysing a mechanism someone else could make “my” decision because any consideration for them is also a consideration for me and vice versa. By contrast, ethical dilemmas attach to agents in such a way that they are essentially dilemmas for particular people.
Comment:
This illustrates one of the leading arguments Sparrow proposes. He claims that the main difference between theoretical/mathematical and ethical dilemmas is that in a theoretical dilemma the relevant considerations are the same for every person, whereas in an ethical dilemma they differ from person to person. He goes on to describe how a person's life history, e.g. religion and past experiences, strongly influences how they think about certain ethical dilemmas. I agree with this argument and believe that two people can make very different choices when faced with the same ethical dilemma, without one decision being better than the other. I do, however, believe that this holds only for a fraction of ethical dilemmas. There is a certain baseline that everyone would agree on, and that is something Sparrow does not address here.
Moral Authority
- Page 691
Machines do not have sufficient moral personality to possess moral authority. They cannot stand behind their words in the way people do, because they lack lives of the sort that might demonstrate their understanding of issues at stake (Gaita 2004, pp 267, 279) and they lack bodies and faces with the expressive capacities required to sustain the distinctions that are essential to our judgements of the worth of the advice of others.
Comment:
Sparrow continues by introducing the notions of moral authority and moral personality. He says that moral claims have a different impact depending on who makes them, and that, in his opinion, machines do not have enough moral personality to possess moral authority. In addition, he argues that to make a powerful moral statement or decision an entity must be able to feel some kind of remorse; since machines cannot, they should not be allowed to make these kinds of decisions. Personally, I do not agree with his line of reasoning here. I believe that how moral authority is perceived is highly context- and especially time-dependent. The fact that machines are not viewed as possessing moral personality now does not mean that they will not be in the future. Furthermore, the ability to make moral decisions might itself contribute to their moral personality. Restricting machines from making these kinds of decisions could create a vicious circle, since they would never be allowed to enlarge their moral personality.
(Final quote)
- Page 692
Before we try to build ethics into machines, we should ensure that we understand ethics.
Comment:
This is a statement I can fully agree with. In my opinion, we should first try to fully understand what ethics is before we try to build it into machines. More specifically, we should decide what the ethically correct action is in very concrete situations so that machines can learn it correctly. If we do not define these situations and the desired behaviour precisely, we risk training machines on incomplete or incorrect data, which might have devastating consequences.