Lundgren2021MachineDecisions-A

There are multiple notes about this text. See also: Lundgren2021MachineDecisions-B

Björn Lundgren, "Ethical machine decisions and the input-selection problem"

Bibliographic info

⇒ Lundgren, B. Ethical machine decisions and the input-selection problem. Synthese 199, 11423–11443 (2021). https://doi.org/10.1007/s11229-021-03296-0

Commentary

⇒ What is interesting is Lundgren's criticism of the use of the 'standard approach' for normative evaluation. The 'standard approach' is widely used in moral evaluations, for example in the Moral Machine Experiment, where participants normatively evaluate the decisions of an autonomous vehicle. Lundgren argues that normative evaluations based on the standard approach lack external validity because they rest on idealized cases: people decide on the basis of outcomes that are certain, so without probabilities, and without taking into account the amount of input a machine learning algorithm needs to make a decision. Whilst Lundgren makes good points, a major issue is the extent to which we can apply these ideas in practice. Take, for instance, the point about the amount of input needed: Lundgren argues that this amount should be taken into account in the normative evaluation; however, the amount of input needed and, more importantly, what can be done with that input is often hidden inside black-box and complex machine learning models. We will therefore often not know what the machine could do with this input. In addition, the fast development of new technologies makes it hard to define a static amount of input needed, as development might reduce the amount of input required for certain decisions. Because of this fast-changing nature, applying such an amount-of-input constraint in static form in the normative evaluation will not accurately reflect the current state of the technology that the evaluation depends on.

Excerpts & Key Quotes

The Uncertainty Approach

"..what makes a machine decision right or wrong arguably depends on what factual uncertainty is normatively acceptable, which is a normative question."

Comment:

Lundgren makes a good point here about how a normative evaluation may change once uncertainty is introduced into a decision. For example, if I know with 100% certainty that sacrificing myself by crashing the car means everyone on the street will live, I would decide to sacrifice myself. However, if there is still a chance that the people on the street will be killed anyway, I might choose differently, as I would be less eager to sacrifice myself in that situation. The standard approach fails to take such probabilities into account, and thereby fails to capture how behavior changes once probabilities are introduced. As machines (with machine learning algorithms) almost exclusively work with probabilities, uncertainty should be present in normative evaluations concerning machine ethics.
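To make this concrete, here is a minimal sketch with hypothetical numbers of my own (the calculation is not from Lundgren's paper) of how introducing probabilities can flip the ranking of the very same options:

```python
# Hypothetical illustration: a "certain" self-sacrifice dilemma versus its
# probabilistic version. All probabilities and counts are made up.

def expected_deaths(p_pedestrians_die: float, n_pedestrians: int,
                    driver_dies: bool) -> float:
    """Expected number of deaths for one option."""
    return p_pedestrians_die * n_pedestrians + (1.0 if driver_dies else 0.0)

N = 3  # pedestrians on the street

# Standard approach: outcomes are treated as certain.
swerve = expected_deaths(0.0, N, driver_dies=True)   # 1.0: only the driver dies
stay = expected_deaths(1.0, N, driver_dies=False)    # 3.0: all pedestrians die
print("certain case   -> swerve:", swerve, "stay:", stay)  # self-sacrifice wins

# Uncertainty approach: swerving may still kill the pedestrians.
swerve = expected_deaths(0.6, N, driver_dies=True)   # 2.8 expected deaths
stay = expected_deaths(0.8, N, driver_dies=False)    # 2.4 expected deaths
print("uncertain case -> swerve:", swerve, "stay:", stay)  # ranking flips
```

With these made-up probabilities, the option that was clearly right under certainty no longer minimizes expected deaths, which is exactly the kind of behavioral shift the standard approach cannot capture.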

The Grandma Problem

"The problem is that it is difficult to determine whether someone has the property of being a grandmother. A simple model could predict that 'x is a grandmother' by first determining that 'x is a human' ... that 'x is a woman' and that 'x is old'. ... The predictor is neither necessary nor sufficient (i.e., there are young grandmothers and there are old women who are not grandmothers)."

Comment:

This grandmother example illustrates the necessity of additional input. If the normative evaluation concludes that it is ethically right to save grandmothers in the event of an accident, a machine learning algorithm needs additional input to determine whether someone is a grandmother. However, the standard approach neglects both the kind and the amount of input a machine needs for such decisions. It would be important to include this in the normative evaluation, as people might change their decisions based on the input required.
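As a concrete rendering of the quoted example, here is a small sketch of the proxy predictor (my own code; the attributes and the age threshold are hypothetical assumptions, not taken from the paper), showing why the proxy is neither necessary nor sufficient:

```python
# Proxy model from the quote: predict "x is a grandmother" via the
# proxies "x is a human", "x is a woman", "x is old".

from dataclasses import dataclass

@dataclass
class Person:
    is_human: bool
    is_woman: bool
    age: int
    has_grandchildren: bool  # ground truth, not observable to the model

def predicted_grandmother(p: Person, old_threshold: int = 65) -> bool:
    """Proxy prediction: human AND woman AND old."""
    return p.is_human and p.is_woman and p.age >= old_threshold

young_grandmother = Person(True, True, 40, has_grandchildren=True)
old_non_grandmother = Person(True, True, 80, has_grandchildren=False)

# Not necessary: the young grandmother is missed (false negative)...
assert predicted_grandmother(young_grandmother) is False
# ...and not sufficient: the old woman without grandchildren is flagged
# (false positive).
assert predicted_grandmother(old_non_grandmother) is True
```

Closing the gap between the proxy and the actual property would require ever more input about each person, which is precisely the input-selection problem that the next excerpt turns into a surveillance worry.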

The Risk of Mass Surveillance

"One idea is to equip the autonomous vehicle with facial recognition capability and access to an appropriate database. ... it should be obvious that equipping autonomous vehicles with such technologies would be highly detrimental. It would not only be a privacy invasion for the individual, we would also enable an extreme mass surveillance system..."

Comment:

This quote illustrates why the amount of input matters: in some situations people might alter their moral decisions based on the risks that the required input brings with it. Once we know that, with current technical capabilities, distinguishing grandmothers from 'regular' old people creates the potential for mass surveillance, we might change our normative evaluations. For example, in the case of autonomous vehicles, we might no longer want the vehicle to specifically spare grandmothers over regular old people.