Nyholm2022ControlProblem-B
Sven Nyholm, "A new control problem? Humanoid robots, artificial intelligence, and the value of control"
Bibliographic info
⇒ Nyholm, S. (2022). A new control problem? Humanoid robots, artificial intelligence, and the value of control. AI and Ethics, 1-11. https://doi.org/10.1007/s43681-022-00231-y
Commentary
Nyholm argues that certain aspects of the value of control are often disregarded when considering the problem of controlling AI. He describes how self-control is often seen as good, so if controlling AI can be understood as a form of self-control, it could be valuable beyond its instrumental benefits. However, if AI systems are seen as persons, controlling them could be ethically problematic, just as it is problematic to control other persons. He therefore suggests that we should build AI systems that do not have a humanoid form.
While the argument is very interesting and raises concerns about how AI systems are often developed to resemble humans, I find it difficult to imagine what a system that is intelligent yet not similar to humans would look like. Although non-human animals possess intelligence as well, it is more feasible to create intelligence that mimics humans than to invent new forms of intelligence, especially since these AI systems are developed with the aim of performing tasks typically executed by humans. It would therefore have been a valuable addition to the paper to expand on the argument that AI systems should differ from humans and to spell out what exactly that would entail.
Excerpts & Key Quotes
The control dilemma
- Page 7:
In relation to any AI agents which might be regarded as being or representing some form of persons, we could say that this does not only create a new control problem, but also a control dilemma. Losing control over these AI agents that appear to be some form of persons might be problematic or bad because it might be unsafe, on the one hand. Having control over these AI agents might be morally problematic because it would be, or represent, control over another person, on the other hand.
Comment:
I find this an interesting dilemma, as it weighs the instrumental problem of not being in control of AI against the moral problem of being in control of an AI agent regarded as some form of person. It is difficult to decide which of the two problems is more important to solve, especially because the consequences of losing control over AI systems are uncertain. While not having control over AI systems appears unsafe, it is unclear whether and to what extent it can lead to harm. Not knowing for certain how much harm follows from losing control of AI systems makes it difficult to weigh that risk against the moral harm of controlling an AI agent that is seen as a person.
The ethical problem with AI control
- Page 8:
If we perform acts of violence against robots made to look and behave like human beings, for example, this can be viewed as ethically problematic because it glorifies or glamourizes violence against human beings.
Comment:
This quote illustrates Nyholm's argument for why it might be bad to have control over AI systems. I think it is an interesting point, especially since the anthropomorphizing of AI systems already seems to be happening today. Even ChatGPT, which does not physically resemble humans at all, behaves like a human in some respects, as its use of natural language closely resembles human conversation. Accordingly, some people treat it as a person, for example by referring to it with gendered pronouns or by being friendly and polite when chatting with it. Nyholm's argument would encourage this behavior: if ChatGPT is seen as a person, treating it badly might be problematic because it could promote the same treatment of other humans. However, how far this argument should be extended is unclear, as one could also argue that playing violent computer games does not lead to an increase in violent behavior.
Avoiding the creation of humanoid robots
- Page 10:
And again, it might be best to avoid creating humanoid robots to begin with, since we can then avoid these kinds of worries about whether there is something symbolically or otherwise ethically problematic about wanting to be in complete control over these AI agents [cf. 3].
Comment:
This quote is taken from the article's conclusion and captures one of the key points of the argument. While I agree that creating humanoid robots can be problematic if we want to be in control of them, I find it difficult to imagine a robot that does not resemble a human in some way. Even if there were a way to make robots look completely different from humans physically, chances are that they would still be similar to some non-human animals, which might also raise ethical problems regarding control. Moreover, if the robot acts intelligently, it is likely to still be viewed as similar to humans. This might be even more difficult to avoid, since an AI system with an intelligence different from ours might be inefficient at carrying out the tasks it was developed for, as defining those tasks and judging whether the robot can solve them is done by humans and involves human intelligence.