Responsibility gap
Definition of the Responsibility Gap
The term responsibility gap was first used in relation to AI by Andreas Matthias to describe the inability to trace responsibility for the actions of autonomous AI systems that are capable of operating on rules that are not fixed but learned.[1] Responsibility can mean two things: firstly, a person who is responsible for something should be able to offer an explanation of the intentions behind it; secondly, in cases where that person has control over the behaviour and the outcome, they are rightly subject to a reaction based on the performed action (such as praise or punishment). Traditionally, manufacturers were responsible for the consequences of machines, but when machines make autonomous decisions, the control condition is violated and the manufacturer potentially cannot be held responsible.
A term related to the responsibility gap is explainability. When an AI system is explainable, the first form of responsibility could be achieved, since we can trace the beliefs and reasons on which the system bases a decision. However, explainability does not close the responsibility gap with respect to the second form, being able to praise or punish someone for the outcome: the manufacturer still does not have control, and it is questionable whether an AI system can effectively be subject to praise or punishment. A second related term is responsibility for harm caused by artificial agents. This falls under the responsibility gap but is not equal to it: the responsibility gap applies to all forms of outcomes produced by AI systems, while harm is one example of such an outcome.
Implications of commitment to Responsibility Gaps
An implication of commitment to the responsibility gap is the question whether it is justified to apply AI systems where no one can be held responsible for the outcomes. An example that is discussed extensively in academic research is the case of Lethal Autonomous Weapon Systems, which can be seen as killer robots. Since no one can be held responsible in a situation where such a robot commits a war crime, many regulations are being developed to prevent the use of such weapon systems.
Societal transformations required for addressing concerns raised by the Responsibility Gap
As I discussed above, explainability could address the first type of responsibility (also called moral accountability). For the second type, a societal transformation is needed in the sense that we have to accept that applying AI can lead to situations where no one can be praised or punished for the action. This would mean that the entire way we create and apply laws has to be transformed. Another potential transformation is a change in the way we look at control. For example, when an army commander deploys a killer robot, it is known that this could lead to a war crime, and indirectly the commander is in control over that outcome. Although the commander is not directly in control, the fact that there is a decision to deploy or not to deploy could mean that person is (at least partly) responsible.