Theunissen2022ExplainableAI

Mark Theunissen and Jacob Browning, "Putting explainable AI in context: institutional explanations for medical AI"

Bibliographic info

Theunissen, M., & Browning, J. (2022). Putting explainable AI in context: institutional explanations for medical AI. Ethics and Information Technology, 24(2), 1-10.

Commentary

For applications of AI in health care, there is a debate among scholars about whether machine learning algorithms need to be explainable. The use of black box algorithms in health care raises questions and challenges for their applicability: Do we have sufficient reasons to trust the outcomes of these algorithms when we cannot understand how they are obtained? Can doctors or health care providers be held responsible for a medical diagnosis based on these AI systems? The main arguments for explainability involve creating trust and ensuring accurate decisions, while those against argue that the high reliability of black box models is itself sufficient ground to base decisions on.

Currently, most research has focused on creating post hoc explanations of black box models, with an emphasis on technological solutions. Theunissen and Browning argue that another kind of explanation is needed to minimise the risks, namely an institutional explanation. Institutional explanations are more pragmatic explanations, communicating information about the machine that aims to alleviate the epistemic concerns of the medical professionals relying on it. Most of the explanations that are created now are only useful for AI engineers and not for health care professionals, and therefore do not create trust. Furthermore, the authors argue that the ideal explanation will be one that addresses the concerns of practitioners, and that this will likely not be known until the system is evaluated in context.

What I found interesting in this paper is that the authors really examine the human-machine interaction that is needed to create trust. The paper goes further than merely discussing whether explanation or reliability is the main factor in creating trust in medical AI. It examines the situation and circumstances of a decision made by the machine and of the explanation of that decision, and discusses what fosters trust in the explanation and in the system itself. I also found it interesting that the paper discusses state-of-the-art XAI methodologies and what we can actually do with those kinds of explanations.

A weakness I found is that the authors argue that explanations focusing on the human decisions involved in the design and deployment of medical AI systems are essential for creating post hoc explanations and for interpreting the reliability of medical AIs. Here I think the authors underestimate the distrust people have of machines making decisions that affect them, since the "intentions" of a machine are unknown. Furthermore, potential bias in the data set used to train the models does not have to depend on identifiable human decisions, for example when the bias is hidden. Generally speaking, there is more trust in the people who work in health care (including AI engineers) than in the machines themselves. Thus, explanations of the human decisions made in the development of an AI system do not address the root problem of distrust of AI.

Excerpts & Key Quotes

⇒ For 3-5 key passages, include the following: a descriptive heading, the page number, the verbatim quote, and your brief commentary on this

Agent versus explanation

Comment:

I think the point the authors make here is very important for the discussion of explainability in medical AI. There is a threefold relation between the result, the explanation, and the agent (the doctor). An accurate result with a confusing explanation might lead a user to wrongly distrust a system, whereas an inaccurate result with a clear explanation can lead to misplaced trust. Thus it can be argued that the correct explanation in these cases is relative to the agent. Here there is also a gap in the existing literature: most explainable AI tools are geared towards engineers who understand the systems, not towards doctors who know a lot about patients but less about AI.

Agent versus explanation

Page 4:
"We take computational reliabilism as being essential for ensuring the accuracy of a system is robust and properly formed with expert input. However, this approach focuses primarily on how engineer’s design and test the technology in the lab, with less recognition of how it needs to be integrated into the medical practice. This concern with the technological design understates the challenge of making the machine a useful tool for the medical professional in context, which requires a more robust validation—one that makes clear it is working appropriately in the field."

Comment:

Here I think it would have been interesting if the authors had not only discussed the practical implications of computational reliabilism (the technical challenges), but also whether conceptually (assuming perfect validation mechanisms exist) accuracy could replace explainability and create trust. I would argue that in health care we often do not have an explanation of why something works, yet because we have evidence that it does work, we trust it. For example, aspirin was used on a very large scale long before its mechanism of action was understood.

Institutional explanations

Comment:

Here I wonder whether these types of explanations would really form the foundation for trust, as the authors argue. I do not doubt that such institutional explanations would help to create trust, but I think there is, in general, already quite a lot of trust in health care institutions. In many cases a patient also does not understand why a doctor makes a decision, but trusts the doctor's expertise, experience, and good intentions. The authors express the hope that institutional explanations can reduce the opacity. However, the decision-making process of black box models remains just as opaque as before, since it is inherently opaque. Putting the black box in a white room does not suddenly make the black box less black.