Markus2020XAIHealthcare

"The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies", by Aniek F. Markus et al.

Bibliographic info

Markus, A.F., Kors, J.A., & Rijnbeek, P.R. (2021). The role of explainability in creating trustworthy artificial intelligence for health care. Journal of Biomedical Informatics, 113, 103655. doi:10.1016/j.jbi.2020.103655.

Commentary
This paper describes the necessity of explainable AI in healthcare. It argues that explainability fosters trust and reliability in AI systems, qualities that are inherently valuable in a healthcare context. The article emphasizes that while AI can significantly enhance healthcare outcomes, the opaque nature of many AI models is a barrier to proper understanding, and thus to widespread adoption. By making AI more explainable, clinicians can better understand (and trust) algorithmic decisions, which is crucial in scenarios where the stakes are high. The paper methodically breaks down the types of explainability, describing their relevance and application in healthcare, and stresses that the demand for explainability varies with the intended use of a system.

Excerpts & Key Quotes

Importance of explainability in healthcare AI

Page 2: "Lack of transparency is identified as one of the key barriers to implementation. As clinicians should be confident that AI systems can be trusted, explainable AI has the potential to overcome this issue and can be a step towards trustworthy AI."

My comment: This excerpt captures the core thesis of the paper: transparency and understandability of AI systems are essential for their acceptance and widespread use in healthcare settings. The authors argue that without explainability, AI systems cannot achieve the necessary level of trust among healthcare professionals, who must understand how and why decisions are made in order to justify relying on these systems when treating patients.

Framework for explainable AI

Page 3: "We propose a framework to guide the choice between classes of explainable AI methods (explainable modelling versus post-hoc explanation; model-based, attribution-based, or example-based explanations; global and local explanations)."

My comment: In this excerpt, the authors not only advocate for integrating explainability into AI systems, but also provide a structured approach to choosing the appropriate type of explainability based on specific healthcare needs. The proposed framework is pivotal because it supports the practical application of explainable AI and guides developers in implementing explainability in a way that aligns with healthcare requirements and enhances trustworthiness.

Challenges in implementing explainable AI

Page 4: "Often-mentioned concerns include potential algorithmic bias and lack of model robustness or generalizability. Other problems include the inability to explain the decision-making process of the AI system to physicians and patients, difficulty to assign accountability for mistakes, and vulnerability to malicious attacks."

My comment: This excerpt shows the many challenges that arise when implementing AI systems in a setting such as healthcare, challenges that go beyond technical problems to involve ethical, legal, and social implications. The authors highlight concerns about algorithmic bias, which can produce prejudiced outcomes when systems are trained on flawed data. The passage underscores the need to address this multitude of challenges comprehensively. By developing explainable AI, we can build systems that are not only technically proficient but also clear and understandable, which could genuinely enhance healthcare as a whole.