Peters2022ExplainableAI-B

There are multiple notes about this text. See also: Peters2022ExplainableAI-A

Uwe Peters, "Explainable AI lacks regulative reasons: why AI and human decision‐making are not equally opaque"

Bibliographic info

  1. U. Peters, “Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque,” AI and Ethics, Sep. 2022, doi: 10.1007/s43681-022-00217-w.
  2. O. Gillath, A. Ting, M. S. Branicky, S. Keshmiri, R. M. Davison, and R. Spaulding, “Attachment and trust in artificial intelligence,” Computers in Human Behavior, vol. 115, p. 106607, Feb. 2021, doi: 10.1016/j.chb.2020.106607.

Commentary

The aim of this study is to analyze the differences between human decision-making and AI decision-making with respect to their opacity and transparency. The author argues that human decision-making may be more transparent and trustworthy due to mindshaping, a "regulative" function that makes it more predictable to other people, whereas AI lacks this function and may therefore be more opaque and less trustworthy. The author also points out that humans have a greater capacity for reasoning, and that AI, lacking this reasoning and the associated corrective actions, can reach erroneous and misleading conclusions.

From my point of view, while I understand the author's arguments, I do not believe they guarantee that human explanations will be more transparent or trustworthy. Precisely because of that regulative function, humans can withhold information to convince another person of the truth of a statement, or even lie; AI systems (unless explicitly designed to do so) would not perform these actions. Humans are also prone to assert things without weighty arguments, simply out of belief. The corrective mechanism mentioned by the author would come into play here, but in my view it is not sufficient in all cases. If we assume that the human is well informed and documented, however, I do believe that he or she could provide better reasoning and that mindshaping plays an important role: during an explanation, a human can detect which points need to be emphasized, or deduce what may be generating doubts, and thus try to offer a better explanation. An AI system would not be able to deduce this and would depend on the person expressing their questions clearly.

As a final criticism, I would point out that the author makes some claims for which he has no empirical evidence: "One might have the intuition that AI rationalizing explanations are more predictively accurate because black-box systems cannot deceive and are not as complex as human brains. But it seems equally intuitively plausible to hold that exactly because human brains are more complex and human cognition is more sophisticated, human explanations of HDM are much more accurate. As of now, to the best of my knowledge, there is no empirical study that has pitted the two kinds of explanations against each other to compare their predictive accuracy. There is thus currently no empirical test to adjudicate between these two intuitions." This reinforces my point that we humans can make assertions without a solid basis, relying solely on beliefs.

Excerpts & Key Quotes

Assumption of a regulative feature in AI

"The problem that has gone unnoticed is that if people do partly trust AI systems' explanations more because they implicitly assume that the regulative feature known from human explanations is present, then their trust allocation is partly unwarranted."

Comment:

This passage is interesting because it opens up the debate about how much trust people really place in AI systems. I broadly agree with the author that if people assume these systems are endowed with this regulative function, their trust in them is not entirely justified. However, I believe that when people do assume such a function, it is not because they see the systems as "experts" in a specific field, but because the systems have access to a huge amount of data and the ability to find patterns and similarities; on those grounds, such trust could be justified. In any case, there is not total confidence in AI systems today [2]. So before discussing whether this regulative function is presupposed, I think it would be more interesting to discuss trust in general.

Opacity in human decision-making

"The opaque parts of the mind that determine HDM outcomes include intuitions, fast automatic response tendencies, and heuristics."

Comment:

This passage is interesting because it opens the debate about how opaque human decisions are, and shows that even if the underlying structure is opaque, the source can be transparent. As discussed in the general commentary on the article, the fact that people's explanations rest on beliefs or intuitions can lead to erroneous explanations. In addition, through mindshaping one could convince the other person that one is right or that what is being explained is justified. However, it is also often possible to tell whether a person is basing their explanation on a reliable source or on mere intuition; the other person can then understand where the explanation is coming from and decide whether or not to believe it. By contrast, an AI system whose explanation rests, for example, on stereotypes present in its training data will always have a "fact-based" reason to justify its decision, which makes it harder to discern whether the explanation is true and harder to decide whether to believe it.

Trust in human decision-making

"One implication of the outlined differences between HDM and ADM is that human reason-giving explanations of HDM and experts' professional decisions can be viewed as in some cases more trustworthy, especially in domains where social feedback on individuals' HDM is common and promotes the regulative impact of the explanations."

Comment:

This passage is interesting because it opens up the debate about how much trust people place in human decisions. I agree with the author that people tend to trust human justifications more, especially when they come from experts, for example a doctor talking about medicine, and I agree that in many cases such explanations are likely to be more reliable or at least equally reliable. However, the author grounds this greater reliability in humans' "self-corrective function", and in some cases that may not be enough. Continuing with the medical example: for rare diseases or unusual cases, an AI system with access to a large amount of data from many patients, hospitals, and countries could detect or analyze these cases better. A doctor could diagnose wrongly, and there may be no one to correct him and activate this "self-corrective function", leaving us with the wrong explanation. So although I believe the general public may trust the human explanation more, I do not think it should always be that way.