explainability-B

Definition of Explainability

Explainability of AI entails that the outcomes of decisions made by AI can be accounted for and that the logic behind the decision-making can be provided (Selbst & Barocas, 2018). Explainability is thus a characteristic that allows machine decisions to be scrutinized.
Explainability is related to opacity and transparency, but not identical to either. Opacity is a negatively phrased, roughly antonymous concept: it refers to the inscrutability of machine decisions that results from black-boxing the AI. Ensuring explainability is therefore a possible remedy for the problem of opacity. Explainability is also not the same as transparency or openness, which would entail fully disclosing the workings of the AI and how it makes decisions. Such full disclosure is not always possible given the complexity of AI, nor does it guarantee that the system becomes scrutable or understandable, which is precisely what explainability requires.

Implications of commitment to Explainability

⇒ To what does one commit oneself when one commits oneself to this ethical value/principle (or, in the case of a negative concept like "manipulation," when one commits oneself to diminishing its role)? Put differently, what is at stake here? What key requirements for the appropriate design of AI technologies are raised by this concept?

Explainability has implications both for the AI itself and for us humans as data subjects (Selbst & Barocas, 2018). For the AI, explainability entails that the technology is transparent enough that its outcomes, and the logic behind those outcomes, can be learned, interpreted, and understood. The techniques must be made intelligible enough to give a sense of how the AI operates; this can involve documentation of the processes, for instance. For us as data subjects, explainability involves cultivating a mindset similar to big data literacy (Sander, 2020): personal education and emancipation on topics concerning data and AI, in order to understand the techniques and to enable scrutiny. This is why mere transparency of the underlying logic is not enough; we must also be able to perceive the workings of the AI and make informed decisions ourselves.

Societal transformations required for addressing concerns raised by explainability

⇒ What cultural, educational, institutional, or societal changes are needed to address concerns related to this concept?

Societally, we must not accept opacity merely because the AI works, or simply because its outcomes are accepted. Doing so may lead to unethical practices, as the example of the COMPAS algorithm shows (Angwin et al., 2016).
Institutionally, we must therefore scrutinize AI that is already in place, and future AI before implementation, and decide whether we understand it and can explain its logic and outcomes well enough to judge whether it is ethically acceptable. Coming from a public administration background, I find this especially important for decisions affecting people's everyday lives.
Educationally, this involves updating curricula in universities and schools to foster this mindset, which is also why I took this course.
An institutional and educational example is the IAMA (Impact Assessment Mensenrechten en Algoritmes, a human rights and algorithms impact assessment developed by the Utrecht Data School, with Mirko Schäfer among others), which is now obligatory for employees of the Rijksoverheid, the Dutch central government. It involves scrutinizing AI and its ethical impact.