SkaugSætra2020DefenceTechnocracy
Henrik Skaug Sætra, “A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government”
Bibliographic info
Henrik Skaug Sætra. “A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government.” Technology in Society 62 (2020): 1-10. https://doi.org/10.1016/j.techsoc.2020.101283.
Commentary
Skaug Sætra’s topic is original: he examines the merits of the concept of an AI technocracy in a neutral manner, without letting its negative aspects disqualify it from the outset. Technocracy, as Skaug Sætra defines it, is a society in which technologists and scientists wield political decision-making power on the basis of their expertise. An AI technocracy, in particular, delegates decision-making power not to technicians or scientists but to artificially intelligent algorithms. Skaug Sætra argues against what he sees as the prejudice that (AI) technocracy is morally deficient in itself: like democracy, the concept is not good or bad by itself; its value depends on how it is realized in a given situation. The definitions of technocracy he uses are clearly grounded in the literature, and the modifications he makes to them are well argued for. Skaug Sætra discusses several pros and cons of the AI technocracy. As cons, he names (i) the demographic argument that technocracy privileges certain groups above others; (ii) the bias argument that people (and computers) are flawed too, and hence technocracy is not a perfect option for the delegation of decision power; (iii) the argument that the process of technocracy is illegitimate, because it moves away from participatory politics; (iv) the argument that deliberation will lead to better outcomes than deferring to experts; and (v) the argument that knowledge is decentralized in a society, so the group of experts will not have access to all the knowledge needed for good decisions.
Despite these serious challenges, Skaug Sætra sees merit in AI technocracy, based upon what he takes to be the essence of politics, namely that “finding [the] moral values [of a certain society] is the first purpose of politics.” The best policies are those that are tested against these moral values, and, of course, the best policies should be implemented. One can easily criticize this definition of the essence of politics by claiming that politicians never actually seek society’s moral values, and by giving a more accurate description of politics. Rather (and alas), politicians do something far more pragmatic: they balance the interests of the group of people they represent, in light of the interests of their party regarding a given decision, against the interests of other groups represented by other parties. But let us move on to the defence of AI technocracy. The gist of the argument is this: if we have an AI that is better than humans at acting upon science, engineering, and complex societal and macroeconomic issues, and that can make expert political decisions with clearly fewer errors than the experts currently making them, then we should use it, on the premise that the best decisions lead to the best policies, which should be implemented.
Well, what’s the catch? (a) This will change humanity’s ‘political nature.’ Skaug Sætra retorts that the algorithmic outsourcing of decision making is no different from bureaucracy. (b) Even if the algorithmic decisions are the best (most effective), they remain illegitimate (because citizens no longer exercise participation), and illegitimacy should never be endorsed. Skaug Sætra replies that participation and algorithms could in principle be combined. (c) Bringing computers into play in decisions affecting human lives is immoral. Here Skaug Sætra’s counterobjections are not that strong: (1) he points to the fact that algorithms are already involved in moral decisions, (2) we have no grip on politicians’ morality either, and (3) it is advantageous that AI cannot be corrupted. All these points fail, in my opinion. Points (1) and (2) appeal to the de facto situation as if it settled what ought to be the case. People are being raped every day, but that does not make it excusable, or something we should not strive to prevent. As for (3), an algorithm may not be a moral agent, but it can be misused and deformed to serve other purposes and produce other outcomes. So in what sense can AI not serve moral corruption?
Further arguments against AI technocracy include the argument from transparency (we do not understand the AI’s decisions) and the argument from accountability (since the algorithm is not a moral agent, who is accountable for an algorithmic decision?).
In conclusion, Skaug Sætra reflects on the pros and cons in a very nuanced discussion. He claims that political participation is necessary for a good society, but that algorithmic decision making can strengthen this participation rather than weaken it. He admits that all technologies have moral implications, and he sees this fact as diminishing the force of moral arguments against the use of AI for political decisions. While he admits that an actual implementation of AI technocracy today would be flawed, because it would disproportionately empower the people creating the algorithms, he thinks the transparency objection is unfounded, because nothing of the algorithm itself is hidden; it is just difficult to understand. In short, Skaug Sætra offers a series of interesting arguments and counterarguments regarding AI technocracy that, while not always convincing, show that the discussion of AI technocracy is far from finished. Given that AI technocracy is already something of a reality, it is imperative to speed up the discussion regarding its desirability.
Excerpts & Key Quotes
- Page 4:
“With regard to the applications of AI, I posit that we have already moved a long way towards AI decision making, and this is an important point to keep in mind when we consider the technocracy of AI. As I will show, a condemnation of the technocracy of AI that I propose will in many respects simultaneously involve a condemnation of many current practices.”
Comment:
The point Skaug Sætra makes here is very important. Reflections on AI technocracy do not pertain to a distant future, rather they are applicable to reality as is in Western society.
- Page 8:
“I argue that AI is transparent, but that we simply have a hard time understanding how known methods and known input lead to decisions that are better than those we could make ourselves. If nothing is hidden or secret, a decision is by definition transparent, and whilst further work on explainable AI will be beneficial, Objection 4 [algorithmic decision making is not transparent] does not appear to be particularly strong.”
Comment:
I think Skaug Sætra conflates two things here: that something is in principle understandable by somebody does not make it transparent. Transparency has to do with access. What if the person who is capable of understanding the algorithm has no access to it? Taking a step back, Skaug Sætra falls into his own trap: even if there is transparency for somebody, who is that somebody? Some elite programmer or software developer. Conceiving of transparency simply as abstract understandability thus strengthens the arguments from epistocracy against AI technocracy.
- Page 5:
“Firstly, the purpose of politics is to find the best policies, in accordance with some set moral basis. Secondly, we should enact the best policies we have identified. Thirdly, AI is better than human beings at creating and identifying the best policies. All this leads to the preliminary conclusion that AI should be given the power to discover and enact our policies.”
Comment:
This quotation contains the main argument of the paper. This argument is the starting point for the shallow defence of AI technocracy against the set of cons discussed by Skaug Sætra.