Danaher2016ThreatAlgocracy-C

(J. Danaher, "The Threat of Algocracy: Reality, Resistance and Accommodation.")

Bibliographic info

Danaher, J. The Threat of Algocracy: Reality, Resistance and Accommodation. Philos. Technol. 29, 245–268 (2016). https://doi.org/10.1007/s13347-015-0211-1

⇒ This text makes some interesting points on legitimate decision-making processes and explainability, which one could take a different stance on. It will be interesting to see whether their somewhat pessimistic conclusion can be softened by changing such a stance. I do, however, agree with their conclusion that caution is warranted in the algocratic system. We need to be more aware of the risks and make decisions around AI accordingly. While some advances may not be stoppable, others we might be able to control, choosing where and whether we want them in our society and governmental decision-making processes.

Commentary

Excerpts & Key Quotes

⇒ For 3-5 key passages, include the following: a descriptive heading, the page number, the verbatim quote, and your brief commentary on this

What is the threat of algocracy?

"Legitimate decision-making procedures must allow for human participation in and comprehension of those decision-making procedures."

Comment:

In the definition of legitimate decision-making given in the text, human participation is crucial. However, human participation is understood there in a very narrow and direct sense, while it can be interpreted in many different ways. It can be seen as being able to control or check the decisions made by an algorithm in this algocratic system. But couldn't human participation also consist in choosing the algorithms, or in deciding where we do not want these technologies at all? Having a say in the algocratic system is also human participation, much like politicians act for their voters in order to reach certain goals: they receive their legitimacy from the backing of their voters. Couldn't a decision-making process in the algocratic system likewise receive its legitimacy from our decision to allow it to make the decisions and to accept the risks it might bring along? In the worst case we can always stop using it, just as we can demand that politicians step down. I think we have to accept the uncertainty and risk that come with AI, just as we cannot be sure whether the politicians we choose will fulfil our wishes.

Secondly, Danaher argues that comprehension of the decision-making procedures must be present. He goes on to argue that citizens don't need an extremely in-depth understanding of the algocratic system, but do need more than just its general rationale. While I agree we would all benefit from understanding AI better, I don't think such an understanding is necessary for legitimate decision-making procedures. One reason we have an indirect democracy is so that we don't have to concern ourselves with these difficult decisions every day: we choose people whom we then entrust to make these decisions for us. I think most people already do not understand most governmental policy, or understand it only in a very basic sense, as a general rationale. With AI it may become the case that nobody fully understands the system, although this might also be true of current legislation. This shouldn't be too steep a hurdle, as long as there are people who are aware of our limited understanding and the risks that brings.

Can we accommodate the threat?

"The second reason for doubting the reviewability solution is that, to the extent that algocratic systems could be made to rely on interpretable processes, the likely effect would be to replace the threat of algocracy with the threat of epistocracy. It is highly unlikely that any particular citizen would have the background knowledge and expertise to review, engage and understand the algorithmic processes by themselves."

Comment:

Even if AI were explainable, it would still be almost impossible to explain to the ordinary citizen; we can't all be experts on everything. That is why trust is so important. If we trust our scientists and politicians, we can entrust them with weighing the options and making the decision: they weigh the possible risks against the possible benefits, taking more caution the greater the uncertainty. In this frame, it remains important to epistemically enhance citizens, as they will be able to make better decisions when voting and legitimizing their government.

Conclusion

"In short, we may be on the cusp of creating a governance system which severely constrains and limits the opportunities for human engagement, without any readily available solution. This may be necessary to achieve other instrumental or procedural gains, but we need to be sure we can live with the trade-off."

Comment:

This conclusion does not seem as pessimistic to me as the text makes it out to be. It shows we have a choice, a choice we have to think about deeply, as we cannot have it all. Every choice carries risks and uncertainty, and we should act accordingly: be careful with potentially harmful technology. Consequently, we might choose explainability over efficiency, because maximal efficiency should not be the 'normal' and is not necessary or necessarily better overall; something else will probably suffer as a result. Finding the right balance will prove to be an ongoing challenge.