Danaher2016ThreatAlgocracy-B

There are multiple notes about this text. See also: Danaher2016ThreatAlgocracy-A and Danaher2016ThreatAlgocracy-C

John Danaher, "The Threat of Algocracy: Reality, Resistance and Accommodation"

Bibliographic info

Danaher, J. (2016). The threat of algocracy: Reality, resistance and accommodation. Philosophy & Technology, 29(3), 245-268. (APA)

Commentary

This text focuses on the dangers and consequences of relying on algorithms to make decisions, specifically in public decision-making processes. Two notions it raises are especially interesting: first, the introduction of the term 'algocracy', and second, the societal power imbalance that arises in an algocracy.

To start, the term algocracy is defined as a governance system that uses algorithms to make decisions. These algorithm-driven decisions shape society's structure and the limitations imposed on citizens. The author identifies two main dangers that arise in an algocracy: the hiddenness concern and the opacity concern. The hiddenness concern addresses the data collection required to feed algorithms enough data to extract valuable conclusions. The worry is that this collection happens in an obscure manner and that citizens' information is gathered without their explicit consent.

The second notion worth mentioning is the opacity concern: humans' inability to understand the decisions made by algorithms. What is most interesting about this argument is that the author splits the notion of opacity into two kinds. The first is the most familiar sense of 'opaque': algorithms that perform well but are in principle uninterpretable by humans.

The more interesting interpretation of opacity, however, is the one brought up next. Some algorithms are in principle interpretable by humans, but the complexity of these algorithms and the background knowledge required to understand them mean they must still be considered opaque for the majority of individuals. The author argues that this turns the threat of algocracy into a threat of epistocracy. The conclusion remains the same under either label: algorithm-driven decision making obscures the decision-making process. This infringes on the political legitimacy of a society (as defined by Estlund), as it becomes harder or even impossible for most individuals to understand the procedures that affect them.
These risks together form what the author refers to as the 'threat of algocracy'.

The text also aims to provide solutions to the dangers that come with an algocracy. However, one of the suggested solutions seems unreasonable and lacks convincing arguments. The author suggests human enhancement as a possible antidote to the opacity caused by complex decision-making algorithms and to the epistemic imbalance they create. Yet current science is nowhere near enhancing the human brain on that scale. Bringing human enhancement into the discussion seems unreasonable, as there is no evidence that such enhancements could ever be physically attainable. Furthermore, from a more abstract perspective, introducing an unrealistic solution to a problem does not solve the problem. In this specific scenario, the only conclusion that can be drawn is that if such human enhancements ever became available, they would level the playing field and make the understanding of algorithmic decisions equal among humans. Again, such enhancements are very far from our current reality and do not solve the problem that an algocracy causes today.

Excerpts & Key Quotes


Defining algocracy

"So what do I mean by use of the term ‘algocracy’? I use it to describe a particular kind of governance system, one which is organised and structured on the basis of computer-programmed algorithms. To be more precise, I use it to describe a system in which algorithms are used to collect, collate and organise the data upon which decisions are typically made and to assist in how that data is processed and communicated through the relevant governance system."

Comment:

This excerpt shows the author's definition of an algocracy for this text. One of the most important observations regarding this definition is that a society counts as an algocracy if algorithms constrain and structure the actions a member of society can undertake. However, I think this paper focuses mostly on the threats of a subset of all algorithms. Computer-programmed algorithms have been in use for as long as computers have existed; non-deterministic machine learning algorithms, by contrast, have come into use more recently. In my opinion, it is the latter that this paper mostly addresses. Non-deterministic algorithms are in most cases uninterpretable to humans and therefore constrain and structure members of society in a more severe and opaque way than conventional algorithms. A simpler algorithm, like the one managing the switch between red and green for a traffic light, does not pose a threat to society: its role is very basic and it will never produce unforeseen behavior.
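To make this contrast concrete, here is a minimal sketch of my own in Python (not from the paper). The traffic-light controller is fully transparent, since its entire behavior can be read from the code; the commented-out call stands in for a learned model, where `trained_model` and `citizen_features` are hypothetical names used purely for illustration.

```python
# A fully transparent, deterministic algorithm: every state transition
# is readable from the code, so it can never produce unforeseen behavior.
TRAFFIC_LIGHT_CYCLE = {"red": "green", "green": "yellow", "yellow": "red"}

def next_light(current: str) -> str:
    """Return the next traffic-light state; trivially interpretable."""
    return TRAFFIC_LIGHT_CYCLE[current]

print(next_light("red"))  # -> "green", for a fully predictable reason

# By contrast, a trained machine learning model decides through thousands
# of learned parameters; inspecting them yields no human-readable rule.
# (Hypothetical names, purely for illustration:)
# decision = trained_model.predict(citizen_features)
```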

Dangers of an algocracy

"HIDDENNESS CONCERN: This is the concern about the manner in which our data is collected and used by these systems. People are concerned that this is done in a covert and hidden manner, without the consent of those whose data it is.
OPACITY CONCERN: This is a concern about the intellectual and rational basis for these algocratic systems. There is a worry that these systems work in ways that are inaccessible or opaque to human reason and understanding."

Comment:

In this excerpt the author introduces the two main threats that an algocratic governance system poses to society. First, the hiddenness concern revolves around the notion that the data used by decision-making algorithms is obtained in such a way that the data subjects (the individuals from whom the data originates) are unaware that their data is being collected and used. Consequently, this data could be used in decision-making processes that negatively affect them and for which they have not given explicit consent.

Second, the notion of opacity applies in multiple contexts. The first interpretation revolves around the fact that understanding algorithms requires a large amount of knowledge. Understanding them therefore becomes impossible in practice, as many members of society do not have this knowledge, nor the time and resources to obtain it. The second interpretation is more fundamental. In the field of artificial intelligence, a subset of algorithms has recently proven very successful. These algorithms contain a so-called 'black-box' component, meaning that the steps leading from input to output are in principle not understandable to the human brain. In this interpretation, opacity applies to the entirety of society. Both interpretations pose the same problem: citizens lose the ability to understand decisions that affect them when those decisions are made by 'opaque' algorithm-driven processes.
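The in-principle sense of opacity can be illustrated with a toy example (my own sketch in Python; the weights below are random rather than trained, and the 'approve'/'deny' framing is a hypothetical stand-in for a public decision). Even with complete access to every parameter, nothing in the numbers reads as a reason for the output.

```python
import random

# A toy 'black-box' decision: a tiny neural network whose parameters are
# fully visible, yet the mapping from input to output resists explanation.
random.seed(0)
weights_hidden = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]
weights_out = [random.gauss(0, 1) for _ in range(4)]

def relu(x: float) -> float:
    return max(0.0, x)

def decide(features: list[float]) -> str:
    """Binary decision from three input features via one hidden layer."""
    hidden = [relu(sum(w * f for w, f in zip(row, features)))
              for row in weights_hidden]
    score = sum(w * h for w, h in zip(weights_out, hidden))
    return "approve" if score > 0 else "deny"

print(decide([0.2, 1.5, -0.3]))  # we can see the answer, but no 'reason'
```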

Empowering humans to prevent an algocracy-driven power imbalance

"Where the sousveillance advocate calls for everyone to have their own data-monitoring technologies in order to hold authorities to account, the advocate of non-integrative algorithmic partnerships simply adds to this the claim that everyone should have their own data-mining algorithms. This is effectively like having your own private AI assistant, who can help you to comprehend and understand the other algorithmic processes that affect your life."

Comment:

This passage addresses one of the possible solutions suggested for the problem of unequal understanding of algorithms caused by the 'algocracy'. The author suggests that every individual should have their own data-mining algorithms that allow them to understand the other algorithmic processes affecting their lives. However, I would point out that this does not solve the root of the problem but merely shifts the problem of understanding to another level. If every individual were in possession of such a data-mining algorithm, those algorithms would need to be built one way or another. A set of specialists with enough understanding to build these private AI assistants would therefore be required. Again, there would be an inequality between the people building these assistants and the ordinary people making use of them, so the problem of inequality would not be solved. This passage shows that although the author succeeds in pointing out a real societal issue that comes with embedding algorithms in our decision-making systems, the suggested solutions are unrealistic and do not solve the root of the problem.