Arvan2022PanpsychismAI

Marcus Arvan and Corey J. Maley, "Panpsychism and AI consciousness"

Bibliographic info

⇒ Arvan, M., & Maley, C. J. (2022). Panpsychism and AI consciousness. Synthese, 200(3), 244. https://doi.org/10.1007/s11229-022-03695-x

Commentary

Arvan and Maley make an interesting argument that digital computers do not represent magnitudes in the same way as human brains, which represent them in an analog way. Connecting this to micropsychism, they argue that digital computers cannot be conscious if micropsychism is true: the digital representation of microphenomenal magnitudes abstracts away from the phenomenal experience of those magnitudes, so they cannot be combined in a meaningful way to create macrophenomenal consciousness. The paper further emphasizes the need to investigate the combination problem (how integrating microphenomenal properties can lead to macroconsciousness) and whether phenomenal consciousness is truly analog.

A weakness of the paper, in my view, is that the authors assume that because our brain appears to create macrophenomenal consciousness by integrating microphenomenal magnitudes in an analog way, the analog representation of those magnitudes is necessary for consciousness according to micropsychism. However, given that the combination problem is not yet solved, and it is therefore unknown how these magnitudes are combined to create macrophenomenal consciousness, I think it is far from certain that digital representations cannot lead to consciousness. While the authors provide arguments for why they believe this is impossible, I find the conclusion difficult to draw. Moreover, even though other theories of consciousness are discussed and it is mentioned that AI consciousness might be possible under those theories, it is not clearly stated whether and why panpsychism should be preferred over them. Instead, it is only argued that micropsychism alone entails that digital representations of microphenomenal magnitudes cannot lead to macroconsciousness.

Excerpts & Key Quotes

Digital representation vs analog representation

In short, digital representation requires the physical implementation of symbols (i.e. numerals), and those symbols (in the right sequence) in turn represent numbers (and the numbers, in turn, can represent time, temperature, musical notes, etc.). A digital representation is, in this way, a kind of second-order representation. Analog representation requires the physical implementation of magnitudes, and those magnitudes represent numbers (where, again, those numbers can represent time, temperature, musical notes, etc.). An analog representation is, in contrast, a kind of first-order representation.

Comment:

This quote illustrates the difference between digital and analog representations. I find it interesting that the authors call them second-order and first-order representations, respectively, as this implies that digital representations abstract further away from the thing to be represented than analog representations do. This seems a reasonable assumption to me, similar to how continuous variables can provide more nuance than discrete ones.

Micropsychism vs other theories of consciousness

Thus, while there are complex and interesting questions regarding whether digital AI could have coherent macrophenomenal experiences according to other theories of consciousness, none of these other theories unambiguously entail what we will now argue is true of micropsychism: namely, that if micropsychism is true, then digital AI may not be capable of realizing coherent macroconsciousness in any meaningful way because of how digital computation abstracts away from microphysical-phenomenal magnitudes.

Comment:

This quote explains the motivation behind looking at AI consciousness from a micropsychist perspective. The authors argue that micropsychism is the only theory that unambiguously entails that digital AI cannot be capable of a meaningful form of macroconsciousness. In the preceding paragraphs, they discuss how the digital abstraction of microphysical-phenomenal magnitudes by AI does not contradict macroconsciousness under other theories of consciousness. While this explains how their argument makes an important point about micropsychism in contrast to other theories of consciousness, I think it could have been strengthened by providing reasons for why micropsychism should be preferred over those other theories.

The problem with the abstraction of digital representations

Consider again our base-4 example: we need four different voltage levels to represent four different digits. But which four voltages? Any four will work. They could be zero, one, two, and three; they could be negative-two, four, one, and seven; they could be six and a half, seven, negative-thirty, and thirty. It does not matter. All that matters is that there are four different voltage levels to represent the different digits. This may seem strange, but it is no different from the fact that the Arabic numerals we use to represent, say, the numbers one, two, three, and four have no systematic relationship among them: they are simply different from one another. In any case, from the perspective of the digital functioning of the system, all that matters is that four digits are represented: 0, 1, 2, and 3. Thus, qua digital system, any of these voltage-to-digit mappings would be possible (as well as infinitely many more). But if micropsychism is true, then any particular microphysical instantiation that happened to correspond to a coherent phenomenal experience would be completely a matter of chance.

Comment:

This quote is part of the authors' answer to the objection that, while human consciousness appears to rely on analog features, we cannot conclude that macroconsciousness integrated from digital features is impossible. I find it difficult to see how this argument establishes that digital representations cannot lead to consciousness. I do not see a problem with a particular microphysical instantiation being a matter of chance, and I would argue that as long as the same instantiation is activated in response to a phenomenal experience, it does not matter what exactly the instantiation looks like. The same phenomenal experience does not need to have the exact same instantiation in every human brain, and similarly, whether a phenomenal experience is represented as a "0", "1", or "300" should not make it impossible to integrate the representation into a macrophenomenal consciousness.

Further commentary

#comment/Anderson:
From #ChatGPT: Micropsychism is a philosophical position related to panpsychism, the view that all things possess some form of consciousness or mental properties. Micropsychism, more specifically, posits that even the smallest parts of physical reality, such as elementary particles, have some simple or basic form of consciousness or mental experience. This view challenges the conventional understanding of consciousness as something that only emerges in complex systems like human brains.

Discussion, Joel Anderson:

The implications of micropsychism for the ethics of AI are significant: if even the simplest physical systems possess some form of consciousness, then AI systems could be said to have some form of conscious experience. On many accounts of the moral status of robots and of robot rights, including the right of AI to democratic inclusion, the question of whether AI systems are conscious is central. If AI systems are viewed as potentially conscious, they may be able to suffer or experience events as painful. Relatedly, the potential for AGI to have emotions or subjective experiences might lead to new laws and regulations concerning their use.
See also: Weber-Guskar 2021, How to feel about emotionalized artificial intelligence