Artificial Morality
Definition of Artificial Morality
Artificial morality refers to the field that attempts to bring moral values to artificial intelligence systems, typically by trying to simulate the moral-cognitive capabilities of human beings. By integrating moral values into AI systems, the aim is to enable them to reason ethically, follow ethical principles, and behave "reasonably and responsibly" in different contexts[1].
Two concepts related to artificial morality are machine ethics and artificial moral agents. Machine ethics, in its narrow sense, refers to machines that follow ethical principles supplied by their developers; it does not imply that the machines can reason about those values or make decisions based on them[2]. An artificial moral agent is a more specific concept: an agent that already has this reasoning capability built into its system[3]. Artificial morality is the more generic notion and the step prior to such agents; it is the attempt, which may well be unsuccessful, to give systems this capability.
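To make this distinction concrete, the sketch below contrasts a hard-coded ethical filter, in the spirit of machine ethics, with an agent that scores candidate actions against weighted moral values. The action names, the values, their weights, and the weighted-sum scoring rule are all invented for illustration; they are not drawn from the cited sources.

```python
from dataclasses import dataclass

# Machine ethics in the narrow sense: principles fixed by the developer.
# The machine applies the rules but cannot weigh or revise them.
FORBIDDEN_ACTIONS = {"harm_animal", "deceive_user"}

def rule_based_filter(action: str) -> bool:
    """Return True if the hard-coded rules permit the action."""
    return action not in FORBIDDEN_ACTIONS

# Artificial moral agent (crudely simplified): the agent itself scores
# actions against weighted moral values and trades them off in context.
@dataclass
class MoralValue:
    name: str
    weight: float  # how much the agent cares about this value

def value_based_choice(actions: dict[str, dict[str, float]],
                       values: list[MoralValue]) -> str:
    """Pick the action whose consequences best satisfy the agent's values.

    `actions` maps an action name to the degree (0..1) to which it
    satisfies each moral value.
    """
    def score(consequences: dict[str, float]) -> float:
        return sum(v.weight * consequences.get(v.name, 0.0) for v in values)
    return max(actions, key=lambda a: score(actions[a]))

if __name__ == "__main__":
    values = [MoralValue("avoid_harm", 0.7), MoralValue("be_useful", 0.3)]
    actions = {
        "vacuum_over_insect": {"avoid_harm": 0.0, "be_useful": 1.0},
        "steer_around_insect": {"avoid_harm": 1.0, "be_useful": 0.8},
    }
    print(rule_based_filter("harm_animal"))     # False: the rule forbids it
    print(value_based_choice(actions, values))  # steer_around_insect
```

The point of the contrast is that the first function can only apply rules it was given, while the second embodies a very rudimentary form of the weighing-of-values capability that artificial moral agents are meant to have.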
Implications of commitment to Artificial Morality
Committing to Artificial Morality means recognizing the importance of incorporating ethical considerations into AI design and development. This involves reassessing what an ethical principle is and how the system will interpret it. As Misselhorn[1] argues with the example of a vacuum cleaner: "Should it vacuum and hence kill a ladybug that comes in its way or should it pull around it or chase it away? How about a spider? Should it extinguish the spider or save it as well?". What counts as a moral act can differ significantly from person to person, so developing systems that can make moral decisions has many implications. First, it requires a framework that carefully considers what an ethical response is and takes more than one perspective into account. Second, the more technical side must be addressed: how the system and its code are to carry out these responses. Here it is also necessary to consider possible unexpected outcomes, i.e., the system may carry out its task in an unforeseen way, producing undesirable or unplanned results that deviate from its moral grounding[4]. Finally, work must be done on the transparency and explainability of these systems, so that users can know the reasons why these decisions have been made.
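As a toy illustration of what such explainability might look like for Misselhorn's vacuum cleaner (the protected-species list and the decision routine are hypothetical, not taken from the cited works), the robot can return its choice together with the reason behind it:

```python
def decide_on_obstacle(obstacle: str, protected: set[str]) -> tuple[str, str]:
    """Return (action, explanation) for an obstacle in the robot's path."""
    if obstacle in protected:
        return ("steer_around",
                f"'{obstacle}' is on the protected list, so the robot avoids it")
    return ("vacuum",
            f"'{obstacle}' is not on the protected list, so cleaning proceeds")

protected_species = {"ladybug", "spider"}
for thing in ("ladybug", "dust", "spider"):
    action, why = decide_on_obstacle(thing, protected_species)
    print(f"{thing}: {action} ({why})")
```

Even in this minimal form, exposing the explanation alongside the action lets a user audit why an insect was spared or not, which is the kind of transparency the paragraph above calls for.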
Societal transformations required for addressing concerns raised by Artificial Morality
Addressing concerns related to Artificial Morality requires a combination of cultural, educational, institutional and social changes, among them:
Ethics education: Promote education on ethical frameworks and their implications, so that developers can take these considerations into account when building systems and be clear about the framework within which they work.
Multidisciplinary collaboration: Different points of view must be taken into account when developing both the ethical framework and the systems that use it. This requires collaboration among people with different backgrounds and perspectives.
Ethical review processes: The framework itself, as well as the systems built on it, should be under constant review and evaluation, to help ensure that the systems comply with the agreed standards and guidelines in the desired manner.
References
[1] C. Misselhorn, "Artificial Morality. Concepts, Issues and Challenges," Society, vol. 55, no. 2, pp. 161-169, Feb. 2018, doi: 10.1007/s12115-018-0229-y.
[2] M. Anderson and S. L. Anderson, Eds., Machine Ethics. Cambridge, UK: Cambridge University Press, 2011, doi: 10.1017/cbo9780511978036.
[3] C. Misselhorn, "Artificial Moral Agents," Cambridge University Press, 2022, pp. 31-49, doi: 10.1017/9781009207898.005.
[4] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané, "Concrete Problems in AI Safety," arXiv:1606.06565, Jul. 2016, doi: 10.48550/arXiv.1606.06565.