Leslie2021AIHumanRightsDemocracy

Bibliographic info

Leslie, D., Burr, C., Aitken, M., Cowls, J., Katell, M., & Briggs, M. (2021). Artificial intelligence, human rights, democracy, and the rule of law: a primer. arXiv preprint arXiv:2104.04147.

Commentary

This article is a primer that takes the reader through the main sections of the Feasibility Study by the CAHAI (the Council of Europe's Ad Hoc Committee on Artificial Intelligence) and provides further background on AI technologies and the Study's connection with human rights, democracy, and the rule of law.

My general impression, as with guidelines articles in general, is that it makes me a bit cynical. I naturally distrust the implementation of guidelines and rules for AI. Don't get me wrong: I agree with all the policies, apart from some adjustments and clarifications, but I think stronger actors are needed to carry them out. They say: "As the work of the CAHAI now enters the stakeholder consultation and outreach phase, it must be emphasized that the quality and success of this important effort will now depend on the wisdom and insights of as wide and inclusive a group of participants as possible."
In short, I think these are excellent guidelines, and grounding them in law and human rights gives them real weight, but I find it difficult to imagine them having an effect in the future.

One more general comment: I find it strange that, in the context of privacy, the text says nothing about metadata that can be de-anonymised and traced back to individuals. In my opinion this is one of the biggest privacy problems and must be included in such guidelines.

Excerpts & Key Quotes

Who is Monitoring?

Monitoring
After the model is implemented by the team, it must be monitored to ensure that it is still serving the desired purpose, being used responsibly and within the intended scope, and is responsive to emergent, real-world conditions. For instance, the team notices that a new variable to measure water quality was released by a standards agency. This could cause a lack of standardisation across the data, as it was not an original variable included in the training data set. They decide to incorporate this change into the model to stay current with agriculture norms and practices.

Comment:

This entire text is a piece of advice full of guidelines on how AI can improve in all kinds of areas. I agree with the problem described above and with the task of monitoring a system. What I question, however, is that it is not stated who bears this responsibility. The quote places the responsibility with "the team"; since this passage is a rather abstract description of the development and deployment of an AI system, that vagueness is perhaps understandable. Still, I think clarifying and assigning responsibility to someone specific is essential. In reality, developers ("the team") often build a system and then take their hands off it, and problems arise precisely because the task of monitoring is not fulfilled at all.

HUMAN DIGNITY

HUMAN DIGNITY
All individuals are inherently and inviolably worthy of respect by mere virtue of their status as human beings. Humans should be treated as moral subjects, and not as objects to be algorithmically scored or manipulated.

Comment:

A short comment, but I think this is exactly how AI works: everything is treated as an object to be algorithmically scored or manipulated. 'Manipulated' is a negatively loaded word choice, and it will certainly not always apply, but AI simply works with scores, not with 'moral subjects'.

HUMAN DIGNITY - Key obligations

Member States should require AI deployers to inform human beings of the fact that they are interacting with an AI system rather than with a human being in any context where confusion could arise.

Comment:

I think that if you want to implement such a measure and avoid confusion, you should not leave it to the AI deployers themselves to decide what counts as "any context where confusion could arise". At least, that is how I read it. As AI deployers, we no longer have a neutral view of this: to us it seems obvious that a chatbot is not a human being, so a strange conversation should not cause confusion, but many people do not see it that way. And in the case of less innocent medical advice, it is not obvious to everyone that there is no real person behind it. I understand it can get irritating, but it is better to be irritating and explicit when something is entirely AI than to leave people confused and fail to comply with the stated substantive rights:
- The right to human dignity, the right to life (Art. 2 ECHR), and the right to physical and mental integrity.
- The right to be informed of the fact that one is interacting with an AI system rather than with a human being.
- The right to refuse interaction with an AI system whenever this could adversely impact human dignity.

Due Diligence and Impact Assessments

To ensure that the design, development, and deployment of AI systems do not violate human rights, it is vital that organisations exercise due diligence. The use of impact assessments is one practical means for identifying, preventing, mitigating, and accounting for adverse human rights impacts that may arise from the use of AI-enabled systems. The effective use of impact assessments will depend on the socioeconomic indicators used and the data that are collected. For instance, an impact assessment may want to explore the impact that an AI-enabled system has on individual well-being, public health, freedom, accessibility of information, socioeconomic inequality, environmental sustainability, and more.

Comment:

As I've said before, I like that this guidelines article makes the connection with human rights. These weighty rules and laws apply worldwide; the only thing missing for me is clear consequences attached to them. Beyond that, a positive note: in my opinion, drawing this connection makes AI problems and 'violations' that were previously intangible to us, out of ignorance about AI, more tangible through laws and rules that we already know, so that hopefully more people will see the dangers.