Clarke2021SubmissionOfFeedback

Clarke, S. et al. (2021). Submission of Feedback to the European Commission's Proposal for a Regulation laying down harmonised rules on artificial intelligence

Bibliographic info

Clarke, S., & Whittlestone, J. (2021). Submission of feedback to the European Commission's proposal for a regulation laying down harmonised rules on artificial intelligence. Retrieved from https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665626_en

Commentary

The feedback paper is concise and points out some very interesting weaknesses in the AI Act that should be addressed. The authors argue primarily that the eight high-risk categories are not sufficient, and, since AI technologies are rapidly evolving, they make several suggestions for improvement.

Excerpts & Key Quotes

How ‘high risk’ AI systems are identified and classified for the purpose of regulation may also need to adapt over time. [Like: ….]
The emergence of increasingly general-purpose systems, which are then adapted and deployed for specific domains by secondary developers or users.

Comment:

I think this is an especially important point. The use of this type of foundation model for fine-tuning to specific tasks is not covered sufficiently by the AI Act: the eight high-risk categories are quite specific, while general-purpose AI systems are widely used in exactly this way.

The possibility that companies may underemphasise the role AI plays in their decision making in order to avoid new regulation.

One way providers or users may seek to avoid the obligations placed upon them is by presenting the use of AI systems as marginal in their decisions, when in fact AI systems are very important. For example, a company could use an AI system to generate some insight, destroy the model but allow human decision-makers to apply the knowledge. In such a case, we believe that the provider should be subject to the same requirements, but determining whether and to what extent an AI system has been involved in a decision may be very difficult to determine. These considerations will become more and more important as and if ‘hybrid’ human-AI systems, or collective intelligence systems, develop and are used more widely.

Comment:

This is a very concerning point the authors are making, distinct from the "bluewashing" they primarily focus on. I want to dissect this statement into bluewashing and accountability, and focus on the latter.

It is hard to decide whether this is a risk and where we should place the responsibility: in the AI Act, and therefore in law, or rather in policy and in the awareness of decision makers. For example, in the case of generative AI in education and workplaces, I believe it should be policy that genAI tools are used merely as aids, and that their output is always fully understood, interpretable, and traceable before being used in educational and work practice. In this way you keep governance with students, educators, and employees, just as when they use sources from Google searches.