Quebec should take the lead and adopt its own legislative framework setting out the principles that govern AI.
by Jocelyn Maclure and Alexis Morin-Martel. Originally published on Policy Options.
October 6, 2023
(French version available here)
The artificial intelligence system ChatGPT burst into public view on a date that may become a benchmark in history: Nov. 30, 2022.
This large language model is already changing the way we use search engines and, with varying degrees of success, automating tasks ranging from computer programming to suggesting recipes, reading lists and travel plans. Its ability to instantly produce essays on a multitude of subjects has also caused headaches for teaching staff around the world. Some even see it as a further step toward an artificial general intelligence that could escape human control.
Where’s the ethical framework?
ChatGPT may impress and terrify in equal measure, but its arrival has raised collective awareness of the real ethical and political issues involved in developing and marketing systems that use artificial intelligence (AI).
Canada was not totally unprepared for the arrival of generative AI. Indeed, the federal government's Bill C-27, which proposes legislation specifically dedicated to AI and in particular to the management of "high-impact systems," was already at first reading in June 2022. Nevertheless, ChatGPT has significantly accelerated the political will to set up a more robust ethical and legislative framework.
The sense of urgency gained momentum last spring following an open letter signed by more than a thousand experts, including Montreal researcher Yoshua Bengio, calling for a moratorium of at least six months on the development of systems more powerful than GPT-4 while safety protocols are drawn up.
While the authors of the open letter from the Future of Life Institute resorted to a hyperbolic tone, the fact remains that the ethical, legal and political framework for AI must be a priority for governments and international organisations. The Quebec government, for one, seems to recognize the importance of taking action, having mandated the Quebec Innovation Council (Conseil de l’innovation du Québec) to launch a collective reflection on the issues raised by AI.
It is vital to take into account the proposals put forward by researchers working in AI ethics, a rapidly evolving theoretical and practical field. Generally speaking, ethical frameworks in specific sectors of professional activity can have two objectives: to serve as a basis for the creation of a binding legal framework, or to enable practitioners and users to develop ethical know-how to guide their practice.
In the specific context of the AI industry, there are good reasons to be sceptical about the effectiveness of developing this ethical know-how in the absence of binding rules. Nearly 100 (non-binding) codes of ethics have been adopted by major AI companies in the last five years, and almost all of them put forward the same principles (benevolence, fairness, accountability, transparency, respect for autonomy and privacy, etc.). Yet studies show that the concrete effect of these codes on ethical practices is generally weak.
Turning principles into practice
Abstract principles tend to remain vague without rules guiding their application. Promoting fine principles such as transparency and responsibility is not enough to ensure they translate into practice.
In fields where ethical norms effectively guide practice, such as medicine, the famous "first, do no harm" of the Hippocratic Oath calls on doctors to adopt a distinctive understanding of their role and serves as an internalised ideal. This is a good starting point. Even so, if medical ethics were confined to such vague principles, we would have cause for scepticism about their real effectiveness.
Fortunately, the practice of medicine is strongly regulated by law. An abstract principle such as respect for patient autonomy can be broken down into a series of more explicit rules and procedures, such as the need to obtain free, ongoing and informed consent for proposed care.
To guide practice appropriately, abstract principles must be supplemented by more precise and circumscribed rules, such as legal standards and ethical obligations, that link the general to the specific. In the case of medicine, for example, the presence of coercive standards that can be imposed by a professional order (such as a college of physicians) goes a long way toward making abstract principles concrete and encourages the emergence of ethical know-how.
In contrast, the vast majority of experts involved in the development and marketing of AI systems are not governed by a professional order, and there is no law specifically governing their activities (although some sector-specific rules and laws of general application do partially regulate them). There is therefore no coherent legal structure that could guide them in interpreting these principles.
Given this absence of concrete support underpinning the principles, it is hardly surprising that an abstract concept such as transparency comes to be interpreted in so many different ways. Not to mention the difficulty of resolving the conflicts among values that will arise from the use of AI.
The threat of ethical laundering
A second problem caused by the proliferation of ethical frameworks in the absence of positive law is "ethical laundering." This occurs when ethical frameworks drawn up by private companies are not really intended to limit their practices, but rather to burnish their image while minimising the public perception that binding law is required to regulate their activities. Without ascribing malicious intent to companies, several studies show that non-binding abstract principles can easily be ignored or interpreted in a self-serving manner.
Companies that implement a rigorous ethical approach should be praised, especially when they do so without being forced by law. However, when the risks of abuse and inaction are so high, we cannot rely on good faith alone. The social function of companies is to produce goods and services with the objective of making a profit, not to serve the common good and justice directly. A democratic state governed by the rule of law must therefore define the scope of acceptable commercial practices to ensure that they are compatible with the collective interest.
A necessary condition
Some critics reduce AI ethics to mere manipulation by industry players. However, as others have pointed out, ethics can guide the law when the law is overwhelmed by the speed of social change. It is thus for ethical reasons that we call for priority to be given to developing a much more restrictive legal framework for AI, rather than relying exclusively on ethical self-regulation based on broad consensual declarations.
Developing a more extensive and coherent legal framework is a necessary condition for ethical know-how to emerge across the industry while avoiding ethical laundering. The adoption of Bill C-27 is necessary, but Quebec must take the lead among the provinces by adopting its own legislative framework for AI within its areas of jurisdiction.
Read more in this series:
- The time for a law on artificial intelligence has come, by Céline Castets-Renard and Anne-Sophie Hulin
- The risk of waiting to regulate AI is greater than the risk of acting too quickly, by Jennifer Quaid
- Who has jurisdiction over artificial intelligence: Ottawa, or the provinces?, by Benoît Pelletier
- How to legislate on artificial intelligence in Canada, by Céline Castets-Renard and Anne-Sophie Hulin
This article first appeared on Policy Options and is republished here under a Creative Commons license.