Efforts to regulate AI are moving in the right direction, but there are still many issues to be clarified and a public debate to be had on its impact.
by Céline Castets-Renard, Anne-Sophie Hulin. Originally published on Policy Options
October 6, 2023
(French version available here)
As is often the case when a technological breakthrough upends human affairs, the astounding possibilities of artificial intelligence (AI) have caught governments off guard.
Urged to provide a framework for an advance with such dizzying potential, they have embarked on initiatives that, to date, appear scattered and of varying scope.
Several states, including the U.K. and Japan, have already published statements of intent, but the European Union was the first to put forward AI legislation, with a proposed regulation in April 2021.
Brussels stands out for its comprehensive approach, which divides the AI systems market into four levels of risk. The proposal includes a ban on certain uses (an AI system that exploits vulnerabilities due to age or disability, for example) and the introduction of very strict requirements for the deployment of systems identified as high-risk.
For its part, the U.S. government has not managed to push through a specific piece of legislation presented to the House of Representatives, the Algorithmic Accountability Act. However, in a joint statement published last April, four federal agencies set out their concerted efforts to combat discrimination and bias in automated systems and AI.
A first step in Canada
In Canada, alongside discussions on the ethical aspects, the very first move to oversee AI was the Directive on Automated Decision-Making, adopted in 2019 by the federal government.
The aim of this directive is to reduce the risks associated with the use of AI and to ensure that the federal government uses this technology (which it does for the triage of temporary resident visa applications submitted from abroad, among other things) in compliance with the fundamental principles of administrative law, such as transparency, accountability, legality and procedural fairness.
However, the directive has its limitations and is not sufficient to provide a definitive framework for AI in Canada. It is merely a policy instrument that omits certain areas, such as criminal matters, and does not apply to certain federal bodies, such as the Canada Revenue Agency.
AIDA, the pièce de résistance
In June 2022, the minister of innovation, science and industry introduced Bill C-27. The third part of this bill, the Artificial Intelligence and Data Act (AIDA), aims to regulate the design, development, use and marketing of AI systems in order to prohibit uses that could cause serious harm to individuals.
The first stumbling block is that AIDA does not apply to federal institutions. The fields of national defence, security intelligence and telecommunications security are also excluded, as is any other person in charge of a federal or provincial department or agency and designated by regulation.
Ultimately, AIDA is limited to regulating private companies, under federal constitutional powers that confine its scope to “international and interprovincial trade and commerce in artificial intelligence systems.”
Since technology obviously doesn’t stop at borders, shouldn’t the provincial and federal governments start discussions? An AI bill is taking shape in Quebec, but there is no sign of any willingness to collaborate with the federal government for the time being.
A risk-based approach
In line with the proposed European regulation, AIDA covers any AI technique designed to generate content or to produce predictions, recommendations or decisions. It approaches the issue from the angle of the risks generated by AI, which is also the direction the G7 countries seem to be taking.
The Canadian bill aims to regulate AI systems with “high impact,” and specifically their “risks of harm and biased output,” while the European text targets high risks to “health, safety and fundamental rights.”
In the Canadian vision, the person responsible for the AI system would be required to identify “high impact,” but without any particular consideration of the sector of activity – unlike the European draft, which establishes an exhaustive list of high-risk AI systems by area (justice, immigration, education and security, among others).
The harm to an individual may be physical, psychological or economic. The results are considered biased if the AI system disadvantages, directly or indirectly and without justification, an individual on the basis of one or more of the grounds set out in the Canadian Human Rights Act or their combined effect.
More vague, but more flexible, too
The Canadian approach appears more vague, but it would offer the advantage of greater flexibility than the European outline. It would still be necessary, however, for the qualification of risks to be sufficiently described and for the level of “high impact” to be clearly defined. On this point, and in many places in the text, AIDA refers to future regulations from the Department of Innovation, Science and Economic Development. This creates a great deal of uncertainty and confers an important decision-making role on the executive.
A companion document dated March 2023 provides useful clarification for companies to assess the high-impact risk of their AI system (for example, the risks of physical or psychological harm or the severity of potential harm).
The document also identifies specific types of AI systems that require particular attention, including biometric systems used to identify people and predict their behaviour, and systems capable of influencing human behaviour on a large scale.
However, these risk qualification criteria need to be added to the law.
The thorny issue of liability
AIDA identifies a number of parties who may be held liable (designer, developer, operations manager), which has the advantage of encompassing the multitude of activities involved in implementing this technology and the diversity of ways of carrying it out.
To arrive at a fair division of responsibility, however, it will be necessary to determine each person’s role on a case-by-case basis. This fundamental task is made more difficult today with the arrival of generative-AI models such as ChatGPT, which can be manipulated and adapted for uses not originally intended. This forces us to consider the role of those who deploy the system, and not just that of the initial designer.
The stakes are high, because those responsible assume obligations upstream of the deployment of the systems: they would be accountable for the transparency of the measures taken to mitigate the risks of harm and bias. Failure to meet these obligations is a criminal offence punishable by fines of up to $25 million or five per cent of gross global revenues, whichever is greater.
Widening the debate
Finally, since AI systems will affect just about everyone, isn’t democratic debate essential? There has been a lot of discussion about AIDA, with supporters and detractors pitted against each other, but for the time being it remains confined to the experts.
The forthcoming debates in the House of Commons (Standing Committee on Industry and Technology) will provide an opportunity to improve the text, and if AIDA is adopted, it will take at least two years for measures and rules to be established. During this period, the government intends to consult and publish draft regulations.
The democratic debate will then have to be conducted in earnest to ensure that the legislation adopted is not only clear and agile, but also legitimate.
There is still a long way to go, in both substance and method, but we should be pleased that the need for a legislative framework has been heard. It is in the interest of Canadian leadership to promote AI that is ethical, responsible and, let’s hope, fair.
Read more in this series:
- The ethics of artificial intelligence await the law, by Jocelyn Maclure and Alexis Morin-Martel
- The time for a law on artificial intelligence has come, by Céline Castets-Renard and Anne-Sophie Hulin
- The risk of waiting to regulate AI is greater than the risk of acting too quickly, by Jennifer Quaid
- Who has jurisdiction over artificial intelligence: Ottawa, or the provinces?, by Benoît Pelletier