AI accountability can’t be left to the CRTC

The CRTC enables secrecy and lacks the technical competence to deliver oversight of developments in AI. Algorithmic transparency is crucial.

by Fenwick McKelvey, Brenda McPhail, Reza Rajabiun. Originally published on Policy Options
February 2, 2022

For those wondering about the extent of Canada’s commitment to artificial intelligence (AI) accountability and transparency, we now have an answer: not much. Buried in a recent decision by Canada’s media regulator (the Canadian Radio-television and Telecommunications Commission, or CRTC) was a clear admission that AI accountability was not a priority. The decision all but closes the door on hopes that the CRTC would push for algorithmic accountability, raising further doubts about the commission’s place in looming reforms to Canada’s institutions of Internet governance and about the future of AI governance.

Toward zero-knowledge networks

On December 9, 2021, the CRTC approved Bell Canada’s request to use an artificial intelligence system to block fraudulent calls. We’d offer a more detailed description, but there are few details to be had. Our research team intervened in the proceeding to learn how AI works in the field, but our efforts devolved into a fight for basic explanations of the system. We still struggle to explain its human oversight, its degree of automation, or how it works in any detail.

What we know is that Bell Canada uses an AI to monitor call patterns in Canada, looking for anomalies that it reviews, verifies and then blocks. Having a communications provider block anything is a serious matter, because it can cut off legitimate communication between people.
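
Because the actual system is undisclosed, we can only illustrate the general review-verify-block pattern described in the proceeding. The following is a minimal, hypothetical sketch in Python: the feature (per-caller call volume), the threshold and the two-step gate are our assumptions, not details from the filing.

```python
from collections import Counter

# Hypothetical sketch only: Bell's actual system is undisclosed, so the
# feature (per-caller volume), the threshold and the two-step gate below
# are illustrative assumptions, not details from the filing.

def flag_anomalous_callers(call_log, rate_threshold=1000):
    """Flag numbers whose outbound call volume far exceeds a normal
    rate -- a crude stand-in for whatever pattern analysis a
    network-level blocking system actually performs."""
    volumes = Counter(caller for caller, _callee in call_log)
    return {caller for caller, count in volumes.items()
            if count > rate_threshold}

def screen_call(caller, flagged, verified_fraud):
    """Block a call only if the caller was both flagged by the detector
    and verified through review, mirroring the review-verify-block
    sequence described in the proceeding."""
    if caller in flagged and caller in verified_fraud:
        return "blocked"
    return "connected"

# Toy usage: one number floods the network and is blocked once it is
# (hypothetically) verified as fraudulent; a normal caller connects.
log = [("555-0100", "555-0199")] * 1500 + [("555-0111", "555-0122")]
flagged = flag_anomalous_callers(log)
print(screen_call("555-0100", flagged, verified_fraud=flagged))  # blocked
print(screen_call("555-0111", flagged, verified_fraud=flagged))  # connected
```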

For the newly approved system, Bell is making decisions network-wide. The system applies to all calls carried across Bell Canada’s network, affecting the millions of Canadians who make calls through Canada’s largest telecommunications provider. The system has blocked more than 1.120 billion calls since it started on July 15, 2020. That is a big number, but one that is difficult to evaluate: it is not clear whether the total volume of spam or scam calls Canadians receive has actually gone up or down since the introduction of the blocking system.

No assessments, no explanations, no tangible oversight

Bell Canada, to its credit, brought its proposed blocking system before the regulator. Unfortunately, the CRTC decided that there was not “any need for regulatory framework regarding the use of AI at this time.” That indifference weakens both public evidence and public policy.

The standard of public evidence is poorer now, too. The commission let the case be settled under non-disclosure agreements between trusted parties and declined to use it as a chance to elaborate a comprehensive approach to automated content moderation – which is another way of saying spam blocking.

In particular, the federal regulatory agency did not grapple with the complexity and importance of explaining decisions made by AI systems – a critical requirement for holding automated decision-making accountable when human welfare is at stake.

The lack of technical competency in the decision is alarming. AI is learning and changing – that’s the whole point of the technology. Because the system can change, ongoing monitoring would presumably be called for. Yet the ruling provides little in the way of AI oversight: Bell Canada has only to submit annual reports and to notify the CRTC of any major changes to the algorithm within 60 days. What happens when the CRTC receives that information is anyone’s guess. It is not clear, given the ruling, that the CRTC knows what to do with AI in the first place.
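
To make the oversight gap concrete, consider what even minimal monitoring could look like. The sketch below (with invented figures, since the real reports are not public) computes period-over-period shifts in the share of calls being blocked – the kind of behavioural drift a technically capable regulator would track between annual reports.

```python
# Hypothetical sketch: the reporting periods and figures below are
# invented, since Bell's actual reports to the CRTC are not public.

def blocking_rate_drift(reports):
    """Given (period, calls_seen, calls_blocked) tuples from periodic
    reports, return period-over-period changes in the blocking rate --
    the kind of behavioural shift a regulator might watch for in a
    system that keeps learning and changing."""
    rates = [(period, blocked / seen) for period, seen, blocked in reports]
    return [(later, rate - prev_rate)
            for (_, prev_rate), (later, rate) in zip(rates, rates[1:])]

reports = [
    ("2020-H2", 9_000_000_000, 300_000_000),  # invented figures
    ("2021-H1", 9_500_000_000, 450_000_000),
]
print(blocking_rate_drift(reports))  # [('2021-H1', 0.014...)] -- a jump worth probing
```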

These shortcomings do not seem to matter much to the national regulatory agency. Before the decision was even reached, CRTC chair and CEO Ian Scott touted Bell’s new system as a success in an interview with the CBC. We can appreciate the commission’s mandate to protect Canadians from fraudulent calls, but does it have to come at the expense of good governance?

Undermining responsible AI for the world to see

The CRTC’s decision seems out of step with the Government of Canada’s stated positions at the global level, where it is pushing for AI accountability as a member of the Freedom Online Coalition (FOC). The group’s 2020 statement recommended that the “private sector should endeavor to promote and increase transparency, traceability, and accountability in the decision, development, procurement, and use of AI systems.”

After two years of hearings in which we raised these same concerns, we have found little interest on the part of this federal regulatory agency in implementing Canada’s international commitments to AI governance.

Canada staked a claim to world leadership when the Treasury Board implemented algorithmic impact assessments (AIAs) in the federal public service. International observers may therefore be surprised to learn that the CRTC declined our request to develop a comparable algorithmic impact assessment, even when dealing with such a large-scale application by Canada’s largest telecom infrastructure provider. With this decision, the CRTC missed another opportunity to translate international commitments into meaningful AI policy.

A common reason the CRTC gave us for withholding information, both during the hearing and elsewhere, was concern that “bad actors” could use it. That assumption is part of a worrisome trend toward secrecy at the commission – doubly problematic as Canada contemplates national strategies for Canadian content promotion, Internet harm reduction and cybersecurity. “Obscurity is not security” is a truism in the field. Yet in all the cases described here, deference to confidentiality has overridden good-faith efforts at transparency and public oversight.

In the same 2020 Freedom Online Coalition statement, Canada and other signatories warned about “the use of AI systems for repressive and authoritarian purposes.” We have another worry: that AI priorities of secrecy and excessive deference to commercial interests could lead to anti-democratic rulings that undermine the public interest. How can we expect good AI governance if a new AI system cannot be described to citizens, audited effectively, or held accountable by public regulatory agencies or the courts if and when something goes wrong?

A regulator that actually understands this file

Last fall, NDP MP Charlie Angus joined calls for a new public regulator for digital matters, one that “actually understand(s) this file.” Angus was responding to the CRTC’s incapacity to regulate Silicon Valley, but with the Bell AI file, the CRTC has demonstrated it doesn’t understand the risks even in the high-tech systems it does regulate.

Expanding the CRTC’s responsibilities, as the federal government has suggested, would be fraught with risk, as the agency’s institutional capacity to learn and adapt to technological change appears limited.

The case is a beginning as much as an end. The CRTC’s failure to put an appropriate, publicly accountable framework in place for AI shows that Canada’s voluntary approach to AI governance is inadequate and its consultations hollow. It is time for AI accountability and explainability to be given legislative priority. Unlike the CRTC, we think the time for good regulatory frameworks for AI is now.

This article first appeared on Policy Options and is republished here under a Creative Commons license.
