To protect our privacy and free speech, Canada needs to overhaul its approach to regulating online harms

Canada’s proposed internet regulation measures focus almost exclusively on speech. (Shutterstock)

Yuan Stevens, L’Université d’Ottawa/University of Ottawa

October 20, 2021

In the wake of the leaks by Facebook whistleblower Frances Haugen, at least one thing remains clear: social media companies cannot be left to their own devices for addressing harmful content online.

But Canada is currently on a path to regulating “online harms” that global experts — like the Global Network Initiative, Ranking Digital Rights, internet scholar Daphne Keller, legal scholar Michael Geist and others — have decried as among the worst in the world.

Why was this law proposed in Canada, and why now? Immediately after the storming of the U.S. Capitol on Jan. 6, Justin Trudeau’s Liberal government began to make good on an election promise from 2019 to introduce a law modelled after the German Network Enforcement Act — commonly known as NetzDG.

Despite Canada’s longstanding role as a champion of human rights and internet freedom, the proposed law has numerous flaws that call the country’s reputation into question.

A lack of nuance

The Canadian law would impose 24-hour blocking requirements for illegal content, just like the German law, which has provided a blueprint for online censorship by authoritarian regimes.

But the law would go much further than Germany’s NetzDG, and not in a good way. NetzDG requires removal of “manifestly unlawful” content within 24 hours but gives platforms seven days to assess content that falls in legally gray areas. There is no nuance like this in Canada’s proposed blocking requirements, and that’s a problem.

Canada’s requirement is bound to lead to over-removal and the censorship of legitimate speech, especially given that companies can face massive fines of up to five per cent of gross global revenues or $25 million under the proposed law. There is also mounting evidence that automated removal decisions by platforms are biased against marginalized and racialized communities, causing further harms to the very people that this law aims to protect.

Intrusive obligations

The proposed law could well require websites and social media companies to proactively monitor and filter five types of content posted online ranging from “terrorist” content to intimate images shared without consent. It would also force websites to disclose personally identifying information to law enforcement and intelligence agencies.

Entire websites could be blocked in Canada, with enormous implications for the rights to free expression and access to information at home and beyond.

But requiring websites and social media platforms to proactively monitor content and feed data on their users to the police is tantamount to pre-publication censorship, according to David Kaye, former special rapporteur on the promotion and protection of the right to freedom of opinion and expression.

It also effectively transforms online service providers into an investigative tool and “suspicion database” for law enforcement.

When combined, these intrusive obligations pose an unacceptable risk to the privacy of Canadians and have no place in the laws of a free and democratic society.

What happens in Canada won’t stay in Canada

The Canadian Internet Policy and Public Interest Clinic at the University of Ottawa, and many other non-governmental organizations ranging from Citizen Lab to the Internet Society of Canada and the Canadian Civil Liberties Association, have all filed comments describing the problems with the law.

What happens in Canada won’t stay in Canada. Just as the landmark ruling in Google Inc. v. Equustek Solutions Inc. enabled worldwide online takedowns and spawned international imitators, other countries will seize on Canada’s example to pass similar laws that advance their own governmental interests.

Canada needs a new approach to regulating online harms that respects human rights. We must change course before authoritarian regimes replicate Canada’s approach for intrusive surveillance, censorship and other human rights abuses.


Harmful content cannot be addressed in isolation

A sign from a protest against content filtering laws in Germany reads ‘Only totalitarian states need upload-filtering.’ (Markus Spiske/Unsplash), CC BY

A fundamental problem with the Canadian online harms legislation is that it deals with the most controversial aspect of internet governance — the issue of online speech regulation — in isolation.

Unlike its global peers in the United States and the European Union, Canada has had no conversation about the bigger picture of big tech regulation.

Canada hasn’t reckoned with the business models of behemoth social media platforms premised on surveillance capitalism and the problems of anti-competitive actions by technology companies.

Nor has the government devoted a fraction of the political energy it is spending on online harms to reforming Canada’s outdated online privacy laws.

Human digital rights

After Trudeau’s Liberal government called for a snap election, his party promised to introduce legislation to regulate online harms within 100 days.

Some promises are best not kept. This is one of them.


The digital rights community needs to hold the government to account and urge it to slow down, think things through, and develop a model of internet regulation that should be emulated, not avoided, around the world.

Yuan Stevens, Legal Researcher at the Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic (CIPPIC), L’Université d’Ottawa/University of Ottawa

This article is republished from The Conversation under a Creative Commons license. Read the original article.
