B.C. election tests new policies to protect elections from digital threats


Election-related disinformation, harassment and privacy violations have occurred in recent votes in other countries. How do we prevent them in Canada?

by Jenina Ibañez, Spencer McKay, Chris Tenove. Originally published on Policy Options
October 15, 2024

Election integrity has re-emerged as a serious concern for established democracies, including Canada.

There are worries about foreign influence, rampant misinformation and campaigns to sow distrust in electoral institutions. The rapid rise of generative artificial intelligence has heightened these concerns and 2024 has been billed as the “year of deepfake elections.”

In the United States, AI-generated audio of President Biden’s voice was used to spread false information about voting in the January primaries in New Hampshire. During this year’s United Kingdom election, an investigation found hundreds of online images of female politicians that had been manipulated or AI-generated to make them sexually explicit.

The current provincial election in British Columbia provides an opportunity to assess whether several recent provincial policies can address these problems and potentially serve as a model for Canada and other provinces to follow.

These include countering election-related misinformation, limiting misuse of citizens’ data by political parties and responding to online harassment that involves posting or sharing non-consensual intimate images.

A survey of British Columbians that we conducted with the Media Ecosystem Observatory shortly before the official election period found public support for the policies adopted in B.C.

But even if B.C.’s policies prove effective, more needs to be done. In particular, the federal government has greater capacity to require social media platforms to proactively combat intimate image abuse and comprehensively regulate AI.

There is a lot of hype around artificial intelligence, and its near-term transformational power is being exaggerated by both enthusiasts and critics. This is particularly true in elections, where some have suggested that AI-generated deepfakes, along with sophisticated and personalized message targeting, could “take over elections.”

AI lowers the cost of producing disinformation

Our August report from UBC’s Centre for the Study of Democratic Institutions argues that beneficial uses of generative AI in elections exist but have not been proven effective. On the other hand, while harmful AI uses are unlikely to be catastrophic for election integrity, they warrant scrutiny and preparation.

We identify three categories of harmful uses and show how they have been pursued in elections around the world:

  • Deception, such as the deepfake audio of Biden;
  • Harassment, such as the creation of non-consensual intimate images of politicians;
  • Pollution of the information environment, such as the thousands of TikTok videos created using AI in the run-up to France’s election, or inaccuracies from chatbots regarding election information in the European Parliament, U.K. and U.S. elections, particularly for voters with disabilities.

In all these cases, generative AI is lowering the cost and speeding up the pace at which deceptive, harassing or low-quality material is produced.

These are not new problems and it’s not clear that AI-generated material is significantly more persuasive. However, concerns about this new technology may heighten distrust of democratic governments and elections.

B.C.’s new policies do not focus on the use of AI, but may still alleviate some of these concerns by addressing the spread of election-related disinformation; the use of digital media to defame, threaten or harass election participants; and the misuse of citizens’ data that can heighten these and other risks.

In our pre-election survey, 60 per cent of respondents agreed that “misinformation is a serious problem for the 2024 British Columbia election,” compared to 24.6 per cent who did not believe it would be a serious problem and a further 15.3 per cent who were not sure.

Moreover, 66.8 per cent were significantly concerned that AI would produce more convincing misinformation in elections, while 25.1 per cent expressed little or no concern and 8.1 per cent didn’t know.

Political parties in B.C. agreed to follow a code of practice for political campaigns, which they developed along with Elections BC and the Office of the Information and Privacy Commissioner. In it, they pledged to avoid misleading the public through impersonations of politicians or misrepresentations of AI-generated content as having been created by humans.


In our survey, 85.1 per cent of respondents thought it was unacceptable to create deepfakes that misrepresent their political opponents, while just nine per cent found it acceptable. The rest were unsure.

In addition, the B.C. legislature passed the Election Amendment Act in 2023 to empower Elections BC to curb the spread of certain forms of misinformation, including false information about the voting process or clear misrepresentations of candidates, such as whether they were charged with a crime.

Elections BC can require media organizations and social media platforms to stop transmitting such content and can impose monetary penalties or pursue regulatory prosecution if adequate actions are not taken.

While the code of practice aims to make campaigns accountable to shared commitments around fair elections, the amendments seek to reduce election disinformation by directly penalizing its dissemination.

Voters favour enforcement over voluntary codes

It remains to be seen if and how Elections BC will enforce these new rules, but voters appear to have more confidence in this approach than in a voluntary code of practice addressing the misuse of AI.

In our survey, 34.7 per cent of respondents believed a voluntary code would be very or somewhat effective, while 63 per cent thought laws enforced by a government body such as Elections BC would be very or somewhat effective.

B.C. also has some rules to help prevent politically motivated harassment of candidates and voters, which particularly affects women and other under-represented groups.

There appears to be widespread public disapproval of this tactic. In our survey, 85.4 per cent indicated it was unacceptable to create deepfakes of a political opponent with intimate or sexualized content, while only 7.5 per cent found it acceptable, with the rest unsure.

Although not specifically targeted to election contexts, the B.C. Intimate Images Protection Act makes it easier for individuals to seek financial damages or demand takedowns if intimate images are shared without their consent. B.C. also created a special government unit to provide psychological, legal and other forms of support to victims.


Privacy protections that help prevent unauthorized data collection and safeguard voters’ personal information can reduce risks of manipulation by helping people understand how and why they are being targeted by campaigners or campaign ads.

Artificial intelligence heightens the risks of these manipulative, personalized messages, especially if AI systems are trained on data that includes sensitive information.

In Canada, voter privacy protections often fall short because federal privacy law does not extend to political parties. The federal Elections Act regulates only how the list of registered voters is distributed to, and used by, political parties. Other forms of personal data collected by these parties are mostly unregulated.

Unlike the federal level, B.C. law requires political parties to obtain consent to collect personal information. Despite this, a 2019 investigation by the privacy commissioner found that B.C. “political parties are generally collecting too much information from potential voters, without getting proper consent” and sometimes collect information indirectly, such as through data brokers.

The code of practice in B.C. commits the parties to collect personal information directly from individuals, to get their consent, and to report any privacy breaches that could harm individuals. The code also requires parties to provide information about the models and data they use to understand and predict voter behaviour.


Questions remain with B.C.’s blueprint

While B.C.’s efforts to protect elections from digital threats could serve as a blueprint for the rest of Canada, their effectiveness remains to be proven.

Will the voluntary code of practice actually shape the behaviour of provincial political parties and candidates, including their use of private data and generative AI?

Can Elections BC identify and address clear electoral disinformation?

Will non-consensual intimate content be used to harass candidates? And does the Intimate Images Protection Act provide remedies that work in the election context?

Even if these policies do prove effective, the federal government must do more than simply catch up with B.C.

Only the federal government can directly regulate social media platforms to prevent and mitigate the risks of non-consensual intimate images and other clear harms, as the federal Online Harms Act seeks to do.


The federal government also needs to take the lead on strengthening personal data protection, including by political parties, and regulating AI to reduce risks of discrimination and other harms. Bills have been put forward for these issues, though improvements are needed.

Much remains to be done to fortify election integrity and the broader information environment in Canada. The current B.C. election gives us a chance to assess what works and what still needs to be done.

Methodology note: Survey findings for this report draw upon responses from 1,005 B.C.-based adults conducted from Aug. 28 to Sept. 5 using a commercial survey panel provider. The margin of error for a comparable probability-based random sample of the same size is +/-3.09 per cent, 19 times out of 20. All results are weighted by age and gender.

This article first appeared on Policy Options and is republished here under a Creative Commons license.
