Overcoming online echo chambers requires institutional and individual commitment


Politics has always made it hard to accept facts and opinions that challenge our preexisting beliefs. Artificial intelligence algorithms make it even worse.

by Jocelyn Maclure. Originally published on Policy Options
December 19, 2024

In the wake of Donald Trump’s election and his emerging political alliance with Elon Musk, thousands of people decided to desert their X (formerly Twitter) accounts to join the upstart Bluesky.

This uncommonly sudden social media migration has sparked concerns that the already alarming degree of polarization that strains democratic societies will be further exacerbated.

Like many other commentators, La Presse’s deputy editor François Cardinal and Globe and Mail columnist Phoebe Maltz Bovy have expressed fear that Bluesky will become a giant “filter bubble” or echo chamber for liberals and progressives.

The public exchange of arguments

I think that this concern rests on a misunderstanding of the kind of discursive space that a true echo chamber represents. By that, I mean a loose structure or virtual space in which people may gather information, encounter various discourses, express their own beliefs and interact with others.

A standard interpretation of our current predicament is that the transformation of the public sphere brought on by digital technologies tends to trap people in online communities that share similar beliefs, values, anxieties and aversions. These communities are often described as bubbles, silos or islands because they are supposedly shielded from opposing ideas by some invisible ideological barrier.

Even if one is not a defeatist about human reason, it is simply true that people have intellectual limits and are therefore fallible. Accordingly, it is widely accepted that intellectual progress requires a diversity of viewpoints and the public exchange of arguments.

Deliberation offers the opportunity to broaden horizons, acquire new knowledge and hone one’s judgments. As John Stuart Mill famously put it, if we can trust the judgement of a particular individual, it is because “he has kept his mind open to criticism of his opinions and conduct.”

Politics has always made this pursuit difficult. Artificial intelligence algorithms, which largely determine the content we are exposed to online, make it even harder. AI algorithms predict what will capture our attention by finding patterns in the data we continuously produce or choose to consume on digital platforms.

As such, they easily activate our natural tendency to accept facts and opinions that confirm our preexisting beliefs or serve our personal interests. AI didn’t create polarization and the conflictual nature of political discourse. But in the current context, it does amplify views that reverberate within various publics.

Echo chambers are not bubbles

While polarization is real, it is unlikely that a large proportion of people are sealed off from the rest of the world in like-minded online communities.

First, relying only on studies that analyze the use of a single social network should be avoided. Someone who uses X may also use Facebook and YouTube, in addition to occasionally consuming traditional news sources. It is unlikely, then, that they are trapped in some sort of informational bubble.

Second, the goal of most social media platforms is to win our attention and get us to engage. Content that confirms a user’s beliefs and validates their identity certainly helps to retain them, but abhorred content can also elicit strong reactions.

As such, members of an echo chamber may share content they disapprove of to criticize or lampoon it, which ends up introducing a divergent viewpoint into their circle. In this sense, users couldn’t avoid learning what others are talking about if they tried.

To take my own example, I am constantly exposed to the provocations of polemicists and influencers whom I quite deliberately do not follow online, either because the algorithm predicts that I might pay attention to their views or because the people I follow discuss them.

Sowing distrust

If most of us do not live in homogeneous and fully segmented belief spaces, why does healthy democratic deliberation between open-minded citizens seem so alien?

Part of the answer may lie in an idea put forward by the philosopher C. Thi Nguyen: echo chambers do exist, but they are not necessarily homogeneous and fully sealed discursive spaces. Opposing viewpoints do circulate, but the dominant voices that set the overall tone in the echo chamber render them innocuous.

According to Nguyen, the ideological leaders of echo chambers go to great lengths to ensure that opposing perspectives arrive pre-interpreted and already discredited, while dominant viewpoints are constantly repeated and validated. This happens in public debates on issues such as secularism or multiculturalism in Quebec. Demonstrations that the policy of multiculturalism does not entail moral relativism or the segmentation of Canadian communities into ethnically homogeneous enclaves are largely ineffective.

Genuine deliberation on topics like immigration or secularism requires taking the strongest arguments of opposing positions seriously. Instead, the crafters of echo chambers caricature alternative views (the "straw man fallacy"), cherry-pick and blow out of proportion isolated cases that make the other side look beyond the pale, or strategically spread misinformation.

Rather than vainly building fortifications around like-minded online communities, ideologically driven echo chambers strive to render divergent viewpoints powerless. Their goal is to sow distrust against outside sources and activate the biases inherent in the preexisting viewpoints of their members.

The worry about the mass migration to Bluesky is overblown. Being active on X did not prevent the formation of echo chambers. Mere exposure to a diversity of viewpoints does not guarantee genuine deliberation.

Disinformation, ideology and reflectiveness

Fighting against online disinformation can help reduce the influence of echo chambers. This can be achieved in several ways:

  • Forcing platforms to remove speech that is blatantly false and misleading, such as false news and deepfakes
  • Labelling AI-generated content
  • Making it harder for human users to deploy bots to spread toxic content

Be that as it may, institutional fixes alone are doomed to fall short in reducing the influence of echo chambers. The formidable efficacy of echo chambers lies in their ideological nature. Ultimately, what is needed is an ever-growing number of reflective individuals committed to inclusive public deliberation and mutual understanding.

This article first appeared on Policy Options and is republished here under a Creative Commons license.
