World

Zuckerberg, Musk and weaponising freedom of expression
Mark Zuckerberg seems to be using a compelling freedom of expression argument not to deepen and protect freedom of expression, but to weaponise it instead.

The announcement last week by Mark Zuckerberg about changes to the Meta platforms, first being rolled out in the US, has been widely criticised. 

The changes come while credible news media are being subjected to ongoing attacks, most commonly from Elon Musk but also from Donald Trump.

As the EU moves to bring greater accountability and transparency to social media platforms, Musk and now seemingly Zuckerberg are choosing to respond by attacking news media which are key accountability mechanisms.

In doing so, they expose their motivations – to evade accountability and silence those who dissent. Their supporters assert that the changes are a response to efforts to counter online harms that went too far and resulted in voices being silenced.

In his video announcement, Zuckerberg seems to make this argument by suggesting they need to go back to their roots and allow for more freedom of expression.

The logic of the argument we are presented with is this: while efforts to limit online harms were good in principle, the policies were allowed to extend too far, sweeping up ideas that merely caused offence or provoked disagreement, with the result that dissenting ideas were censored.

There is certainly compelling force to this argument. It is used frequently by those who argue that states shouldn’t criminalise freedom of expression. It is an argument that can be valid in many instances. The problem is that in the current instance, it doesn’t seem to be accurate.

Zuckerberg uses the argument to justify shutting down fact-checking, but the fact-checkers used by Meta were all accredited and did not censor ideas that were offensive or controversial. They presented evidence and tested arguments – the very foundation of a scientific approach to facilitating freedom of expression, robust argument and disagreement.

Indeed, Zuckerberg, in his video announcement, presented no evidence at all that offensive or controversial ideas were being censored. Instead, he simply referred to a “cultural tipping point” and “mainstream discourse”.

Legitimate limitations


To be clear, when the mainstream discourse of Hitler, Verwoerd and others dehumanised people based on their race, religion or ethnicity, these weren’t just offensive or controversial ideas; they were preparing for and justifying mass murder. In other words, there are legitimate limitations on freedom of expression. Zuckerberg seems to be using a compelling freedom of expression argument not to deepen and protect freedom of expression, but to weaponise it instead.

Much of the focus on this has understandably been on the decision to scrap support for fact-checking and for Meta to replace it with some kind of community notes type of programme. Both present real challenges to information integrity – our ability to access information that is credible and that we can trust. (Less attention has been paid to the changes in Meta’s community guidelines. The changes here will be the focus of a subsequent piece.)

There have already been some important pieces on Meta’s decision to scrap fact-checking – this piece from The Conversation, this piece from Wired and this piece on NPR. In response, an open letter on Poynter, which has hosted the International Fact-Checking Network (IFCN), highlights key issues. In addition, there are calls for the EU to use its legislation to respond to the changes. The implications go wider than the impact on the fact-checking community: the changes signify an important shift in how big tech is seeking to respond to increased demands for transparency and accountability, especially in Europe.

This piece is a must-read for a broader understanding of possible motivations as to why the changes have been made at this juncture. Beyond the obvious obsequious efforts to appease Trump, the moves have clear geopolitical motivations. 

The long and short of it all, however, is simply this – we will be exposed to significantly higher levels of mis- and disinformation, as well as content that seeks to polarise, dehumanise, demean and divide.

Zuckerberg’s comments that there was “a cultural tipping point towards once again prioritising speech” and that Meta would “get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse” seem to be shorthand for saying: expect an explosion of attacks against women and marginalised groups.

To be clear, fact-checking has its limitations – as indeed any system does when dealing with the sheer scale of information being shared daily. There is no way that fact-checking could, for example, check every claim.

Evidence-based information


What it does offer, however, when done according to the standards set out by the IFCN (of which Africa Check, for example, is a member) is a clear reference point for those who want to know about key issues and claims. While not all disinformation can be challenged, fact-checked information offers evidence-based, non-party-aligned information.

Fact-checking supports credible journalism – in helping the public make informed decisions, it contributes to building a common understanding of key issues and promotes information integrity.

The community notes system touted by Zuckerberg seems a hopelessly poor alternative to fact-checking. On the surface, it sounds like a good idea – crowdsource users to rate and add context to posts, leaving other users to determine whether they wish to believe the post’s claims.

While community notes might help some users avoid being misled by claims, for the most part they do not improve accuracy, nor do they appear to discourage those who post misleading content.

This brief analysis by Ros Atkins of Elon Musk’s posts on X about Keir Starmer highlights not only how much content Musk puts out that is simply false or misleading, but also how utterly ineffective the Community Notes system is. One of the posts by Musk that Atkins refers to was found to have no supporting evidence.
Despite community notes being proposed for both the original post and Musk’s addition to it, none of the notes was ultimately published, and because of the competing nature of the proposed notes, the reader is still left uncertain as to the veracity of the claims.
Instead of the claims being fact-checked and the post flagged as false and misleading, it remains on X, shared with hundreds of millions of users. This example highlights the deep limitations of a community notes system: not only is it ineffective in helping readers assess the claim, it also absolves the person posting, and the platform, of any accountability.

Leaving Musk and Zuckerberg and global politics aside, if we go back to the South African elections last year, we can see the possible consequences we may face in future elections. In 2024, when people made claims about our elections being rigged, Meta could publish fact-checked information and in some cases flag posts. Users who posted that content could have their content de-escalated, meaning it would be seen by fewer people. In extreme cases, posts could be removed – for example, when they incited violence.

It is difficult to see how a community notes system on Meta’s platforms – Facebook, Instagram and Threads – will offer any meaningful action against those who seek to undermine elections.

Our findings from Real411 highlighted that most complaints involved X, which had only its Community Notes programme running. While the social media platforms have some way to go in terms of greater transparency, access to data and accountability, Meta was responsive to issues highlighted by partners and could limit the spread of harmful content.

In practical terms, the latest announcement seems set to ensure that we will be exposed to extensive false and misleading content, and not only will there be less fact-checking, there will be close to no accountability for the platforms and the users who post. 

As we look for positive dimensions to the announcement, the most common response has been to note how it highlights the critical role of credible journalism and public interest content.

This has been strongly summed up by Adam Oxford:

“You know what I’m going to tell the organisations I coach? This is an opportunity. Don’t trust the platforms, or waste time and energy on their mendacious ways. Trust your readers. Be an essential part of their lives, not an inconsequential annoyance for Zuckerberg, Musk and Pichai. If your readers need fact-checking done, and they really will now, because they aren’t idiots and can smell bullshit better than you can, give it to them. And if you trust them, they will help you figure out how to pay for it (hint, this will not be in the ways you are accustomed to).”

For Media Monitoring Africa, there is an additional positive dimension. The move by Zuckerberg highlights how and why we need mechanisms that are independent of the social media platforms, not only to hold them accountable but also to collect, analyse and empower the public to act against online harms.

Real411 is a public complaints platform that serves to empower the public to report content that they suspect may be an online harm. Using principles that are in line with South African laws and our Constitution, each complaint is assessed by three people and then by a lawyer.

The goal is to help people become more critically engaged, to expose online harms like mis- and disinformation, and to hold the platforms accountable for the content they share.

While X and seemingly Meta seek to avoid accountability and move away from acting against online harms, credible media, fact-checkers and platforms like Real411 seek to hold the line. As much as the announcements are likely to lead to more online harms, they also highlight why we need to support information integrity. DM

William Bird is director of Media Monitoring Africa and an Ashoka and Linc Fellow.
