Analysis: Meta’s fact-checking pullback will have global consequences
The social media company’s widespread overhaul of its content moderation policies will have a real-world negative impact, embolden authoritarian regimes, and put its own users at risk
Banner: Mark Zuckerberg's personal Facebook account is displayed on a mobile phone, with the Meta logo seen on a tablet screen, January 7, 2025. (Source: Jonathan Raa / Sipa USA via Reuters Connect)
On January 7, 2025, Meta announced a wholesale change to how the company polices harmful, illegal, and divisive content on Facebook and Instagram. The updates include ending its fact-checking program, starting in the US, which cuts off funding for independent groups that debunk falsehoods posted on its social media networks.
The social media giant also said it would alter its policies to allow greater amounts of political content to be displayed in people’s feeds, as well as reduce restrictions on how divisive, but legal, posts on topics like gender and migration appear on the platforms.
Mark Zuckerberg, Meta’s chief executive, also said he would work with the incoming Trump administration to fight against other countries’ legislative efforts that, he alleged, force the tech company to censor people’s voices online.
Meta’s decision to roll back its content moderation policies – particularly those related to politically-divisive topics like immigration and gender – will almost certainly have real-world consequences.
Social media is not the root cause of much of the polarization that has arisen globally ever since the Arab Spring in 2011 demonstrated the power these online platforms could have on global events. But over the last fifteen years, the likes of Facebook have played a critical role in amplifying often harmful, but legal, content that has targeted minority groups from Ethiopia to Myanmar.
By retrenching on its content policies, Meta is reneging on previous statements that it wanted to create a safe and open space for people on its platforms. The announcement comes in the week that marks the four-year anniversary of the January 6, 2021 insurrection at the US Capitol, during which, according to internal company documents provided to Congress, Facebook served as a central hub for those organizing the violence. The pullback on policing its platforms for potentially harmful content will inevitably be felt offline.
Since the 2016 election, a growing chorus of voices, including that of the incoming US president, has claimed that social media platforms censor right-wing content. Others accuse the federal government of working with these companies and outside experts to throttle conservative material. To date, no such censorship has been proven, based on repeated independent analyses of these platforms.
Yet the company’s decision is one that will affect its billions of users, the majority of whom reside outside of the US. Many of these users live in repressive, authoritarian, or semi-democratic regimes where the likes of Facebook, Instagram, and WhatsApp provide some of the only means of access to independent news and information.
That role – one that has allowed Meta to garner tremendous global influence – is now in jeopardy. The company stated that fact-checking organizations, many of which it has funded globally for years, are “too politically biased.” That unfounded accusation plays directly into the hands of authoritarian regimes that similarly have labeled independent media outlets and civil society groups as corrupt political actors.
Within hours of Meta’s announcement, for instance, the Kremlin-backed ruling party in Georgia jumped on the decision to denigrate local groups that are attempting to hold politicians to account. The tech company’s US-centric policy changes are already being felt globally — and Georgia will not be the last country where politicians weaponize the content overhaul for their own gain.
Meta’s accusation that other countries’ regulatory efforts unfairly target American firms to promote wholesale censorship echoes a long-standing complaint from US tech companies: that international governments, most notably in the European Union, use digital rules to hamstring American firms in favor of local competitors.
That includes a series of newly created online safety regimes in the EU, United Kingdom, and Australia that aim to boost accountability and transparency over how digital platforms affect wider society.
Such legislation may prove cumbersome for platforms, but it cannot in good faith be perceived as censorship. The rules primarily center on forcing Meta and others to be more transparent about how they combat internationally recognized illegal content, such as online child sexual abuse material.
What these international rules do require, though, is for the likes of Meta to be more open and honest with those who use their services over how content is displayed on their feeds; how these companies combat illegal content like hate speech and terrorist content; and how citizens can hold these firms more accountable when they believe the platforms have put users in harm’s way.
Those principles are shared by most Americans, too.
Cite this post:
Mark Scott, “Analysis: Meta’s fact-checking pullback will have global consequences,” Digital Forensic Research Lab (DFRLab), January 8, 2025, https://dfrlab.org/2025/01/08/analysis-metas-fact-checking-pullback-will-have-global-consequences/.