Jan 7 – Meta is replacing its independent fact-checking program on Facebook and Instagram with a “community notes” system, similar to the approach adopted by X (formerly Twitter).
Under this model, users will contribute to contextualizing and clarifying the accuracy of posts.
Announcing the change in a video alongside a company blog post, Meta CEO Mark Zuckerberg said the use of third-party moderators had proven “too politically biased” and signaled a return to the company’s focus on “free expression.” Joel Kaplan, who is replacing Sir Nick Clegg as Meta’s head of global affairs, said the reliance on independent fact-checkers had been “well-intentioned” but often resulted in “unfair censorship” of users.
However, critics argue the move is politically motivated and could have dangerous implications. Ava Lee, from the campaign group Global Witness, said, “Zuckerberg’s announcement is a blatant attempt to cozy up to the incoming Trump administration – with harmful implications.” She accused Meta of dodging responsibility for curbing hate speech and disinformation.
Emulating X
Meta’s fact-checking program, introduced in 2016, referred questionable posts to independent organizations for verification. Inaccurate posts were labeled, given additional context, and demoted in users’ feeds. This system will now be replaced in the U.S. with the community notes feature, with no immediate plans to phase out third-party fact-checkers in the UK or the EU.
The community notes system, inspired by X, relies on users with diverse viewpoints agreeing on context to add clarifications to controversial posts. Elon Musk, who introduced the feature on X, expressed approval of Meta’s decision, calling it “cool.”
However, safety advocates raised alarms. The UK’s Molly Rose Foundation described the move as a “major concern for safety online.” Ian Russell, its chairman, stressed the need to clarify whether the new system would address sensitive content like suicide, self-harm, and depression.
Fact-checking organization Full Fact, which participates in Meta’s European program, rejected claims of bias and criticized the change as “disappointing” and a “backward step” with potentially global ramifications.
Shifting Policies and Political Signals
Meta also announced plans to roll back restrictions on politically sensitive topics such as immigration and gender identity, arguing that these policies had stifled public discourse. “It’s not right that things can be said on TV or the floor of Congress but not on our platforms,” the blog post stated.
The shift comes as Meta and other tech companies position themselves ahead of President-elect Donald Trump’s January inauguration. Trump has been critical of Meta’s content moderation in the past, but relations appear to have improved. Zuckerberg dined with Trump at Mar-a-Lago in November, and Meta recently donated $1 million to Trump’s inauguration fund.
Observers see these moves as part of a broader cultural shift toward prioritizing free speech. Kate Klonick, associate professor of law at St. John’s University, noted the changes reflect a growing trend since Elon Musk’s acquisition of X. “The governance of speech on these platforms has become deeply politicized,” she said, adding that companies are now swinging back toward looser moderation policies.
While Meta insists the changes aim to foster free expression, critics warn of potential harms, including the spread of hate speech, misinformation, and reduced safety for vulnerable users. As the debate over content moderation intensifies, Meta’s decision signals a pivotal moment in the evolution of online platforms.
Source: capitalfm