The implications of Meta ending its fact-checking program

The fight against disinformation looks to have taken a different turn

Meta is ending its third-party fact-checking program in the US for its Facebook, Instagram, and Threads platforms. In its place, the company will adopt a user-run community notes system similar to X’s.

Why? According to Meta CEO Mark Zuckerberg, the current system has become a tool for censorship rather than a means of protecting free expression. For him, its enforcement grew so heavy-handed that numerous posts were mistakenly removed or hidden even when they didn’t violate community standards. How? Automation errors and human bias.

“The problem with complex systems is that they make mistakes. Even if they accidentally censor just one percent of posts, that is millions of people. And we’ve reached a point where it’s just too many mistakes and too much censorship,” says Zuckerberg.

The social media giant promises a clear division between content worthy of debate and content that is not: non-negotiables such as child exploitation, terrorism, and drugs. The platform says it will foster political debate and emphasizes that removals should not come from a biased position; instead, the community should agree on whether a given post ought to be taken down.

READ: Facebook, Instagram to ditch fact-checking for community notes

Between censorship and expression

In an official statement, Joel Kaplan, chief global affairs officer at Meta, says, “The intention of the [fact-checking] program was to have these independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read.”

However, he adds, “Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact-check and how. Over time we ended up with too much content being fact-checked that people would understand to be legitimate political speech and debate.”

Under Meta’s old system, once third-party fact-checkers identified a post as inaccurate, it was labeled and users were warned before sharing it; those who had previously shared the post were also notified. Meta also demoted the post in its ranking algorithm, limiting how often it was seen. Repeat offenders lost the ability to monetize and advertise and were restricted from immediately creating new pages.
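To make those mechanics concrete, here is a minimal sketch, in Python, of how such a label-warn-demote pipeline could be modeled. All names, thresholds, and the demotion factor are illustrative assumptions; Meta’s actual implementation is not public.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    reach_multiplier: float = 1.0                 # 1.0 = normal distribution
    labeled_false: bool = False
    previous_sharers: list[str] = field(default_factory=list)

@dataclass
class Account:
    strikes: int = 0

    @property
    def can_monetize(self) -> bool:
        # Repeat offenders lose monetization and advertising privileges.
        return self.strikes < 3                   # threshold is an assumption

def apply_fact_check(post: Post, accounts: dict[str, Account]) -> list[str]:
    """Label the post, demote its reach, penalize the author,
    and queue warnings for anyone who already shared it."""
    post.labeled_false = True                     # 1. attach a warning label
    post.reach_multiplier *= 0.1                  # 2. demote in ranking (factor assumed)
    accounts[post.author].strikes += 1            # 3. record a strike for the author
    return [f"Notify {user}: a post you shared was rated false"
            for user in post.previous_sharers]    # 4. inform prior sharers

def warn_before_share(post: Post) -> str | None:
    # Users see a warning before re-sharing a labeled post.
    if post.labeled_false:
        return "Independent fact-checkers rated this post as false."
    return None
```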

With the incoming community notes program, Meta will move content moderation away from the hands of separate entities and into those of the community itself.

“We’ve seen this approach work on X—where they empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see. We think this could be a better way of achieving our original intention of providing people with information about what they’re seeing—and one that’s less prone to bias,” says Kaplan.

The community notes program will be made up of contributing users tasked with monitoring and rating divisive content. “Just like they do on X, community notes will require agreement between people with a range of perspectives to help prevent biased ratings,” adds Kaplan.
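As a rough illustration of that agreement requirement, here is a simplified Python sketch of a rule that publishes a note only when raters from more than one perspective group independently find it helpful. The grouping, thresholds, and function names are hypothetical; X’s production system relies on a more sophisticated matrix-factorization model.

```python
from collections import defaultdict

def should_publish(ratings: list[tuple[str, bool]],
                   min_votes_per_group: int = 2,
                   min_helpful_ratio: float = 0.7) -> bool:
    """Each rating pairs a rater's perspective group with a
    helpful/not-helpful vote. Publish only if every represented
    group clears the helpfulness threshold. Parameters are
    illustrative, not X's or Meta's actual values."""
    votes_by_group: dict[str, list[bool]] = defaultdict(list)
    for group, helpful in ratings:
        votes_by_group[group].append(helpful)
    if len(votes_by_group) < 2:
        return False  # no cross-perspective agreement is possible yet
    return all(
        len(votes) >= min_votes_per_group
        and sum(votes) / len(votes) >= min_helpful_ratio
        for votes in votes_by_group.values()
    )

# Raters from two different groups mostly agree the note is helpful.
ratings = [("group_a", True), ("group_a", True),
           ("group_b", True), ("group_b", True),
           ("group_b", True), ("group_b", False)]
print(should_publish(ratings))  # True: both groups clear the threshold
```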

Kaplan also acknowledged that Meta has been using automated systems to scan posts for policy violations. These will remain in use but will be refocused on illegal and high-severity violations like terrorism, sexual exploitation, drugs, and scams. “For less severe policy violations, we’re going to rely on someone reporting an issue before we take any action,” adds the Meta executive.
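In other words, enforcement splits by severity: proactive scanning for the worst categories, report-driven review for everything else. A minimal sketch of that routing rule, with hypothetical names and categories, might look like this:

```python
from enum import Enum, auto

class Severity(Enum):
    HIGH = auto()   # e.g. terrorism, sexual exploitation, drugs, scams
    LOW = auto()    # less severe policy violations

def route_post(severity: Severity, user_reported: bool) -> str:
    """Hypothetical triage: automated scanning only for high-severity
    categories; everything else waits for a user report."""
    if severity is Severity.HIGH:
        return "scan proactively with automated systems"
    if user_reported:
        return "queue for review"
    return "take no action"

print(route_post(Severity.LOW, user_reported=False))  # take no action
```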

A threat and ally to speech

Though many may view the move as Meta’s attempt to repair its relationship with Donald Trump after banning him from the platform in 2021 (and they may rightly do so), the shift to a community notes program has its merits.

We often ask ourselves who exactly is monitoring those who monitor us, ensuring they don’t overstep their boundaries and encroach on our rights. Meta, in essence, is democratizing its content moderation system and handing that responsibility to its users. However, like anything that looks good on paper, the approach is not without its faults.

Going back to Zuckerberg’s earlier point: “Even if they accidentally censor just one percent of posts, that is millions of people. And we’ve reached a point where it’s just too many mistakes and too much censorship.” 

It’s a valid point, for sure. But if the solution is to let loose an entire bag of rotten apples to save the two that are fresh, wouldn’t Meta only be promoting false news that cannot be taken down? In the wrong hands, misinformation and hateful content could very well be protected in the name of free speech. But don’t worry, we’ve got community notes. Or, a better name for it: a footnote.
