https://www.nbcnews.com/tech/social-med ... rcna186468

Meta is ending its fact-checking program in favor of a 'community notes' system similar to X
Meta CEO Mark Zuckerberg announced a series of major changes to the company's moderation policies and practices, saying that the election felt like a "cultural tipping point."
By Bruna Horvath
Meta CEO Mark Zuckerberg announced a series of major changes to the company's moderation policies and practices Tuesday, citing a shifting political and social landscape and a desire to embrace free speech.
Zuckerberg said that Meta will end its fact-checking program with trusted partners and replace it with a community-driven system similar to X’s Community Notes.
The changes will affect Facebook and Instagram, two of the largest social media platforms in the world, each boasting billions of users, as well as Threads.
"We're gonna get back to our roots and focus on reducing mistakes, simplifying our policies and restoring free expression on our platforms," Zuckerberg said in a video. "More specifically, here's what we're going to do. First, we're going to get rid of fact checkers and replace them with community notes similar to X, starting in the U.S."
Zuckerberg pointed to the election as a major influence on the company's decision, and criticized "governments and legacy media" for allegedly pushing "to censor more and more."
"The recent elections also feel like a cultural tipping point towards, once again, prioritizing speech," he said.
"So we're gonna get back to our roots and focus on reducing mistakes, simplifying our policies and restoring free expression on our platforms."
He also said the systems the company had created to moderate its platforms were making too many mistakes, adding that the company would continue to aggressively moderate content related to drugs, terrorism and child exploitation.
"We built a lot of complex systems to moderate content, but the problem with complex systems is they make mistakes," Zuckerberg said. "Even if they accidentally censor just 1% of posts, that's millions of people, and we've reached a point where it's just too many mistakes and too much censorship."
Beyond the end of the fact-checking program, Zuckerberg said the company will eliminate some content policies around hot-button issues, including immigration and gender, refocus its automated moderation systems on what he called "high severity violations," and rely on users to report other violations.
Meta will also move its trust and safety and content moderation teams from California to Texas.
"We're also going to tune our content filters to require much higher confidence before taking down content," he said. "The reality is that this is a trade off. It means we're going to catch less bad stuff, but we'll also reduce the number of innocent people's posts and accounts that we accidentally take down."
The changes come as Meta and social media companies broadly have in recent years reversed course on content moderation due in part to the politicization of moderation decisions and programs. Republicans have long criticized Meta’s fact-checking system and fact-checking in general as unfair and favoring Democrats — a claim that is in dispute.
X’s Community Notes system, which X owner Elon Musk used to replace the platform’s previous efforts around misinformation, has been celebrated by conservatives, and it has allowed for a mixture of fact-checking, trolling and other community-driven behavior.
Zuckerberg's announcement comes as CEOs and business leaders across sectors are currying favor with the incoming administration of President-elect Donald Trump. Meta, along with other tech companies, donated $1 million to Trump's inaugural fund, and ahead of the election, Zuckerberg praised Trump in an interview with Bloomberg Television without offering an outright endorsement. Ahead of Trump's inauguration, Meta has reportedly appointed Republican Joel Kaplan to lead its policy team, and on Monday, the company announced that UFC's Dana White, a long-time supporter of Trump, would join its board.
Meta’s initial fact-checking system, which launched on Facebook in 2016, worked by running information on its platforms through third-party fact-checkers certified by the International Fact-Checking Network and the European Fact-Checking Standards Network. The program included more than 90 organizations that would fact-check posts in more than 60 languages. In the United States, they included groups like PolitiFact and FactCheck.org.
In a news release, Meta wrote that it was able to identify posts that might be promoting misinformation based on how people were responding to certain pieces of content and how fast posts would spread. Independent fact-checkers would also work to identify posts with possible misinformation on their own. Posts suspected of containing misinformation would then be shown lower in feeds while they awaited review.
The independent fact-checkers would then work to verify the accuracy of the content that had been flagged and give it a “content rating,” labeling content as “False,” “Altered,” “Partly False,” “Missing Context,” “Satire” or “True” and adding notices to the posts.
Those fact-checking measures applied to any posts on Facebook, and they expanded to include Instagram in 2019 and Threads last year. Fact-checkers were able to review content including “ads, articles, photos, videos, Reels, audio and text-only posts.”
Under the system, Meta noted, fact-checkers did not have the ability to remove content; a post would be removed only if it violated the company’s community standards, a determination made by Meta itself.
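Taken together, those paragraphs describe a simple pipeline: flag fast-spreading or heavily reported posts, demote them while they await review, attach a fact-checker's rating, and leave removal to Meta's separate community-standards process. Below is a minimal Python sketch of that flow for readers who want the moving parts spelled out. The six rating labels come from the article itself; every name, threshold and heuristic in the code is a hypothetical illustration, not Meta's actual implementation.

from dataclasses import dataclass
from enum import Enum

class Rating(Enum):
    # The six labels listed in the article.
    FALSE = "False"
    ALTERED = "Altered"
    PARTLY_FALSE = "Partly False"
    MISSING_CONTEXT = "Missing Context"
    SATIRE = "Satire"
    TRUE = "True"

@dataclass
class Post:
    text: str
    shares_per_hour: float   # stand-in for "how fast posts would spread"
    user_reports: int        # stand-in for "how people were responding"
    demoted: bool = False
    label: Rating | None = None

def flag_for_review(post: Post, share_threshold: float = 500.0, report_threshold: int = 10) -> bool:
    # Hypothetical heuristic: queue a post for third-party review if it
    # spreads unusually fast or draws many reports. Thresholds are invented.
    return post.shares_per_hour > share_threshold or post.user_reports > report_threshold

def demote_pending_review(post: Post) -> None:
    # Flagged posts are shown lower in feeds while awaiting review, per the article.
    post.demoted = True

def apply_rating(post: Post, rating: Rating) -> None:
    # A fact-checker's rating attaches a notice; it never removes the post.
    # Removal runs through Meta's separate community-standards check.
    post.label = rating
    post.demoted = rating is not Rating.TRUE

# Example: a fast-spreading post gets flagged, demoted, then labeled.
post = Post(text="Viral claim", shares_per_hour=1200.0, user_reports=3)
if flag_for_review(post):
    demote_pending_review(post)
    apply_rating(post, Rating.MISSING_CONTEXT)
print(post.label, post.demoted)   # Rating.MISSING_CONTEXT True

The detail the article stresses is visible in apply_rating: a rating only labels and demotes a post, while deletion lives on a separate path controlled by Meta itself.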
They, too, have bowed to the orange God-Emperor and to Elon. Ah, my dear little wokeism, the blows just keep coming, one after another.