Facebook says it wants to help fix misinformation running rampant across the internet — a problem it may have helped create in the first place.
Facebook parent Meta on Monday announced a new AI-powered tool called Sphere. It’s intended to help detect and address misinformation, or “fake news,” on the internet. Meta claims that it’s “the first [AI] model capable of automatically scanning hundreds of thousands of citations at once to check whether they truly support the corresponding claims.”
The announcement comes after years of criticism over Facebook’s own role in allowing online misinformation to thrive and rapidly spread across the globe. Sphere’s dataset includes 134 million public webpages, according to Meta’s research team. It relies on that collective knowledge of the internet to rapidly scan hundreds of thousands of web citations, in search of factual errors.
It’s perhaps fitting, then, that the AI model’s first client is Wikipedia. According to Meta’s announcement, the crowd-sourced internet encyclopedia is already using Sphere to scan its pages and flag sources that don’t actually support the claims in the entry.
Meta also says that when Sphere spots a questionable source, it will also recommend a stronger one — or a correction — to help improve the entry’s accuracy.
“Wikipedia is the default first stop in the hunt for research information, background material, or an answer to that nagging question about pop culture,” Meta said in a statement, noting that Wikipedia hosts more than 6.5 million entries in the English language alone and adds roughly 17,000 new entries to its pages each month.
The company also released a video showing how Sphere works.
The arrangement with Wikipedia reportedly does not involve any financial compensation in either direction, Meta told TechCrunch. Meta gets access to a wide-scale training ground for Sphere, and Wikipedia gains an AI tool that could potentially streamline its verification process and improve its factual accuracy.
Existing automated systems were already capable of identifying pieces of information that lacked any citation. But Meta’s researchers say the complexity of singling out individual claims with questionable sources and determining if those sources actually support the claims in question “requires an AI system’s depth of understanding and analysis.”
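Meta has not published Sphere’s internals in this announcement, but the verification task it describes has a familiar shape: score how well each candidate source supports a claim, then flag the claim if no source clears a threshold. The sketch below is a toy illustration of that general ranking idea, using simple word-overlap cosine similarity as a hypothetical stand-in for the learned relevance model a real system would use; the function names and the threshold value are assumptions, not Sphere’s actual design.

```python
import math
import re
from collections import Counter


def _bag_of_words(text: str) -> Counter:
    """Lowercase and tokenize text into a word-count vector."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))


def support_score(claim: str, citation: str) -> float:
    """Cosine similarity between claim and citation word vectors (0.0 to 1.0).

    Toy stand-in: a production system would use a trained model that judges
    semantic support, not raw word overlap.
    """
    a, b = _bag_of_words(claim), _bag_of_words(citation)
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def best_citation(claim: str, candidates: list[str], threshold: float = 0.2):
    """Rank candidate sources for a claim.

    Returns the highest-scoring source and a flag that is True when no
    candidate clears the (arbitrary, illustrative) support threshold.
    """
    ranked = sorted(candidates, key=lambda c: support_score(claim, c),
                    reverse=True)
    top = ranked[0] if ranked else None
    flagged = top is None or support_score(claim, top) < threshold
    return top, flagged
```

A claim like “The Eiffel Tower is located in Paris” would rank a sentence that restates that fact above an unrelated one about bananas, and an entry whose best source still scores below the threshold would be flagged for editor review, mirroring the flag-and-suggest workflow the article describes.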
In a statement, Shani Evenstein Sigalov — a Tel Aviv University researcher and vice chair of the Wikimedia Foundation’s Board of Trustees — called Sphere’s work with Wikipedia “a powerful example of machine learning tools that can help scale the work of volunteers.”
“Improving these processes will allow us to attract new editors to Wikipedia and provide better, more reliable information to billions of people around the world,” Sigalov said.
Sphere marks Meta’s latest effort to address online misinformation — while potentially deflecting criticism over the company’s own role in allowing that misinformation to persist.
Meta has faced consistently harsh criticism over the past several years from users and regulators over the spread of misinformation on the company’s social media platforms, which include Facebook, Instagram and WhatsApp. Former employees and leaked internal documents have added fuel to claims that the company has valued profits over battling misinformation, and Meta CEO Mark Zuckerberg has been called in front of Congress to discuss the problem.
Last summer, President Joe Biden accused the social media giant of “killing people” by allowing Covid-19 vaccine misinformation on its platforms to spread. The company pushed back, claiming that Facebook and Instagram were providing “authoritative information about COVID-19 and vaccines” to billions of users.