

Meta wants to supercharge Wikipedia with an AI upgrade

Wikipedia has a problem. And Meta, the recently rebranded Facebook, may just have the answer.

Let’s back up. Wikipedia is one of the largest-scale collaborative projects in human history, with more than 100,000 volunteer human editors contributing to the construction and maintenance of a mind-bogglingly large, multi-language encyclopedia consisting of millions of articles. Upward of 17,000 new articles are added to Wikipedia each month, while tweaks and modifications are continuously made to its existing corpus of articles. The most popular Wiki articles have been edited thousands of times, reflecting the very latest research, insights, and up-to-the-minute information.


The challenge, of course, is accuracy. The very existence of Wikipedia is proof positive that large numbers of humans can come together to create something worthwhile. But in order to be genuinely useful, and not a sprawling graffiti wall of unsubstantiated claims, Wikipedia articles must be backed up by facts. This is where citations come in. The idea – and for the most part it works very well – is that Wikipedia users and editors alike can confirm facts by adding or clicking hyperlinks that track statements back to their source.

Citation needed

Say, for example, I want to confirm the entry on President Barack Obama’s Wikipedia article stating that Obama traveled to Europe and then Kenya in 1988, where he met many of his paternal relatives for the first time. All I have to do is look at the citations for the sentence and, sure enough, there are three separate book references that seemingly confirm the fact checks out.

By contrast, the words “citation needed” are probably the two most damning in all of Wikipedia, precisely because they suggest there’s no evidence the claim wasn’t simply conjured out of the digital ether. The phrase “citation needed” affixed to a Wikipedia claim is the equivalent of telling someone a fact while making finger quotes in the air.

The Wikipedia logo on a pink background. (Image: Wikipedia)

Citations don’t tell us everything, though. If I were to tell you that, last year, I was the 23rd highest-earning tech journalist in the world, and that I once gave up a lucrative modeling career to write articles for Digital Trends, it might appear superficially plausible because there are hyperlinks to support my delusions.

The fact that the hyperlinks don’t support my alternative facts at all, but rather lead to unrelated pages on Digital Trends, is only revealed when you click them. The 99.9 percent of readers who have never met me might leave this article with a slew of false impressions, not the least of which is the surprisingly low barrier to entry to the world of modeling. In a hyperlinked world of information overload, in which we increasingly splash around in what Nicholas Carr refers to as “The Shallows,” the mere existence of a citation can appear to be a factual endorsement.

Meta wades in

But what happens when Wikipedia editors add citations that don’t actually link to pages supporting the claims? As an illustration, a recent Wikipedia article on Blackfeet Tribe member Joe Hipp described how Hipp was the first Native American boxer to challenge for the WBA World Heavyweight title and linked to what seemed to be an appropriate webpage. However, the webpage in question mentioned neither boxing nor Joe Hipp.

In the case of the Joe Hipp claim, the Wikipedia factoid was accurate, even if the citation was inappropriate. Nonetheless, it’s easy to see how this could be used, either deliberately or otherwise, to spread misinformation.

Mark Zuckerberg introduces Facebook's new name, Meta. (Image: Meta)

It’s here that Meta thinks it has come up with a way to help. Meta AI (the social media giant’s AI research and development lab) has developed what it claims is the first machine learning model able to automatically scan hundreds of thousands of citations at once to check whether they support the corresponding claims. While this would be far from the first bot Wikipedia uses, it could be among the most impressive, although it’s still in the research phase and not in use on the actual Wikipedia.

“I think we were driven by curiosity at the end of the day,” Fabio Petroni, research tech lead manager for the FAIR (Fundamental AI Research) team of Meta AI, told Digital Trends. “We wanted to see what was the limit of this technology. We were absolutely not sure if [this AI] could do anything meaningful in this context. No one had ever tried to do something similar [before].”

Understanding meaning

Trained using a dataset consisting of 4 million Wikipedia citations, Meta’s new tool is able to analyze the content a citation points to and cross-reference it with the claim it’s supposed to support. And this isn’t just a straightforward text string comparison, either.

“There is a component like that, [looking at] the lexical similarity between the claim and the source, but that’s the easy case,” Petroni said. “With these models, what we have done is to build an index of all these webpages by chunking them into passages and providing an accurate representation for each passage … That is not representing word-by-word the passage, but the meaning of the passage. That means that two chunks of text with similar meanings will be represented in a very close position in the resulting n-dimensional space where all these passages are stored.”
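Meta hasn’t published its production pipeline alongside this article, but the general idea Petroni describes – turning passages into dense vectors so that text with similar meaning ends up close together – can be sketched with off-the-shelf tools. The snippet below is a minimal illustration rather than Meta’s actual model; the sentence-transformers library, the model name, and the toy passages are assumptions chosen purely for demonstration.

```python
# Minimal sketch of dense-passage matching (not Meta's actual system).
# Assumes the open-source sentence-transformers package; the model name
# and the example passages are purely illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

claim = ("Joe Hipp was the first Native American boxer to challenge "
         "for the WBA World Heavyweight title.")

# Passages chunked from candidate source webpages (toy examples).
passages = [
    "Hipp, a member of the Blackfeet Nation, fought for the WBA "
    "heavyweight championship in 1995.",
    "The Blackfeet Tribe's reservation lies in northwestern Montana.",
    "The WBA is one of professional boxing's major sanctioning bodies.",
]

# Encode the claim and the passages into the same vector space.
claim_vec = model.encode(claim, convert_to_tensor=True)
passage_vecs = model.encode(passages, convert_to_tensor=True)

# Passages whose meaning is close to the claim score highest,
# even when they share few exact words with it.
scores = util.cos_sim(claim_vec, passage_vecs)[0]
for passage, score in sorted(zip(passages, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {passage}")
```

In a real pipeline, a retrieval step like this would only narrow the field; a separate verification model would still have to judge whether the top-ranked passage actually supports the claim.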

A single-pane comic from xkcd about Wikipedia citations. (Image: xkcd)

Just as impressive as the ability to spot fraudulent citations, however, is the tool’s potential for suggesting better references. Deployed as a production model, it could automatically recommend the sources that best support a given claim. While Petroni balks at it being likened to a factual spellcheck, flagging errors and suggesting improvements, that’s an easy way to think about what it might do.

But as Petroni explains, there is still much more work to be done before it reaches this point. “What we have built is a proof of concept,” he said. “It’s not really usable at the moment. In order for this to be usable, you need to have a fresh index that indexes much more data than what we currently have. It needs to be constantly updated, with new information coming every day.”

This could, at least in theory, include not just text but multimedia as well. Perhaps there’s an authoritative documentary available on YouTube that the system could direct users toward. Maybe the answer to a particular claim is hidden in an image somewhere online.

A question of quality

There are other challenges, too. Notable in its absence, at least at present, is any attempt to independently grade the quality of sources cited. This is a thorny area in itself. As a simple illustration, would a brief, throwaway reference to a subject in, say, the New York Times prove a more suitable, high-quality citation than a more comprehensive, but less-renowned source? Should a mainstream publication rank more highly than a non-mainstream one?

Google’s trillion-dollar PageRank algorithm – certainly the most famous algorithm ever built around citations – had this built into its model by, in essence, equating a high-quality source with one that had a high number of incoming links. At present, Meta’s AI has nothing like this.
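As a refresher on that principle rather than anything Meta has built, here is a toy power-iteration version of PageRank over a made-up link graph. The domain names are hypothetical; the point is simply that a page collecting links from many well-linked pages floats to the top.

```python
# Toy power-iteration PageRank over a hypothetical link graph,
# illustrating "quality = weighted incoming links". Not Meta's model.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # Dangling page: spread its rank evenly across all pages.
                for other in pages:
                    new_rank[other] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Hypothetical web: the widely linked-to site ends up with the highest score.
links = {
    "paper-of-record.example": ["local-blog.example"],
    "local-blog.example": ["paper-of-record.example"],
    "forum.example": ["paper-of-record.example", "local-blog.example"],
    "fan-site.example": ["paper-of-record.example"],
}
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {page}")
```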

If this AI is to work as an effective tool, it will need something like that. As a very obvious example of why, imagine that someone set out to “prove” the most egregious, reprehensible opinion for inclusion on a Wikipedia page. If the only evidence needed to confirm that something is true is whether similar sentiments can be found published elsewhere online, then virtually any claim could technically check out, no matter how wrong it might be.

“[One area we are interested in] is trying to model explicitly the trustworthiness of a source, the trustworthiness of a domain,” Petroni said. “I think Wikipedia already has a list of domains that are considered trustworthy, and domains that are considered not. But instead of having a fixed list, it would be nice if we can find a way to promote these algorithmically.”

Luke Dormehl
Former Digital Trends Contributor