
Facebook’s new image-recognition A.I. is trained on 1 billion Instagram photos

If Facebook has an unofficial slogan, an equivalent to Google’s “Don’t Be Evil” or Apple’s “Think Different,” it is “Move Fast and Break Things.” It means, at least in theory, that one should iterate quickly to try new things and not be afraid of the possibility of failure. In 2021, however, with social media being blamed for a plethora of societal ills, the phrase should, perhaps, be modified to: “Move Fast and Fix Things.”

One of the many things social media, and not just Facebook, has been pilloried for is the spread of harmful images online. It’s a challenging problem by any measure: Around 4,000 photos are uploaded to Facebook every single second. That works out to roughly 14.58 million images per hour, or 350 million photos each day. Handling this job manually would require every single Facebook employee to work 12-hour shifts, approving or vetoing an uploaded image every nine seconds.


That’s not likely to happen any time soon. This is why the job of classifying images is handed over to artificial intelligence systems. A new piece of Facebook research, published today, describes a new, large-scale computer vision model called SEER (that’s “SElf-supERvised,” in the hopelessly mangled backronym tradition that tech folks love to embrace). Trained on over 1 billion public Instagram images, it can outperform the most cutting-edge self-supervised image-recognition systems, even when the images are of low quality and therefore difficult to read.


It’s a development that could, its creators claim, “[pave] the way for more flexible, precise, and adaptable computer vision models.” It may be used to better keep “harmful images or memes away from our platform.” It could be equally useful for automatically generating alt text that describes images for visually impaired people, for superior automatic categorization of items to be sold on Marketplace or Facebook Shops, and for a multitude of other applications that require improved computer vision.

Welcome to the self-supervised revolution

“Using self-supervision, we can train on any random image,” Priya Goyal, a software engineer at Facebook AI Research (FAIR), where the company is carrying out plenty of innovative image-recognition research, told Digital Trends. “[That] means that, as the harmful content evolves, we can quickly train a new model on the evolving data and, as a result, respond faster to the situations.”

The self-supervision Goyal refers to is a brand of machine learning that requires less in the way of human input. Semisupervised learning is an approach to machine learning that sits somewhere between supervised and unsupervised learning. In supervised learning, training data is fully labeled. In unsupervised learning, there is no labeled training data. In semisupervised learning … well, you get the idea. It is, to machine learning, what keeping half an eye on your kid while they charge autonomously around a park is to parenting. Self-supervised learning, for its part, has been used to transformative effect in the world of natural language processing for everything from machine translation to question answering. Now, it’s being applied to image recognition, too.


“Unsupervised learning is a very broad term that suggests that the learning uses no supervision at all,” Goyal said. “Self-supervised learning is a subset — or more specific case — of unsupervised learning, as self-supervision derives the supervisory signals automatically from the training data.”
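To make that distinction concrete, here is a minimal sketch of the idea in PyTorch. It uses a simple rotation-prediction pretext task, in which the “label” for each image is the rotation that was applied to it, so no human annotation is involved. This is an illustration of self-supervision in general rather than Facebook’s actual recipe: SEER itself uses a different (SwAV-style) objective and much larger RegNet backbones, and the model, data, and hyperparameters below are stand-ins.

```python
# Minimal sketch of self-supervised learning: the supervisory signal (which
# rotation was applied) is derived from the images themselves, not from humans.
# Illustrative only; SEER uses a SwAV-style objective and huge RegNet models.
import torch
import torch.nn as nn
import torchvision

backbone = torchvision.models.resnet18()          # small stand-in backbone
backbone.fc = nn.Linear(backbone.fc.in_features, 4)  # predict one of 4 rotations

optimizer = torch.optim.SGD(backbone.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def make_rotation_batch(images):
    """Rotate each image by 0/90/180/270 degrees; the rotation index is the label."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, k=int(k), dims=(1, 2)) for img, k in zip(images, labels)]
    )
    return rotated, labels

# Stand-in for a stream of unlabeled, uncurated images (e.g. public photos).
unlabeled_images = torch.rand(32, 3, 224, 224)

rotated, labels = make_rotation_batch(unlabeled_images)
optimizer.zero_grad()
loss = criterion(backbone(rotated), labels)
loss.backward()
optimizer.step()
print(f"pretext loss: {loss.item():.3f}")
```

The important part is the `make_rotation_batch` step: because the labels come from the data itself, a model like this can, in principle, train on any pile of random, unlabeled photos.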

What self-supervised learning means for Facebook is that its engineers can train models on random images, and do so quickly while achieving good performance on many tasks.

“Being able to train on any random internet image allows us to capture the visual diversity of the world,” said Goyal. “Supervised learning, on the other hand, requires data annotations, which limits the visual understanding of the world as the model is trained to learn only very limited visual annotated concepts. Also, creating annotated datasets limits the data amount that our systems can be trained on, hence supervised systems are likely to be more biased.”

What this means is A.I. systems that can better learn from whatever information they’re given, without having to rely on curated and labeled datasets that teach them how to recognize specific objects in a photo. In a world that moves as fast as the online one, that’s essential. It should mean smarter image recognition that acts more quickly.

Other possible applications

“We can use the self-supervised models to solve problems in domains which have very limited data or no metadata, like medical imaging,” Goyal said. “Being able to train high-quality, self-supervised models from just random, unlabeled, and uncurated images, we can train models on any internet image, and this allows us to capture diversity of visual content, and mitigate the biases otherwise introduced by data curation. Since we require no labels or data curation for training a self-supervised model, we can quickly create and deploy new models to solve problems.”
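As a rough illustration of what Goyal is describing, the sketch below takes a backbone that has (hypothetically) already been pretrained with self-supervision and fine-tunes it on a small labeled dataset, such as a few thousand medical images. The checkpoint path and the five diagnostic classes are made-up placeholders, and the small torchvision ResNet stands in for Facebook’s far larger SEER models.

```python
# Hedged sketch of the downstream step: fine-tune a self-supervised backbone
# on a small labeled dataset. Paths, class count, and dummy data are assumptions.
import torch
import torch.nn as nn
import torchvision

NUM_CLASSES = 5  # e.g. five diagnostic categories in a small medical dataset

model = torchvision.models.resnet18()
# Hypothetical: load weights produced by a self-supervised pretraining run.
# state_dict = torch.load("selfsup_pretrained_backbone.pt")  # placeholder path
# model.load_state_dict(state_dict, strict=False)

# Replace the classification head and fine-tune with a small learning rate.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Stand-in for one small labeled batch (images, labels).
images = torch.rand(16, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (16,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"fine-tuning loss: {loss.item():.3f}")
```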

As with all of FAIR’s work, right now this is firmly in the research stages, rather than being technology that will roll out on your Facebook feed in the next couple of weeks. That means it won’t be immediately deployed to solve the problem of harmful images spreading online. By the same token, any concerns about A.I. being used to identify ever-finer details in the images you upload are, for now, premature.

Like it or not, though, image-classifying A.I. tools are getting smarter. The big question is whether they’re used to break things further or start fixing them back up again.
