If Facebook has an unofficial slogan, an equivalent to Google’s “Don’t Be Evil” or Apple’s “Think Different,” it is “Move Fast and Break Things.” It means, at least in theory, that one should iterate quickly to try new things and not be afraid of the possibility of failure. In 2021, however, with social media being blamed for a plethora of societal ills, the phrase should, perhaps, be modified to: “Move Fast and Fix Things.”
One of the many things social media, not just Facebook, has been pilloried for is its role in spreading certain images online. It’s a challenging problem by any stretch of the imagination: Some 4,000 photo uploads are made to Facebook every single second. That equates to roughly 14.6 million images per hour, or 350 million photos each day. Handling this job manually would require every available employee to do nothing but look at photos, all day, every day.
That’s not likely to happen any time soon. This is why the job of classifying images is handed over to artificial intelligence systems. A new piece of Facebook research, published today, describes a new, large-scale computer vision model called SEER (that’s “SElf-supERvised” in the hopelessly mangled backronym tradition that tech folks love to embrace). Trained on over 1 billion public Instagram images, it can outperform the most cutting-edge self-supervised image-recognition systems, even when the images are of low quality and therefore difficult to read.
It’s a development that could, its creators claim, “[pave] the way for more flexible, precise, and adaptable computer vision models.” It may be used to better keep “harmful images or memes away from our platform.” It could be equally useful for automatically generating alt text to describe images for visually impaired people, for superior automatic categorization of items to be sold on Marketplace or Facebook Shops, and for a multitude of other applications that require improved computer vision.
Welcome to the self-supervised revolution
“Using self-supervision, we can train on any random image,” Priya Goyal, a software engineer at Facebook AI Research (FAIR), where the company is carrying out plenty of innovative image-recognition research, told Digital Trends. “[That] means that, as the harmful content evolves, we can quickly train a new model on the evolving data and, as a result, respond faster to the situations.”
The self-supervision Goyal refers to is a branch of machine learning that requires less in the way of human input. It’s easiest to understand by way of its neighbors: In supervised learning, training data is fully labeled. In unsupervised learning, there is no labeled training data. In semisupervised learning … well, you get the idea. Semisupervised learning is, to machine learning, what keeping half an eye on your kid while they charge autonomously around a park is to parenting. Self-supervised learning, by contrast, generates its own labels from the raw data, and it has been used to transformative effect in the world of natural language processing for everything from machine translation to question answering. Now, it’s being applied to image recognition, too.
“Unsupervised learning is a very broad term that suggests that the learning uses no supervision at all,” Goyal said. “Self-supervised learning is a subset — or more specific case — of unsupervised learning, as self-supervision derives the supervisory signals automatically from the training data.”
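To make that concrete, here is a minimal sketch of one classic self-supervised pretext task, rotation prediction, written in PyTorch. The labels are manufactured from the images themselves by rotating them, so no human annotation is needed. Note the assumptions: the ResNet backbone, the hyperparameters, and the pretext task itself are illustrative stand-ins, not SEER’s actual recipe (Facebook’s work builds on its SwAV clustering method).

```python
import torch
import torch.nn as nn
import torchvision

# Illustrative pretext task: predict how an image was rotated.
# The "labels" (0, 90, 180, or 270 degrees) are derived automatically
# from the data itself -- no human annotation required.
# NOTE: a toy sketch, not Facebook's SEER recipe (SEER builds on SwAV).

backbone = torchvision.models.resnet18(num_classes=4)  # 4 rotation classes

def make_rotation_batch(images):
    """Rotate each image by a random multiple of 90 degrees and return
    the rotated images plus the rotation index as the training label."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, k=int(k), dims=(1, 2)) for img, k in zip(images, labels)]
    )
    return rotated, labels

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(backbone.parameters(), lr=0.01)

# One training step on a batch of stand-in "unlabeled" photos.
images = torch.randn(8, 3, 224, 224)
inputs, targets = make_rotation_batch(images)
loss = loss_fn(backbone(inputs), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"pretext loss: {loss.item():.3f}")
```

Swap rotation prediction for a different pretext task, contrastive learning or clustering, say, and the recipe stays the same: the data supplies its own supervision.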
What self-supervised learning means for Facebook is that its engineers can train models on random images, and do so quickly while achieving good performance on many tasks.
“Being able to train on any random internet image allows us to capture the visual diversity of the world,” said Goyal. “Supervised learning, on the other hand, requires data annotations, which limits the visual understanding of the world as the model is trained to learn only very limited visual annotated concepts. Also, creating annotated datasets limits the data amount that our systems can be trained on, hence supervised systems are likely to be more biased.”
What this means in practice is A.I. systems that can better learn from whatever information they’re given, without having to rely on curated and labeled datasets that teach them how to recognize specific objects in a photo. In a world that moves as fast as the online one, that’s essential. It should mean smarter image recognition that acts more quickly.
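One consequence worth showing: once a backbone has been pretrained without labels, reusing it downstream can take only a handful of labeled examples. The sketch below freezes a pretrained network and trains a small linear “probe” on top, a standard way to adapt such a model cheaply. Everything here, the ResNet stand-in, the 10 categories, the tiny batch, is hypothetical and for illustration only.

```python
import torch
import torch.nn as nn
import torchvision

# Hypothetical downstream step: reuse a (self-supervised) pretrained
# backbone by training only a small linear classifier on its features.
# The ResNet here is a stand-in for a model like SEER.

backbone = torchvision.models.resnet50()
backbone.fc = nn.Identity()            # expose the 2048-d feature vector
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False            # pretrained features stay frozen

probe = nn.Linear(2048, 10)            # e.g., 10 downstream categories
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A tiny labeled batch -- the point is how little labeled data is needed.
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 10, (16,))

with torch.no_grad():
    features = backbone(images)        # features come "for free"
loss = loss_fn(probe(features), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

That division of labor, expensive label-free pretraining done once and cheap labeled adaptation per task, is what makes the approach attractive for fast-moving problems like the evolving harmful content Goyal describes.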
Other possible applications
“We can use the self-supervised models to solve problems in domains which have very limited data or no metadata, like medical imaging,” Goyal said. “Being able to train high-quality, self-supervised models from just random, unlabeled, and uncurated images, we can train models on any internet image, and this allows us to capture diversity of visual content, and mitigate the biases otherwise introduced by data curation. Since we require no labels or data curation for training a self-supervised model, we can quickly create and deploy new models to solve problems.”
As with all of FAIR’s work, this is firmly in the research stage for now, rather than technology that will roll out on your Facebook feed in the next couple of weeks. That means it won’t be immediately deployed to solve the problem of harmful images spreading online. At the same time, it means that conversations about the use of A.I. like this to pick out ever-finer details in uploaded images are premature.
Like it or not, though, image-classifying A.I. tools are getting smarter. The big question is whether they’re used to break things further or start fixing them back up again.