After testing a tool to fight revenge porn in Australia last year, Facebook is expanding the pilot program and adding human reviewers. On Tuesday, May 22, Facebook announced an expanded test of the tool, created in conjunction with a number of partner organizations, which lets users privately upload their own sensitive photos to prevent someone else from posting the images publicly. Most notably, a "specially trained" team member now reviews each report, where Facebook said the initial test used artificial intelligence. While the initial test included Australia and three unannounced countries, Facebook says the tool is now also being tested in the U.S., Canada, and the U.K.
The social media giant suggests that those who consider themselves vulnerable to such tactics preemptively upload their images to the social network. While it might seem counterintuitive, this lets Facebook, and by extension the uploader, get ahead of the problem by creating a hash of the image. A hash, Facebook explains, is like a fingerprint of the image: it allows software to keep other copies off the platform without Facebook permanently keeping a copy of the image on its servers.
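Facebook has not disclosed its exact hashing scheme, so purely as an illustration of the fingerprint idea, here is a minimal Python sketch: it derives a fixed-length digest from the image bytes, and only that digest needs to be retained. A real matching system would more likely use a perceptual hash that survives resizing and re-encoding, which a plain cryptographic hash does not, and the file name here is hypothetical.

```python
import hashlib

def image_fingerprint(image_bytes: bytes) -> str:
    """Return a fixed-length digest identifying the image.

    Only this digest needs to be stored; the image bytes can be
    discarded once the hash exists.
    """
    return hashlib.sha256(image_bytes).hexdigest()

# Hash the photo, then let the original bytes go.
with open("private_photo.jpg", "rb") as f:  # hypothetical file
    stored_hash = image_fingerprint(f.read())
print(stored_hash)  # 64 hex characters -- no recoverable image data
```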
Revenge pornography has been a growing problem for years, especially on Facebook, where existing tools can remove such images only after they have been shared. Facebook is looking to do something far more proactive to prevent the practice, and hopes that people who could be affected will trust its hash-based system to combat it.
The theory behind the new technique is that someone who knows compromising images of them are in the hands of a person who might upload them can block those uploads before they happen. By uploading the image to Facebook privately, the user lets the social network "hash" the media, effectively marking duplicates of that image for immediate takedown should someone else attempt to upload them. Developed in conjunction with a number of nonprofits focused on women's rights and domestic abuse, the service will apply across all of Facebook's platforms, including Facebook itself, its Messenger application, and Instagram.
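To make the blocking step concrete, the sketch below shows one plausible shape for the check, assuming a simple in-memory set of fingerprints; it is not Facebook's actual pipeline. Because an exact cryptographic hash only catches byte-identical copies, matching re-encoded or cropped duplicates, as Facebook's description implies, would require perceptual hashing instead.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Derive a comparable fingerprint from raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

# Fingerprints produced by the private-submission flow (hypothetical store).
blocked_fingerprints: set[str] = set()

def upload_allowed(image_bytes: bytes) -> bool:
    """Reject any upload whose fingerprint matches a submitted image."""
    return fingerprint(image_bytes) not in blocked_fingerprints
```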
Users can go through the organization in their country working with Facebook on the tool to receive a form. In the U.S., those organizations are the Cyber Civil Rights Initiative and the National Network to End Domestic Violence. Other partners include Australia's eSafety Commissioner, the U.K. Revenge Porn Helpline, and YWCA Canada.
The tool allows someone fearing potential revenge porn to submit that form. Facebook then sends the user a secure link to upload the image or images, creates the hash, and deletes the actual image from its servers within seven days.
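Taken together, the described lifecycle is: receive the image over a secure link, derive the hash, and purge the original within seven days. Here is a minimal sketch of that retention rule, with every name assumed for illustration rather than taken from Facebook:

```python
import hashlib
from datetime import datetime, timedelta

RETENTION = timedelta(days=7)  # the deletion window Facebook describes
stored_hashes: set[str] = set()
pending_deletions: list[tuple[datetime, str]] = []  # (deadline, storage key)

def ingest_private_image(image_bytes: bytes, storage_key: str) -> None:
    """Record the image's hash and schedule the original for deletion."""
    stored_hashes.add(hashlib.sha256(image_bytes).hexdigest())
    pending_deletions.append((datetime.utcnow() + RETENTION, storage_key))

def expired_keys(now: datetime) -> list[str]:
    """Return storage keys whose seven-day window has lapsed."""
    return [key for deadline, key in pending_deletions if deadline <= now]
```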
In the initial tests, Facebook said no human actually viewed the photos and that A.I. created the hash; now, the company says one trained staff member will review each report. "One of a handful of specifically trained members of our Community Operations Safety Team will review the report and create a unique fingerprint, or hash, that allows us to identify future uploads of the images without keeping copies of them on our servers," Facebook's head of global safety, Antigone Davis, wrote in a post.
The question is whether, in the wake of Cambridge Analytica and a bug that saved unpublished videos, users trust Facebook enough to upload sensitive images. The tool is still in the testing stages; if the scheme proves successful, it may be rolled out worldwide.
Updated on May 23: Added information regarding the expanded test.