Twitter is following up on the steady stream of safety updates it has rolled out over the past few weeks with more measures aimed at stamping out abuse.
Unlike in the past, the company is now taking matters into its own hands by deploying algorithms to identify abusive behavior. But that doesn’t mean it is doing away with its safety tools that allow users to customize what they see.
On Wednesday, Twitter announced new controls that center on the notifications tab (where users receive alerts about new followers, retweets, likes, and mentions). The platform now lets you activate a new set of advanced filters that can block notifications from certain types of accounts.
The options range from broader controls, such as turning off notifications from all accounts you don't follow, to filters that target anonymous, and potentially nefarious, accounts, including those lacking a profile photo, a verified email address, or a phone number.
Twitter is also expanding the mute feature that lets users remove certain keywords, phrases, or entire conversations from notifications. You can now apply the muting options to your home timeline and set how long they last (24 hours, seven days, a month, or permanently). Twitter says both updates were highly requested by its community.
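To make the scope of these controls clearer, the sketch below shows how the filter and mute options described above might be represented as user settings. The field names, types, and defaults are illustrative assumptions only, not Twitter's actual settings schema or API.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: field names and defaults are assumptions based on
# the options described in the article, not Twitter's real settings schema.

@dataclass
class NotificationFilters:
    only_people_you_follow: bool = False  # broad control: hide alerts from accounts you don't follow
    hide_no_profile_photo: bool = True    # filter accounts still using the default avatar
    hide_unverified_email: bool = True    # filter accounts without a confirmed email address
    hide_unverified_phone: bool = True    # filter accounts without a confirmed phone number

@dataclass
class MuteRule:
    keyword: str                          # word, phrase, or conversation to mute
    apply_to_home_timeline: bool = False  # new option: mutes can now extend to the home timeline
    duration_hours: Optional[int] = None  # e.g. 24 or 7 * 24; None means the mute is permanent

# Example: mute a phrase in notifications and the home timeline for 24 hours
filters = NotificationFilters()
mutes = [MuteRule("season finale", apply_to_home_timeline=True, duration_hours=24)]
```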
In its safety blog post detailing the new initiatives, Twitter admitted it is using machine learning systems to track abusive behavior. Twitter's algorithms are behind the recent timeouts it has been placing on select accounts, which essentially limit the visibility of the alleged offender's activity on the platform.
Twitter had kept quiet about the change, but engineering vice president Ed Ho officially confirmed it in the blog post. Ho says the company uses its own (human) judgment to act on the accounts its algorithms flag, and that the timeout is only placed on accounts that repeatedly tweet abuse at non-followers. The Twitter exec also concedes the new tools could be prone to error at the outset but says they will improve over time.
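As a rough illustration of the policy Ho describes, the timeout logic might look something like the sketch below. The classifier score, threshold, and function names are invented for illustration and are not Twitter's actual implementation; the only grounded parts are the rule that abuse must repeatedly target non-followers and the human review step.

```python
from dataclasses import dataclass

ABUSE_THRESHOLD = 3  # assumed number of flagged tweets before a timeout is considered

@dataclass
class FlaggedTweet:
    author_id: str
    target_id: str
    abuse_score: float  # assumed output of an abuse classifier, between 0 and 1

def should_consider_timeout(flags: list[FlaggedTweet],
                            followers_of_author: set[str]) -> bool:
    """Flag an account for review only if it repeatedly tweets abuse at non-followers."""
    abusive_at_non_followers = [
        f for f in flags
        if f.abuse_score > 0.9 and f.target_id not in followers_of_author
    ]
    return len(abusive_at_non_followers) >= ABUSE_THRESHOLD

def apply_timeout(account_id: str, human_approved: bool) -> None:
    """Temporarily limit the visibility of an account's activity, but only after human review."""
    if human_approved:
        print(f"Limiting visibility of account {account_id} for a set period")
```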
The company has thus far refrained from providing information about its algorithms. When quizzed by Digital Trends about the increasing reliance on machine learning, a Twitter spokesperson said no specifics were being offered because the platform didn't want anyone to take advantage of a system built to ensure safety.
“Our platform supports the freedom to share any viewpoint, but if an account continues to repeatedly violate the … rules, we will consider taking further action,” Ho wrote in the blog post.
Twitter is also promising to bring more transparency to the abuse reporting process. It says users who report policy violations will receive notifications via the Twitter mobile app when their report is received and if the company decides to take further action.