Filter by positivity: This new A.I. could detoxify online comment threads

How do you solve a problem like the internet? It’s a question that, frankly, would have made little sense even a quarter of a century ago. The internet, with its ability to spread both information and democratic values to every far-flung corner of the Earth, was the answer.

Asking for a cure for the internet was like asking for a cure for the cure for cancer. Here in 2020, the picture is a bit more muddied. Yes, the internet is astonishingly brilliant for all sorts of things. But it also poses problems, from the spread of fake news to, well, the digital cesspit that is every YouTube comments section ever. To put it another way, the internet can be all kinds of toxic. How do we clean it up?

There are no simple answers here. Is algorithmic or human-driven censorship the answer? Should we close all comments sections on controversial topics? Does a privately-owned platform really need to feel obligated to provide everyone with a voice? How does blocking fringe opinions for the public good tally with the internet’s dream of giving a voice to everyone?

Researchers at Carnegie Mellon University have created an intriguing new tool they believe might help. It’s an artificial intelligence algorithm that works not by blocking negative speech, but rather by highlighting or amplifying “help speech” to make it easier to find. In the process, they hope it might advance the cybertopian ambition of making the internet a better tool for empowering the voiceless.

A voice for the voiceless

Rohingya refugee camp

The A.I. devised by the team, from Carnegie Mellon’s Language Technologies Institute, sifts through YouTube comments and highlights comments that defend or sympathize with, in this instance, disenfranchised minorities such as the Rohingya community. The Muslim Rohingya people have been subject to a series of ongoing persecutions by the Myanmar government since October 2016. The genocidal crisis has forced more than a million Rohingyas to flee to neighboring countries. It’s a desperate plight involving religious persecution and ethnic cleansing — but you wouldn’t necessarily know it from many of the comments that have shown up on local social media, where hostile posts vastly outnumber supportive ones.

“We developed a framework for championing the cause of a disenfranchised minority — in this case the Rohingyas — to automatically detect web content supporting them,” Ashique Khudabukhsh, a project scientist in the Computer Science Department at Carnegie Mellon, told Digital Trends. “We focused on YouTube, a social media platform immensely popular in South Asia. Our analyses revealed that a large number of comments about the Rohingyas were disparaging to them. We developed an automated method to detect comments championing their cause which would otherwise be drowned out by a vast number of harsh, negative comments.”

“From a general framework perspective, our work differs from traditional hate speech detection work where the main focus is on blocking the negative content, [although this is] an active and highly important research area,” Khudabukhsh continued. “In contrast, our work of detecting supportive comments — what we call help speech — marks a new direction of improving online experience through amplifying the positives.”

To train their A.I. filtering system, the researchers gathered more than a quarter of a million YouTube comments. Using cutting-edge linguistic modeling techniques, they created an algorithm that can scour these comments and rapidly surface the ones that side with the Rohingya community. Automated semantic analysis of user comments is, as you might expect, not easy. In the Indian subcontinent alone, there are 22 major languages. There are also frequent spelling mistakes and non-standard spelling variations to deal with when it comes to assessing language.
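To make the idea concrete, here is a deliberately minimal, hypothetical sketch of the kind of text classification involved. The researchers’ actual models used far richer linguistic features and real labeled data; this toy version uses a tiny Naive Bayes classifier over made-up example comments, purely to illustrate how a system can learn to separate “help speech” from hostile comments.

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(tokenize(text))
    return counts

def classify(text, counts):
    """Pick the label whose word distribution best explains the comment."""
    vocab = {w for c in counts.values() for w in c}
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        # Log-probability with add-one smoothing to avoid zero counts.
        scores[label] = sum(
            math.log((c[w] + 1) / (total + len(vocab)))
            for w in tokenize(text)
        )
    return max(scores, key=scores.get)

# Hypothetical training data standing in for labeled YouTube comments.
data = [
    ("they deserve our support and protection", "help"),
    ("we should help these refugees find safety", "help"),
    ("send them all back now", "hostile"),
    ("they are not welcome here", "hostile"),
]
model = train(data)
print(classify("please support the refugees", model))  # prints "help"
```

A production system would swap the bag-of-words model for multilingual embeddings robust to the spelling variation described above, but the pipeline shape — label examples, train, then score incoming comments — is the same.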

Accentuate the positive

Nonetheless, the A.I. developed by the team was able to greatly increase the visibility of positive comments. More importantly, it could do this far more rapidly than any human moderator, who could not manually sift through large volumes of comments in real time to pin down particular ones. This could be particularly important in scenarios in which one side may have limited skills in a dominant language, limited access to the internet, or higher-priority issues (read: avoiding persecution) that take precedence over participating in online conversations.

“What if you are not there in a global discussion about you, and cannot defend yourself?”

“We have all experienced being that one friend who stood up for another friend in their absence,” Khudabukhsh continued. “Now consider this at a global scale. What if you are not there in a global discussion about you, and cannot defend yourself? How can A.I. help in this situation? We call this a 21st century problem: migrant crises in the era of ubiquitous internet where refugee voices are few and far between. Going forward, we feel that geopolitical issues, climate and resource-driven reasons may trigger new migrant crises and our work to defend at-risk communities in the online world is highly important.”

But is simply highlighting certain minority voices enough, or is this merely an algorithmic version of the trotted-out-every-few-years concept of launching a news outlet that tells only good news? Perhaps in some ways, but it also goes far beyond highlighting token comments without offering ways to address broader problems. With that in mind, the researchers have already expanded the project to look at ways A.I. can be used to amplify positive content in other high-social-impact scenarios. One example is online discussions during heightened political tension between nuclear adversaries. This work, which the team will present at the European Conference on Artificial Intelligence (ECAI 2020) in June, could be used to help detect and surface hostility-diffusing content. Similar technology could be created for a wealth of other scenarios — with suitable tailoring for each.

“The basic premise of how a community can be helped depends on the community in question,” said Khudabukhsh. “Even different refugee crises would require different notions of helping. For instance, crises where contagious disease breakout is a major issue, providing medical assistance can be of immense help. For some economically disadvantaged group, highlighting success stories of people in the community could be a motivating factor. Hence, each community would require different nuanced help speech classifiers to find positive content automatically. Our work provides a blueprint for that.”

No easy fixes

As fascinating as this work is, there are no easy fixes when it comes to solving the problem of online speech. Part of the challenge is that the internet as it currently exists rewards loud voices. Google’s PageRank algorithm, for instance, ranks web pages by their perceived importance, counting the number and quality of links to a page. Trending topics on Twitter are dictated by what the largest number of people are tweeting about. Comments sections frequently highlight the opinions that provoke the strongest reactions.
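The “number and quality of links” idea can be sketched in a few lines. This is a toy power-iteration version of PageRank over an invented three-page graph, not Google’s actual web-scale implementation; it simply shows how rank flows along links so that well-linked pages accumulate the most of it.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / len(pages)
            else:  # each linked page receives an equal share of p's rank
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Toy graph: A and C both link to B, so B ends up ranked highest.
graph = {"A": ["B"], "B": ["C"], "C": ["B"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # prints "B"
```

The point for this article is structural: any ranking scheme built on attention — links, retweets, reactions — mechanically amplifies whatever already gets the most of it.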

The unimaginably large number of voices on the internet can drown out dissenting ones, often marginalizing speakers who, at least in theory, have the same platform as anyone else.

Changing that is going to take a whole lot more than one cool YouTube comments-scouring algorithm. It’s not a bad start, though.

Luke Dormehl
Former Digital Trends Contributor