How AI Can Make the Internet More Civil
Artificial intelligence can be used in numerous ways, but one you might not have anticipated is making sure people on the Internet mind their manners. Rude and inappropriate comments are remarkably common online, so it stands to reason that many companies and developers are looking for ways to minimize them. Let’s look at how some of them have put artificial intelligence to work on the problem.
Comment Sections Have Devolved into Garbage
Whether you’re reading an online article, a news story, or a video page, the comments section probably isn’t somewhere you look for the insight and civil discussion it was intended to host. Instead, there’s an assortment of hate, lewdness, and spammy “advertisements” filled with malware and/or empty promises.
Naturally, the platforms and organizations that host this content aren’t pleased with the situation, so they have taken various steps to curb these comments; some have gone so far as to eliminate their comment sections entirely. Others have taken a more measured approach by leveraging advanced technologies, with the aforementioned artificial intelligence playing a critical role in their strategies.
AI and Automation, Now Involved in Comment Moderation
Let’s start with Google, which offers an AI tool called Perspective API that scores how likely a comment is to be perceived as toxic. Partnering with OpenWeb on a study, Google implemented Perspective API in some news platforms’ comment sections to test its real-time feedback functionality.
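For the curious, Perspective exposes this scoring through a simple REST endpoint. Here’s a minimal sketch in Python; the API key is a placeholder you would obtain from Google, and the use of the requests library and the error handling are our choices, not part of the study:

```python
# Minimal sketch: ask Google's Perspective API how toxic a comment reads.
# YOUR_API_KEY is a placeholder; you'd request a real key from Google.
import requests

API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return Perspective's estimated probability (0 to 1) that text is toxic."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("You are a wonderful person."))  # expect a low score
```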
How Well Did AI Moderation Work?
This study flagged comments that violated community standards on these websites and asked the user to edit before posting, with prompts like “Let’s keep the conversation civil. Please remove any inappropriate language from your comment” or “Some members of the community may find your comment inappropriate. Try again?” As a control, some commenters saw no intervention message.
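The study doesn’t publish its exact wiring, but the basic flow is easy to picture: score the draft comment, and if it crosses a toxicity threshold, show a nudge before it posts. A rough sketch building on the toxicity_score function above; the 0.8 threshold and the 50/50 control split are our assumptions, not figures from the study:

```python
# Hypothetical pre-submit nudge modeled on the study's design. The 0.8
# threshold and 50/50 control split are assumptions, not published values.
import random

NUDGES = [
    "Let's keep the conversation civil. "
    "Please remove any inappropriate language from your comment.",
    "Some members of the community may find your comment inappropriate. "
    "Try again?",
]

def pre_submit_check(comment: str) -> str | None:
    """Return a nudge to show the commenter, or None to let the comment post."""
    if toxicity_score(comment) < 0.8:  # assumed threshold: low scores post as-is
        return None
    if random.random() < 0.5:          # assumed control group: sees no message
        return None
    return random.choice(NUDGES)
```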
According to the study, about a third of flagged commenters went back and edited their comment. Of those, half took the advice to heart and removed the problematic language. However, a quarter doubled down, editing their comment to evade the filter while keeping the message intact; rather than writing “booger,” for instance, the user would change it to “b o o g e r,” or swap in a new word to stand in for an offensive one.
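Catching that sort of evasion generally means normalizing the text before scoring it again. Purely as an illustration (real moderation pipelines handle far more tricks than this), here is a hypothetical pass that collapses spaced-out letters:

```python
# Hypothetical normalization pass for spaced-out evasions like "b o o g e r".
# Purely illustrative: this handles exactly one trick, and naive joining can
# misfire on legitimate runs of single letters.
import re

def collapse_spaced_letters(text: str) -> str:
    """Rejoin runs of single letters separated by spaces before re-scoring."""
    pattern = re.compile(r"\b(?:[A-Za-z] ){2,}[A-Za-z]\b")
    return pattern.sub(lambda m: m.group(0).replace(" ", ""), text)

print(collapse_spaced_letters("What an utter b o o g e r"))
# -> "What an utter booger"
```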
The rest either revised the wrong part of their comment, misunderstanding what had been flagged, or redirected their comment at the AI feature itself, complaining about censorship rather than discussing the content they came to comment on.
These results were largely in line with a similar study Google conducted with Coral, which saw toxic language edited out in 36 percent of cases. That said, a separate experiment at The Southeast Missourian showed a 96 percent reduction in “very toxic” comments after this kind of feedback was provided.
Ultimately, the number of people who posted their comment unedited anyway, or simply chose not to post at all, shows that these gentle reminders are only effective to a degree, and mostly with people who never meant any offense in the first place.
Fortunately, there is also some indication that the number of so-called Internet trolls is overestimated, and that most inflammatory comments come from otherwise ordinary people in the grip of a strong emotion. This interpretation was reinforced by another study, conducted with Wikipedia, which found that most offensive content was reactive and isolated.
Besides, compared to the scale of the Internet, the 400,000 comments OpenWeb and Google sampled are a vanishingly small slice of online conversation.
YouTube, one of Google’s most prominent properties, has been especially active when it comes to comment moderation; its comment sections are notorious for exactly the kind of problematic dialogue these tools aim to correct.
This kind of approach isn’t unique to Google and its subsidiaries, either. Instagram has adopted machine learning tools that can identify offensive comments and hide them from users who have enabled the comment filtering option in their settings.
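Instagram hasn’t published the internals, but the reader-side pattern is straightforward: classify each comment, and hide the likely-offensive ones only for users who opted in. A hypothetical sketch, reusing the earlier toxicity_score as a stand-in for Instagram’s own classifier; the function names and threshold are ours:

```python
# Hypothetical reader-side filtering, modeled on Instagram's opt-in hiding.
# toxicity_score (defined earlier) stands in for Instagram's own classifier;
# the 0.8 threshold and function names are assumptions for illustration.
def visible_comments(comments: list[str], filter_enabled: bool) -> list[str]:
    """Hide likely-offensive comments, but only for users who opted in."""
    if not filter_enabled:
        return comments
    return [c for c in comments if toxicity_score(c) < 0.8]
```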
Is This Really Such a Bad Problem?
In a word: yes.
Think about it: how many times have you seen an article posted without a comment section, or a social media account with comments disabled? How often have you avoided looking at the comments entirely, simply because of the reputation comment sections have earned?
It therefore makes sense that various platforms, Google included, would invest heavily in technology to keep the Internet relatively clean. After all, the Internet is financed through advertisements: the longer a user spends on a website, the more money that website can earn from ads. In turn, it makes sense for a website to make the user’s experience as pleasant as possible, and that just doesn’t happen in comment sections filled with hate, spam, and other toxicity.
In this case, it appears that one of the most effective ways to fight a technology problem is with more technology. As an MSP, we’re very familiar with this concept; we put our technology to work every day as we help you maintain yours. Give us a call at (505) 899-4600 to find out how we can help your business processes. Leave us a comment with your point of view here, too… just please, keep it respectful!