18-03-2020 6:38 am Published by Nederland.ai

A day after Facebook announced that it would rely more heavily on artificial-intelligence-driven content moderation, some users are complaining that the platform is making mistakes and blocking a slew of legitimate posts and links, including links to news articles about the coronavirus pandemic, flagging them as spam.

As they post, users seem to be getting a message that their content – sometimes just a link to an article – violates Facebook's community standards. “We are working hard to limit the spread of spam, as we don't want to allow content that is designed to deceive, or that attempts to mislead users, in order to increase viewership,” the platform's rules read.

The problem comes as social media platforms continue to fight Covid-19-related disinformation. Some on social media are now floating the idea that Facebook's decision to send its contracted content moderators home could be the cause of the problem.

Facebook rejects that idea: Guy Rosen, the company's vice president for integrity, tweeted that “this is a bug in an anti-spam system, unrelated to changes in our content moderation staff.” Rosen said the platform is restoring the affected posts.

Recode contacted Facebook for comment and we will update this post if we hear anything.

The problem on Facebook is a reminder that any kind of automated system can still mess up, and that fact may become more apparent as more companies, including Twitter and YouTube, lean on automated content moderation during the coronavirus pandemic. The companies say they are doing this to comply with social distancing, as many of their employees are required to work from home. This week, they also warned users that, because of the increase in automated moderation, more posts could be removed in error.

In a blog post on Monday, YouTube told creators that the platform will turn to machine learning to help with “some of the work normally done by reviewers.” The company warned that the transition means certain content will be removed without human review, and that both users and creators may see videos taken down that do not violate any YouTube policy.

The company also warned that “unreviewed content may not be available through search, on the home page, or in recommendations.”

Likewise, Twitter has told users that the platform will increasingly rely on automation and machine learning to remove “potentially abusive and manipulative content.” Still, the company acknowledged that artificial intelligence will be no substitute for human moderators.

“We want to be clear: as we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may lead to us making mistakes,” the company said in a blog post.

To compensate for possible errors, Twitter said it will not permanently suspend accounts “based solely on our automated enforcement systems.” YouTube is also making adjustments. “We will not issue strikes on this content except in cases where we have high confidence that it is violative,” the company said, adding that creators will have an opportunity to appeal these decisions.

Facebook, meanwhile, says it is working with its partners to send its content moderators home and ensure that they are paid. The company is also exploring temporary remote content review for some of its moderators.

“We don't expect this to affect people who use our platform in any noticeable way,” the company said in a statement Monday. “That said, there may be some limitations to this approach, and we may see slightly longer response times and make more mistakes as a result.”

The move to AI moderators is no surprise. For years, technology companies have pushed automated tools as a way to complement their efforts to combat the offensive and dangerous content that can fester on their platforms. While AI can help speed up content moderation, the technology can also struggle to understand the social context of posts or videos and, as a result, make inaccurate judgments about their meaning. In fact, research has shown that algorithms built to detect hate speech can themselves be biased against black people, and the technology has been widely criticized for being vulnerable to discriminatory decision-making.
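
To illustrate why context matters, here is a deliberately naive sketch in Python – entirely hypothetical, and unrelated to any system Facebook actually runs – of a keyword-based spam scorer. Because it only counts trigger words and understands nothing about intent, it flags a legitimate pandemic news headline almost as readily as an obvious scam:

```python
# Hypothetical toy example -- not Facebook's actual system.
# A naive trigger-word spam scorer with no notion of context.

SPAM_TRIGGERS = {"cure", "miracle", "click", "free", "virus", "outbreak"}

def naive_spam_score(text: str) -> float:
    """Return the fraction of words that match a spam trigger list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in SPAM_TRIGGERS)
    return hits / len(words)

posts = [
    "Click here for a FREE miracle virus cure!!!",         # actual spam
    "Health officials track the virus outbreak in Italy",  # legitimate news
]

for post in posts:
    score = naive_spam_score(post)
    verdict = "flagged as spam" if score > 0.2 else "allowed"
    print(f"{score:.2f}  {verdict}:  {post}")
```

Real moderation models are far more sophisticated than this, but the failure mode is the same in kind: without context, legitimate content that happens to share surface features with spam gets swept up as a false positive.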

Normally, the shortcomings of AI have led platforms to rely on human moderators, who can better understand nuance. Human content reviewers, however, are by no means a perfect solution either, especially since they have to spend long days analyzing traumatic, violent and offensive words and images. Their working conditions have recently come under scrutiny.

But in the era of the coronavirus pandemic, having reviewers work side by side in an office could not only be dangerous for them, it could also increase the risk of the virus spreading further among the general public. Keep in mind that these companies have been reluctant to let content reviewers work from home, because the reviewers have access to a lot of personal user information, not to mention highly sensitive content.

Amid the new coronavirus pandemic, content moderation is just another area where we are turning to AI for help. As people stay indoors and move their personal interactions online, we will no doubt get a rare look at how well this technology performs as it gains more control over what we see on the world's most popular social platforms. Without the influence of the human reviewers we are used to, this could be a boon for the robots.
