Facebook, YouTube Warn Of More Errors As Machines Replace Moderators



Facebook and other tech companies sent workers home to protect them from the coronavirus. That creates new challenges for how they police harmful content on their platforms.

Glenn Chapman/AFP via Getty Images

Facebook, YouTube and Twitter are relying more on automated systems to flag content that violates their rules, as tech workers are sent home to slow the spread of the coronavirus.

But that shift could mean more errors: some posts or videos that should be taken down may stay up, while others may be removed incorrectly. It comes at a time when the volume of content the platforms must review is growing by leaps and bounds, as they clamp down on misinformation about the pandemic.

Tech companies have said for years that they want computers to take on more of the work of clearing disinformation, violence and offensive content from their platforms. Now the coronavirus outbreak is speeding up their use of algorithms rather than human reviewers.

"We're seeing that play out in real time at a scale that I don't think a lot of those companies probably expected at all," said Graham Brookie, director and editor-in-chief of the Atlantic Council's Digital Forensic Research Lab.

Facebook CEO Mark Zuckerberg told reporters that automated review of some content means "we may be a bit less effective in the near term while we're adjusting to this."

Twitter and YouTube are also sounding caution about the shift to automated moderation.

"While we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes," Twitter said in a blog post. It added that no accounts will be permanently suspended based solely on the actions of the automated systems.

YouTube said its automated systems "are not always as accurate or granular in their analysis of content as human reviewers." It warned that more content may be removed, "including some videos that may not violate policies." And, it added, appeals of removed videos will take longer to review.

Facebook, YouTube and Twitter rely on tens of thousands of content moderators to monitor their websites and apps for material that breaks their rules against spam, nudity, hate speech and violence. Many of the moderators are not full-time employees of the companies, but contractors who work for staffing firms.

Now these workers are being sent home. But some content moderation cannot be done outside the office, for privacy and security reasons.

For the most sensitive categories, such as suicide, self-injury, child exploitation and terrorism, Facebook says it is shifting work from contractors to full-time employees and is putting more people on those areas.

There is also greater need for moderation as a result of the pandemic. Facebook says use of its apps, including WhatsApp and Instagram, is surging. The platforms are under pressure to keep false information, including dangerous fake health claims, from spreading.

The World Health Organization calls the situation an "infodemic," in which an overabundance of information, both true and false, makes it hard for people to find trustworthy information.

The tech companies "are dealing with more information with less staff," Brookie said. "That's why you've seen the decisions to shift toward more automated systems. Because frankly, there are not enough people to keep up with the amount of information that's current."

That makes the platforms' decisions even more important now, he said. "I think we should all rely on more moderation, not less moderation, to make sure that the vast majority of people are connecting with objective, science-based facts."

Some Facebook users raised alarms that automated review was causing problems.

When they tried to post links to mainstream news sources such as The Atlantic and BuzzFeed, they got notifications that Facebook thought the posts were spam.

Facebook said the posts were mistakenly flagged as spam because of a bug in its automated spam filter.

Zuckerberg denied that the problem was related to the shift of content moderation from humans to computers.

"This is a completely separate system on spam," he said. "This is not about any near-term change; it was just a technical error."

Published Tue, 31 Mar 2020 09:06:00 +0000
