Can’t remove violent content completely from Facebook, says Global Head Everson


Facebook has been under pressure for not being able to filter hate content.

Facebook has acknowledged that it cannot completely rid its platform of content that incites hate and violence. The admission comes after a prolonged public debate over the company's inability to weed out violent content.

Carolyn Everson, Global Head of Facebook, said in an interview with CNBC on Tuesday that the social media company has a zero-tolerance policy toward content that spreads hate and violence, but cannot guarantee zero occurrences.

“Our policy is zero tolerance — that doesn’t mean zero occurrences,” she said.

In recent times, Facebook has been under pressure for failing to filter violent content, including rape, murder, and other crimes broadcast live on the platform.

A few days earlier, Facebook faced flak for failing to prevent a user from live-streaming the murder of a baby in Thailand, and for another murder in Cleveland; both videos later went viral on social media.

“There are bad things that are going to happen. That is reality. That is life. Facebook is a reflection of that life. It is our job to make the community as safe as possible,” Everson said in the CNBC interview.

According to Facebook, it has already created a database of blacklisted websites whose content is deemed inappropriate for in-stream video ads, ads in Instant Articles, and ads served through its Audience Network of partner websites.

Brands now have more flexibility in choosing what kinds of videos their ads should and should not appear alongside.

“Facebook is a safe place for brands, let me start there. The second thing I would say is we have zero tolerance, zero tolerance for hate speech and terrorism. There is no place for any of that on our platforms, and we have already made a commitment,” Everson said.

Facebook has been investing heavily in building artificial intelligence to detect and remove offensive posts. With this disclosure, it has signaled that there is a long way to go before its AI technology is fine-tuned enough to completely weed out offensive content.

For now, Facebook relies largely on its 1.9 billion users to report offensive content.

Last month, Facebook announced that it would hire more than 3,000 additional employees to monitor and remove content that spreads violence, hate, and contempt from its platform.