A disturbing new report reveals Facebook moderators are dropping the ball both in keeping underage kids off the social media platform and in removing graphic, abusive content from the site.

British news outlet Channel 4 sent one of its reporters to pose as a moderator at CPL Resources, a third-party contractor in Dublin that has overseen content for Facebook since 2010, according to Business Insider. The reporter went through the training course, where moderators familiarize themselves with Facebook’s community standards and learn how to determine whether a comment, video or image should be ignored, marked as “disturbing” – which limits its reach to users over the age of 18 – or deleted altogether.

The reporter found that CPL moderators allowed extreme posts involving child abuse, violence and racism to remain on the site – even after they were reported by users. Additionally, moderators admitted to the reporter that they wouldn’t remove an account created by a child under 13 years of age (Facebook’s minimum age for users) unless the child admitted to being underage on their account, the Telegraph reported.

“We have to have an admission that the person is under-age. If not, we just pretend that we are blind and we don’t know what underage looks like,” a trainer tells the reporter in the Channel 4 footage. “If this person was a kid, like a 10-year-old we don’t care, we still action the ticket as if it were an adult.”

This rule applied even to underage accounts where the user was posting pictures of self-harm, according to the Telegraph. That is troubling, considering some 700,000 children – about half of all 11- and 12-year-olds – have a social media profile, according to a study by the UK regulator Ofcom. The study also found that 8 in 10 parents whose underage children use Instagram, Facebook or Snapchat were unaware of the platforms’ age restrictions.

Back to that graphic content. The reporter found multiple examples of extreme, offensive material that had been left on the site even after it was reported.

One video shows an adult male brutally beating a toddler. A child advocate reported the video back in 2012, but she received a reply explaining that it did not violate Facebook’s policy.

The video is still accessible today, even though the Community Standards listed on the site read, “People need to feel safe in order to build community. We are committed to removing content that encourages real-world harm, including (but not limited to) physical, financial, and emotional injury.”

Another example includes racist, derogatory comments directed at Muslim immigrants; yet another post depicts violence toward a young girl for having a crush on a “negro” boy.

When Channel 4 brought its findings to Facebook, the company responded quickly.

“It’s clear that some of what is shown in the program does not reflect Facebook’s policies or values, and falls short of the high standards we expect,” Richard Allan, Facebook’s vice president of public policy, said in a statement to Business Insider. “We take these mistakes in some of our training processes and enforcement incredibly seriously and are grateful to the journalists who brought them to our attention. Where we know we have made mistakes, we have taken action immediately. We are providing additional training and are working to understand exactly what happened so we can rectify it.”

The reporter’s experience is detailed in a new documentary called “Inside Facebook: Secrets of the Social Network.”

Parents should familiarize themselves with Facebook’s Community Standards, which we’ve listed below. Discuss the standards with your children and teach them how to report inappropriate content, and foster a dialogue so they feel comfortable coming to you when something they see makes them uncomfortable.

WebSafety is here to help parents and children navigate the digital world together. Using our social media monitoring feature, parents can view every photo posted to their child’s Facebook account and will receive notifications each time their child posts. Comments are visible and are continuously run through WebSafety’s words of concern database for additional alerts.
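For readers curious how a keyword-based alert of this kind might work under the hood, here is a minimal, hypothetical sketch in Python. It is illustrative only: the word list, function names, and matching logic are assumptions for the example, not WebSafety’s actual implementation.

```python
import re

# Illustrative word list only; a real "words of concern" database would be
# much larger and professionally curated.
WORDS_OF_CONCERN = {"bully", "hurt myself", "hate you"}

def find_words_of_concern(comment: str) -> set[str]:
    """Return any flagged phrases that appear in the comment (case-insensitive,
    matched on word boundaries so 'hate you' doesn't match 'whatever')."""
    text = comment.lower()
    return {
        phrase
        for phrase in WORDS_OF_CONCERN
        if re.search(r"\b" + re.escape(phrase) + r"\b", text)
    }

def should_alert_parent(comment: str) -> bool:
    """Trigger an alert when a comment matches at least one flagged phrase."""
    return bool(find_words_of_concern(comment))

if __name__ == "__main__":
    sample = "Nobody likes you, I hate you"
    print(find_words_of_concern(sample))  # {'hate you'}
    print(should_alert_parent(sample))    # True
```

A production system would likely go beyond exact phrase matching (handling misspellings, slang, and context), but the core idea – scanning each new comment against a maintained list of concerning terms and notifying the parent on a match – is what this sketch shows.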

FROM FACEBOOK:

The goal of our Community Standards is to encourage expression and create a safe environment. We base our policies on input from our community and from experts in fields such as technology and public safety. Our policies are also rooted in the following principles:

Safety: People need to feel safe in order to build community. We are committed to removing content that encourages real-world harm, including (but not limited to) physical, financial, and emotional injury.

Voice: Our mission is all about embracing diverse views. We err on the side of allowing content, even when some find it objectionable, unless removing that content can prevent a specific harm. Moreover, at times we will allow content that might otherwise violate our standards if we feel that it is newsworthy, significant, or important to the public interest. We do this only after weighing the public interest value of the content against the risk of real-world harm.

Equity: Our community is global and diverse. Our policies may seem broad, but that is because we apply them consistently and fairly to a community that transcends regions, cultures, and languages. As a result, our Community Standards can sometimes appear less nuanced than we would like, leading to an outcome that is at odds with their underlying purpose. For that reason, in some cases, and when we are provided with additional context, we make a decision based on the spirit, rather than the letter, of the policy.