Explaining the new Facebook Community Standards

Facebook attracted a huge amount of media attention earlier this week with the release of its newly revised “Community Standards”.

How many people actually made an effort to find out what the changes mean?  This blog attempts to explain what the changes are and what you need to be aware of.  You might also find it useful as a general guide to what Facebook think is acceptable.

To begin with, it is useful to understand why Facebook has found it necessary to clarify, and change, existing guidelines (though many remain exactly as they were).  We’ve all seen content on Facebook that we might question.  Be it content that offends your beliefs or that directly abuses you as a user, there aren’t many people who haven’t wondered how some Facebook content is allowed to remain in place.

Some of the content has courted significant media controversy.  Elements of this have been damaging to Facebook.  The media tend to focus on a numbers game of “how many people have left Facebook”, but the reality is that many of the statistics are plucked out of thin air and Facebook is still a thriving community.

However, Facebook is a business that needs to keep advertisers happy.  If they begin to feel that they’re using a medium that contains offensive content, they’re likely to want to distance themselves from the brand.  Facebook can’t afford to let advertisers leave.  It has performed remarkably well since flotation on the stock exchange but it needs to continue delivering a return to shareholders and exhibiting growth.

We’re not dismissing the fact that keeping users happy is of paramount importance, because without them there wouldn’t be an audience for advertisers, but there have been enough incidents in recent years for Facebook to see fit to amend parts of its policies.

In terms of what the Community Standards exist for, Facebook explain that they fall into four broad categories:

  • Helping to keep you safe
  • Encouraging respectful behaviour
  • Keeping your account and personal information secure
  • Protecting your intellectual property

However, the guidelines take particular note of the following areas and Facebook policy on them:

  • Self-injury
  • Dangerous organisations
  • Bullying and harassment
  • Criminal activity
  • Sexual violence and exploitation
  • Nudity
  • Hate speech
  • Violence and graphic content

So, what are the changes, what exists that you might not already be aware of and what are the implications for you?

Firstly, most users are unlikely to need to moderate their behaviour.  Despite what you might read online or in the press, your average Facebook user is well behaved and respectful.  The policies are intended to counter the minority.

Helping to keep you safe

By far the longest of all the sections, the breadth of its coverage is a surprise.  You’d be forgiven for thinking that ‘safety’ might only include your online safety and privacy, but Facebook have placed a number of policies under it.

Direct threats

Looking at the first category of how Facebook monitors use, under “Keeping you safe”, you should be aware that the official Facebook description of this is as follows:

“We remove content, disable accounts, and work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety.”

They do act if they need to.  The definition of a direct threat, however, is something many people are probably unaware of.

So what do Facebook class as a “direct threat”?  Again, they’re reasonably clear on this:

“We carefully review reports of threatening language to identify serious threats of harm to public and personal safety. We remove credible threats of physical harm to individuals. We also remove specific threats of theft, vandalism, or other financial harm.   We may consider things like a person’s physical location or public visibility in determining whether a threat is credible. We may assume credibility of any threats to people living in violent and unstable regions.”

If you’re going to threaten someone on Facebook, be it physically or even financially, you can expect any report of your actions to lead to removal of posts or accounts.  The financial threat aspect of this is an interesting one.  We’ve lost count of the number of times we’ve seen posts from people on company pages that might say something similar to “you’re a terrible company and I’ll make it my mission to put you out of business”.  Does that fall under the threat category?  It is a financial threat, after all.

Herein lies an issue: Facebook is still reliant on users reporting threats.  How many times do you see something threatening but not report it?  We often hear of complaints on Facebook where the majority of users simply don’t act.  They might see a threat or be the victim of one, but seldom do they actually report it.  You’ve got to do so if you want Facebook to be able to rid the network of those who don’t abide by the guidelines.

Self-injury

The issue of self-injury is a serious one on Facebook and many other channels.  Suicidal users are just one part of it and the broader Facebook description of this area is:

“We don’t allow the promotion of self-injury or suicide. We work with organizations around the world to provide assistance for people in distress. We prohibit content that promotes or encourages suicide or any other type of self-injury, including self-mutilation and eating disorders. We don’t consider body modification to be self-injury. We also remove any content that identifies victims or survivors of self-injury or suicide and targets them for attack, either seriously or humorously. People can, however, share information about self-injury and suicide that does not promote these things.”

To start with, you can see how seriously Facebook takes the well-reported issue of encouraging suicide.  If you see it happening, you need to report it.  They’re making strenuous efforts to combat it, but they still need the help of users.

You might find it interesting to see eating disorders included in this section, but ultimately it is another form of self-injury and one that users need to be aware of.

Dangerous Organisations

What constitutes a dangerous organisation?  There have been a plethora of media reports about Facebook pages promoting all manner of dangerous activities, but the Facebook definition leaves little room for doubt:

“We don’t allow any organizations that are engaged in the following to have a presence on Facebook:

  • Terrorist activity, or
  • Organized criminal activity.

We also remove content that expresses support for groups that are involved in the violent or criminal behavior mentioned above. Supporting or praising leaders of those same organizations, or condoning their violent activities, is not allowed.

We welcome broad discussion and social commentary on these general subjects, but ask that people show sensitivity towards victims of violence and discrimination.”

Much of the above is self-explanatory, but the line about expressing support for groups involved in the above, or praising their leaders, is one that many are probably unaware of.  Just because you’re not the owner or author of the original content doesn’t mean you won’t be held to account for supporting it.

The last part of the definition is also one that a lot of Facebook users might wish to take note of.  The requirement to show sensitivity towards victims of violence or discrimination extends beyond organised groups, and how often have you seen someone leave an offensive comment on a page where a person is discriminated against?  We know that we’ve seen plenty of such activity, and the guidelines should encourage more users to report those who act in an insensitive manner.

Bullying or harassment

The first line of this definition says it all.  If you’re the type of individual who thinks it is acceptable to bully or harass someone on Facebook, you’re going to find yourself under scrutiny (and rightly so):

“We don’t tolerate bullying or harassment. We allow you to speak freely on matters and people of public interest, but remove content that appears to purposefully target private individuals with the intention of degrading or shaming them. This content includes, but is not limited to:

  • Pages that identify and shame private individuals,
  • Images altered to degrade private individuals,
  • Photos or videos of physical bullying posted to shame the victim,
  • Sharing personal information to blackmail or harass people, and
  • Repeatedly targeting other people with unwanted friend requests or messages.

We define private individuals as people who have neither gained news attention nor the interest of the public, by way of their actions or public profession.”

If you read the list of bullet points above, you should be left under no illusion that this is an area Facebook intends to clamp down on.  We don’t feel we need to say a great deal about the above points because the clarity is there.  Facebook wants to change the perception that it can be used to bully people, and the guidelines make it much easier for people to report those who think bullying will be tolerated.

You should also take note of the fact that the list carries a caveat in the form of “not limited to”.  Bullying reports will be taken seriously, and just because a behaviour doesn’t fall into one of the listed categories doesn’t mean it will be permitted.

Attacks on public figures

Private individuals are covered by the bullying and harassment section shown above, so what about celebrities or those in the public eye?  Facebook has a surprisingly short definition on attacking such individuals:

“We permit open and critical discussion of people who are featured in the news or have a large public audience based on their profession or chosen activities. We remove credible threats to public figures, as well as hate speech directed at them – just as we do for private individuals.”

That’s it.  Hate speech will be acted on, as will credible threats, but the level of protection for someone who puts themselves in the public eye is noticeably lacking.  It may be an area in which we’ll see development, and individually reported posts will still be subject to Facebook’s general guidelines on being respectful, but it does appear that Facebook is taking a “there to be shot at” approach, and we’re surprised they haven’t provided more safeguards.

Perhaps we’ll see a greater level of protection in the future, but for now you’re not going to see a great deal of change in the way many celebrities have to deal with some rather questionable (to say the least) posts on their Facebook pages.

Criminal Activity

It never ceases to amaze us how some users appear to think it is both reasonable and clever to offer a form of adulation to those involved in criminal activity.  We’ve seen it with riots, looting and those damaging property.  Facebook has the following to say about it now:

“We prohibit the use of Facebook to facilitate or organize criminal activity that causes physical harm to people, businesses or animals, or financial damage to people or businesses. We work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety.

We also prohibit you from celebrating any crimes you’ve committed. We do, however, allow people to debate or advocate for the legality of criminal activities, as well as address them in a humorous or satirical way.”

So, you can crack the same old jokes that you’ll see everywhere on Facebook.  You can use satire.  However, you need to be very careful: using Facebook to organise criminal activity, or bragging about something you’ve done that is against the law, will not be tolerated.

The direct mention of harm to animals reflects some of what has been published on Facebook in the past, and shows they’re trying to address it.

Again we see mention of financial damage to businesses too, something becoming increasingly common in the guidelines and reflective of events seen in the past.

Sexual Violence and Exploitation

Revenge publication of imagery, and threats to use such imagery, are covered by the Facebook guideline in this area.  There has been enough in the press of late involving celebrities to make this a potentially misunderstood area, but Facebook give the guidance you would expect as well as some you might not:

“We remove content that threatens or promotes sexual violence or exploitation. This includes the sexual exploitation of minors, and sexual assault. To protect victims and survivors, we also remove photographs or videos depicting incidents of sexual violence and images shared in revenge or without permissions from the people in the images.

Our definition of sexual exploitation includes solicitation of sexual material, any sexual content involving minors, threats to share intimate images, and offers of sexual services. Where appropriate, we refer this content to law enforcement. Offers of sexual services include prostitution, escort services, sexual massages, and filmed sexual activity.”

Solicitation is something few might have encountered on Facebook, but the issue is there and they’ve tried to make the rule as clear as possible in this definition.

Videos depicting sexual violence have been well covered by the press, and Facebook makes clear that this isn’t an area in which such content will be tolerated.

The revenge element of this type of content is perhaps the newest of all.  It is an area in which a number of people have fallen victim in recent years, and the words “without permissions from the people in the images” can cover a lot more than revenge acts.  It remains an area in which we’re bemused as to why people would think it is appropriate to publish such material and not expect to be held accountable for it.

Regulated Goods

There is a distinctly USA feel about this section in particular, but it serves to highlight that Facebook do take regulated goods seriously and that the policy applies throughout the world.

In other words, don’t try to sell anything subject to regulation on Facebook, or, as they put it:

“We prohibit any attempts by unauthorized dealers to purchase, sell, or trade prescription drugs and marijuana. If you post an offer to purchase or sell firearms, alcohol, tobacco, or adult products, we expect you to comply with all applicable laws and carefully consider the audience for that content. We do not allow you to use Facebook’s payment tools to sell or purchase regulated goods on our platform.”

If you’re not authorised to sell regulated goods, don’t try it in any way whatsoever.  We’re surprised to see they have found it necessary to say you can’t use Facebook’s own payment tools to do so!  That said, we applaud the clarity.  For those of you wondering what is meant by “Facebook payment tools” (especially those outside of the USA), there are a variety of ways to buy things on Facebook in the United States that aren’t yet available in other parts of the world.  They’ll be spreading that facility before too long though.

Encouraging respectful behaviour

This is a minefield as far as definitions go.  How, with over 1.1 billion users across the planet, do you account for what people might consider to be respectful behaviour?  Society has a mass of issues with this in general, so imagine trying to police or issue guidelines about it on a platform like Facebook.

This is what Facebook says by way of introduction to it, in very broad terms:

“People use Facebook to share their experiences and to raise awareness about issues that are important to them. This means that you may encounter opinions that are different from yours, which we believe can lead to important conversations about difficult topics. To help balance the needs, safety, and interests of a diverse community, however, we may remove certain kinds of sensitive content or limit the audience that sees it.”

Debate is encouraged, but they’ll remove content if they see fit, or stop it being seen by a wider audience.  That is very different from the original guidelines of years ago, which could easily have been mistaken for applying a rule of “anything goes”.

Because of this, they go into detail on specific areas, starting with:

Nudity

Facebook accept that nudity may be present for artistic reasons, amongst others.  However, that doesn’t mean they ignore the fact it might still offend some users.  Facebook has started to respect different cultures, and this policy guidance shows the outline of it:

“People sometimes share content containing nudity for reasons like awareness campaigns or artistic projects. We restrict the display of nudity because some audiences within our global community may be sensitive to this type of content – particularly because of their cultural background or age. In order to treat people fairly and respond to reports quickly, it is essential that we have policies in place that our global teams can apply uniformly and easily when reviewing content. As a result, our policies can sometimes be more blunt than we would like and restrict content shared for legitimate purposes. We are always working to get better at evaluating this content and enforcing our standards.

We remove photographs of people displaying genitals or focusing in on fully exposed buttocks. We also restrict some images of female breasts if they include the nipple, but we always allow photos of women actively engaged in breastfeeding or showing breasts with post-mastectomy scarring. We also allow photographs of paintings, sculptures, and other art that depicts nude figures. Restrictions on the display of both nudity and sexual activity also apply to digitally created content unless the content is posted for educational, humorous, or satirical purposes. Explicit images of sexual intercourse are prohibited. Descriptions of sexual acts that go into vivid detail may also be removed.”

As you can see from the above, Facebook are the first to acknowledge that this policy needs work.  Imagine trying to police nudity on a global scale; it isn’t an easy task.  What the policy does show is that they’ve realised you can’t sit on the fence in some cases.  The mention of the word “blunt” says it all.  There has to be an element of rigidity to the policy, something Facebook has never really been comfortable with on a network where the ability to express oneself is considered a prerequisite to its success.

They’ve also responded to the increasing amount of digitally created content, which now falls under the same restrictions unless it is posted for the usual exceptions of education, humour or satire.

The policy is an interesting one because of how it shows Facebook has evolved along with the type of content published on it.

Hate speech

If you were to look at each of the individual categories and assess the definitions given to them, you’d find hate speech has been given a colossal amount of thought.  This is reflected in how detailed Facebook has been in demonstrating what it deems should come under scrutiny, with the following written about it:

“Facebook removes hate speech, which includes content that directly attacks people based on their:

  • Race,
  • Ethnicity,
  • National origin,
  • Religious affiliation,
  • Sexual orientation,
  • Sex, gender, or gender identity, or
  • Serious disabilities or diseases.

Organizations and people dedicated to promoting hatred against these protected groups are not allowed a presence on Facebook. As with all of our standards, we rely on our community to report this content to us.

People can use Facebook to challenge ideas, institutions, and practices. Such discussion can promote debate and greater understanding. Sometimes people share content containing someone else’s hate speech for the purpose of raising awareness or educating others about that hate speech. When this is the case, we expect people to clearly indicate their purpose, which helps us better understand why they shared that content.

We allow humor, satire, or social commentary related to these topics, and we believe that when people use their authentic identity, they are more responsible when they share this kind of commentary. For that reason, we ask that Page owners associate their name and Facebook Profile with any content that is insensitive, even if that content does not violate our policies. As always, we urge people to be conscious of their audience when sharing this type of content.

While we work hard to remove hate speech, we also give you tools to avoid distasteful or offensive content.  You can also use Facebook to speak up and educate the community around you. Counter-speech in the form of accurate information and alternative viewpoints can help create a safer and more respectful environment.”

If you’ve just read through all of that, well done!  Now you understand what we mean about the amount of thought that has gone into it.

It is evidence of the issues Facebook has recognised and the sheer volume of content that falls under this category.  Unfortunately, there is a lot of prejudice in the world and some people seem to think it is their right to express hate in any way they deem fit – even if it breaks the law.

You might also have noticed that Facebook go as far as saying that any organisation that promotes hate towards the groups identified is not allowed a presence on the network.  Then you get to the crux of the matter that we’ve focused on so many times in this article: Facebook rely on you, the user, to report it to them.

They’ve stressed this because it is impossible for them to police this section without the help of members.  How many times have you seen someone pass a comment that you find offensive, and potentially in breach of the law, while they seem to feel it is their legal right to express the opinion?  Facebook is making moves to let these people know that this isn’t how it views free speech.

Challenging a viewpoint is acceptable.  Using the hate speech of someone else as an example of what is offensive is also fine, but if you’re the originator of it you’re at much greater risk of having your account closed than ever before.

Violence and graphic content

There is much about this section that shows Facebook is still prepared to accept content that might offend if it is used in the right context or with good intention.

You might think that they’d be quick to remove violence and graphic content bearing in mind some of what you’ve read in this blog so far, and you’d be right.  However, there is less clarity with this category than many of the others.  Here is what Facebook say about it:

“Facebook has long been a place where people share their experiences and raise awareness about important issues. Sometimes, those experiences and issues involve violence and graphic images of public interest or concern, such as human rights abuses or acts of terrorism. In many instances, when people share this type of content, they are condemning it or raising awareness about it. We remove graphic images when they are shared for sadistic pleasure or to celebrate or glorify violence.

When people share anything on Facebook, we expect that they will share it responsibly, including carefully choosing who will see that content. We also ask that people warn their audience about what they are about to see if it includes graphic violence.”

Facebook are making it clear that you need to consider your audience, but they’re still relying heavily on users to decide whether something is in the interest of promoting good or to “celebrate or glorify violence.”

Recent incidents in Paris saw a large amount of content shared that many felt was simply too detailed.  Many found the images and videos of the terrorist attack on Charlie Hebdo to be more than they needed to see.  Others felt it required airing in the interests of building support for the victims and for free speech.

Of all the areas covered, we believe this and hate speech will be the ones that continue to cause the most issues.  That isn’t to say we think Facebook has got the policy wrong; we simply don’t think enough people will understand the true meaning of it.

Keeping your account and personal information secure

The title of this is the easy part.  However, note that it uses the word “your”.  Now read what the main introduction to it says under the Facebook policy:

“We work hard to help keep your account secure and protect your personal information. By joining Facebook, you agree to use your authentic name and identity. You may not publish the personal information of others without their consent.”

Did you spot the killer line?  Yes, you’re not permitted to publish the personal information of others without their consent.

Now, think about all the times you’ve seen someone, maybe a friend, publish information about someone else on their Facebook page.  It could be something about where they live, the children they have, or all manner of other information.  The fact is that very few people understand the nature of the platform they’re publishing information on, or what Facebook say about the privacy of others, because most users are so focused on their own personal privacy instead.

Using your authentic identity

Do you tell Facebook who you really are?  Here is what they have to say about what you can and can’t do in that respect:

“People connect on Facebook using their authentic identities. When people stand behind their opinions and actions with their authentic name and reputation, our community is more accountable. If we discover that you have multiple personal profiles, we may ask you to close the additional profiles. We also remove any profiles that impersonate other people.

If you want to create a presence on Facebook for your pet, organization, favorite movie, games character, or another purpose, please create a Page instead of a Facebook Profile. Pages can help you conduct business, reach out to fans, or promote a cause you care about.”

You’d be surprised how many people fall foul of this.  Creating a personal account that impersonates someone used to be very common.  It is still an issue, but nothing like as much as previously.  Nonetheless, Facebook make a clear distinction between personal accounts and “Pages” – something that a number of users still don’t seem to understand.

Multiple personal profiles are not permitted.  How many users have created a profile they use at work (not a Page) and a profile they use at home?  The number is much higher than you might imagine in our experience.

Fraud and spam

For a section that could encompass so many things, we’re inclined to think Facebook might have been running out of space when they came to write this:

“We work hard to help ensure that the information you share is secure. We investigate any suspected breach of security. Any attempt to compromise the security of a Profile, including fraud, may be referred to law enforcement. Using misleading or inaccurate information to artificially collect likes, followers, or shares is not allowed. We also ask that you respect people by not contacting them for commercial purposes without their consent.”

That’s it.  That is the entire definition of fraud and spam!  Unfortunately, we don’t think Facebook have gone far enough with this.  Not so much in what they say, but more so in how they promote the policy and make users aware of it.

Week after week we see people liking and sharing pages that purport to offer a new Range Rover, Ferrari, trip of a lifetime or whatever the scam might be that week.  The reality is that most such pages are fraudulent.  You’ll never win that Range Rover.  Telling them you’d like it in black, white or red isn’t going to make it any more real either.

People set up pages of this type to build a large number of likes.  We’ve seen them reach well over 100,000 likes in very swift time.  Then, when they’ve got all your support, they un-publish the page, rename it (to anything they like) and then… they sell it.  Yes, people sell the pages.  Why?  The reason is that some companies will pay $50, $100 and much more for a page that has an immediate following.  All those people who think they’re entering a competition are simply providing an extra cent, penny or euro to the undeserving scammer.

You’ve not lost anything personally (other than time you’ll never get back), but you have been tricked, scammed or conned, however you wish to think of it.

We’d like to see greater awareness of this.  Having the policy is one thing, making users aware of how common the fraudulent pages are is another.

Accounts of friends or family who have passed away

A few weeks ago we wrote a blog on the new “Legacy contacts” feature that Facebook introduced.  It allows you to nominate someone to take control of your account in the event of your death.

With this in mind we were astonished to see that Facebook haven’t acknowledged or linked to this feature in their policy definition:

“Facebook is a place where people come to share their stories and reminisce about friends and family who have passed away. Once we receive sufficient proof of death, we secure and memorialize accounts.

Immediate family members can also request that we remove and delete a loved one’s profile.”

How can they not have updated the policy to mention the new feature and process?  As an oversight, it appears to be one of quite mammoth proportions.  The above policy used to be the only way of memorialising (Facebook seem to like that word!) or closing the account of someone who had passed away.  It no longer is, but the policy definition remains as it is above.

Protecting your intellectual property

If this isn’t an ocean of problems, we don’t know what is.  That might explain why Facebook provide a brief definition on this area and then give you the option of learning more about it.  The policy introduction is as follows:

“Facebook is a place for you to share the things that are important to you. You own all of the content and information you post on Facebook, and you can control how it is shared through your privacy and application settings. However, before sharing content on Facebook, please be sure you have the right to do so. We ask that you respect copyrights, trademarks, and other legal rights.”

Through our experience on Facebook, we’d be surprised if there was a single user who, at some point, hadn’t breached this policy.  It is easy to do when you just want to share something with a friend, perhaps a photograph, and don’t attribute the originator’s name to the content.

If you decide you’d like to learn more about the subject, Facebook takes you to a section of Help that provides a more in-depth definition of copyright, trademarks and much more.  It also provides a link to what is known as the Facebook SRR, which stands for “Statement of Rights and Responsibilities”.  It is an often-changed section of policy that governs Facebook’s relationship with you, the user.  By using Facebook, you accept it.  Everyone should read it.  You might be surprised at what you’re agreeing to and how far Facebook goes to ensure it isn’t held liable for you breaching someone’s intellectual property.

Summary

That’s a lot of rules, isn’t it?  The fact is that they’re required; Facebook would never survive in the long term without them.  Users and advertisers would turn away, and it is every user’s responsibility to report breaches of the policies as and when they find them.

However, we can’t help but share something with you in a rather tongue-in-cheek style.

Here is a closing note on a “Community Standards letter” from Monika Bickert, Facebook’s Head of Global Product Policy:

“Our goal is to create an environment where we don’t need a lot of rules, and people on Facebook feel motivated and empowered to treat each other with empathy and respect.”

As Facebook are finding, this is very difficult to achieve!

