Zuckerberg, we have a problem

Stuart Fuller

For those of us who have been around social media for a while, we’ve learnt to take the content published there with a pinch of salt. Whether it’s the incessant “you will never believe what she did next” BuzzFeed-style stories, the ‘looks too good to be true’ discount vouchers or counterfeit goods, or the recently discovered videos proving that the Loch Ness monster is real, the aim remains the same: to drive traffic to external websites where cyber-criminals can carry out more nefarious activities.

However, there’s another aspect of social media that has hit the headlines in the past few days and that is even more of an issue for brand holders. Facebook’s admission that some news stories appearing in our timelines are not only fake but may also have influenced real-life events brings the integrity of the platform under the spotlight again. In the run-up to the US presidential election, a number of erroneous stories were published on the site, including one shared by over 500,000 people that almost certainly affected how people voted. Mr Zuckerberg’s response was that because there had been false stories favoring both candidates, the effect balanced out.

Zuckerberg said only a small percentage of news stories posted on Facebook were deliberately fake, and that it was “extremely unlikely” any of those posts had contributed to Trump’s victory. But the issue is now out in the open, and the fact that the CEO of Facebook cannot give brand holders any assurance that a process is in place to verify the integrity of stories is surely worrying – particularly when those stories have an impact, however small, on people’s perceptions or on the behaviors they adopt.

Consider how damaging it could be for a brand if a story claiming that a healthcare product has harmed users, or that a car manufacturer has been cutting corners on safety, is shared 500,000 times. Without any verification of the facts, significant damage to reputation (and ultimately revenues) could occur very quickly indeed. Whilst the First Amendment to the US Constitution sets out the principles of freedom of speech, publishers have a duty of care to ensure content is factually true, rather than simply damaging the reputation of others to serve their own ends.

One of the reasons why Facebook is one of the most popular websites in the world, with over 1.2 billion monthly active users, is that it regulates very little content. Zuckerberg himself has said he is loath to put restrictions in place that would sanitize the platform. In a post on his personal profile, he said he was cautious about making Facebook an “arbiter of truth”, but added that the company was testing new tools to flag hoax content.

One creator of these stories claimed in an interview with the BBC that there’s nothing wrong with what he does, suggesting his posts are meant to be viewed in a humorous way and that it’s no different to the way that some news outlets are “guilty of publishing stories that border on fabrication.”

It appears the major digital platforms do not agree. In the wake of the election, Google announced that it would ban websites that publish fake news from using its online advertising service, whilst Facebook updated the wording of its Audience Network policy – which already states it will not display ads on sites that show misleading or illegal content – to explicitly include fake news sites.

So, where does this leave organizations that could be the subject of these ‘humorous’ but unfounded stories? If the social networks can only go so far in their detection, then the vital activity of social media monitoring falls on the shoulders of the brand holders themselves. Some solutions simply look at usernames and profiles to spot intellectual property abuse, but something deeper and more comprehensive is required to find the stories that could be genuinely damaging. The nature of social media means that offending content can spread very quickly, so it is essential to work with a partner that has experience with the major networks in detecting and removing anything that could infringe intellectual property.
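To make the distinction between username-level checks and content-level monitoring concrete, here is a minimal sketch in Python. It assumes posts have already been collected from the networks; the brand terms, risk terms, field names and sample data are all hypothetical, and a real monitoring service would use each network’s APIs and far more sophisticated detection than simple keyword matching.

```python
# Illustration only: flag fast-spreading posts that pair a brand mention with a
# damaging claim, rather than just matching usernames or profile names.
# All data, term lists and thresholds below are hypothetical.

BRAND_TERMS = {"acme", "acme pharma"}                       # hypothetical brand names
RISK_TERMS = {"harmed", "recall", "unsafe", "lawsuit", "cutting corners"}

posts = [
    {"author": "newsdaily_buzz", "text": "Acme Pharma product harmed dozens, insiders say", "shares": 480_000},
    {"author": "acme_fanpage", "text": "Love the new Acme range!", "shares": 120},
]

def flag_risky_posts(posts, brand_terms, risk_terms, min_shares=1_000):
    """Return posts that mention the brand, contain a risk term and are already spreading."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        mentions_brand = any(term in text for term in brand_terms)
        mentions_risk = any(term in text for term in risk_terms)
        if mentions_brand and mentions_risk and post["shares"] >= min_shares:
            flagged.append(post)
    # Surface the fastest-spreading stories first, since speed of spread drives the damage.
    return sorted(flagged, key=lambda p: p["shares"], reverse=True)

for post in flag_risky_posts(posts, BRAND_TERMS, RISK_TERMS):
    print(f"{post['shares']:>8} shares  @{post['author']}: {post['text']}")
```

Note that the second sample post would pass a username-only check (it comes from an obvious fan page) yet poses no risk, while the first comes from an unrelated account and is exactly the kind of story that needs to be caught early – which is the point the paragraph above makes about looking beyond usernames and profiles.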