A long-overdue step to formally regulate social media companies and force them to remove harmful content from their platforms will be announced today, with new powers granted to the government watchdog Ofcom, although it is not yet clear what penalties it will actually be able to enforce. There has been widespread media coverage of awful and tragic situations in which social media content was identified as a contributory factor: extremist material fuelling fanaticism, racism and even terrorism; damage to mental health and wellbeing, such as the high-profile case of teenager Molly Russell, who took her own life after viewing graphic content on Instagram; and the grooming of minors. That is not to mention the misuse or abuse of harvested personal data, or manipulative political content.
Ofcom will have new powers to make technology firms responsible for protecting users from harmful content such as cyber-bullying, child abuse, violence and terrorism. The social media platforms will also need to ensure that such content is removed quickly. They claim to be doing this already, but countless examples of harmful material left online demonstrate, at best, a blasé attitude towards monitoring and moderating content. They will now be expected to "minimise the risks" of it appearing at all.
The industry has always claimed to be self-regulating, but firms like Facebook, YouTube, Twitter and Instagram have consistently failed to do so adequately. In fact, they seem more focused on trying to shape legislation, no doubt in their own favour: Facebook spent nearly $13M on lobbyists in the US last year, and the FAANG group [Facebook, Amazon, Apple, Netflix and Google] spent over $130M combined on lobbyists in 2019. Why would they spend this kind of money if they did not want to influence lawmakers?
“The lobbying comes as Big Tech deals with antitrust probes, and as some politicians say the Silicon Valley titans should get broken up. In the wake of the Cambridge Analytica data harvesting scandal and other foul-ups, the companies are also facing greater scrutiny over how they handle their users’ data as well as how they manage misinformation and political bias. Lawmakers have talked up the potential for a bipartisan law that would lead to better safeguards for personal data, though analysts say it’s unlikely because of a partisan split over the potential for federal preemption of states’ laws.”
All this smacks of avoidance; perhaps if they had put the same kind of effort into sorting themselves out, they would not now be facing the prospect of more stringent measures. Digital Secretary Baroness Nicky Morgan said: "There are many platforms who ideally would not have wanted regulation, but I think that's changing. I think they understand now that actually regulation is coming."
Extending the powers of Ofcom, which already regulates the media, is the government's first response to the Online Harms consultation it carried out in 2019. The new rules will apply to firms hosting user-generated content, including comments, forums and video-sharing: Facebook, Snapchat, Twitter, YouTube, Instagram and TikTok among them. The government will apparently 'set the direction of the policy but give Ofcom the freedom to draw up and adapt the details. By doing this, the watchdog should have the ability to tackle new online threats as they emerge without the need for further legislation.'
NSPCC chief executive Peter Wanless welcomed the news: “Statutory regulation is essential. Thirteen self-regulatory attempts to keep children safe online have failed. Too many times social media companies have said: 'We don't like the idea of children being abused on our sites, we'll do something, leave it to us.'”
The government's move follows in the footsteps of other countries that have legislated to control social media content. In 2018, Germany introduced the 'NetzDG Law', which requires social media platforms with more than two million registered German users to review and remove illegal content within 24 hours of it being posted, or face fines of up to €5m. Australia passed the 'Sharing of Abhorrent Violent Material Act' in April 2019, introducing criminal penalties for social media companies, jail sentences of up to three years for tech executives, and financial penalties of up to 10% of a company's global turnover.
China is an extreme example, sitting at the opposite end of the spectrum in terms of human rights and freedom of speech, but it is worth mentioning that the Chinese government blocks many western tech giants altogether, including Twitter, Google and Facebook, and the state monitors Chinese social apps for politically sensitive content. It does, however, highlight the risks of censorship. One of the biggest challenges for Ofcom in applying its new powers will be treading that fine line in the grey areas. The blatantly obvious material is, well, blatantly obvious, but there is plenty of debatable content: could some of Helmut Newton's work be considered exploitative rather than provocative? Is 'Das Kapital' or 'Mein Kampf' too extreme to be published?