

Can all online harms be tackled using the same regulatory approach?
By Kate Regan and Simona Milio
Nov 20, 2019
The UK has sparked debate with its proposal for broad-ranging internet regulation. Can current approaches to tackling online child sexual abuse material serve as a model for addressing lawful harms? 

When the man seen as the conscience of Silicon Valley, Tristan Harris, describes the internet as a 'digital Frankenstein' that only the law can tame, it suggests the time is ripe to reconsider regulation.

The UK government clearly thinks so. Its proposal for a robust new regulatory framework is adding to the sense that we've reached a pivotal moment in the search for better ways to protect everyone—and particularly children—from harm when they go online.

"The rise in internet users—and Å·²©ÓéÀÖ scale of Å·²©ÓéÀÖir exposure to online harms—requires us to take stock, now."


In the UK, 90% of adults use the internet. This increases to 99% for 12-15-year-olds, who spend a weekly average of 20 hours online. Even children as young as three and four are now online for an average of eight hours a week.

According to the UK's communications regulator Ofcom, 45% of adults have experienced some form of online harm. When it comes to children, one in 10 youngsters and one in five teens say they've encountered something worrying or nasty online. Almost 80% of 12-15-year-olds have had at least one potentially harmful experience in the last year.

Internet companies have been firmly in the firing line in recent times as the source of these harmful experiences. Newspaper headlines and government statements routinely link the internet to tragedies such as teen suicides and terrorist attacks.

It's not surprising then that many governments are passing the baton of regulatory responsibility to the tech firms. By making them responsible for the removal of illegal content, the UK's proposal is certainly not breaking new ground. What is novel—and potentially problematic—is including in its scope 'harms with a less clear definition' that are not necessarily illegal.

Online obligations

The scope of existing European online harms legislation is limited to content that contravenes criminal law. Under Article 14 of the European Commission's E-Commerce Directive, internet companies are legally obliged to take down illegal content once they are made aware of it: they must react to and remove illegal content or activity that they host, rather than proactively identify it. Article 15, meanwhile, states that they must not be compelled to actively monitor content on their platforms. This protects them from legal liability for any illegal content or activity that they host but don't know about.

By casting a wider regulatory net that scoops up lawful harms (such as cyberbullying and trolling, extremist content and activity, and the promotion of self-harm, among others) alongside illegal harms, the UK's proposal raises many questions. How will this work in practice? Can we expect approaches to tackling illegal and lawful harms to resemble each other?

Current approaches to tackling online child sexual abuse material (CSAM)—an illegal harm where headway is being made—are worth studying to see what lessons they may contain for other categories of harm that the UK intends to regulate, especially those that are lawful.

Four factors that facilitate the removal of online CSAM

1. Clear legal definitions

Enforcement starts with a shared understanding of what constitutes the harm. International and European laws offer clear legal definitions of online CSAM that are vital to its identification.

The main international legal instrument addressing CSAM is the Optional Protocol to the (U.N.) Convention on the Rights of the Child on the Sale of Children, Child Prostitution, and Child Pornography. The Council of Europe's Convention on the Protection of Children against Sexual Exploitation and Sexual Abuse and the Council of Europe's Convention on Cybercrime provide further definitions of child sexual exploitation and abuse (CSEA) offenses. At EU level, Directive 2011/93/EU on combating the sexual abuse and sexual exploitation of children and child pornography provides minimum standards for assistance to and protection of victims and guidelines for investigation and prosecution of crimes.

Beyond these legal instruments, the International Child Sexual Exploitation (ICSE) database—an intelligence and investigative tool managed by Interpol and used by investigators worldwide—has established a baseline categorization to help classify and isolate the very worst CSAM. This baseline definition is intended to determine what is illegal in more than 50 jurisdictions, encouraging transnational cooperation.

Despite differences across countries, the very existence of legal definitions provides a starting point from which technology companies and law enforcement can begin to tackle CSAM. For harms such as cyberbullying or disinformation, an absence of legal definitions—or patchworks of ill-fitting regulation—can mean that social media companies are solely responsible for deciding what constitutes that harm and what does not.

2. Tailored technology

Once a shared definition exists, technology can step in to offer support. A fully automated technology known as PhotoDNA can identify known illegal content without human assessment. Leading companies such as Google, Facebook, Twitter, and Adobe Systems are currently leveraging PhotoDNA to suppress CSAM imagery at scale—and it's free for law enforcement to use.

Developed by Microsoft and Dartmouth College in 2009, PhotoDNA works by creating a 'signature,' or digital fingerprint, for content known to be CSAM. Each image is converted to a grayscale format and has a grid applied to it. Each tiny square within that grid has a number assigned to it, and these numbers collectively form the 'hash value' of the image—its signature. When the hash value is matched against a database of hashes of known CSAM, the tool is able to detect and report the content automatically.
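PhotoDNA's exact algorithm is proprietary, but the grid-and-signature idea described above can be illustrated with a deliberately simplified sketch. The Python snippet below is a toy 'grid hash' only—the file names, grid size, and matching threshold are hypothetical, and real hash-matching systems are far more robust to cropping, resizing, and other alterations.

```python
# Toy illustration of the grid-signature idea described above.
# PhotoDNA's real algorithm is proprietary; the file names, grid size,
# and matching threshold here are hypothetical.
from PIL import Image

GRID = 16  # shrink each image to a 16x16 grid of cells


def grid_hash(path: str) -> list[int]:
    """Grayscale the image, reduce it to GRID x GRID cells, and use the
    cell intensities (0-255) as a crude numeric signature."""
    img = Image.open(path).convert("L").resize((GRID, GRID))
    return list(img.getdata())


def distance(h1: list[int], h2: list[int]) -> float:
    """Average per-cell difference between two signatures (0 = identical)."""
    return sum(abs(a - b) for a, b in zip(h1, h2)) / len(h1)


# In practice the database holds signatures of content already confirmed
# as CSAM by trained reviewers; new uploads are hashed and compared to it.
known_signatures = [grid_hash("confirmed_image.png")]


def matches_known(path: str, threshold: float = 5.0) -> bool:
    candidate = grid_hash(path)
    return any(distance(candidate, known) <= threshold for known in known_signatures)
```

The key design point is that matching happens signature-to-signature: the platform never needs to redistribute or manually re-inspect the original illegal imagery to recognize a re-upload of it.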

While PhotoDNA's hashing technology is indispensable to the identification and removal of known CSAM, artificial intelligence (AI) technology is also being developed to identify material that is likely but unconfirmed to be CSAM. Google has offered its AI-powered Content Safety API, launched last year, for free to non-governmental organizations (NGOs) and industry partners to support human reviewers of online CSAM. It flags material that is likely to be CSAM, helping reviewers identify such illegal material at scale.

Harms that tend to be more word-based than image-dependent—such as hate speech, extremist propaganda, and trolling—are less straightforward to identify using technology. Determining whether content constitutes satire, false news, extreme but legitimate political views, or incitement to hate requires nuanced assessment of its context. Given these nuances, it's dangerous to rely too much on overzealous filtering technology if we want to limit over-censorship of lawful and legitimate content. Human moderators should likewise err on the side of caution if they're unsure whether to remove content or not.
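To see why context matters, consider the following toy example (not any platform's actual moderation logic; the blocklist and sample posts are invented). A naive keyword filter flags both a supportive self-help post and a piece of political satire, because it cannot distinguish harmful uses of a word from legitimate ones.

```python
# Toy example: a context-blind keyword filter over-removes legitimate content.
# The blocklist and the sample posts are hypothetical.
BLOCKED_TERMS = {"self-harm", "attack"}


def naive_filter(post: str) -> bool:
    """Flag a post if it contains any blocked term, ignoring context entirely."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)


posts = [
    "Here are support resources if you are struggling with self-harm.",   # supportive
    "Her column is a satirical attack on the minister's housing policy.",  # satire
]
print([naive_filter(p) for p in posts])  # [True, True] -- both wrongly flagged
```

Image hashes match exact, already-confirmed content; text filters like this one must make a judgment about new, ambiguous content, which is why purely automated removal is far riskier for word-based harms.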

3. Known real-world impact

The grave and devastating effect that child sexual abuse has on its victims legitimizes any interventions required to tackle this harm. A vast body of empirical research exists that describes the myriad short- and long-term impacts of child sexual abuse and exploitation—and of the revictimization that occurs every time an image or video is viewed or shared.

For other harms where the real-world impact is less known, a stronger justification for intervention is required. Though researchers have found associations between exposure to content displaying self-harm and actual self-injury, for example, there is a risk that content promoting self-help for online users may be automatically taken down in efforts to remove harmful content. Complex issues like this reveal the need for a considered and sensitive approach to the regulation of content whose harmful impact is equivocal.

4. Harmful to business

CSAM is a criminal offense as well as a phenomenon that society finds morally reprehensible. This lack of ambiguity around public tolerance of CSAM makes it commercially disastrous for any legitimate enterprise to be caught facilitating its distribution. Providing a platform for potentially offensive political issues, on the other hand, can be defended by social media companies—and is, depending on its severity, legality, and purpose.

When ICF conducted a study looking into how online platforms report being incentivized to tackle harm, reputation was—unsurprisingly—highlighted as a key factor. For those platforms citing their unique value as championing freedom of speech and association, a desire to preserve these values could result in a less aggressive approach to tackling a range of online harms—or even a migration to less restrictive jurisdictions.

Whatever form the UK's online harms regulation eventually takes, it will be beneficial to consider whether those factors that support efforts to tackle online CSAM could work for lawful harms, too. In practice, this means that regulators will need to:
  • Develop concise common definitions and standards to guide social media companies on exactly the type of content they are expected to moderate and remove;
  • Encourage the development of technologies that are appropriate to tackling the harm in question, and deploy them in ways that are proportionate to the aim pursued;
  • Build an evidence base that sheds light on the real-world impact of the online harms being regulated;
  • Align the online platforms' commercial interests with concerted action to tackle the harm.
If these four factors cannot be guaranteed, it could be useful to consider what other options exist—offline as well as online—to ensure that UK citizens have the resilience and critical capacity to be discerning and confident internet users.
 


Meet the authors
  1. Kate Regan, Researcher, Public Policy
  2. Simona Milio, Director, Public Policy