In today's networked society, there is growing anxiety among adults about the potential negative impact of smartphones, social media, and our always-online culture on our children.[1] However, a more nuanced understanding[2] of the risks and benefits of technology should acknowledge both the potential harms and the resilience and wisdom demonstrated by the younger generation.
Quotidian parenting concerns aside, the spread of Child Sexual Abuse Material (CSAM), and the real harm it documents, are tragic crimes[3] exacerbated and wildly complicated by the scale and reach of digital-age communications. It is useful to take an extreme issue such as CSAM and child safety and explain how human rights organisations approach policy and technology interventions against child abuse to ensure that there is adequate protection for children's human rights and civil liberties—namely, privacy, security and safety, and free expression and association.
The most effective approaches to enabling children’s rights online are human rights-centric, rather than protectionist. A shared policy advocacy approach among all stakeholders working in service of strengthening children’s rights has four key aspects: protecting children’s rights in the digital age; incentivising technical architecture that helps ensure child safety; examining the role of agency in rights-protecting ways to moderate content; and deploying encryption for children’s safety.
Beyond policy, there are also proactive ways in which technology standards can be designed with children's rights at the centre. Technology can assist child safety in a variety of ways, but technology alone cannot do the work of protecting children's rights. Technology that protects privacy and confidentiality in online communications ensures and protects the human rights of all—including children and youth in at-risk communities—and allows social workers and other related institutions to help survivors in a secure and private manner. Thus, technical interventions that disrupt the privacy and security features of end-to-end encryption not only threaten the human rights of all people using that technology, including children; they are counterproductive to the aim of keeping kids safe.
A Human Rights Approach to Child Safety
It is imperative to adopt an unwavering commitment to upholding human rights as a first principle, because children's rights, as with all rights, are most rigorously and broadly defined within the human rights framework. The fight against child abuse, as with all crimes, necessitates a delicate balance between the imperative to protect the most vulnerable members of society and the preservation of individual rights and freedoms. A human rights-centric framework emphasises the dignity and well-being of every child, acknowledging their right to privacy, safety, and freedom from exploitation. This has the benefit of folding the safeguarding of children's broader digital activities into a larger effort to keep all children safe from online harms, while preventing unintentional collateral damage to society as a whole that may result from overreaching or intrusive measures.
Protectionist measures must not leverage violations of anti-CSAM laws and regulations as a pretext for the expansion of surveillance and control mechanisms that could encroach upon the privacy and autonomy of individuals beyond the scope of combating child abuse. Striking the right balance is paramount, ensuring that the fight against criminal content like CSAM does not inadvertently erode the very rights and freedoms it aims to protect. An overly protectionist approach poses significant risks to marginalised groups and vulnerable communities such as women at risk of domestic abuse, LGBTQI individuals, and sex workers. Excessive surveillance and stringent control mechanisms not only undermine the privacy and autonomy of these groups but also perpetuate societal stigmas and discrimination. A nuanced, rights-based approach is essential to ensure that the pursuit of safety does not come at the cost of further marginalising those who are already at the fringes of society.
To address criminal offences like CSAM in a rights-respecting way, it is important not to criminalise any offence because a technology is used, or because the offence has a particular technical element, or because the offence involves a specific type of content such as hate speech or copyright infringement. Each of those approaches creates potential for human rights abuses because they are so easily expanded in scope and technologically impossible to restrict. Proposals that attempt to address content by introducing systemic weaknesses to the security systems we all rely on lay fertile ground for increased cybercrime and other human rights abuses conducted via a vast and interconnected global telecommunications network.
Furthermore, as stated to the UN by several civil society organisations, “Any investigative powers should be tied to investigations of specific crimes. Any cross-border investigative framework should not cater to the lowest common denominator in terms of human rights safeguards.”[4] Thus, states must be required to adhere to the highest standard of protection in multi-jurisdictional investigations, not default to the lowest.
When state proposals to protect children online noticeably violate human rights, they lose legitimacy, become targets for wide criticism from the human rights community, and are weakened in the eyes of the public. All indications point to child safety becoming the defining example from the current decade of a classic dynamic: state abuse of investigatory powers weakens the global rules-based order in the medium term and demonstrates the unsustainability of unaccountable power in the long term. By adopting a human rights-centric perspective, we can construct a framework that not only effectively keeps children safe but also fosters a digital environment where children can explore, learn, and communicate without the shadow of unwarranted intrusion or undue restrictions.
Content Moderation, in Moderation
‘Content moderation’ refers to the set of policies, systems, and tools that intermediaries of user-generated content use to decide what user-generated content or accounts to publish, remove, or otherwise manage. As with any technology, it should serve the needs of its users. Therefore, when considering moderating content in any system, end-to-end encrypted or otherwise, any method should: 1) refrain from violating privacy and confidentiality; and 2) empower users by improving user-agent features of those systems.
An end-to-end encrypted communications system is defined by the probability of an adversary's success in learning information about the communication between “ends” or users. Users today demand systems that are both secure and private;[5] systems that are confidential and that limit account metadata.
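One way to make this definition concrete (a simplified sketch, not the cited draft's exact formalism) is to require that any party other than the communicating ends gains only a negligible advantage in learning message content from observing the system:

```latex
% Sketch: end-to-end confidentiality as negligible adversary advantage.
% Here A is any party that is not an authorised end (including the service
% provider), m is the message content, and \varepsilon is negligible.
\[
\mathrm{Adv}(\mathcal{A}) =
\Bigl|\Pr\bigl[\mathcal{A}(\text{observed transcript}) \text{ recovers } m\bigr]
- \Pr\bigl[\mathcal{A}(\varnothing) \text{ recovers } m\bigr]\Bigr|
\le \varepsilon
\]
```

Content moderation proposals can then be assessed by whether they keep this bound intact for everyone except the ends themselves.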
In a 2021 report, the Center for Democracy & Technology evaluated five content moderation techniques[6] used in both end-to-end encrypted and plaintext systems against the promises of confidentiality and privacy. These techniques include user reporting, traceability, metadata analysis, and two client-side scanning techniques that use artificial intelligence (AI): perceptual hashing and predictive models. Of these five techniques, the analysis showed, metadata analysis and user reporting provide effective tools for detecting significant amounts and different types of problematic content on end-to-end encrypted services, including abusive and harassing messages, spam, misinformation and disinformation, as well as CSAM.
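User reporting works precisely because the recipient already holds the plaintext: they can choose to disclose it, and the platform only needs a way to check that the reported message is genuine. The following is a deliberately minimal, hypothetical sketch of that idea, loosely inspired by “message franking” schemes; the function names and the simple HMAC commitment are illustrative assumptions, not the CDT report's or any platform's actual protocol.

```python
# Hypothetical sketch of user reporting in an E2EE chat, loosely modelled on
# "message franking": the platform never sees plaintext unless the recipient,
# who already has it, chooses to report it.
import hmac, hashlib, os

def send(plaintext: bytes):
    """Sender side: commit to the plaintext with a one-time franking key."""
    franking_key = os.urandom(32)
    commitment = hmac.new(franking_key, plaintext, hashlib.sha256).digest()
    # In a real system the plaintext and franking_key travel inside the
    # end-to-end encrypted payload; only the commitment is visible to the
    # platform, which stores it alongside routing metadata.
    return plaintext, franking_key, commitment

def verify_report(plaintext: bytes, franking_key: bytes, commitment: bytes) -> bool:
    """Platform side: verify a report without ever having held the plaintext."""
    expected = hmac.new(franking_key, plaintext, hashlib.sha256).digest()
    return hmac.compare_digest(expected, commitment)

msg, key, tag = send(b"abusive message")
assert verify_report(msg, key, tag)  # the platform can confirm the report is genuine
```

The point of the sketch is the asymmetry: the platform learns nothing unless a user who already has the content decides to report it.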
However, not all approaches to content moderation using metadata are suitable. Metadata is data about data (such as sender, recipient, time stamp, and message size). In most jurisdictions, the creation of more metadata contravenes data protection regulations. Encrypted systems deliver on promises of privacy and confidentiality by reducing discoverable account information—one form of metadata. Steps that platforms take to minimise metadata, such as obfuscating users' IP addresses, reducing non-routing metadata, and avoiding extraneous message headers, can enhance the confidentiality and security features of direct communications systems. Furthermore, limiting metadata analysis to the user's device reduces the risk of exposure.
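As an illustration of what keeping metadata analysis on the device might look like, here is a small hypothetical sketch in Python; the fields, thresholds, and heuristic are invented for this example and simply show analysis whose inputs and outputs never leave the client.

```python
# Illustrative sketch only: an on-device heuristic over message metadata
# (sender, timestamp, size), showing analysis that never leaves the client.
from dataclasses import dataclass

@dataclass
class MessageMeta:
    sender: str
    timestamp: float   # seconds since epoch
    size_bytes: int

def flag_likely_spam(history: list[MessageMeta], window_s: float = 60.0,
                     burst_threshold: int = 10) -> set[str]:
    """Flag senders who burst many messages in a short window.

    Runs entirely on the user's device; the result feeds a local
    'hide or report this sender?' prompt rather than a server-side queue.
    """
    flagged: set[str] = set()
    by_sender: dict[str, list[float]] = {}
    for m in history:
        by_sender.setdefault(m.sender, []).append(m.timestamp)
    for sender, times in by_sender.items():
        times.sort()
        for i in range(len(times)):
            # count messages inside the sliding window starting at times[i]
            in_window = sum(1 for t in times[i:] if t - times[i] <= window_s)
            if in_window >= burst_threshold:
                flagged.add(sender)
                break
    return flagged
```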
Because metadata can be correlated with other data and itself constitutes important information, proposals that leverage traceability in end-to-end encryption systems actually produce more metadata in such systems, and thus expose all people to greater risk. For instance, AI machine learning approaches to metadata-driven content moderation risk exposing and re-aggregating identifiable information about people to third-party large language models.
Any measure to introduce computing on the user agent, or “client”, or user device (such as client-side scanning) not only breaks encryption but becomes a direct threat to civil liberties the moment a person's device becomes their adversary. Moreover, how client-side scanning is envisioned to work is notable: industry and governments alike are turning to novel computational methods, namely AI and machine learning (AI/ML), in the hope that hard societal problems can be solved with advances in technology. When Apple announced changes to its messaging and photo services in 2021, human rights advocates said,[7] “Apple is replacing its industry-standard end-to-end encrypted messaging system with an infrastructure for surveillance and censorship, which will be vulnerable to abuse and scope-creep not only in the U.S., but around the world.”
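To make concrete what client-side “perceptual hashing” entails, the following is a toy Python sketch of an average hash compared against a blocklist (Pillow is assumed as the imaging library). Production systems such as PhotoDNA or Apple's NeuralHash are far more sophisticated, but the structural point is the same: the comparison must run on the user's own device, against content that end-to-end encryption would otherwise keep private.

```python
# Toy illustration of the kind of perceptual hashing used in client-side
# scanning proposals: an "average hash" compared against a blocklist.
from PIL import Image  # assumes the Pillow library is available

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink to hash_size x hash_size grayscale, then bit-pack pixels vs. the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def matches_blocklist(path: str, blocklist: set[int], max_distance: int = 5) -> bool:
    """Hamming-distance match against known hashes, run on the client device."""
    h = average_hash(path)
    return any(bin(h ^ known).count("1") <= max_distance for known in blocklist)
```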
Many such attempts stem from an over-glorification of AI/ML techniques for content moderation, all the while risking exposure of their users to “classic” or unsophisticated hacking by bad actors. Client-side scanning (CSS) has been flagged and debunked by academics[8] who argue that “CSS neither guarantees efficacious crime prevention nor prevents surveillance. Indeed, the effect is the opposite. CSS by its nature creates serious security and privacy risks for all society while the assistance it can provide for law enforcement is at best problematic.” Public interest technologists[9] and industry alike[10] will continue to research and expose the serious risks associated with deployment of such methods, backed by expert analyses, and warn against blind faith in technological solutionism, no matter how cutting-edge the technology in question seems.
There is a need for transparent and inclusive discussions between policymakers, technology experts, and civil society to navigate the complex landscape of online harms to children as it is a problem that intersects policing, protocols, platforms, and user privacy. A collaborative and informed approach balances the imperative to combat illegal content with the paramount importance of preserving fundamental rights and the security of communications in an always-online world.
Indeed, a recent report on youth's experiences online[11] shows that young people themselves would benefit from more control over their tools, supporting the claim that user reporting and other in-app features are more effective and privacy-preserving solutions to content moderation than automated content moderation by the platform.
Tech Can Assist, But Not Control, Children’s Safety
Tech-assisted approaches recognise that the problems of the digital age are much larger than that of sharing CSAM on encrypted platforms. Human rights advocates urge policymakers and platforms alike to go beyond “backdoors” to encryption, and rather take a wider view, as the previous discussion on the principles of a human rights approach has suggested.
In parallel to the human rights complexities of the problem of CSAM, the challenges surrounding the implementation of technology that attempts to solve CSAM are many,[12] and all such technologies fall short both in guaranteeing less CSAM and in preserving privacy and confidentiality. Systems that involve content detection inherently involve some level of access to content created and shared by users, thus violating the promises of end-to-end encryption.[13] However, a nuanced understanding of encryption technologies provides the opportunity for a balanced approach to the larger problem space that prioritises user privacy and security while addressing the challenges associated with illegal content.
Scholar Laura Draper takes an approach that accepts the ubiquitous existence of strong encryption,[14] and concludes that its security, privacy and confidentiality features are helpful to victims of abuse. Draper builds an informed and evolved set of recommendations for how to combat online child exploitation and abuse, including preserving strong end-to-end encryption.
For a variety of purposes, governments are focused on how to detect illegal content in private communications, and the technical approaches they suggest are often flawed. For instance, a draft European Commission report[15] leaked in September 2020 proposed several detection methods that would each break end-to-end encryption, weaken the security and privacy of all users, and present attractive targets for criminals. Again, experts in both human rights and technology responded by breaking the myths that framed the report,[16] arguing that “breaking end-to-end encryption to curb objectionable content online is like trying to solve one problem by creating 1,000 more. Insecure communications make users more vulnerable to the crimes we collectively are trying to prevent.”
Cybersecurity experts are in agreement: There is no way to enable a third party to monitor end-to-end encrypted communications without weakening the security and privacy for all of its users, including those most vulnerable and the victims of crimes for whom digital security is especially critical.
Backdoors to encryption render the whole system vulnerable, weaken the security of all components, and put users at risk. If measures such as mandatory detection, reporting, and removal are intended to apply to end-to-end encrypted communications, then regardless of whether the unlawful content is known, platforms would be forced to undermine end-to-end encryption, and to do so for all content, the vast majority of which is lawful.
By easing the intense focus on content detection within end-to-end encrypted systems, there is an opportunity to build alternative methods to combat illegal content without compromising the privacy and security of people online, including children and youth. In that search for solutions, respect for user confidentiality and privacy is a must while addressing the challenges associated with illicit content detection and beyond.
Technical interventions like user reporting and metadata analysis are more likely to be implemented consistently across the industry and better preserve privacy and security guarantees for end users. A narrow focus on these improvements could address the problem of CSAM while avoiding the privacy and security nightmares of broader, technocratic approaches. These tools can detect significant amounts of different types of problematic content on end-to-end encrypted services, including abusive and harassing messages, spam, misinformation and disinformation, and CSAM. These tools have known imperfections—including that users sometimes make false accusations via provided reporting mechanisms—necessitating more research to improve them and better measure their effectiveness.
Child Safety Begins With Privacy
The complexities of online safety for children and youth stem from novel and pervasive risks at scale. While research broadly indicates a lack of clear cause-and-effect understanding of the internet's impact on youth, this article has focused on the most egregious, albeit less understood, crimes against child victims of abuse. The controversies and potential drawbacks of policies aimed at devastating but relatively rare crimes include fears of censorship and restrictions on information access for all people. Meaningful policy enhancements to online safety for all children would require stronger privacy legislation, clearer content guidelines, and market regulation that would force better practices and end-user features in social media.[17]
It turns out that encryption protects all human rights, including those of children and especially youth in at-risk communities. Encryption has a vital role in safeguarding the privacy and safety of survivors[18] of domestic violence, sexual violence, stalking, and trafficking. That research explains that secure communication and storage tools are crucial for survivors and those supporting them, with strong encryption being a critical component of the solution. Encryption mitigates technology-facilitated abuse, for example by aiding safety planning and evidence protection. This is critically important to the problem of child abuse, where statistically it is caretaking adults who are most likely to be the victimisers. Encryption prevents unauthorised access to data, both in transit and at rest, which can safeguard survivors' case files from privacy breaches and revictimisation.
Encryption's data-integrity properties can help maintain strong evidence when victims or parties to the crime are collaborating with law enforcement and legal professionals. Ultimately, weakening strong encryption practices could compromise the privacy and safety of survivors. Strong encryption can empower survivors with secure communication tools that are crucial for their ability to seek help, safety, and healing.
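As a small illustration of what encryption at rest with integrity protection means in practice, here is a minimal sketch using the Fernet recipe from Python's `cryptography` package, which authenticates ciphertext so that tampering is detected on decryption; the file contents and key handling shown are illustrative assumptions only.

```python
# Minimal sketch: encrypting a survivor's case file at rest with
# authenticated encryption, so any modification is detected on decryption.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()          # held by the advocate/survivor, not the platform
f = Fernet(key)

case_notes = b"2024-01-12: client reported escalating threats ..."
token = f.encrypt(case_notes)        # safe to store or back up as-is

try:
    original = f.decrypt(token)      # fails loudly if the stored record was altered
except InvalidToken:
    print("Integrity check failed: the stored record was modified.")
```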
Moreover, children and those providing services to child victims can sometimes find themselves in an adversarial relationship with state power. Repressive regimes, military occupation, and migration are all examples of situations in which children are vulnerable to state power and its abuses or excesses.[19]
Companies have made commitments to user privacy and communications security, including visible changes, such as the use of encryption, that make children and teens safer online.[20] When companies, whether because they are pressured by law enforcement to do more for children or coerced through legislative restrictions, open the door to privacy threats for all users, regardless of age or jurisdiction, they create new threats for the very young people the changes are meant to protect. Youth and children in abusive homes are especially vulnerable to injury and reprisals, including from their parents or guardians.
Overall, child safety enhancements must involve families, schools, and young people themselves in creating effective strategies, with states responsible for tailoring those strategies appropriately to each jurisdiction and cultural environment. The importance of digital literacy education must also not be forgotten; it aligns child safety with efforts to bolster sustainable development and civil rights.
Conclusion
A principled approach to online child safety uncovers some key takeaways for the debates in parliaments and board rooms, and around dinner tables:
- There is no silver-bullet solution to the complex problem of child exploitation. Data-driven methods in computer vision and data analysis at scale are largely overstated, processing-intensive, and require human oversight. False confidence in these tools does a disservice to youth victims as much as it turns all youth's privacy into collateral damage.
- Law enforcement and the intelligence communities in rule-of-law democracies have consistently demonstrated a lack of restraint in the use of pervasive monitoring tools. While policy must continue to hold investigatory powers in check, the ubiquitous deployment of strong encryption is another necessary check on this power.
- Children's threat models are complex because they lack legal standing, they are dependents, and they need to be cared for. They deserve safety from abusive parents, strangers, and familiar adults who would hurt them, companies that might exploit them, and states that would neglect them. Technical mechanisms that give children and youth more agency over their digital lives give them the tools to address the threats they might be facing.
On the whole, the safety of already marginalised populations, especially children and youth in at-risk communities, could be seriously endangered in the absence of end-to-end encryption environments. A more holistic and child-centric approach is needed from both policy and technology.
Endnotes
[1] Arman Khan, “The Validation Spiral: How We Let Others Define Our Self-Worth,” Feminism in India, June 2021
[2] Kelli María Korducki, “What the Teen-Smartphone Panic Says About Adults,” The Atlantic, June 14, 2023
[3] Susan Landau, “Finally Some Clear Thinking on Child Sexual Abuse and Exploitation Investigation and Intervention,” Lawfare, January 24, 2023
[4] Joint civil society statement, “Privacy & Human Rights in Cross-Border Law Enforcement,” August 2021
[5] Mallory Knodel et al., “Definition of End-to-End Encryption,” Internet Draft, Internet Engineering Task Force, June 21, 2023
[6] Seny Kamara et al., Outside Looking In: Approaches to Content Moderation in End-to-End Encrypted Systems, Center for Democracy & Technology, 2021
[7] Greg Nojeim et al., “Apple’s Changes to Messaging and Photo Services Threaten User Security and Privacy,” Center for Democracy & Technology, August 2021
[8] Hal Abelson et al., “Bugs in Our Pockets: The Risks of Client-Side Scanning,” arXiv, October 14, 2021
[9] Gurshabad Grover et al., “The State of Secure Messaging,” The Centre for Internet and Society, July 2020
[10] “EDPB-EDPS Joint Opinion 04/2022 on the Proposal for a Regulation of the European Parliament and of the Council Laying Down Rules to Prevent and Combat Child Sexual Abuse,” European Data Protection Board, July 28, 2022
[11] Michal Luria, “More Tools, More Control: Lessons from Young Users on Handling Unwanted Messages Online,” Center for Democracy & Technology, November 9, 2023
[12] Mallory Knodel, “To the UK: An Encrypted System That Detects Content Isn’t End-to-End Encrypted,” May 25, 2022
[13] Anushka Jain et al., “The draft Indian Telecommunication Bill retains its colonial roots,” Internet Freedom Foundation, September 2022
[14] Laura Draper, “Protecting Children in the Age of End-to-End Encryption,” Joint PIJIP/TLS Research Paper Series 80, 2022
[15] Internet Society and Center for Democracy & Technology, “Breaking Encryption Myths: What the European Commission’s Leaked Report Got Wrong About Online Security,” Global Encryption Coalition, 2020
[16] Internet Society and Center for Democracy & Technology, “Breaking Encryption Myths: What the European Commission’s Leaked Report Got Wrong About Online Security,” Global Encryption Coalition, 2020
[17] Lauren Leffer, “Here’s How to Actually Keep Kids and Teens Safe Online,” Scientific American, September 18, 2023
[18] Internet Society and National Network to End Domestic Violence, “Understanding Encryption: The Connections to Survivor Safety,” 2021
[19] Dag Tannenberg, “Political Repression,” in The Handbook of Political, Social, and Economic Transformation, ed. Wolfgang Merkel, Raj Kollmorgen, and Hans-Jürgen Wagener (Oxford, 2019)
[20] Loredana Crisan, “Launching Default End-to-End Encryption on Messenger,” Meta, December 6, 2023
The views expressed above belong to the author(s).