One of the most prominent justifications offered by advocates of removing end-to-end encryption from Instagram has been child safety. Law enforcement agencies including the FBI, Interpol, the UK’s National Crime Agency, and the Australian Federal Police have argued that encrypted DMs create environments where child exploitation can occur without detection. Now that Meta has confirmed the removal of encryption from Instagram’s direct messages — effective May 8, 2026 — it is worth examining how strong that argument really is.
The child safety case against encryption rests on the premise that removing it allows platforms to detect harmful content through automated scanning and reporting. Without end-to-end encryption, Instagram can apply server-side scanning, such as hash matching against databases of known abuse imagery, to identify and flag potentially illegal content shared via direct messages. This is a real capability, one that law enforcement and child safety organizations argue has saved children from abuse.
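To make the mechanism concrete, here is a minimal sketch of server-side hash matching, the kind of scan that is only possible when the platform can read message content. Production systems such as Microsoft’s PhotoDNA use perceptual hashes so that resized or re-encoded copies still match; the cryptographic hash and the hash set below are purely illustrative assumptions, not Meta’s actual pipeline.

```python
import hashlib

# Illustrative stand-in for a database of hashes of known prohibited
# images. Real systems use perceptual hashes (e.g. PhotoDNA), not
# SHA-256, so near-duplicates still match; this entry is the SHA-256
# of the bytes b"test", used here only for demonstration.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def scan_attachment(data: bytes) -> bool:
    """Return True if the attachment's hash matches a known entry."""
    digest = hashlib.sha256(data).hexdigest()
    return digest in KNOWN_HASHES

print(scan_attachment(b"test"))          # matches the entry above: True
print(scan_attachment(b"benign photo"))  # no match: False
```

The key point is that this check runs on the server against plaintext bytes: under end-to-end encryption the platform never sees those bytes, which is precisely why the two goals come into tension.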
However, critics note that this reasoning applies with equal force to every encrypted communication tool, not just Instagram. Removing encryption from Instagram does not prevent determined offenders from moving their communications to Signal, Telegram’s secret chats, or other end-to-end encrypted platforms. The benefit to child safety may therefore be more limited than its proponents suggest, while the cost to user privacy is borne by all of Instagram’s billions of users.
Digital rights advocates propose an alternative framework: targeted safety tools that can detect specific harm indicators without requiring wholesale removal of encryption. These tools, they argue, can be effective without compromising the privacy of all users — and represent a more proportionate response to the problem of online child exploitation. The Australian eSafety Commissioner’s office acknowledged this complexity, noting that encryption and safety are not inherently incompatible.
The child safety argument is real, legitimate, and emotionally powerful. But it should not be accepted uncritically as a sufficient justification for removing privacy protections from an entire platform. The question of whether Meta’s decision meaningfully improves child safety — or simply shifts the problem while opening new data access opportunities for the company — deserves serious and ongoing scrutiny.