Prepare for another encryption battle: After parliament approved its Online Safety Bill yesterday, the U.K. government is pressuring Meta not to roll out end-to-end encryption (E2EE) on Facebook Messenger and Instagram unless it applies unspecified “safety measures” that the Home Secretary said should allow law enforcement to continue to detect child sexual abuse material (CSAM) while protecting user privacy.
Suella Braverman told BBC Radio 4’s Today programme this morning that Facebook Messenger and Instagram account for the majority of online child sexual abuse detected by U.K. law enforcement. She went on to criticize Meta’s proposal to roll out E2EE “without safety measures” on the two services, saying it would “disable and prohibit law enforcement agencies from accessing this criminal activity [i.e. CSAM]”.
The social media giant has said it would encrypt all of its apps by 2023, and testing has ramped up this year. But friction from policymakers has slowed the “pivot to privacy” founder Mark Zuckerberg announced back in 2019, when he said the company would apply E2EE universally across its services. It was only in August that Meta announced Messenger would finally default to E2EE by the end of this year. The Online Safety Bill hands the U.K. government new legal powers to attack that plan.
Security experts have warned for years that surveillance powers in the legislation threaten E2EE. Policymakers ignored those warnings and served up a last-minute fudge, meaning more crypto warring lies ahead for Meta and for U.K. web users.
Behind closed doors, ministers have not been asserting much faith in the privacy-safe E2EE safety measures Braverman claims exist, and ministerial remarks earlier this month were widely interpreted as the government pulling back from a clash with major tech firms over encryption (a number of which have warned they would pull services from the U.K. rather than torch user security). So the threatening noises from the Home Secretary today sound more like political theatre than a genuine change of course.
But with the security and privacy of millions of web users once again kicked around as a political football, there’s no joy in another act of this familiar, seemingly endless power play.
“End-to-end encryption with safety”
Braverman told the BBC that if Meta goes ahead with its E2EE rollout without the additional measures she wants, Ofcom can fine the company up to 10% of its global annual turnover for violating the Online Safety Bill. She reiterated that the government wants to “work constructively” with the company to implement “end-to-end encryption with safety measures”.
“My job is to protect children, not paedophiles, and I want to help Meta implement the technology to do that, protecting children as well as their commercial interests,” she said. “We know the technology exists. We have also just passed the Online Safety Bill, which gives us new and extensive powers to direct social media companies, via Ofcom, to remove indecent content, roll out technology, and protect children.”
Braverman said Meta could face Online Safety Bill sanctions “ultimately, and potentially, and if necessary, and proportionate” if it does not comply with the government, while stating her “clear preference” is to work “constructively with them”.
“Our first assumption is that the technology exists. The Internet Watch Foundation concurs. The NSPCC agrees,” she said of the undefined “safety measures” she wants Meta to take. “Technologists and tech companies have developed the technology, so Meta must do the right thing and work with us to prevent paedophiles from using their social media platforms, and to introduce this technology in a way that protects children and user privacy.”
The Home Secretary did not specify which “safety measures” the government has in mind, but new Home Office guidance on E2EE suggests Meta should keep applying the hash-matching technologies it has used for years to detect known CSAM on its non-E2EE services.
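For context, hash matching of this kind is a server-side lookup: the service computes a digest of each uploaded file and checks it against a database of hashes of known abuse imagery compiled by bodies such as NCMEC or the IWF. Here is a minimal sketch in Python; it substitutes a plain cryptographic hash for simplicity, whereas production systems such as PhotoDNA or Meta’s PDQ use perceptual hashes so that resized or re-encoded copies still match.

```python
import hashlib

# Illustrative stand-in for a database of hashes of known abuse imagery,
# which real services source from bodies such as NCMEC or the IWF.
KNOWN_MATERIAL_HASHES: set[str] = set()

def matches_known_material(file_bytes: bytes) -> bool:
    """Server-side check against the hash database.

    This works only because a non-E2EE service can read the content it
    relays. A cryptographic hash (used here for simplicity) catches only
    byte-identical copies; perceptual hashes tolerate re-encoding.
    """
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_MATERIAL_HASHES
```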
Applying content-scanning technologies to strongly encrypted content, where only the sender and recipient hold the keys, is another matter entirely. Scores of security and privacy experts worry that the government’s push for “safety tech” will lead, via the Online Safety Bill, to Ofcom mandating that E2EE platforms bake client-side scanning technology into their systems, endangering the security of millions of web users.
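For illustration, here is where client-side scanning would sit in a messaging pipeline, in a minimal sketch where every name is hypothetical. The check runs on the sender’s device before encryption, so the cryptographic protocol is formally untouched while the content is nonetheless inspected, and potentially reported, while it is still readable; this is why critics describe the approach as bypassing rather than breaking E2EE.

```python
from typing import Callable

def send_with_client_side_scan(
    plaintext: bytes,
    matches_known_material: Callable[[bytes], bool],  # e.g. the hash check above, moved on-device
    encrypt: Callable[[bytes], bytes],                # the unmodified E2EE layer
    report: Callable[[bytes], None],                  # hypothetical reporting hook
) -> bytes:
    # The scan happens before encryption, on the sender's device, so the
    # message is inspected while still readable even though the E2EE
    # mathematics is left intact.
    if matches_known_material(plaintext):
        report(plaintext)
    # Only ciphertext ever reaches the server.
    return encrypt(plaintext)
```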
The Home Office document does not explain how to square this circle. It does, however, point to a 2021 “Safety Tech” challenge in which it invested public money in “proof of concept” CSAM-detection technologies intended to work within E2EE “whilst upholding user privacy”, claiming: “The fund demonstrated that it would be technically feasible.”
A spokesperson for the Department for Science, Innovation and Technology (DSIT), which is shepherding the Online Safety Bill, said:
Our Safety Tech Challenge fund has shown this technology can exist, which is why we’re calling on social media companies to use their vast capacity for innovation to build on these concepts and find solutions that work for their platforms — so children are kept safe while maintaining user privacy.
Yesterday our landmark Online Safety Bill was passed through Parliament, meaning as a last resort, on a case by case basis and only when stringent privacy safeguards have been met, Ofcom can direct companies to either use, or make best efforts to develop or source, technology to identify and remove illegal child sexual abuse content.
The Home Office was asked which safety measures Braverman is referring to, and whether Meta is being ordered to apply client-side scanning to Messenger and Instagram. A spokeswoman did not answer directly, instead pointing to the same Safety Tech challenge and repeating the claim that the fund showed privacy-safe scanning was “technically feasible”.
Braverman and the government face flat contradiction from security and privacy experts.
In July, Awais Rashid, professor of cyber security at the University of Bristol and director of the Rephrain Centre, which independently evaluated the Home Office’s Safety Tech Challenge projects, warned that none of the technology is fit for purpose: “The issue is that the technology being discussed is not fit as a solution. Our evaluation shows that the proposed solutions will compromise privacy and have no built-in safeguards to prevent repurposing such technologies to monitor personal communications.”
When asked about Braverman’s latest comments, including her claim that technology exists to scan messages for illegal content without compromising user privacy, Rashid reiterated that this is impossible.
“Our independent evaluation of the prototype tools in the Safety Tech Challenge Fund, which include client-side scanning mechanisms, concluded that such tools would lead to fundamental breaches of users’ privacy and human rights,” he told TechCrunch. “As researchers, we care about protecting privacy and vulnerable users online, including children from sex offenders. However, scanning messages before encryption would compromise privacy for everyone, including the young people the proposed approach aims to protect.”
He warned that unscrupulous actors could repurpose such technologies to monitor communications, adding: “We mustn’t build any mechanisms that allow unfettered access to personal communications on a societal scale. The Met police and NI [Northern Ireland] police data breaches show that even with good security mechanisms, large-scale data leaks are possible. We must follow the independent scientific evidence from the Rephrain centre and expert consensus nationally and internationally, or the UK will not be the safest place to live and do business, as stated in the National Cyber Strategy.”
At press time, the Home Office had not responded to questions about Rashid’s comments or Rephrain’s assessment of its Safety Tech projects.
Plenty of other privacy and security experts agree the government’s approach is flawed: some 70 academics signed an open letter in July warning that the bill’s surveillance powers would undermine online safety.
Eerke Boiten, a cyber security professor and head of De Montfort University’s School of Computer Science and Informatics, has called the Home Office challenge “intellectually dishonest” and government-funded snake oil.
“End-to-end encryption means only the sender and receiver can read the encrypted data: nothing else can be learned from it, not whether the last bit is a 0, not whether the message is CSAM,” he wrote in March (see the sketch after this quote for a concrete illustration). He noted that the final Rephrain report finds there is “no published research on computational tools that can prevent CSAM in E2EE”, and argued that a more honest formulation of the challenge would have been to look for technologies that can protect users from specific types of abuse in services where providers are not fully monitoring all service users.
That framing, he argued, would also have eliminated another intellectual dishonesty in the challenge: the suggestion that any techniques developed would apply specifically and only to CSAM, rather than being (ab/re)usable for identifying and restricting other, potentially less obviously undesirable, content, a risk the Rephrain evaluations repeatedly flag. It would also have ruled out several of the projects before £85K apiece was spent on what amount to full surveillance solutions.
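To make the property Boiten describes concrete, here is a minimal sketch using the PyNaCl library (our choice for illustration, not a description of Meta’s actual protocol): only the holders of the right private keys can read the message, and a server relaying the ciphertext has nothing it can scan.

```python
# A minimal sketch of the E2EE property described above (pip install pynacl).
from nacl.public import PrivateKey, Box

# Each party generates a keypair; private keys never leave their device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob with her private key and his public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"hello Bob")

# The service in the middle relays only ciphertext: there is nothing to
# scan, not the content, not "whether the last bit is a 0".
# Only Bob, holding his private key, can decrypt.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"hello Bob"
```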
Asked whether technology exists that could give law enforcement access to E2EE content while protecting user privacy, Boiten replied: “In my opinion, such technology does not exist.” He pointed out that the scientific evaluation of former Home Secretary Priti Patel’s Safety Tech Challenge found all submissions had significant issues around privacy, abuse prevention, transparency, disputability, and accountability.
If client-side scanning is what the government wants Meta to implement, it does not bode well that the Safety Tech Challenge, which Boiten notes produced five candidate “instances” of the tech (“one more place where nobody had any better ideas apparently”), was rated so poorly by independent experts.
The expert consensus is that baked-in technology that blanket-scans messages protects neither users nor their privacy. (“After years of leaving it on the shelf, Apple have also just abandoned the idea because they realise they cannot get it to work,” Boiten noted.)
Hi GCHQ!
The Home Office guidance on E2EE and child safety also cites an academic paper by two of “the UK’s leading cryptographers”: Crispin Robinson of U.K. intelligence agency GCHQ, and Dr. Ian Levy, the former technical director of the National Cyber Security Centre (another arm of GCHQ) who now works for Amazon. A government spokeswoman said the paper describes “a variety of techniques that could be used as part of any potential solution to end-to-end encryption — so both protecting privacy and security whilst enabling law enforcement action”.
Braverman’s comments today go further, claiming technology already exists that allows law enforcement access while protecting user privacy. The paper itself only suggests that client-side scanning may mitigate some of the privacy concerns, so the Home Office’s promotion of ready-to-use, privacy-safe CSAM-scanning tech represents a significant escalation of what its own cited experts argue.
“We have not identified any techniques that are likely to provide as accurate detection of child sexual abuse material as scanning of content, and whilst the privacy considerations that this type of technology raises must not be disregarded, we have presented arguments that suggest that it should be possible to deploy in configurations that mitigate many of the more serious privacy concerns,” they argue, tortuously, in the paper.
(Levy and Robinson also state that their work is “not a high level design document”; “not a full security analysis of any particular solution”; and “not a set of requirements that the UK Government wishes to be imposed on commodity services”. “This paper is not an exposition of UK Government policy, nor are any implications that can be read in this document intended to relate to UK Government future policy,” they add.)
Asked about Braverman’s push to block end-to-end encryption “without safety measures”, Ross Anderson, professor of security engineering at Cambridge’s Department of Computer Science and Technology and a veteran of decades of crypto wars, was scathing.
“A few days ago, the government reassured us that there was no such technology, so we should relax because they could not enforce the new law until there was. That line helped get the [Online Safety] bill passed in Parliament. It appears GCHQ made a major scientific breakthrough this week! We look forward to seeing the details,” he wrote via email, dismissing Levy and Robinson’s paper as something he had already rebutted.
In a recent blog post he argued that such surveillance has not been effective in the past and that the proposed measures are unlikely to be effective in the future: “My paper concludes that, on the evidence, ‘chatcontrol’ will harm child protection. It will also undermine human rights, at a time when we need to fight authoritarians morally as well as technologically and militarily. If not to defend democracy, the rule of law, and human rights, why fight?”
When asked which “safety” technologies it recommends for E2EE platforms, the NSPCC did not name any. The child protection charity’s spokeswoman pointed to the GCHQ paper, saying “GCHQ and others have made clear that technical solutions are possible”, without specifying which technologies.
She also mentioned SafeToNet, a U.K. safety tech startup that sells parental controls and child-location tracking for third-party apps, which has “developed technology that can identify known and new child abuse material before being sent”.
SafeToNet describes SafeToWatch as a “predictive analysis” technology that can detect CSAM in real time on a user’s device. Embedded into E2EE messaging platforms, that would amount to a client-side scanning implementation bypassing strong encryption. Responding in a blog post earlier this year to Wired’s reporting on WhatsApp’s Online Safety Bill concerns, SafeToNet asked why scanning for CSAM should be said to break encryption when WhatsApp already scans files for viruses and links for suspicious content.
“Ultimately if companies are not happy with the technology that is being developed it is for them to invest in finding solutions which they may have to do in the future under the provisions of the Online Safety Bill,” the NSPCC spokeswoman said. “But it is not just about scanning. It involves understanding and mitigating platform risks and how end-to-end encryption could increase them.”
The Home Office’s E2EE guidance document also urges Meta and other social media companies to geek out and develop innovative tech solutions to child safety issues.
“Currently, Facebook and Instagram account for over 85% of global tech company referrals of child sexual abuse,” the Home Office writes. “E2EE will significantly reduce monthly referrals of suspected child sex offenders to UK law enforcement. Meta and other social media companies should work with us and use their vast engineering and technical resources to develop a solution that protects children online and fits their platform design.”
Meta’s updated report on “Safer Private Messaging on Messenger and Instagram Direct Messages” rejects scanning users’ E2EE messages as a proportionate (or even rational) approach to online safety concerns, suggesting it anticipated the Home Office’s latest attack.
“Meta believes that any form of client-side scanning that exposes message content without the consent and control of the sender or intended recipients is fundamentally incompatible with user expectations of an E2EE messaging service,” the report states, adding: “People who use E2EE messaging services trust that only the sender and intended recipients can read or infer a message’s contents.”
“We strongly believe E2EE is essential to security. Backdoors to E2EE, or message scanning without user consent and control, compromise user safety,” it says. “Safety, privacy, and security are interdependent, and we are committed to delivering on all of them as Messenger and Instagram DMs move to E2EE.”
“To keep people safe, we work with law enforcement, online safety, digital security, and human rights experts to provide the safest encrypted messaging service in the industry. Based on our work so far, we believe we can deliver that and surpass other encrypted messaging services.”
Meta’s spokesperson also responded to Braverman’s comments today:
The overwhelming majority of Brits already rely on apps that use encryption to keep them safe from hackers, fraudsters and criminals. We don’t think people want us reading their private messages so have spent the last five years developing robust safety measures to prevent, detect and combat abuse while maintaining online security. We’re today publishing an update report setting out these measures, such as restricting people over 19 from messaging teens who don’t follow them and using technology to identify and take action against malicious behaviour. As we roll out end-to-end encryption, we expect to continue providing more reports to law enforcement than our peers due to our industry leading work on keeping people safe.
Meta seems to be holding off on E2EE client-side scanning for now.
However, legal liability under the new U.K. law is mounting, with politicians brandishing the threat of billion-dollar fines via Ofcom.
Never-ending crypto wars?
A few scenarios seem possible at this point. One: tech companies like Meta are forced, under threat of huge financial penalties, to implement client-side scanning. That’s hard to imagine given the strength of their statements and public opposition; besides Meta, Signal, Apple, and Element have all opposed E2EE surveillance.
Given wider reputational considerations (the U.K. is just one of the markets they operate in), and their relatively strong leverage in light of the political and economic damage the country would suffer if a mainstream service like WhatsApp shut down for local users, tech firms may find it easier to threaten to pull services from the U.K. than to comply.
Or the U.K. government’s harsh demands for undefined “safety measures” on E2EE platforms may soften into another fudge: say, a set of checks and feature tweaks that don’t require client-side scanning and that platforms can accept. No blanket surveillance of users, but a package of measures platforms can layer on to claim compliance (and even brand as “safety tech”), such as age verification; limits on how certain features work when the user is a minor; beefed-up reporting tools and resourcing; and proactive steps to educate users on how to stay safe. The government’s original demands were vague enough for politicians to claim they tamed the tech giants.
Age verification could be its own flashpoint, though: Wikipedia has warned the Online Safety Bill risks becoming a tool for state censorship if Ofcom forces certain information behind age-verification gates.
There are larger forces at play, so the crypto wars will continue in some form.
Speaking to TechCrunch by phone, Anderson argued the government is using child safety as a populist pretext for embedding surveillance infrastructure into strongly encrypted platforms, enabling blanket access for intelligence agencies like GCHQ.
“Everyone uses WhatsApp now, for many purposes relevant to a signals intelligence agency,” he said. “None of these guys care about kids except as an excuse. My paper, Chat Control or Child Protection?, described what you would do if you actually cared about kids. None of it involves collecting more dirty pictures. None.
“If a weird drunken guy rapes his 13-year-old stepdaughter in Newcastle, that’s for the police, the school, the church, and the scouts to deal with. It’s nothing to do with GCHQ, and of no concern to its director, who will not be fired if that child is abused.”
Anderson recalls that child pornography was used to justify backdooring encryption in the 1990s, and that after 9/11 spy agencies switched to terrorism as the justification for encryption backdoors. He sees child safety as the latest edition of the same “playbook”.
He also notes that the 2016 Investigatory Powers Act (IPA) already allows the U.K. government to order E2EE platforms to remove encryption in response to national security threats. But targeted (and time-limited) access under emergency procedures and protocols is a very different thing from baking blanket surveillance infrastructure into E2EE systems, creating security vulnerabilities spooks could exploit to intercept international communications more easily.
In theory, a technical capability notice under the IPA could force Facebook to permanently disable WhatsApp’s E2EE and stay silent about it. But Anderson, noting that “tech hostage taking by authoritarian governments is already an issue”, predicted Facebook would sooner shut WhatsApp down in the U.K. and pull out its engineering staff.
He recounted how, during the 2013 Woolwich murder of a soldier by Islamist terrorists, Facebook provided GCHQ with “a live feed while the crime was in progress”, with the U.K. intelligence agency subsequently filing a mutual legal assistance treaty (MLAT) request with Facebook in Ireland.
“Spooks don’t like saying please or dealing with people they can’t control, like foreigners. They’re bad at filling out American government forms,” he said. “You can’t get stuff across borders on an emergency procedure when the emergency is over.”
MLAT requests, in other words, take time, and timely information exchange via that legal route demands more competence than the government and law enforcement have typically shown. “Things go wrong in this space because the Home Office and police tried to do things they’re useless at,” he said.
So now what? Ofcom’s consultations on Online Safety Bill standards will host the next round of this crypto battle. Anderson expects fresh academic conferences and activity in response to outrages as the legislative coloring-in proceeds. He predicted “history repeating itself as farce.”
Anderson also has his eye on the European Union, where lawmakers are pushing a similar proposal to drive platforms toward CSAM scanning. He believes European courts would likely rule that client-side scanning on messaging apps is disproportionate, given the bloc’s stronger protections for privacy, the confidentiality of communications, and personal data. Of course, it would be better not to pass unworkable and unlawful laws in the first place. So the fight continues.
“The real game is in Europe,” he said. “We think we have a blocking minority in the European Council.”
This report was updated with comment from DSIT; to reflect that Dr. Ian Levy, the former NCSC technical director, now works for Amazon; and to add Anderson’s context on MLATs and emergency access under the Investigatory Powers Act.