<!DOCTYPE html>
<html lang=en>
<head>
<meta charset="utf-8">
<title>Revisiting First Impressions: Apple, Parameters and Fuzzy Threshold PSI | pseudorandom</title>
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@sarahjamielewis" />
<meta name="twitter:creator" content="@sarahjamielewis" />
<meta property="og:url" content="https://pseudorandom.resistant.tech/ftpsi-parameters.html" />
<meta property="og:description" content="Last week, Apple published more additional information regarding the parameterization of their new Fuzzy Thres" />
<meta property="og:title" content="Revisiting First Impressions: Apple, Parameters and Fuzzy Threshold PSI" />
<meta name="twitter:image" content="https://pseudorandom.resistant.tech/ftpsi-parameters.png">
<link rel="alternate" type="application/atom+xml" href="/feed.xml" />
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="styles.css">
<link rel="stylesheet" href="/katex/katex.min.css" integrity="sha384-RZU/ijkSsFbcmivfdRBQDtwuwVqK7GMOw6IMvKyeWL2K5UAlyp6WonmB8m7Jd0Hn" crossorigin="anonymous">
<!-- The loading of KaTeX is deferred to speed up page rendering -->
<script defer src="/katex//katex.min.js" integrity="sha384-pK1WpvzWVBQiP0/GjnvRxV4mOb0oxFuyRxJlk6vVw146n3egcN5C925NCP7a7BY8" crossorigin="anonymous"></script>
<!-- To automatically render math in text elements, include the auto-render extension: -->
<script defer src="/katex/auto-render.min.js" integrity="sha384-vZTG03m+2yp6N6BNi5iM4rW4oIwk5DfcNdFfxkk9ZWpDriOkXX8voJBFrAO7MpVl" crossorigin="anonymous"
onload="renderMathInElement(document.body);"></script>
</head>
<body>
<header>
<nav>
<strong>pseudorandom</strong>
<a href="./index.html">home</a>
<a href="mailto:sarah@openprivacy.ca">email</a>
<a href="cwtch:icyt7rvdsdci42h6si2ibtwucdmjrlcb2ezkecuagtquiiflbkxf2cqd">cwtch</a>
<a href="/feed.xml">atom</a>
</nav>
</header>
<article>
<h1 id="revisiting-first-impressions-apple-parameters-and-fuzzy-threshold-psi">Revisiting First Impressions: Apple, Parameters and Fuzzy Threshold PSI</h1>
<p>Last week, Apple published additional information regarding the parameterization of their new Fuzzy Threshold PSI system in the form of a Security Threat Model<em class="footnotelabel"></em>.</p>
<p class="sidenote">
<a href="https://www.apple.com/child-safety/pdf/Security_Threat_Model_Review_of_Apple_Child_Safety_Features.pdf">Security Threat Model Review of Apples Child Safety Features</a>
</p>
<p>Contained in the document are various answers to questions that the privacy community had been asking since the initial announcement. It also contained information which answered several of my own questions, and in turn invalidated a few of the assumptions I had made in a previous article<em class="footnotelabel"></em>.</p>
<p class="sidenote">
<a href="/obfuscated_apples.html">Obfuscated Apples</a>
</p>
<p>In particular, Apple have now stated the following:</p>
<ul>
<li>they claim the false acceptance rate of NeuralHash is 3 in 100M, but are assuming it is 1 in 1M. They have conducted tests against both a dataset of 100M photos and a dataset of 500K pornographic photos.</li>
<li>the threshold <span class="math inline"><em>t</em></span> they are choosing for the system is <strong>30</strong>, with a future option to lower it. They claim this is based on taking the assumed false positive rate of NeuralHash and applying it to an assumed dataset the size of the largest iCloud photo library, to obtain a probability of false reporting of 1 in a trillion (a rough check of this arithmetic is sketched below).</li>
</ul>
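<p>To make the arithmetic behind that threshold a little more concrete, here is a minimal sketch of the kind of calculation involved. This is my own reconstruction, not Apple's published model: it assumes each image is an independent trial with the stated worst-case false positive rate, and the library size used is a hypothetical placeholder, since the size of the largest iCloud photo library has not been disclosed.</p>
<pre><code># A rough sketch of the threshold arithmetic, not Apple's actual model.
# Assumes each image is an independent trial with per-image false positive
# rate p; the library size n below is a hypothetical placeholder.
from math import lgamma, log, log1p, exp

def prob_at_least(n: int, p: float, t: int, terms: int = 200) -> float:
    """Upper-tail probability P(X >= t) for X ~ Binomial(n, p).
    The tail terms decay very quickly past t, so summing a few
    hundred of them is sufficient for these parameter ranges."""
    def log_pmf(k: int) -> float:
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log1p(-p))
    return sum(exp(log_pmf(k)) for k in range(t, min(n, t + terms) + 1))

n = 5_000_000     # hypothetical library size; the real figure is not public
p = 1e-6          # Apple's assumed worst-case per-image false positive rate
t = 30            # Apple's announced initial threshold
print(prob_at_least(n, p, t))
</code></pre>
<p>Playing with the hypothetical <span class="math inline"><em>n</em></span> and <span class="math inline"><em>p</em></span> in a sketch like this makes it clear how heavily the headline one-in-a-trillion figure depends on assumptions about both the real-world false positive rate and the size of the largest libraries.</p>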
<p>One might ask: if the false acceptance rate of NeuralHash is so low, why take such precautions when estimating <span class="math inline"><em>t</em></span>?</p>
<p>I will give Apple the benefit of the doubt here under the assumption that they really are attempting to only catch prolific offenders.</p>
<p>Even so, I believe the most recent information from Apple leaves several questions unanswered, and raises several more.</p>
<h2 id="on-neuralhash">On NeuralHash</h2>
<p>To put it as straightforwardly as possible, 100.5M photos isn't that large a sample to compare a perceptual hashing algorithm against, and the performance is directly related to the size of the comparison database (which we don't know).</p>
<p>Back in 2017, WhatsApp estimated that they were seeing 4.5 billion photos uploaded to the platform per day<em class="footnotelabel"></em>. While we don't have figures for iCloud, we can imagine, given Apple's significant customer base, that it is of a similar order of magnitude.</p>
<p class="sidenote">
<a href=">https://blog.whatsapp.com/connecting-one-billion-users-every-day">Connecting One Billion Users Every Day - Whatsapp Blog</a>
</p>
<p>The types of the photos being compared also matter. We know nothing about the 100.5M photos that Apple tested against, other than that a small 500K sample was pornographic in nature. While NeuralHash seems to have been designed as a generic image comparison algorithm, that doesn't mean that it acts on all images uniformly.</p>
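<p>To make the earlier point about database size concrete, consider a simplified model (an assumption on my part, not something Apple has published): if each comparison between an innocent image and a database entry has some small independent collision probability <span class="math inline"><em>q</em></span>, then the chance that the image matches at least one of <span class="math inline"><em>m</em></span> entries grows roughly linearly in <span class="math inline"><em>m</em></span>.</p>
<pre><code># A simplified, assumed model of how per-image false acceptance could scale
# with the size of the comparison database. q is a hypothetical per-pair
# collision probability; neither q nor the database size m is public.
from math import expm1, log1p

def per_image_fpr(q: float, m: int) -> float:
    """P(image collides with at least one of m entries) = 1 - (1 - q)^m."""
    return -expm1(m * log1p(-q))

for m in (100_000, 1_000_000, 10_000_000):
    print(m, per_image_fpr(q=1e-12, m=m))
</code></pre>
<p>Under that model, a false acceptance rate measured against one database size tells us little about the rate against a database ten times larger, which is another reason the undisclosed database size matters.</p>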
<h2 id="on-the-thresholds">On the Thresholds</h2>
<blockquote>
<p>Since this initial threshold contains a drastic safety margin reflecting a worst-case assumption about real-world performance, we may change the threshold after continued empirical evaluation of NeuralHash false positive rates but the match threshold will never be lower than what is required to produce a one-in-one-trillion false positive rate for any given account - Security Threat Model Review of Apple's Child Safety Features</p>
</blockquote>
<p>Apple's initial value of <span class="math inline"><em>t</em>=30</span> was chosen to include a <strong>drastic safety margin</strong>; the threat model gives them the explicit ability to change it in the future, but they promise the floor is still 1 in a trillion for “any given account”.</p>
<p>We still know very little about how <span class="math inline"><em>s</em></span> will be chosen. We can assume it will be in the same magnitude as <span class="math inline"><em>t</em></span> and that as such the number of synthetics for each user will be relatively low compared to the total size of their image base.</p>
<p>Also, given that <span class="math inline"><em>t</em></span> is fixed across all accounts, we can be relatively sure that <span class="math inline"><em>s</em></span> will also be fixed across all accounts, with only the probability of choosing a synthetic match varying according to some unknown function.</p>
<p>Note that if the probability of synthetic matches is too high, then the detection algorithm<em class="footnotelabel"></em> fails with high probability, requiring more matches and an extended detection procedure, as the short simulation below illustrates.</p>
<p class="sidenote">
As an aside, if you are interested in playing with the Detectable Hash Function yourself <a href="https://git.openprivacy.ca/sarah/fuzzyhash">I wrote a toy version of it</a>
</p>
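<p>The following toy simulation illustrates the point above. It is my own simplified model, not Apple's specification: each of a user's non-matching images is independently tagged as a synthetic match with probability <span class="math inline"><em>q</em></span>, and the server's observed voucher count is the sum of real and synthetic matches. If <span class="math inline"><em>q</em></span> is set too high, the observed count crosses the threshold even for accounts with no real matches at all.</p>
<pre><code># A toy simulation of how synthetic matches inflate the observed match count.
# This is an assumed, simplified model of the behaviour described above, not
# Apple's published algorithm. q and n_images are hypothetical parameters.
import random

def observed_matches(n_images: int, real_matches: int, q: float) -> int:
    """Real matches plus synthetics drawn independently per non-matching image."""
    synthetics = sum(q > random.random() for _ in range(n_images - real_matches))
    return real_matches + synthetics

random.seed(0)
t = 30
n_images = 100_000
for q in (0.0001, 0.001, 0.01):
    trials = [observed_matches(n_images, real_matches=0, q=q) for _ in range(200)]
    over = sum(count >= t for count in trials)
    print(f"q={q}: {over}/200 zero-match accounts still appear to exceed t={t}")
</code></pre>
<p>Those runs correspond to the failure mode described above, where more matches and an extended detection procedure are required.</p>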
<h2 id="threat-model-expansions">Threat Model Expansions</h2>
<p>The new threat model includes new jurisdictional protections for the database that were not present in the original description - namely that the <strong>intersection</strong> of two ostensibly independent databases managed by different agencies in different national jurisdictions will be used instead of a single database<em class="footnotelabel"></em> <span class="sidenote">(such as the one run by NCMEC)</span>.</p>
<p>Additionally, Apple have now stated they will publish a “Knowledge Base” containing root hashes of the encrypted database, such that it can be confirmed that every device is comparing images against the same database. It is worth noting that this claim is only as good as security researchers' access to proprietary Apple code.</p>
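<p>Conceptually, these two mechanisms amount to something like the following sketch. The data formats and the use of a flat SHA-256 over sorted entries are my own illustrative assumptions; Apple has not published the actual database encoding or root-hash construction.</p>
<pre><code># A conceptual sketch only: Apple's actual database format, encryption and
# root-hash construction are not public. The flat SHA-256 over sorted entries
# is an illustrative stand-in.
import hashlib

def build_database(provider_a: set, provider_b: set) -> list:
    """Only entries vouched for by both independent providers are included."""
    return sorted(provider_a.intersection(provider_b))

def root_hash(database: list) -> str:
    """A digest over the canonicalised database, standing in for the published
    'root hash' a device owner or auditor could compare against."""
    digest = hashlib.sha256()
    for entry in database:
        digest.update(entry)
    return digest.hexdigest()

db = build_database({b"hash1", b"hash2", b"hash3"}, {b"hash2", b"hash3", b"hash4"})
print(root_hash(db))  # compared against the value published in the Knowledge Base
</code></pre>
<p>Note that a root hash only shows that every device is working from the same database; it says nothing about what that database contains, which is why both the jurisdictional intersection and independent review of the client code still matter.</p>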
<p>That such significant changes were made to the threat model a week after the initial publication is perhaps the best testament to the idea, as Matthew Green put it:</p>
<blockquote>
<p>“But this illustrates something important: in building this system, the <em>only limiting principle</em> is how much heat Apple can tolerate before it changes its policies.” - <a href="https://twitter.com/matthew_d_green/status/1426312939015901185">Matthew Green</a></p>
</blockquote>
<h2 id="revisiting-first-impressions">Revisiting First Impressions</h2>
<p>I think the most important question I can ask of myself right now is this: if Apple had put out all these documents on day one, would they have been enough to quell the voice inside my head?</p>
<p>Assuming that Apple had also validated the false acceptance rate of NeuralHash in a more verifiable way than “we tested it on some images, it's all good, trust us!”, then I think many of my technical objections to this system would have been answered.</p>
<p>Not all of them, though. I still, for example, think that the obfuscation in this system is fundamentally flawed from a practical perspective. And I still think that the threat model, as applied to malicious clients, undermines the rest of the system<em class="footnotelabel"></em>.</p>
<p class="sidenote">
See: <a href="/a_closer_look_at_fuzzy_threshold_psi.html">A Closer Look at Fuzzy Threshold PSI</a> for more details.
</p>
<h2 id="its-about-the-principles">Its About the Principles</h2>
<p>And, of course, none of that quells my moral objections to such a system.</p>
<p>You can wrap that surveillance in any number of layers of cryptography to try and make it palatable, but the end result is the same.</p>
<p>Everyone on Apple's platform is treated as a potential criminal, subject to continual algorithmic surveillance without warrant or cause.</p>
<p>If Apple are successful in introducing this, how long do you think it will be before the same is expected of other providers? Before walled gardens prohibit apps that don't do it? Before it is enshrined in law?<em class="footnotelabel"></em> <span class="sidenote"><a href="https://twitter.com/SarahJamieLewis/status/1423403656733290496">Tweet</a></span></p>
<p>How long do you think it will be before the database is expanded to include “terrorist” content? “harmful-but-legal” content? State-specific censorship?</p>
<p>This is not a slippery slope argument. For decades, we have seen governments and corporations push for ever more surveillance. It is obvious how this system will be abused. It is obvious that Apple will not be in control of how it will be abused for very long.</p>
<p>Accepting client-side scanning onto personal devices <strong>is</strong> a Rubicon moment; it signals a sea-change in how corporations relate to their customers. Your personal device is no longer “yours”, in theory or in practice. It can, and will, be used against you.</p>
<p>It is also abundantly clear that this is going to happen. While Apple has come under pressure, it has responded by painting critics as “confused” (which, if there is any truth in that claim, is due to their own lack of technical specifications).</p>
<p>The media have likewise mostly followed Apple's PR lead. While I am thankful that we have answers to some questions that were asked, and that we seem to have caused Apple to “clarify”<em class="footnotelabel"></em> <span class="sidenote">(or, less subtly, change)</span> their own threat model, we have not seen the outpouring of objection that would have been necessary to shut this down before it spread further.</p>
<p>The future of privacy on consumer devices is now forever changed. The impact might not be felt today or tomorrow, but in the coming months please watch for the politicians (and sadly, the cryptographers) who argue that what can be done for CSAM can be done for the next harm, and the next harm. Watch the EU and the UK, among others, declare such scanning mandatory, and watch as your devices cease to work for you.</p>
</article>
<hr/>
<h2>
Recent Articles
</h2>
<p><em>2021-08-16</em> <a href="ftpsi-parameters.html">Revisiting First Impressions: Apple, Parameters and Fuzzy Threshold PSI</a><br><em>2021-08-12</em> <a href="a_closer_look_at_fuzzy_threshold_psi.html">A Closer Look at Fuzzy Threshold PSI (ftPSI-AD)</a><br><em>2021-08-10</em> <a href="obfuscated_apples.html">Obfuscated Apples</a><br></p>
<footer>
Sarah Jamie Lewis
</footer>
</body>
</html>