
According to online researchers, flaws in Apple’s new child abuse detection tool could allow bad actors to target iOS users. Apple has pushed back on those claims, saying it has purposefully built safeguards against exactly that kind of exploitation.
It’s just the latest stumbling block for the company’s new features, which have been widely panned by privacy and civil liberties advocates since they were first announced two weeks ago. Many critics see the updates, which are designed to search iPhones and other iOS products for signs of child sexual abuse material (CSAM), as a step toward broader surveillance.
The most recent criticism centers on claims that Apple’s “NeuralHash” technology, which scans photos for matches against known abuse imagery, can be exploited and tricked into potentially targeting users. It all started when online researchers dug up and shared code for NeuralHash in order to better understand it. Asuhariet Ygvar, a GitHub user, claims to have reverse-engineered the scanning algorithm and published the resulting code on his page. In a Reddit post, Ygvar explained that the algorithm was already present in iOS 14.3 as obfuscated code, and that he had extracted and rebuilt it as a Python script to get a better understanding of how it worked.
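To make the idea of a perceptual hash concrete, here is a minimal Python sketch of a generic “average hash.” This is not Apple’s NeuralHash, which uses a neural network; it only illustrates the general concept of reducing an image to a compact signature. The function name, grid size, and the use of the Pillow and NumPy libraries are assumptions made for this example.

```python
# Illustrative only: a toy "average hash", NOT Apple's NeuralHash.
# It shows the general idea of a perceptual hash: shrink an image to a
# small grayscale grid and derive a compact bit signature from it, so
# that visually similar images tend to get the same signature.
from PIL import Image
import numpy as np

def toy_perceptual_hash(path: str, grid: int = 8) -> int:
    img = Image.open(path).convert("L").resize((grid, grid))
    pixels = np.asarray(img, dtype=np.float64)
    bits = (pixels > pixels.mean()).flatten()
    # Pack the bits into a single integer "hash".
    return int("".join("1" if b else "0" for b in bits), 2)
```

Because the signature depends on the image’s visual content rather than its exact bytes, resized or lightly edited copies of the same picture tend to hash identically, which is what makes this family of techniques useful for matching known images.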
More problematically, another researcher claimed that within a couple of hours they were able to use the posted code to produce what is known as a “hash collision”: two different images that the system identifies as the same, tricking it into misidentifying a photo.
Apple’s new system relies on “hashes,” unique digital signatures of specific, known photos of child abuse material. The National Center for Missing and Exploited Children’s database of CSAM hashes will be encoded into future iPhone operating systems so that phones can be scanned for such material. Any photo that a user attempts to upload to iCloud will be checked against this database before it lands in Apple’s cloud storage.
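Conceptually, that matching step amounts to checking a photo’s signature against a list of known-bad signatures. The sketch below shows only that conceptual shape; Apple’s actual pipeline matches on-device against an encrypted, blinded database using cryptographic techniques, and the hash values and names here are made-up placeholders.

```python
# Conceptual sketch only: the real system does not use a plain Python set,
# and the hash values below are placeholders, not real CSAM signatures.
KNOWN_BAD_HASHES = {0x1A2B3C4D5E6F7081, 0x0F1E2D3C4B5A6978}

def matches_known_database(photo_hash: int, known_hashes: set) -> bool:
    # A photo headed for iCloud would have its perceptual hash compared
    # against the database of known signatures before upload completes.
    return photo_hash in known_hashes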
However, “hash collisions” occur when two completely different images produce the same “hash,” or signature. In the context of Apple’s new tools, critics argue, this could produce a false positive, potentially implicating an innocent person for having child porn. Such a false positive could arise by accident, or a malicious actor could engineer one on purpose.
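To see what a collision means in miniature, the toy example below uses a deliberately weak hash so that two different inputs share a signature. The reported NeuralHash collisions were produced by perturbing image pixels rather than reordering bytes, but the outcome is the same: distinct inputs, identical hash, and therefore a false match.

```python
# Toy demonstration of a hash collision under a deliberately weak hash.
def weak_hash(data: bytes) -> int:
    return sum(data) % 256  # trivially collidable by design

first = b"innocent picture"
second = b"picture innocent"  # different bytes, same byte-sum

assert first != second
assert weak_hash(first) == weak_hash(second)  # a "collision"
```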
Apple, on the other hand, has claimed that it has implemented numerous fail-safes to prevent this situation from occurring in the first place.
For one thing, Apple says the CSAM hash database encoded into future iPhone operating systems is encrypted. That means there is very little chance of an attacker discovering and replicating the signatures it contains unless the attacker is already in possession of actual child porn, which is itself a federal crime.
Apple also says its system is specifically designed to detect collections of child pornography: it is triggered only after 30 matching hashes have been identified on an account. According to the company, that threshold makes a random false-positive trigger extremely unlikely.
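A rough back-of-the-envelope calculation shows why a threshold helps: if each photo false-matched independently with some tiny probability, the chance of 30 or more accidental matches piling up on one account collapses very quickly. The numbers below are invented for illustration; only the threshold of 30 comes from Apple’s public statements.

```python
# Back-of-the-envelope only: if one photo false-matches independently with
# probability p, the chance of at least k such matches among n photos is a
# binomial tail, which shrinks dramatically as k grows. n and p here are
# made up for illustration and are not Apple's published figures.
from math import comb

def prob_at_least_k(n: int, p: float, k: int) -> float:
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

print(prob_at_least_k(n=1_000, p=1e-6, k=30))  # effectively zero
```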
Finally, if other mechanisms fail, a human reviewer is tasked with examining any flagged CSAM cases before they are forwarded to NCMEC (which would then tip off police). In such a case, a false positive could be weeded out manually before law enforcement gets involved.
In short, Apple and its supporters argue that it is difficult to imagine a scenario in which a user is mistakenly flagged or “framed” for having CSAM.
Jonathan Mayer, an assistant professor of computer science and public affairs at Princeton University, told Gizmodo that worries about false positives may be overblown, but that there are legitimate concerns about Apple’s new system. Mayer would know: he helped design the system on which Apple’s CSAM-detection technology is based.
Matthew Green, a well-known cryptographer, shares those concerns. In an interview with Gizmodo, Green said that not only could a bad actor exploit this tool, but Apple’s decision to launch such an invasive technology so hastily is a major liability for consumers. The fact that Apple says it has built safety nets around the feature is not comforting at all, he added.