Developer Reverse-Engineers Apple’s CSAM Detection System, Finds Serious Flaws in It

By Sanuj Bhatia

Published 18 Aug 2021


Apple announced its new CSAM detection system earlier this month. Since the announcement, the feature has drawn heavy backlash, with not only security researchers but even Apple’s own employees criticizing it. Now an independent developer has reverse-engineered the system and found some serious flaws in it.

The developer, Asuhariet Ygvar, posted code for a reconstructed Python version of NeuralHash on GitHub. Surprisingly, Ygvar says he extracted the code from iOS 14.3, even though Apple has said CSAM detection will only ship in a future iOS version.

Ygvar also posted a detailed guide on GitHub explaining how to extract the NeuralHash model files from macOS or iOS. After revealing the reverse-engineered system on Reddit, he commented:

“Early tests show that it can tolerate image resizing and compression, but not cropping or rotations. Hope this will help us understand NeuralHash algorithm better and know its potential issues before it’s enabled on all iOS devices.”

Once the code was available on GitHub, developers around the world started probing it for flaws. But before we explain the flaw, you need to know how iOS detects CSAM content. iOS computes a perceptual hash (a NeuralHash) for each photo. That hash is then matched against the hashes of known images in the database of the National Center for Missing and Exploited Children (NCMEC). If the hashes match, a flag is raised.
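As a rough illustration of that matching step, here is a minimal Python sketch. The `neuralhash()` function is a placeholder standing in for the model Ygvar reconstructed, and the hash value and in-memory set are invented; the real system hides the NCMEC hash list behind private set intersection rather than shipping it to the device in plain form.

```python
# Minimal sketch of per-photo hash matching (illustrative only).
# neuralhash() is a placeholder for running the reconstructed model;
# the hash value below is invented.

KNOWN_CSAM_HASHES = {
    "59a34eabe31910abfb06f308",  # placeholder hex-encoded NeuralHash
}

def neuralhash(image_path: str) -> str:
    """Placeholder: run the extracted NeuralHash model on the image."""
    raise NotImplementedError

def is_flagged(image_path: str) -> bool:
    # A photo is flagged when its perceptual hash matches a known entry.
    return neuralhash(image_path) in KNOWN_CSAM_HASHES
```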

An alert is triggered only if more than 30 flags are raised, at which point a human reviewer examines what triggered them. Apple says it may take action if the review confirms a violation.
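The threshold itself is easy to picture as a simple counter, as in the hedged sketch below. In practice Apple uses threshold secret sharing so the match details only become readable once the threshold is crossed, but the effect is the same.

```python
# Illustrative counter for the "more than 30 flags" rule described above.
# `flags` would be the per-photo match results from the hashing step.

ALERT_THRESHOLD = 30

def needs_human_review(flags: list[bool]) -> bool:
    return sum(flags) > ALERT_THRESHOLD
```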

Developers have since found that Apple’s system can generate the same hash for two completely different photos, which would be a significant weakness in the hashing scheme underlying the new system. If someone somehow gained access to the NCMEC database of CSAM hashes, they could craft non-CSAM photos that produce matching hashes. Triggering a false positive would still take more than 30 such colliding images, so a real-world attack is unlikely, but any false matches would still have to be caught by human review.
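A collision here simply means two visually unrelated images producing the same NeuralHash. With the reconstructed model available, testing for one is a one-line comparison; as in the earlier sketches, `neuralhash()` is a placeholder and the image paths are hypothetical.

```python
# Hedged sketch of a collision test against the reconstructed model.

def neuralhash(image_path: str) -> str:
    """Placeholder: run the extracted NeuralHash model on the image."""
    raise NotImplementedError

def collides(image_a: str, image_b: str) -> bool:
    # Two different images that return identical hashes are a collision.
    return neuralhash(image_a) == neuralhash(image_b)
```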

Apple’s child-safety features have been a matter of debate ever since their announcement. Some say it is fine for the company to scan photos for child-abuse material, while others see it as a breach of their privacy. How do you feel about Apple searching the iCloud Photo Library for CSAM? Do you think it is a breach of your privacy? Drop a comment and let us know your thoughts!