"Early tests show that it can tolerate image resizing and compression, but not cropping or rotations"
What you need to know
- Security researchers have found the source code for Apple's CSAM detection.
- Initial reports suggest that there may be flaws in the technology.
Reports indicate that Apple's CSAM technology may be flawed, after code for the system was allegedly found in iOS 14.
The Verge reports:
Researchers have found a flaw in iOS's built-in hash function, raising new concerns about the integrity of Apple's CSAM-scanning system. The flaw affects the hashing system, called NeuralHash, which allows Apple to check for exact matches of known child abuse imagery without possessing any of the images or gleaning any information about non-matching pictures.
A Reddit user posted reverse-engineered code allegedly for the new CSAM system, stating: "Believe it or not, this algorithm already exists as early as iOS 14.3, hidden under obfuscated class names. After some digging and reverse engineering on the hidden APIs, I managed to export its model (which is MobileNetV3) to ONNX and rebuild the whole NeuralHash algorithm in Python. You can now try NeuralHash even on Linux!"
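For context, the workflow Ygvar describes, exporting the embedded model to ONNX and recomputing the hash in Python, would look roughly like the sketch below. The file names, input size, normalization, and the final sign-projection step are assumptions drawn from the Reddit post's description, not code published by Apple.

```python
# Hedged sketch: recompute a NeuralHash-style hash from an exported ONNX model.
# Assumptions (not from Apple): the model takes a 360x360 RGB image normalized
# to [-1, 1], outputs a 128-dim embedding, and the final hash is the sign of
# that embedding projected through a 96x128 seed matrix.
import numpy as np
import onnxruntime as ort
from PIL import Image

def neuralhash(image_path, model_path="neuralhash_model.onnx",
               seed_path="neuralhash_seed.dat"):
    # Load and preprocess the image (assumed input size and normalization).
    img = Image.open(image_path).convert("RGB").resize((360, 360))
    arr = np.asarray(img).astype(np.float32) / 255.0 * 2.0 - 1.0
    arr = arr.transpose(2, 0, 1)[np.newaxis, ...]          # NCHW layout

    # Run the exported MobileNetV3-based model to get an embedding.
    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    embedding = session.run(None, {input_name: arr})[0].reshape(-1)

    # Project the embedding through the seed matrix and binarize (assumed 96x128).
    seed = np.frombuffer(open(seed_path, "rb").read(), dtype=np.float32)
    seed = seed.reshape(96, -1)
    bits = (seed @ embedding >= 0).astype(int)

    # Pack the 96 bits into a hex string for comparison.
    return "".join(f"{int(''.join(map(str, bits[i:i+4])), 2):x}"
                   for i in range(0, len(bits), 4))

# Example (requires the exported model and seed file):
# print(neuralhash("photo.jpg"))
```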
According to Asuhariet Ygvar, testing indicates the CSAM technology "can tolerate image resizing and compression, but not cropping or rotations". This is at odds with the technical assessments provided by Apple, which state:
Apple has produced a technology that can compute fingerprints from pictures. These fingerprints are very small compared to pictures. When two fingerprints match, it is very likely that the pictures match. Simple operations like resizing, cropping, or compressing a picture will not change its fingerprint.
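The kind of robustness testing Ygvar describes boils down to hashing transformed copies of the same image and counting how many bits differ. Below is a minimal sketch of that idea, reusing the hypothetical `neuralhash` helper from the earlier snippet; the specific transformations and file names are illustrative choices, not part of any published test suite.

```python
# Hedged sketch: measure how a perceptual hash reacts to common transformations
# by comparing bit-level (Hamming) distance against the original image's hash.
from PIL import Image

def hamming(hex_a, hex_b):
    # Count differing bits between two equal-length hex hash strings.
    return bin(int(hex_a, 16) ^ int(hex_b, 16)).count("1")

original = Image.open("photo.jpg").convert("RGB")

# Transformed variants: the reported result is that resizing and recompression
# barely move the hash, while cropping or rotation change it substantially.
variants = {
    "resized": original.resize((original.width // 2, original.height // 2)),
    "rotated": original.rotate(15, expand=True),
    "cropped": original.crop((50, 50, original.width - 50, original.height - 50)),
}

base_hash = neuralhash("photo.jpg")   # hypothetical helper from the sketch above
for name, img in variants.items():
    img.save(f"variant_{name}.png")
    print(name, hamming(base_hash, neuralhash(f"variant_{name}.png")))
```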
Another concern raised about the tech is collisions, where two different images generate the same hash. This could, in theory, be used to fool the system into detecting images that don't actually contain CSAM. However, as The Verge explains, this would require "extraordinary efforts to exploit" and wouldn't get past Apple's manual review process:
Generally, collision attacks allow researchers to find identical inputs that produce the same hash. In Apple's system, this would mean generating an image that sets off the CSAM alerts even though it is not a CSAM image since it produces the same hash as an image in the database. But actually generating that alert would require access to the NCMEC hash database, generating more than 30 colliding images, and then smuggling all of them onto the target's phone. Even then, it would only generate an alert to Apple and NCMEC, which would easily identify the images as false positives.
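To illustrate what a collision means in general terms, the toy example below runs a birthday-style search for two different random inputs whose truncated cryptographic hashes match. This is purely a demonstration of the concept on a deliberately tiny hash space; it is not an attack on NeuralHash or Apple's system, whose 96-bit output and server-side safeguards are a very different proposition.

```python
# Hedged toy example: a birthday-style search for two different inputs whose
# truncated hashes collide. Illustrates the general idea of a hash collision,
# not an attack on Apple's system.
import hashlib
import os

def short_hash(data: bytes, n_bytes: int = 2) -> bytes:
    # Truncate SHA-256 to n_bytes to stand in for a small hash space
    # (a 96-bit perceptual hash is far larger, but the principle is the same).
    return hashlib.sha256(data).digest()[:n_bytes]

seen = {}
while True:
    candidate = os.urandom(8)            # random "input"
    h = short_hash(candidate)
    if h in seen and seen[h] != candidate:
        print("collision on", h.hex())
        print("input A:", seen[h].hex())
        print("input B:", candidate.hex())
        break
    seen[h] = candidate
```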
Ygvar said they hoped that the source code would help researchers "understand NeuralHash algorithm better and know its potential issues before it's enabled on all iOS devices." iMore has reached out to Apple for comment. Given that the conclusions above are drawn from source code allegedly found within an earlier version of iOS, and that Apple hasn't officially rolled out any of its Child Safety technology yet, these findings should be taken with a pinch of salt.