Like many other tech companies, Apple has systems in place to try to cut down on the amount of illegal traffic passing through its networks. Part of this is its monitoring of messages sent and received by its customers, which includes processes for the detection and reporting of images depicting child abuse.
Revealed in a search warrant filed in Seattle and found by Forbes, it is now known that Apple uses automated systems to check messages, specifically by comparing the hashes of attached files against a database of hashes known to belong to previously reported abuse images. Emails containing matching files are flagged for inspection.
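The hash-matching described above can be sketched in a few lines. This is a simplified illustration, not Apple's actual implementation: the known-hash values here are invented placeholders, and real systems typically use perceptual hashes (such as Microsoft's PhotoDNA) that tolerate resizing and re-encoding, rather than the exact cryptographic digest used here.

```python
import hashlib

# Hypothetical database of hashes of previously reported images.
# (Values are placeholders; real databases hold hashes supplied by
# organizations such as the National Center for Missing and Exploited Children.)
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"previously-reported-image-bytes").hexdigest(),
}

def file_hash(data: bytes) -> str:
    """Compute a digest of the raw file contents."""
    return hashlib.sha256(data).hexdigest()

def should_flag(attachment: bytes) -> bool:
    """Flag an attachment if its hash matches a known-bad hash.

    The comparison looks only at digests, never at the image content,
    which is why flagged matches still need human review.
    """
    return file_hash(attachment) in KNOWN_BAD_HASHES
```

An exact digest like SHA-256 only matches byte-identical files, which is one reason production systems prefer perceptual hashing; the principle of comparing against a precomputed database is the same.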
For flagged emails, a staff member inspects the content of the message and its attachments for illegal material. If a message is deemed to contain such content, Apple then passes it along to the authorities.
Apple’s process is seemingly more thorough than those of other firms, which typically pass the message over to organizations such as the National Center for Missing and Exploited Children when the automated system flags the message, with little to no manual checking of the content itself.
The details emerged in the search warrant through comments made by an Apple employee, who explained how the system detected "several images of suspected child pornography" being uploaded by an iCloud user, prompting an examination of their emails.
Emails containing suspect images are not delivered to their intended recipient, the employee wrote. In the warrant's case, one individual sent eight emails to the same recipient: seven contained the same set of 12 images, and the eighth held another four.
As the seven emails were identical in content and attachments, the employee suspected that either the person "was sending these images to himself and when they didn't deliver he sent them again repeatedly," or the intended recipient had told them the messages weren't getting through.
As part of its disclosure to law enforcement, Apple also provided data on the iCloud user, including their name, address, and phone number. The government later made requests for the user's emails, texts, instant messages, and "all files and other records stored on iCloud."
While some critics may object to the privacy implications of Apple staff inspecting flagged messages, the flagging itself is automated and relies on hashes, so the content of the images is never examined at that stage. Human review then helps limit false positives in file verification, cases where a file produces the same hash value as one in the database but has completely different contents.
The findings also apply only to communications that are not encrypted, namely those that don't pass through the end-to-end encryption at the center of the ongoing encryption debate. Just as law enforcement cannot access encrypted content on Apple's products or services, Apple itself cannot look at that material either.