31-08-2021 | By Robin Mitchell
Recently, Apple announced that it would soon introduce algorithms to automatically find child abuse content uploaded to its iCloud system. How will the new system operate, why could its introduction violate privacy, and what implications will arise from such systems?
Recently, Apple announced that the next generation of iOS would integrate a new cryptographic identification system that scans user photos before they are uploaded to iCloud. During the scanning phase, a cryptographic signature is generated and compared against a database containing signatures of known child abuse images. If a match is detected, a human reviewer inspects the images and reports their findings to the local authorities.
The system developed by Apple is somewhat similar to password hashing: instead of needing the raw image data for comparison, a signature can be derived from it. This helps preserve user privacy and reduces the transmission of potentially illicit content. Furthermore, signatures remove the need to store copies of child abuse content on a server, meaning new images can be converted and recorded without the original content needing to exist any longer.
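To make the matching idea concrete, here is a minimal sketch of signature-based image matching. Apple's actual algorithm (NeuralHash) is proprietary and far more sophisticated; the "average hash" below is a standard, much simpler perceptual-hashing technique used here purely to illustrate how signatures can be compared against a database without storing or transmitting the images themselves.

```python
# Illustrative only: a simple perceptual "average hash", not Apple's
# actual NeuralHash algorithm, which is proprietary.

def average_hash(pixels):
    """Compute a 64-bit signature from an 8x8 grayscale image.

    `pixels` is an 8x8 grid of brightness values (0-255). Each bit
    records whether a pixel is brighter than the image's mean, so
    visually similar images yield similar signatures.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two signatures."""
    return bin(a ^ b).count("1")

def matches_database(image_hash, known_hashes, threshold=5):
    """Flag an image if its signature is close to any known one.

    `known_hashes` stands in for the database of signatures of known
    illicit images; only hashes are stored, never image content.
    """
    return any(hamming_distance(image_hash, h) <= threshold
               for h in known_hashes)
```

Note that the database holds only signatures, which is why, as described above, new images can be "remembered" without the offending content ever being kept.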
There is no doubt that some users will see the introduction of child protection systems as a force for good. Such advocates often say the same about increasing the use of security cameras and will use the excuse “if you have nothing to hide, you have nothing to fear”.
While the Apple system may only generate cryptographic data patterns from scanned content, it carries two significant implications: the very act of scanning all personal content, and the ability to generate identification data from it.
All photos uploaded to iCloud will be scanned regardless of their nature. Most photos are personal and taken to preserve a memory (such as holiday pictures), but some contain extremely sensitive information. For example, it is well known that many users take embarrassing pictures of themselves, as well as photos of bank statements and passwords. This means that a piece of software designed by and tied to a third party (i.e., Apple) gains access to this data, and the ability to determine who can and cannot see such content is taken away from the data's owner.
The second challenge is that cryptographic identifiers are produced from all data. While Apple states that only images whose signatures match child abuse content will be checked, the algorithm would also allow Apple to identify any category of content that may be of interest to it. For example, nude photos would undoubtedly produce cryptographic identifiers that allow such uploads to be recognised, and someone who managed to hack Apple's systems (as has happened before) could install malware that redirects uploads matching those identifiers to their own server.
The immediate effects of file-scanning and reporting algorithms are harmful enough, but the real danger lies in drawing a new line. On a scale from absolute privacy to no privacy, each step taken to expose people's data moves us closer to no privacy, and in this case, Apple has signalled to the world that looking through people's files is perfectly acceptable.
This presents the world with a new challenge, as child abuse content is not the only kind of illicit content out there. For example, the Chinese government could enforce a policy requiring Apple to scan users in China for politically sensitive images (such as mockery of, or propaganda against, the communist regime). The same scanning system could also be extended to text messages, which would in turn be abused by governments around the world to spy on what people are saying. In the case of text messages, the excuse may be "to find terrorists", and the need for this will most likely be blown out of proportion.