13-04-2021 | By Sam Brown
Privacy concerns around IoT, AI, and the transmission of data in general continue to grow, but a new encryption method could address them. Why is AI facing challenges in the field of data privacy, what is homomorphic encryption, and how could it transform the world of data?
One of the biggest challenges facing the data industry is privacy, and this concern is arguably holding back the development of AI in key fields such as medicine. For example, if shared with an AI algorithm, patients' medical records could be used to identify conditions and formulate treatments for others. However, this would require the AI, and by extension the system's managers, to have access to private data.
Privacy is protected in a multitude of ways, all of which complement each other. Encryption renders private data unreadable to those without the key, hardware security prevents unauthorised physical devices from accessing private data, and software security can identify unusual activity on a system and deny access to private data.
Current data encryption methods are extremely strong, and most encrypted data is rarely, if ever, attacked by brute force. Instead, most attacks are indirect: they target the system itself (by finding the key), or they analyse large amounts of encrypted data to find common elements. However, strong as modern encryption methods are, they require the data to be decrypted before any outside process can work on it. This exposes all private data to that process, and this is where privacy can be lost.
Recognising the need for data to remain private while still allowing outside parties to operate on it, IBM recently announced its homomorphic encryption services. But before we can understand what the service offers, we first need to understand what homomorphic encryption is.
We will NOT be going through any of the mathematics, as homomorphic encryption itself is extremely complex, but the core idea is simple enough.
Homomorphic encryption is an encryption method that allows computation on encrypted data as if it were decrypted. In other words, if a message is encrypted homomorphically, then any operation performed on the encrypted message applies to the underlying message in the same way.
A good analogy is a box with a lock and two people: the owner and a worker. The owner places their private data into the box and locks it with their key. The worker can manipulate the data inside the box with protective gloves (like the biological barrier found in a fume cupboard), but they cannot see what that data is. When the owner unlocks the box, the data has been manipulated, but it was never exposed to the outside world.
For example, the owner puts the number 10 into the box. The worker multiplies the encrypted contents by 2, and when the owner unlocks the box, the number 20 is now inside.
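This locked-box behaviour can be sketched in code. As a toy illustration (not IBM's scheme), unpadded "textbook" RSA happens to be multiplicatively homomorphic: multiplying two ciphertexts multiplies the underlying plaintexts. The primes and exponents below are tiny demo values, and the scheme is insecure; real FHE systems are far more complex.

```python
# Toy demo of a partially homomorphic scheme: unpadded textbook RSA,
# where E(a) * E(b) mod n == E(a * b). Insecure, illustration only.

p, q = 61, 53            # tiny demo primes
n = p * q                # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

def encrypt(m):          # the owner locks the box
    return pow(m, e, n)

def decrypt(c):          # only the owner can unlock it
    return pow(c, d, n)

# The owner puts 10 in the box; the worker doubles it without the key.
c = encrypt(10)
c_doubled = (c * encrypt(2)) % n   # worker operates on ciphertext only
print(decrypt(c_doubled))          # -> 20
```

Note that the worker never decrypts anything: they only multiply two ciphertexts, yet the owner recovers the doubled plaintext.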
While the concept of homomorphic encryption has been around since the 1970s, it has only recently become a viable method for encryption. The first systems developed at IBM in 2011 took 30 minutes just to process a single bit of data. Fast forward to 2015, and IBM compared two human genomes using homomorphic encryption in less than an hour (for perspective, a human genome holds around 691MB of data).
Now that IBM has significantly improved the performance of its homomorphic encryption, it is offering digital services to researchers and educators. Furthermore, IBM has released open-source toolkits to accelerate and encourage adoption of the service. IBM also notes that the service and algorithms are still in the developmental stages, and as such may require more development time before being used in real-world applications.
The ability to process encrypted data without ever seeing the underlying data could be a major game-changer in the world of AI. The first and most obvious application is the use of homomorphic encryption in medical AI.
Instead of submitting sensitive user data to an AI service in the clear, fully homomorphic encryption (FHE) would allow users to submit encrypted data on which the AI can work. The user can then decrypt and view the results of that computation, while the service never sees the raw data. The same applies to other confidential applications such as medical insurance.
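The encrypt, compute, decrypt round trip can be sketched with the classic Paillier cryptosystem, which is additively homomorphic: multiplying ciphertexts adds the plaintexts. This is a toy sketch with tiny demo primes and a hypothetical "sum the readings" service, not IBM's FHE offering; Paillier supports only addition, whereas FHE supports arbitrary computation.

```python
# Toy Paillier sketch: a client encrypts readings, an untrusted server
# sums them on ciphertexts alone, and the client decrypts the total.
import math
import random

p, q = 61, 53                    # tiny demo primes
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)     # private key component

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # private key component

def encrypt(m):                  # client side (public key only)
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):                  # client side (private key)
    return (L(pow(c, lam, n2)) * mu) % n

# Client encrypts its readings; server multiplies ciphertexts,
# which adds the plaintexts, without ever holding the key.
readings = [120, 80, 95]
total_ct = 1
for ct in (encrypt(m) for m in readings):
    total_ct = (total_ct * ct) % n2

print(decrypt(total_ct))         # -> 295, the sum of the readings
```

The server only ever handles `total_ct` and the individual ciphertexts; the plaintext readings and their sum exist only on the client.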
However, there is cause for scepticism, as an AI in training would not be able to use such data. There is nothing wrong with the idea of operating on encrypted data to manipulate it, but an AI system that needs data to learn from cannot learn from encrypted data. It is hard to justify the hype around medical AI and FHE when the AI cannot learn from the data.
Since FHE allows encrypted data to be transformed without decrypting it, it might seem possible to feed FHE-encrypted data into AI training instead of raw data. However, the purpose of encryption is to make a message totally unreadable. Because each user would encrypt with a different key, identical records would produce entirely different ciphertexts, leaving nothing for a learning system to recognise.
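The problem can be seen even with a toy deterministic scheme: encrypt the same record under two different users' keys, and the ciphertexts share no recognisable structure, so a model training on ciphertexts cannot match identical records across users. The scheme and key values below are hypothetical demo choices (textbook RSA, insecure).

```python
# Toy illustration: the same plaintext encrypted under two different
# users' keys yields unrelated ciphertexts (textbook RSA, demo primes).

def make_public_key(p, q, e=17):
    return (e, p * q)            # public key only; no decryption needed here

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)

alice = make_public_key(61, 53)  # hypothetical user keys
bob = make_public_key(67, 71)

same_record = 42
print(encrypt(same_record, alice))  # -> 2557
print(encrypt(same_record, bob))    # -> 2350, completely unrelated
```

A learning system fed these ciphertexts would see two unrelated numbers where the underlying data is identical, so no cross-user pattern survives encryption.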
It is not yet clear exactly how FHE will help beyond applications with a fixed sequence of operations (such as an add, multiply, then divide), but IBM is clearly excited about the new encryption method. If FHE can be used in learning AI systems, we can expect medical AI to improve dramatically as more data becomes available.