
Google teaches “AIs” to invent their own crypto and avoid eavesdropping

Neural networks seem good at devising crypto methods; less good at codebreaking.


Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.

The setup of the crypto system. P = input plaintext, K = shared key, C = encrypted text, and P_Eve and P_Bob are the computed plaintext outputs.
The Google Brain team (which is based in Mountain View and is separate from DeepMind in London) started with three fairly vanilla neural networks called Alice, Bob, and Eve. Each neural network was given a very specific goal: Alice had to send a secure message to Bob; Bob had to try to decrypt that message; and Eve had to eavesdrop on the message and try to decrypt it too. Alice and Bob had one advantage over Eve: they started with a shared secret key (i.e. this is symmetric encryption).
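To make that setup concrete, here is a minimal sketch in Python (using PyTorch) of the inputs the three networks see. The 16-bit message length, the batch size, and the -1/+1 encoding of bits are assumptions made for illustration, not details taken from Google's code.

```python
import torch

N = 16        # bits per plaintext and per key (a toy size, assumed here)
BATCH = 4096  # training batch size, also an assumption

def sample_bits(batch, n_bits):
    """Random bit vectors encoded as -1/+1 floats, which suits a tanh output layer."""
    return torch.randint(0, 2, (batch, n_bits)).float() * 2 - 1

plaintext = sample_bits(BATCH, N)  # P: what Alice wants to send to Bob
key       = sample_bits(BATCH, N)  # K: the shared secret only Alice and Bob know
```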

Importantly, the AIs were not told how to encrypt stuff, or what crypto techniques to use: they were just given a loss function (a measure of how badly they were doing, which training tries to minimise), and then they got on with it. In Eve's case, the loss function was very simple: the distance, measured in bits, between Alice's original plaintext and Eve's guess at it. For Alice and Bob the loss function was a bit more complex and combined two terms: how far Bob's guess (again measured in bits) was from the original plaintext, and how much better than random guessing Eve's guesses were. And thus an adversarial training setup, much like a generative adversarial network (GAN), was created.
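The paper spells out the exact formulas; the sketch below is just one plausible way to write losses in that spirit. The "N/2 bits wrong" random-guessing baseline and the quadratic penalty on Eve's advantage are readings of the description above, not code from the researchers.

```python
import torch

N = 16  # message length in bits (assumed toy size)

def bit_error(p_true, p_guess):
    """L1 distance between true and guessed plaintext, per example.
    With bits encoded as -1/+1, a fully wrong bit contributes 2,
    so halve the sum to count errors 'in bits'."""
    return (p_true - p_guess).abs().sum(dim=1) / 2

def eve_loss(p, p_eve):
    # Eve simply wants to reconstruct the plaintext from the ciphertext.
    return bit_error(p, p_eve).mean()

def alice_bob_loss(p, p_bob, p_eve):
    # Bob should reconstruct the plaintext...
    bob_term = bit_error(p, p_bob).mean()
    # ...while Eve should do no better than random guessing (N/2 bits wrong).
    # Penalise any deviation from that baseline; the quadratic form is one
    # plausible choice in the spirit of the paper's formulation.
    eve_term = ((N / 2 - bit_error(p, p_eve)) ** 2).mean() / (N / 2) ** 2
    return bob_term + eve_term
```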

Alice, Bob, and Eve all shared the same "mix and transform" neural network architecture, but they were initialised independently and had no connection other than Alice and Bob's shared key. For Alice, the key and plaintext were fed into the first layer of the network; for Bob, the key and the ciphertext; and Eve got just the ciphertext. The first layer is fully-connected, so the text and key can mix about. Following the first layer there are a number of convolutional layers, which learn to apply a function to the bits that were handed to them by the previous layer. They don't know what that function might be; they just learn as they go along. For Alice, the final layer spits out some ciphertext; Bob and Eve output what they hope is the plaintext.
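Here is a rough PyTorch rendering of that "mix and transform" idea. The filter counts, kernel sizes, strides, and padding are illustrative guesses chosen so the shapes work out; they are not the paper's exact configuration.

```python
import torch
import torch.nn as nn

N = 16  # bits per plaintext and key (a toy size, assumed for illustration)

class MixAndTransform(nn.Module):
    """One fully-connected layer so text and key can mix, then 1-D convolutions."""

    def __init__(self, in_bits):
        super().__init__()
        self.fc = nn.Linear(in_bits, 2 * N)
        # Conv stack sized so the output has N values; parameters are guesses.
        self.convs = nn.Sequential(
            nn.Conv1d(1, 2, kernel_size=4, stride=1, padding=2), nn.Sigmoid(),
            nn.Conv1d(2, 4, kernel_size=2, stride=2), nn.Sigmoid(),
            nn.Conv1d(4, 4, kernel_size=1, stride=1), nn.Sigmoid(),
            nn.Conv1d(4, 1, kernel_size=1, stride=1), nn.Tanh(),
        )

    def forward(self, x):
        h = torch.sigmoid(self.fc(x)).unsqueeze(1)  # (batch, 1, 2N)
        return self.convs(h).squeeze(1)             # (batch, N), values in (-1, 1)

alice = MixAndTransform(2 * N)  # takes plaintext + key, emits ciphertext
bob   = MixAndTransform(2 * N)  # takes ciphertext + key, emits plaintext guess
eve   = MixAndTransform(N)      # takes the ciphertext only

# Example wiring with random -1/+1-encoded inputs:
P = torch.randint(0, 2, (8, N)).float() * 2 - 1
K = torch.randint(0, 2, (8, N)).float() * 2 - 1
C     = alice(torch.cat([P, K], dim=1))  # ciphertext
P_bob = bob(torch.cat([C, K], dim=1))    # Bob's reconstruction
P_eve = eve(C)                           # Eve's guess from the ciphertext alone
```

Training would then alternate between minimising Eve's loss over Eve's weights and minimising Alice and Bob's combined loss over theirs, which is what makes the setup adversarial.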

Bob and Eve's reconstruction errors during training. You can see that Eve starts to improve, but then a change in the Alice-Bob crypto method shuts her out again.

The results were... a mixed bag. Some runs were a complete flop, with Bob never able to reconstruct Alice's messages. Most of the time, Alice and Bob did manage to evolve a system where they could communicate with very few errors. In some tests, Eve showed an improvement over random guessing, but Alice and Bob then usually responded by improving their cryptography technique until Eve had no chance (see graph).

The researchers didn't perform an exhaustive analysis of the encryption methods devised by Alice and Bob, but for one specific training run they observed that the learnt method was both key- and plaintext-dependent. "However, it is not simply XOR. In particular, the output values are often floating-point values other than 0 and 1," they said.

In conclusion, the researchers—Martín Abadi and David G. Andersen—said that neural networks can indeed learn to protect their communications, just by telling Alice to value secrecy above all else—and importantly, that secrecy can be obtained without prescribing a certain set of cryptographic algorithms.

There is more to cryptography than just symmetric encryption of data, though, and the researchers said that future work might look at steganography (concealing data within other media) and asymmetric (public-key) encryption. On whether Eve might ever become a decent adversary, the researchers said: "While it seems improbable that neural networks would become great at cryptanalysis, they may be quite effective in making sense of metadata and in traffic analysis."

You can read the researchers' preprint paper on arXiv.

Listing image by Yuri Samoilov
