Google sets artificial intelligence loose on inventing encryption methods
Machines teaching machines certainly sounds like the start of a sci-fi horror movie.
Humans have already demonstrated the ability to create some seriously secure encryption algorithms, but that wasn’t enough for the Google Brain team. So, instead of relying on those soft, mushy human brains, Google engineers decided to let a trio of AI minds put their own encryption skills to the test.
Starting with three different AIs named Alice, Bob, and Eve, Google gave specific goals to each: Alice was tasked with sending Bob a message, and it was Eve’s job to intercept and decode it. Both Alice and Bob were given matching keys with which to encode and decode their conversation, while Eve had to attempt to translate the encrypted message into plaintext without the key.
The important part here is that Alice had to come up with her own encryption algorithm, but neither Bob nor Eve was given that information. Bob didn’t have any idea how the key should be applied to the encrypted message, and Eve was basically working completely from scratch.
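The setup above can be sketched in a few lines. This is a toy illustration, not Google's actual model: the real Alice, Bob, and Eve are trained neural networks that learn their transforms from scratch, whereas here a fixed XOR stands in for Alice's learned encryption, and all function names are invented for the example.

```python
import numpy as np

N = 16  # message length in bits
rng = np.random.default_rng(0)

msg = rng.integers(0, 2, N)   # plaintext bits
key = rng.integers(0, 2, N)   # shared key, known only to Alice and Bob

def alice_encrypt(p, k):
    # stand-in for Alice's learned transform: XOR with the shared key
    return p ^ k

def bob_decrypt(c, k):
    # Bob's job: invert Alice's transform using the same key
    return c ^ k

def eve_guess(c):
    # Eve sees only the ciphertext; with no key and no structure to
    # exploit, her blind strategy amounts to guessing each bit
    return rng.integers(0, 2, N)

cipher = alice_encrypt(msg, key)
bob_err = np.mean(bob_decrypt(cipher, key) != msg)  # per-bit reconstruction error
eve_err = np.mean(eve_guess(cipher) != msg)

print(bob_err)  # 0.0: the key lets Bob recover the message exactly
print(eve_err)  # hovers near 0.5 on average, i.e. no better than chance
```

In the real experiment the objectives are adversarial: Bob trains to minimize his reconstruction error, Eve trains to minimize hers, and Alice trains to keep Bob's error low while pushing Eve's toward the coin-flip rate of 0.5.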
At first, neither of the AI recipients was able to decode the message with any degree of reliability, but soon Bob cracked the code with the help of his key. Eve, working without a key, struggled longer but eventually started to hash things out. That is, until Alice tweaked the encryption and Eve once again hit a brick wall:
The results show that an AI given the key can learn to decode encrypted text with little guidance, but that it's relatively easy to foil a blind AI forced to guess its way to a solution. The team notes in its report (PDF) that "while it seems improbable that neural networks would become great at cryptanalysis, they may be quite effective in making sense of metadata and in traffic analysis."
In short, AI is great at encryption, but not so great at blind decryption, at least for now.
Mike Wehner is a former tech editor for the Daily Dot who now writes for BGR. His work has appeared everywhere from Yahoo to CNN, and there’s a good chance his Apple Watch is dead right now.