Humans have already demonstrated the ability to create some seriously secure encryption algorithms, but that wasn’t enough for the Google Brain team. So, instead of relying on those soft, mushy human brains, Google engineers decided to let a trio of AI minds put their own encryption skills to the test.
Starting with three different AIs named Alice, Bob, and Eve, Google gave specific goals to each: Alice was tasked with sending Bob a message, and it was Eve’s job to intercept and decode it. Both Alice and Bob were given matching keys with which to encode and decode their conversation, while Eve had to attempt to translate the encrypted message into plaintext without the key.
The important part here is that Alice had to come up with her own encryption algorithm, but neither Bob nor Eve was given that information. Bob didn’t have any idea how the key should be applied to the encrypted message, and Eve was basically working completely from scratch.
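Google's networks invented their own transformation rather than using a human-designed cipher, but the information setup described above can be illustrated with a classical stand-in: a one-time-pad XOR in which Bob shares Alice's key and Eve does not. (The XOR scheme and names below are illustrative assumptions for this sketch, not what the neural nets actually learned.)

```python
import secrets

def xor_bytes(a, b):
    # Combine two equal-length byte strings bit by bit.
    return bytes(x ^ y for x, y in zip(a, b))

# Alice and Bob share a random key; Eve never sees it.
plaintext = b"meet at noon"
key = secrets.token_bytes(len(plaintext))

ciphertext = xor_bytes(plaintext, key)  # Alice encrypts
bob_guess = xor_bytes(ciphertext, key)  # Bob decrypts with the shared key

# Eve has only the ciphertext, so she can do no better than guess a key.
eve_guess = xor_bytes(ciphertext, secrets.token_bytes(len(plaintext)))

print(bob_guess == plaintext)  # True
print(eve_guess == plaintext)  # almost certainly False
```

With a shared key, Bob's decryption is exact; without it, Eve's output is effectively random, which mirrors the asymmetry Google set up between its three networks.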
At first, neither of the AI recipients was able to decode the message with any degree of reliability, but Bob soon cracked the code with the help of his key. Eve, on the other hand, struggled longer, but eventually started to hash things out. That is, until Alice tweaked the encryption and Eve once again hit a brick wall.
The results of the testing show that AI can decode encrypted text with little or no guidance, but also that it's relatively easy to fool a blind AI that is forced to guess how to solve the problem. The team notes in its report (PDF) that "while it seems improbable that neural networks would become great at cryptanalysis, they may be quite effective in making sense of metadata and in traffic analysis."
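The report frames this tug-of-war as adversarial training objectives: Eve is trained to minimize her reconstruction error, while Alice and Bob are trained to keep Bob's error low and to push Eve's error toward that of random guessing. A rough NumPy sketch of those loss terms, loosely following the paper's formulation (function and variable names are mine; message bits are assumed to be ±1, so a random guesser gets about half the bits wrong):

```python
import numpy as np

def l1(plaintext, guess):
    # Total absolute reconstruction error over all bits (bits are +/-1).
    return np.sum(np.abs(plaintext - guess))

def eve_loss(plaintext, eve_guess):
    # Eve simply wants to reconstruct the plaintext as closely as possible.
    return l1(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess, n_bits):
    # Alice and Bob want Bob's error low, and want Eve's error to sit
    # near n_bits / 2, the expected error of pure random guessing.
    bob_term = l1(plaintext, bob_guess)
    random_level = n_bits / 2
    eve_term = ((random_level - l1(plaintext, eve_guess)) ** 2) / random_level ** 2
    return bob_term + eve_term
```

Note the design choice this encodes: Alice and Bob are not rewarded for driving Eve's error to the maximum (which would itself leak information), only for making her no better than chance.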
In short, AI is great at encryption, but not so great at blind decryption, at least for now.
Mike Wehner is a former tech editor for the Daily Dot who now writes for BGR. His work has appeared everywhere from Yahoo to CNN, and there’s a good chance his Apple Watch is dead right now.