
The evolution of evil chatbots is just around the corner

Thanks to advances in machine learning, chatbots can now be exploited by hackers to deceive unsuspecting victims into handing over sensitive information.


Simon Chandler


Of all the threats artificial intelligence poses to humanity, we unsurprisingly focus on the most dramatic. We fear that AI and automation will steal jobs, will make authentic human relationships redundant, and will even enslave or destroy the entire human race. However, one danger is already here, and while it may not be quite as dystopian as those above, it’s no less disconcerting: the arrival of malicious AI chatbots. Thanks to advances in machine learning, chatbots can now be exploited by hackers in order to deceive unsuspecting victims into clicking dubious links and handing over sensitive information.


For the most part, malicious chatbots will look and act just like the regular chatbots you find on websites: They’ll appear in a small pop-up window in the corner of a webpage, will ask visitors and customers if they need any help, and will use machine learning and natural language processing to respond intelligently to comments. But rather than provide genuinely useful information or assistance, they’ll ask people for their personal info and data, which will ultimately be used by hackers for less-than-honorable ends.

Such “bad chatbots” have received scant attention up until now, but that’s likely to change in 2019. Publishing its cybersecurity predictions for the coming year, the Seattle-based security firm WatchGuard has put malicious chatbots first on its list of security threats (above ransomware, “fileless” malware, and a possible “fire sale” attack).

At a time when chatbots are on track to be used by 80 percent of companies by 2020, such a threat is very real, if only because the usually innocuous chatbot is becoming disarmingly familiar. “In many ways, AI-based chatbots are the next frontier in computational development,” explains Yaniv Altshuler, a researcher at MIT’s Media Lab and the CEO/co-founder of Endor, an Israel-based AI company that’s building an automated “prediction engine” for businesses. “Just like the point-and-click interface changed the way people interacted with their technology, AI-based chatbots are making our computing experience more personal and lifelike.”


Because chatbots promise a more personable user experience, it’s no wonder that most corporations are flocking toward them for customer-service purposes. “The vast majority of web platforms are pursuing or implementing AI-based chatbots to bolster [their] customer service initiatives,” Altshuler says via email, “and, in many cases, these chatbots are so responsive that users don’t even know they are engaging with a computer.”

Yet the fact that bots are becoming more convincing is a problem. And because AIs can now even write poetry and imitate human voices, the growing ubiquity of chatbots is almost certain to mean that opportunistic cybercriminals will harness them for their own ends.

At least, this is what WatchGuard CTO Corey Nachreiner believes. “We have not seen chatbots used as part of a social engineering campaign yet,” he says, “but believe they present a major opportunity for hackers as businesses and consumers increasingly rely on them. As these chatbots get better at emulating natural human language, their value for malicious activity grows.”


WatchGuard isn’t the first group to predict the rise of bad bots. Kaspersky raised the likelihood of their appearance in 2016, while back in February, researchers at the University of Oxford and elsewhere released a report titled “The Malicious Use of Artificial Intelligence.” In it, they too heralded the emergence of harmful chatbots—and worse. “As AI develops further,” its authors warned, “convincing chatbots may elicit human trust by engaging people in longer dialogues, and perhaps eventually masquerade visually as another person in a video chat.”

While malicious chatbots haven’t been detected yet on any significant scale, there have already been some documented instances of chatbots being used for cybercrime. In June, Ticketmaster admitted that a chatbot it used for customer service had been infected by malware, which gathered customers’ personal information and sent it to an unknown third party. And as Nachreiner explains, such incidents are likely to become more common in the future.

“We expect chatbots to be another hook in the phisher’s ‘tackle box,’” he says. “They won’t outpace traditional email phishing anytime soon, or even ransomware, but they’ll augment attacks by adding another layer of credibility and deception. Once the first malicious actor uses, say, a cross-site scripting attack to inject a fake chatbot into a legitimate website, and has some social engineering success, this attack vector could take off.”


In other words, hackers will increasingly try to exploit vulnerabilities and bugs in websites so as to insert bad chatbots into them. And once the cybercriminal “community” sees that such bots are encountering success in obtaining personal info, more hackers will try to use them.

And given that phishing-related attacks currently exact a toll of just over $700 million a year in the U.S. alone (according to the FBI), the rise of bad chatbots could push America’s annual bill closer to the billion-dollar mark. This risk becomes even more acute as hackers build such bots using more sophisticated technology, something WatchGuard believes will happen soon.

“The most basic [chatbots] may have no intelligence at all,” Nachreiner says. “They would just emulate the looks of a normal chatbot (a dialog window with a picture of a person offering to help), and would respond to anything with a fake answer directing the victim to a malicious link. However, we are much more concerned with the ones that will leverage AI, machine learning, and automation. These advanced methods are what will significantly improve a chatbot’s ability to actively socially engineer humans in an automated fashion.”

In view of all this, Nachreiner advises the public to be wary of any bot that sends them links to external domains or asks for sensitive information. However, he affirms that the onus for guarding against malevolent bots lies predominantly with affected websites.
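That advice boils down to a simple heuristic: be suspicious of any link a chatbot offers that points away from the site you are actually visiting. The sketch below illustrates that check in Python; the function name and the example domains are purely hypothetical, not part of any vendor’s tooling.

```python
from urllib.parse import urlparse

def is_external_link(link: str, page_domain: str) -> bool:
    """Return True if a chat-message link points outside the site the
    visitor is on, a common tell for phishing via a fake chatbot."""
    host = urlparse(link).hostname or ""
    # The page's own domain and its subdomains count as internal.
    return not (host == page_domain or host.endswith("." + page_domain))

# A "support" bot on example.com pushing a look-alike login page:
print(is_external_link("https://example-support-login.net/verify", "example.com"))  # True
print(is_external_link("https://help.example.com/faq", "example.com"))              # False
```

Browsers and email filters already apply far more elaborate versions of this idea; the point is simply that an off-domain link from a “helpful” bot deserves a second look. Still, as Nachreiner notes, the heavier responsibility sits with the sites themselves.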


“Website owners have the most direct responsibility for guarding against these types of attacks,” he says. “If they design secure web applications that aren’t vulnerable to various injection flaws, it severely limits the potential for malicious chatbot attacks to be successful. Malicious chatbot attacks will rely heavily on the trust you place in the websites you visit.”
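As a rough illustration of what that design looks like in practice, a site can pair careful input handling with a strict Content-Security-Policy header, so that even if an injection flaw slips through, browsers will only load scripts and frames from hosts the site explicitly trusts. The sketch below uses Python and Flask as one possible stack; the policy and the approved chatbot host (chat.example-vendor.com) are placeholders, not a WatchGuard recommendation.

```python
from flask import Flask, Response

app = Flask(__name__)

# Only allow scripts and frames from the site itself and one approved
# (hypothetical) chatbot vendor; anything injected from elsewhere is
# refused by the visitor's browser.
CSP = (
    "default-src 'self'; "
    "script-src 'self' https://chat.example-vendor.com; "
    "frame-src 'self' https://chat.example-vendor.com; "
    "object-src 'none'"
)

@app.after_request
def set_csp(response: Response) -> Response:
    # Attach the policy to every response the app serves.
    response.headers["Content-Security-Policy"] = CSP
    return response

@app.route("/")
def index():
    return "<h1>Storefront</h1>"

if __name__ == "__main__":
    app.run()
```

A policy like this doesn’t fix the underlying injection bug, but it sharply narrows what an attacker can do with one, which is the kind of secure application design Nachreiner is describing.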

Good cybersecurity practice may be enough to stave off the initial wave of hacked chatbots. And for the time being, it would seem that the public shouldn’t be too fearful of the threat they pose. That’s because the companies that actually produce them say they haven’t received any reports of malicious bots, at least not yet.

“We haven’t seen any instances of rogue chatbots being inserted into websites,” says Mauricio Gomes, the CTO and co-founder of Black Ops, a chatbot developer based in Detroit. “The website itself would first have to be compromised in order for a malicious actor to gain the ability to insert a chatbot.”

That said, even if bad bots aren’t a problem now, Gomes concedes that they could become a recurring issue in the future. “So while taking over an official web property or Facebook page for an organization is still a high bar to cross, we do think we’ll begin seeing chatbots designed to harvest personal or billing information,” he adds. “These chatbots wouldn’t live on official websites, but they could be landing pages designed to imitate a legitimate website. The chatbot would then coerce a user into providing personal information.”


And as the authors of February’s “Malicious Use of Artificial Intelligence” report also warn, text-based “evil” chatbots will soon evolve into more elaborate beasts. “As AI systems extend further into domains commonly believed to be uniquely human (like social interaction), we will see more sophisticated social engineering attacks drawing on these capabilities,” they write. “These are very difficult to defend against, as even cybersecurity experts can fall prey to targeted spear phishing emails. This may cause an explosion of network penetrations, personal data theft, and an epidemic of intelligent computer viruses.”

Nachreiner agrees, explaining, “Once you have a machine that can carry on a natural human conversation, the sky’s the limit in terms of how attackers could leverage it. Not only would a machine be able to carry out almost any human con […] but they would be capable of doing so on an automated scale that we have never seen before.”

As an example, Nachreiner offers the prospect of the simple “tech support” scam being carried out by masses of AIs. “Right now this scam is limited by actual humans having to call victims individually. With AI and voice automation, a malicious program could carry out this scam against thousands of victims at once!”

This, then, is yet one more reason to be concerned about the rise of AI, although, in contrast to more apocalyptic warnings, this one is on the cusp of becoming an everyday reality. Consumers should be careful the next time they find themselves chatting with a bot, or else AIs may end up taking more than our jobs.
