How hackers can use AI to hide their malware and target you

Hackers can use the same AI technology that powers your smart devices to create smart malware.

Ben Dickson

You’re about to fire up a video-conferencing app you’ve used dozens of times before. Your colleagues have already joined the call. Suddenly, a vicious ransomware virus launches in its place, encrypting all your files.

Thanks to advances in artificial intelligence, such finely targeted cyberattacks are no longer the stuff of dark hacker movies, as security researchers at IBM demonstrated at the recent Black Hat USA security conference in Las Vegas.

AI has made it possible for our devices and applications to better understand the world around them. Your iPhone X uses AI to automatically recognize your face and unlock when you look at it. Your smart security camera uses AI to detect strangers and warn you. But hackers can use that same AI technology to develop smart malware that singles out its targets from among millions of users.

Researchers at IBM have already created DeepLocker, a proof-of-concept project that shows the destructive powers of AI-powered malware. And they believe such malware might already exist in the wild.

Why is AI-powered malware dangerous?

Most traditional malware is designed to perform its damaging functions on every device it finds its way into. This approach suits attackers whose goal is to inflict maximum damage, as in last year’s WannaCry and NotPetya ransomware outbreaks, which infected hundreds of thousands of computers in a very short period of time.

But this method is not effective when malicious actors want to attack a specific target. In such cases, they have to “spray and pray,” as Marc Stoecklin, a cybersecurity scientist at IBM Research, puts it: infect a large number of machines and hope the intended victim is among them. The problem is that such malware can quickly be discovered and stopped before it reaches its intended target.

There is a history of targeted malware attacks, such as the Stuxnet virus, which sabotaged centrifuges at Iran’s Natanz uranium enrichment facility in 2010. But such attacks require resources and intelligence that are often only available to nation-states.

In contrast, AI-powered malware such as DeepLocker can use publicly available technology to hide from security tools while spreading across thousands of computers. DeepLocker only executes its malicious payload when it detects its intended target through AI techniques, such as facial or voice recognition.

“This AI-powered malware is particularly dangerous because, like nation-state malware, it could infect millions of systems without being detected,” Stoecklin says. “But, unlike nation-state malware, it is feasible in the civilian and commercial realms.”

How does AI-powered malware work?

To find its target and evade security solutions, DeepLocker uses deep learning, the popular AI technique that gives the malware its name. Deep learning differs from traditional software in that, instead of writing explicit rules and functions, programmers develop deep learning models by feeding them sample data and letting them work out the rules on their own. For instance, give a deep learning algorithm enough pictures of a person, and it’ll be able to detect that person’s face in new photos.
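
To make that concrete, here’s a minimal sketch of the kind of face recognition described above, built on the open-source face_recognition Python library rather than anything from IBM’s project; the image filenames are placeholders:

```python
# A minimal sketch of deep-learning face recognition, using the open-source
# face_recognition library (not IBM's code). The image files are placeholders.
import face_recognition

# "Training" here is just computing a face encoding from a sample photo.
known_image = face_recognition.load_image_file("sample_photo_of_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Given a new photo, check whether the same person appears in it.
new_image = face_recognition.load_image_file("new_photo.jpg")
for encoding in face_recognition.face_encodings(new_image):
    if face_recognition.compare_faces([known_encoding], encoding)[0]:
        print("This photo contains the person the model was shown.")
```

Under the hood, the library runs a pretrained deep neural network that turns each face into a numeric encoding, so “recognition” is simply a comparison between encodings.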

The shift away from rule-based programming enables deep learning algorithms to perform tasks that were previously impossible with traditional software structures. But it also makes it very difficult for contemporary endpoint security solutions to find malware that uses deep learning.

Antivirus tools are designed to detect malware by looking for specific signatures in their binary files or the commands they execute. But deep learning algorithms are black boxes, which means it’s hard to make sense of their inner workings or reverse-engineer them to figure out how they work. To your antimalware solution, DeepLocker looks like a normal program, such as an email or messaging application. But beneath its benign appearance is a malicious payload, hidden in a deep learning construct.
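
As a rough illustration of why that matters (and not a depiction of how any real antivirus engine is built), signature scanning boils down to searching a file for known byte patterns, and a payload buried inside a model’s weights exposes none of them:

```python
# A toy illustration of signature-based scanning: search a file for known
# byte patterns. The only "signature" here is a fragment of the harmless
# EICAR antivirus test string.
KNOWN_SIGNATURES = [b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"]

def flags_as_malicious(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    return any(signature in data for signature in KNOWN_SIGNATURES)

# A payload that has been encrypted and buried inside a neural network's
# weights contains none of these recognizable patterns, so a scan like
# this reports the file as clean.
```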

DeepLocker identifies its target through one or several attributes, including visual, audio, geolocation and system-level features, and then executes its payload.

AI-powered malware in action

To demonstrate the danger of AI-powered malware, the researchers at IBM armed DeepLocker with the notorious WannaCry ransomware and integrated it into an innocent-looking video-conferencing application. The malware remained undetected by analysis tools, including antivirus engines and malware sandboxes.

“Imagine that this video conferencing application is distributed and downloaded by millions of people, which is a plausible scenario nowadays on many public platforms,” says Stoecklin. Hackers can use AI to help their malware evade detection for weeks, months, or even years, making the chances of infection and success skyrocket.

While running, the application feeds camera snapshots to DeepLocker’s AI, which has been trained to look for the face of a specific person. For all users except the target, the application works perfectly fine. But as soon as the intended victim shows their face to the webcam, DeepLocker unleashes the wrath of WannaCry on the user’s computer and starts to encrypt all the files on the hard drive.
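
Conceptually, the trigger is little more than the loop sketched below: check each camera frame against the target’s face and call a function on a match. This is an illustrative reconstruction using open-source libraries, not IBM’s code, and the “payload” here is a harmless print statement:

```python
# A conceptual reconstruction of the trigger logic described above, not
# IBM's implementation. The "payload" is a harmless placeholder.
import cv2                   # pip install opencv-python
import face_recognition      # pip install face_recognition

# Placeholder photo standing in for the intended target.
target_encoding = face_recognition.face_encodings(
    face_recognition.load_image_file("target.jpg"))[0]

def placeholder_payload():
    print("Trigger condition met.")  # stands in for whatever code would run

video = cv2.VideoCapture(0)          # the app's ordinary camera feed
while True:
    grabbed, frame = video.read()
    if not grabbed:
        break
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV frames are BGR
    for encoding in face_recognition.face_encodings(rgb_frame):
        if face_recognition.compare_faces([target_encoding], encoding)[0]:
            placeholder_payload()
```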

“While the facial recognition scenario is one example of how malware could leverage AI to identify a target, other identifiers such as voice recognition or geo-location could also be used by an AI-powered malware to find its victim,” Stoecklin says.

Malicious actors can also tune the settings of their AI-powered malware to target groups of people. For instance, hackers with political motives might want to use the technique to hurt a specific demographic, such as people of a certain race, gender or religion.

How serious is the threat of AI-powered malware?

It’s widely believed and discussed in the cybersecurity community that large criminal gangs are already using AI and machine learning to help launch and spread their attacks, Stoecklin says. So far, nothing like DeepLocker has been seen in the wild. But that doesn’t mean such malware doesn’t exist.

“The truth is that if such attacks were already being launched, they would be extremely challenging to detect,” Stoecklin says.

Stoecklin warns that it’s only a matter of time before cybercriminals look to combine readily available AI tools to enhance the capabilities of their malware. “The AI models are publicly available, and similar malware evasion techniques are already in use,” he says.

In recent months, we’ve already seen how publicly available AI tools can become devastating when they fall into the wrong hands. At the beginning of the year, a Reddit user called deepfakes used simple open-source AI software and consumer-grade computers to create fake porn videos featuring celebrities and politicians. The outbreak of AI-doctored videos and their possible repercussions became a major concern for tech companies, digital rights activists, lawmakers and law enforcement.

However, for the moment, Stoecklin doesn’t see AI-powered malware as a threat to the general public. “This type of attack would most likely be used to target specific ‘high value’ targets, for a specific purpose,” he says. “Since this model of attack could be attached to different types of malware, the potential use-cases would vary depending on the type of malware being deployed.”

How can users protect themselves?

Current security tools are not fit to fight AI-powered malware, and we need new technologies and measures to protect ourselves.

“The security community should focus on monitoring and analyzing how apps are behaving across user devices, and flagging when a new app is taking unexpected actions such as using excessive network bandwidth, disk access, accessing sensitive files, or attempting to circumvent security features,” Stoecklin says.

We can also leverage AI to detect and block AI-based attacks. Just as malware can use AI to learn common patterns of behavior in security tools and circumvent them, security solutions can employ AI to learn common behaviors of apps and help flag unexpected app behaviors, Stoecklin says.
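
As a minimal sketch of that defensive idea, a tool could profile an app’s normal resource usage and flag hours that don’t fit the profile; the feature set and numbers below are invented for illustration, using scikit-learn’s IsolationForest:

```python
# A minimal sketch of behavior-based anomaly detection with scikit-learn.
# The features and numbers are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated hourly behavior for a well-behaved app:
# [network MB, disk writes, sensitive-file accesses]
normal_behavior = rng.normal(loc=[5.0, 120.0, 0.5],
                             scale=[1.0, 15.0, 0.5],
                             size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_behavior)

# An app that suddenly hammers the disk and reads hundreds of sensitive
# files looks nothing like the behavior it was profiled on.
suspicious_hour = [[5.1, 9000.0, 300.0]]
print(detector.predict(suspicious_hour))  # -1 flags an anomaly, 1 means normal
```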

A handful of companies are working on tools that can counter evasive malware. IBM Research has developed a method known as “Decoy Filesystem,” which could trick malware into deploying inside a fake filesystem on the victim’s device while leaving the real device and files intact. Other companies have developed security tools that trick malware into thinking it is constantly in a sandbox environment, preventing it from executing its malicious payload.

We’ll have to see whether these efforts will help defuse the threat of AI-powered malware. In the meantime, Stoecklin’s advice to users: “In order to reduce the risks associated with this type of attack, individuals should take precautions such as limiting the access their applications have.”

That means, Stoecklin notes, you should probably deny access to your computer’s camera and microphone to any apps that don’t need them.

 