Man holding a sign that says "DEEPFAKE" | Panchenko Vladimir/Shutterstock (Licensed)

Cybercriminals are reportedly using deepfakes to apply for remote work jobs

Some of the jobs would provide criminals access to customer and financial data.

 

Mikael Thalen

Tech

Posted on Jun 28, 2022   Updated on Jun 28, 2022, 2:14 pm CDT

The Federal Bureau of Investigation (FBI) has released a new warning regarding cybercriminals' use of deepfakes to apply for remote jobs.


In a public service announcement on Tuesday, the FBI Internet Crime Complaint Center (IC3) stated that it has received “an increase in complaints reporting the use of deepfakes and stolen Personally Identifiable Information (PII) to apply for a variety of remote work and work-at-home positions.”


The remote positions, the FBI added, were often in information technology and computer programming, as well as database and software-related roles. Some of the positions would provide access to everything from customer PII and financial data to proprietary information.

The FBI noted not only the reported use of deepfakes, which use artificial intelligence to depict an individual saying or doing something they never actually said or did, but also the use of voice spoofing during online interviews with some applicants.

“In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking,” the FBI said. “At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually.”
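One way to picture the mismatch the FBI describes: if lip motion and audio come from different sources, the mouth's movement won't track the loudness of the speech. The minimal Python sketch below is purely illustrative and not drawn from the FBI or Sensity; the sync_score function and both input signals are assumptions, standing in for a per-frame lip-openness measurement and an audio loudness envelope that a real detector would extract from the video.

import numpy as np

def sync_score(mouth_openness: np.ndarray, audio_rms: np.ndarray) -> float:
    """Pearson correlation between per-frame lip motion and audio loudness.

    mouth_openness: one value per video frame (e.g., lip-gap distance).
    audio_rms: audio loudness resampled to the same number of frames.
    A score near 1.0 suggests the lips track the speech; a score near 0
    suggests the audio and video may not belong together.
    """
    m = (mouth_openness - mouth_openness.mean()) / (mouth_openness.std() + 1e-9)
    a = (audio_rms - audio_rms.mean()) / (audio_rms.std() + 1e-9)
    return float(np.mean(m * a))

# Synthetic example: lips that follow the speech envelope score high,
# lips moving to an unrelated rhythm (a crude stand-in for spoofing) do not.
t = np.linspace(0, 10, 300)                       # ~10 s of video at 30 fps
speech_envelope = np.abs(np.sin(2 * np.pi * 0.8 * t))
synced_lips = speech_envelope + 0.1 * np.random.randn(300)
unrelated_lips = np.abs(np.sin(2 * np.pi * 0.23 * t + 1.3))

print("synced  :", round(sync_score(synced_lips, speech_envelope), 2))    # close to 1.0
print("mismatch:", round(sync_score(unrelated_lips, speech_envelope), 2))  # close to 0

Real detection systems compare far richer audio and visual features, but the underlying idea is the same: speech and lip movement that were generated separately tend not to line up.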

The AI-based methods would also often be used in conjunction with stolen PII, allowing cybercriminals to pass pre-employment background checks.


In a statement to the Daily Dot, Giorgio Patrini, founder and CEO of the deepfake detection firm Sensity, called the FBI warning a worrying development “but by no means a surprise.”

In a report released last month, Sensity showed how it was able to use deepfake technology to bypass the automated liveness tests used in "know your customer" (KYC) checks, which banks and cryptocurrency exchanges rely on to verify users' identities.

“As highlighted in our recent report, any company relying on onboarding via remote identity verification is under serious threats of frauds by deepfakes,” Patrini said.

The FBI is urging any companies or individuals targeted by such tactics to immediately report them to the IC3.

