Deepfakes make their way into attacks to evade security


Ever since deepfake technology emerged, it has remained controversial for its manipulative capabilities and potential for misuse. Deepfakes use deep learning algorithms – a form of artificial intelligence (AI) – to fabricate audio, video and images of events that never happened.

Now, however, deepfakes have entered the dark world of cybercrime – which only strengthens the argument that the technology deserves closer attention, and, some would say, an outright ban.

The latest edition of the Global Incident Response Threat Report published by VMware has warned against increasing deepfake attacks and cyber extortion.

Increase in Deepfake attacks

According to the report, deepfake attacks have increased by 13%, and 66% of survey respondents said they had witnessed a deepfake attack in the past 12 months.

“Cybercriminals are now incorporating deepfakes into their attack methods to evade security controls,” said Rick McElroy, Principal Cybersecurity Strategist – VMware.

“Two out of three respondents in our report saw malicious deepfakes used as part of an attack, a 13% increase from last year, with email as the top delivery method,” added McElroy.

Form of Deepfake attacks

The majority of respondents said deepfake attacks most often took the form of video (58%) rather than audio (42%), and the top delivery methods included email (78%), mobile messaging (57%), voice (34%) and social media (34%).

Citing the FBI, the report revealed that there’s been an increase in complaints involving the use of deepfakes and stolen Personally Identifiable Information (PII) to apply for a variety of remote work and work-at-home positions.

According to McElroy, these cybercriminals have evolved beyond using synthetic video and audio simply for influence operations or disinformation campaigns.

“Their new goal is to use deepfake technology to compromise organisations and gain access to their environment,” pointed out McElroy.

New challenges for security teams

Certainly, the use of deepfakes in cyberattacks poses new challenges for security and incident response teams. Responding to these emerging threats demands broader visibility across the organisation.

“In order to defend against the broadening attack surface, security teams need an adequate level of visibility across workloads, devices, users and networks to detect, protect, and respond to cyber threats,” said Chad Skipper, Global Security Technologist – VMware.

“When security teams are making decisions based on incomplete and inaccurate data, it inhibits their ability to implement a granular security strategy, while their efforts to detect and stop lateral movement of attacks are stymied due to the limited context of their systems,” concluded Skipper.
