Across 2022 and 2023, indiscriminate attacks ran at between 50,000 and 100,000 per month. Over the same period, the number of groups exchanging information about attacks on biometric and remote human identification systems grew significantly, biometric identity solution provider iProov has revealed in its latest report.

The buzz surrounding Artificial Intelligence (AI) has continued from 2023 through to today, thanks to the seemingly limitless potential of the emerging technology. However, as the AI space has rapidly evolved, so have the threats weaponising these developments.

Easily accessible, criminally weaponised generative AI tools are increasing the need for more secure remote identity verification. According to iProov’s new threat report, Threat Intelligence Report 2024: The Impact of Generative AI on Remote Identity Verification, bad actors are using advanced AI tools to create convincing face swaps, deploying them alongside emulators that conceal the existence of virtual cameras and exploit loopholes in biometric systems, making it harder for biometric solution providers to detect a deepfake.

An emulator is a software tool that mimics a user’s device, such as a mobile phone. iProov says this combination has created ‘the perfect storm’, with attackers making face swaps and emulators their preferred tools for perpetrating identity fraud.
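iProov does not disclose how it detects these tools, but the general idea of an emulator or virtual-camera check can be illustrated with a toy consistency test: a session claiming to be a particular handset should report camera characteristics that handset can actually produce. The sketch below is purely illustrative; every field name, device profile, and threshold is a hypothetical assumption, not iProov’s real detection logic.

```python
# Illustrative sketch only: a toy metadata-consistency check of the kind a
# verification service *might* run to flag emulators or virtual cameras.
# All names and reference values here are hypothetical assumptions.

KNOWN_DEVICE_PROFILES = {
    # Hypothetical reference data: camera resolutions a genuine handset reports.
    "iPhone 14": {(1920, 1080), (3840, 2160)},
    "Pixel 7":   {(1920, 1080), (4032, 3024)},
}

def looks_like_emulator(claimed_model: str,
                        camera_resolution: tuple[int, int],
                        camera_label: str) -> bool:
    """Flag sessions whose reported device metadata is internally inconsistent."""
    # A camera label advertising itself as virtual is an obvious red flag.
    if any(token in camera_label.lower() for token in ("virtual", "obs", "emulated")):
        return True
    # A resolution no physical camera on the claimed model produces suggests
    # an emulator or an injected video feed.
    expected = KNOWN_DEVICE_PROFILES.get(claimed_model)
    if expected is not None and camera_resolution not in expected:
        return True
    return False

# Example: a session claiming an iPhone 14 but streaming from a camera
# labelled "OBS Virtual Camera" would be flagged.
print(looks_like_emulator("iPhone 14", (1280, 720), "OBS Virtual Camera"))  # True
```

In practice, attackers spoof this metadata too, which is why the report highlights metadata spoofing alongside emulators; a single static check like this would be trivially bypassed on its own.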

The use of emulators and metadata spoofing by threat actors to launch digital injection attacks across different platforms was first observed in 2022 and grew substantially through 2023, rising by 353 per cent from H1 to H2 2023.

These attacks are rapidly evolving and pose significant new threats to mobile platforms: injection attacks against mobile web surged by 255 per cent from H1 to H2 2023.

iProov also revealed a significant increase in the deployment of packaged AI imagery tools, which make launching an attack far easier and quicker, a trend that is only expected to continue. The firm saw a 672 per cent increase from H1 to H2 2023 in the use of deepfake media, such as face swaps, deployed alongside metadata spoofing tools.

Emerging threat trends

Andrew Newell, chief scientific officer at iProov, explained: “Generative AI has provided a huge boost to threat actors’ productivity levels: these tools are relatively low cost, easily accessed, and can be used to create highly convincing synthesised media such as face swaps or other forms of deepfakes that can easily fool the human eye as well as less advanced biometric solutions. This only serves to heighten the need for highly secure remote identity verification.

“While the data in our report highlights that face swaps are currently the deepfake of choice for threat actors, we don’t know what’s next. The only way to stay one step ahead is to constantly monitor and identify their attacks, the attack frequency, who they’re targeting, the methods they’re using, and form a set of hypotheses as to what motivates them.”

The iProov Security Operations Center observed two primary attack types: presentation attacks, in which an artefact such as a printed photo or a replayed video is shown to a genuine camera, and digital injection attacks, in which synthetic imagery is fed directly into the data stream, bypassing the camera entirely. The two may have different levels of impact, but both can pose a significant threat when combined with traditional cyber attack tools like metadata manipulation.
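The distinction matters because the two attack families leave different signals. As a minimal sketch of that taxonomy (the signal names below are assumptions for illustration, not iProov’s actual telemetry), a triage step might look something like this:

```python
# Toy triage of the two attack families described above.
# Purely illustrative; signal names are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class SessionSignals:
    screen_replay_artifacts: bool  # e.g. moiré patterns, typical of presentation attacks
    camera_bypassed: bool          # imagery injected into the stream, no physical camera
    metadata_spoofed: bool         # traditional tooling layered on top

def classify_attack(s: SessionSignals) -> str:
    if s.camera_bypassed:
        # Digital injection: synthetic media fed straight into the data stream.
        suffix = " + metadata manipulation" if s.metadata_spoofed else ""
        return "digital injection" + suffix
    if s.screen_replay_artifacts:
        # Presentation: an artefact held up to a genuine camera.
        return "presentation"
    return "no attack indicators"

print(classify_attack(SessionSignals(False, True, True)))
# -> "digital injection + metadata manipulation"
```

Real systems weigh many noisy signals probabilistically rather than branching on booleans; the sketch only shows how the categories combine.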

The report also includes case studies of anonymised, prolific threat actor personas. Each case study evaluates the sophistication of the actor’s methodologies, their level of effort, and the frequency of their attacks. This analysis provides invaluable intelligence and supports iProov in continually improving the security of its biometric platform, helping organisations minimise the risk of exploitation in both present and future remote identity verification transactions.



Article originally published at thefintechtimes.com.