
Gartner, a world leader in research and advisory for companies, predicts that by 2026, attacks using AI-generated deepfakes to spoof facial biometrics will lead companies to no longer consider these identity verification and authentication solutions trustworthy when used on their own.

“In the last decade, there have been several inflection points in the field of Artificial Intelligence that enable the creation of synthetic images. A deepfake is a type of technology that can be used to create fake videos, usually with the aim of deceiving people. These artificially generated images of real people's faces can be used by malicious actors to undermine biometric authentication or render it ineffective,” says Akif Khan, Vice President and Analyst at Gartner. “As a result, companies may begin to question the trustworthiness of identity verification and authentication solutions, as they will not be able to tell whether the face being verified is that of a real person or not.”

Identity verification and authentication processes that use facial biometrics today rely on presentation attack detection (PAD) to assess the user's liveness. “Current standards and testing processes for defining and evaluating PAD mechanisms do not cover the digital injection attacks using AI-generated deepfakes that can be created today,” said Khan.

According to Gartner research, presentation attacks are the most common vector, but injection attacks increased by 200% in 2023. Preventing such attacks will require a combination of PAD, injection attack detection (IAD), and image inspection.
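The layered defense described above can be illustrated with a minimal sketch. This is not a real vendor API: the class, function names, scores, and threshold below are hypothetical, and real PAD/IAD products expose far richer signals. The point it shows is that a deepfake only needs to defeat the weakest layer, so the layers should be combined conservatively rather than averaged.

```python
# Illustrative sketch only (hypothetical names, not a vendor API):
# combining the three layers Gartner describes -- presentation attack
# detection (PAD), injection attack detection (IAD), and image
# inspection -- into a single liveness verdict.
from dataclasses import dataclass

@dataclass
class LivenessSignals:
    pad_score: float               # 0..1, confidence the capture is a live presentation
    iad_score: float               # 0..1, confidence the video feed was not injected
    image_inspection_score: float  # 0..1, confidence the frames show no synthesis artifacts

def liveness_verdict(s: LivenessSignals, threshold: float = 0.8) -> bool:
    """Require every layer to clear the threshold, because an attacker
    only needs to beat the weakest one; averaging would let a strong PAD
    score mask a suspicious injection check."""
    return min(s.pad_score, s.iad_score, s.image_inspection_score) >= threshold

# A convincing deepfake may score well on PAD but fail the injection check:
print(liveness_verdict(LivenessSignals(0.97, 0.45, 0.91)))  # False
```

Using `min` rather than a weighted average is one possible design choice; it encodes the article's point that presentation-only checks are insufficient once injection attacks are in play.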

Combine injection attack detection and image inspection tools to contain deepfake threats

To help companies protect themselves against AI-generated fakes beyond facial biometrics, CISOs (Chief Information Security Officers) and risk management leaders must choose vendors that can demonstrate both these capabilities and a roadmap that goes beyond current standards, as well as show that they are monitoring, classifying, and quantifying these new types of attacks.

“Enterprises should begin defining a minimum baseline of controls by working with vendors that have specifically invested in mitigating the latest deepfake-based threats, using injection attack detection in conjunction with image inspection,” says Khan.

Once the strategy is defined and the baseline is established, CISOs and risk management leaders should add further recognition signals, such as device identification and behavioral analytics, to increase the chances of detecting attacks on their identity verification processes.
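One way to read this recommendation is as a risk-scoring layer on top of the biometric check. The sketch below is purely illustrative: the signal names, weights, and step-up threshold are assumptions for the example, not anything prescribed by Gartner or by a specific product.

```python
# Hypothetical sketch: layering extra recognition signals (device
# identification, behavioral analytics) on top of the biometric check.
# Each signal contributes to a 0..1 risk score; high risk triggers
# step-up verification. Weights and threshold are illustrative only.
def session_risk(known_device: bool, behavior_match: float,
                 liveness_passed: bool) -> float:
    """Combine signals into a risk score between 0 and 1."""
    risk = 0.0
    if not liveness_passed:
        risk += 0.5                      # biometric layer already suspicious
    if not known_device:
        risk += 0.3                      # unrecognized device fingerprint
    risk += 0.2 * (1.0 - behavior_match) # behavioral-analytics mismatch
    return min(risk, 1.0)

def requires_step_up(risk: float, limit: float = 0.4) -> bool:
    """Ask for additional verification when the combined risk is high."""
    return risk >= limit

# New device plus unusual behavior pushes the session over the limit,
# even though the liveness check itself passed:
print(requires_step_up(session_risk(False, 0.3, True)))  # True
```

The value of this layering, as the article argues, is that an attacker who defeats the face check alone still trips the device and behavioral signals.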

Above all, security and risk management leaders responsible for identity and access management must take steps to mitigate the risks of AI-driven deepfake attacks, selecting technologies that can prove genuine human presence and implementing additional measures to prevent account takeover.

Gartner clients can learn more in “Predicts 2024: AI and Cybersecurity – Turning disruption into opportunity.” Gartner for Cybersecurity Leaders equips security leaders with the tools to help restructure roles, align security strategy with business objectives, and create programs that balance protection with business needs. Additional information is available at https://www.gartner.com/en/cybersecurity
