Rise of deepfake threats means biometric security measures won’t be enough

February 1, 2024 at 01:53PM

Gartner predicts that cyberattacks using AI-generated deepfakes will erode confidence in facial biometrics as a reliable means of identity verification. Deepfakes challenge security systems that rely on facial recognition and liveness detection, so additional layers of security are needed, such as verifying device information and using AI to detect deepfakes.

The key takeaway is that AI-generated deepfakes present a significant challenge to the security of biometric authentication. Gartner VP Analyst Akif Khan warned that deepfakes could undermine security measures such as facial recognition, and he emphasized the importance of supplementing and improving existing defenses to address this new threat.

Additionally, he pointed to potential supplementary signals such as device verification, location tracking, and the frequency of requests from the same device. Security system developers are also exploring the use of AI, specifically deep neural networks, to detect deepfake images.
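To make the supplementary signals concrete, here is a minimal sketch of two of the checks mentioned above, recognizing the requesting device and flagging an unusual request frequency. All names (`RiskChecker`, `KNOWN_DEVICES`, the flag strings) are illustrative assumptions, not part of any specific product or Gartner recommendation.

```python
import time
from collections import defaultdict, deque

# Hypothetical set of devices the user has previously enrolled.
KNOWN_DEVICES = {"device-abc"}

class RiskChecker:
    """Collects risk flags for a single identity-verification attempt."""

    def __init__(self, max_requests=5, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # device_id -> request timestamps

    def assess(self, device_id, now=None):
        """Return a list of risk flags; an empty list means no flags raised."""
        now = time.time() if now is None else now
        flags = []

        # Signal 1: is this a device we have seen the user enroll before?
        if device_id not in KNOWN_DEVICES:
            flags.append("unrecognized-device")

        # Signal 2: how many requests has this device made in the sliding window?
        timestamps = self.history[device_id]
        timestamps.append(now)
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        if len(timestamps) > self.max_requests:
            flags.append("high-request-frequency")

        return flags
```

In a layered design, flags like these would not replace the biometric check; they would feed a risk score that decides whether to allow the match, demand step-up verification, or reject the attempt.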

In conclusion, organizations are advised to adopt a multi-layered, defense-in-depth security approach to effectively mitigate the threats deepfake technology poses to biometric security.
