April 11, 2024 at 06:06PM
LastPass recently reported an attempted deepfake audio attack in which one of its employees was targeted with an impersonation of the company's CEO over WhatsApp, an uncommon business channel. The attack failed, but LastPass shared the details to raise awareness of AI-generated deepfakes being used in executive impersonation fraud. The rise in deepfake attacks has prompted alerts and advice from organizations such as the US Department of Health and Human Services and the FBI.
The key takeaways are:
1. A LastPass employee was targeted in a voice phishing attack that used deepfake audio to impersonate the company's CEO. The attack failed because the contact came through WhatsApp, an uncommon business channel, which led the employee to correctly ignore and report it.
2. The incident prompted LastPass to publicize the attempt to raise awareness of AI-generated deepfakes being used in executive impersonation fraud and to warn other companies of the threat.
3. The U.S. Department of Health and Human Services (HHS) has also issued an alert about cybercriminals combining social engineering tactics with AI voice cloning tools, particularly against IT help desks. Audio deepfakes make it harder to verify a caller's identity remotely, which makes these attacks difficult to detect.
4. Both the FBI and Europol have cautioned that deepfakes, including AI-generated or manipulated audio, are becoming increasingly sophisticated and could see widespread use in cyber operations, CEO fraud, and evidence tampering.
Overall, organizations should treat deepfake attacks as a growing threat and implement measures to prevent and detect them, such as requiring callbacks to verify employee requests, training help desk staff to recognize social engineering techniques, and using in-person verification for sensitive matters. A minimal sketch of such a callback policy appears below.
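As an illustration only, here is a minimal Python sketch of an out-of-band callback-verification policy along these lines. The channel names, directory contents, and helpers (APPROVED_CHANNELS, directory_phone_for, handle_sensitive_request) are hypothetical, invented for this example rather than taken from LastPass or any of the cited advisories.

```python
from typing import Optional

# Hypothetical set of channels the organization approves for business requests.
APPROVED_CHANNELS = {"corporate_email", "desk_phone", "ticketing_system"}

# Hypothetical system-of-record directory; in practice this would be an HR
# or identity-system lookup, never contact details supplied by the caller.
DIRECTORY = {"ceo@example.com": "+1-555-0100"}


def directory_phone_for(identity: str) -> Optional[str]:
    """Look up the claimed identity's callback number in the system of record."""
    return DIRECTORY.get(identity)


def handle_sensitive_request(channel: str, claimed_identity: str) -> str:
    """Decide how to handle a sensitive request before acting on it."""
    # Requests over unapproved channels (e.g., WhatsApp) are refused and
    # reported, mirroring the LastPass employee's response.
    if channel not in APPROVED_CHANNELS:
        return "reject-and-report"
    callback = directory_phone_for(claimed_identity)
    if callback is None:
        return "reject-and-report"
    # Act only after calling back on the directory number; the voice on an
    # inbound call is never treated as proof of identity on its own.
    return f"hold-until-callback:{callback}"


print(handle_sensitive_request("whatsapp", "ceo@example.com"))    # reject-and-report
print(handle_sensitive_request("desk_phone", "ceo@example.com"))  # hold-until-callback:+1-555-0100
```

The essential design choice is that the callback number comes from a system of record rather than from the inbound message, so a cloned voice alone can never satisfy the check.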