January 8, 2024 at 04:27AM
NIST highlights AI’s security and privacy challenges, including adversarial manipulation of training data, exploitation of model vulnerabilities, and exfiltration of sensitive information. Rapid integration of AI into online services exposes models to threats like corrupted training data and privacy breaches. NIST urges the tech community to develop better defenses against these attacks.
In the report, the U.S. National Institute of Standards and Technology (NIST) highlights the privacy and security challenges that accompany the increased deployment of artificial intelligence (AI) systems. NIST points to adversarial manipulation of training data, exploitation of model vulnerabilities, and malicious interactions designed to exfiltrate sensitive information. The agency broadly classifies these attacks as evasion, poisoning, privacy, and abuse attacks, and notes the lack of robust mitigation measures to counter them. NIST urges the tech community to develop better defenses against these threats. The report also mentions the guidelines for secure AI systems released by the U.K., the U.S., and international partners, which emphasize the vulnerability of AI technologies to attack and the need for improved security measures.
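To make the evasion category concrete, here is a minimal sketch of the idea behind an adversarial (FGSM-style) perturbation. The linear classifier, its weights, and the input are all illustrative toys invented for this example; they do not come from the NIST report.

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w.x + b > 0.
# Weights and bias are illustrative, not from any real model.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(np.dot(w, x) + b > 0)

# A clean input the classifier labels as class 1.
x = np.array([2.0, 0.5, 1.0])

# Evasion attack: nudge each feature against the sign of its weight,
# pushing the decision score below the threshold while keeping the
# input close to the original (the core idea behind FGSM).
eps = 1.0
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the label
```

Real evasion attacks work the same way against deep networks, using the gradient of the loss with respect to the input instead of the raw weights of a linear model.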