November 1, 2023 at 05:29PM
Twenty-eight countries in total, including the US, UK, and China, along with the European Union, have signed the Bletchley Declaration to address the risks posed by emerging artificial intelligence (AI). The agreement focuses on managing the cybersecurity, biotechnology, and disinformation risks associated with frontier AI, highlights the shared responsibilities of governments, and promotes global collaboration in AI safety and security research. Cybersecurity experts have applauded the agreement as a proactive approach to ensuring the safe and responsible development of AI.
The Bletchley Declaration is a cooperative agreement to manage the risks posed by emerging AI tools, particularly with respect to cybersecurity. The signatories acknowledge that significant risks may arise from intentional misuse of frontier AI or from unintended loss of control over it, alongside concerns about cybersecurity, biotechnology, and disinformation. The declaration was signed at Bletchley Park, known for its role in World War II as a hub of cryptographic and computing breakthroughs; the site is now a museum and a tribute to the founders of computer science.
Cybersecurity experts have praised the global agreement, emphasizing that AI safety and innovation are not in conflict but go hand in hand: making AI safer will enable society to realize the opportunities these technologies provide. The declaration is seen as a significant step toward the safe and responsible development of AI. It highlights concerns about cybersecurity and privacy protection, demonstrating a worldwide commitment by the signatories to responsible AI innovation, and it sets out shared responsibilities for governments along with a process for global collaboration on AI safety and security research.
The signing of the Bletchley Declaration came shortly after the Biden White House issued an executive order setting new standards for AI safety and cybersecurity. While mindful of the risks involved, Darktrace CEO Poppy Gustafsson remains positive about AI, expressing excitement about how the conversation around AI will evolve and the actions it will drive.