May 28, 2024 at 03:08PM
OpenAI forms a safety and security committee led by company directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and CEO Sam Altman. The committee will make safety and security recommendations for OpenAI’s projects and operations, starting with a 90-day evaluation period. Observers have questioned whether the change will meaningfully improve safety or deliver broader societal benefit.
Summary of Meeting Notes:
OpenAI has formed a safety and security committee led by company directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and CEO Sam Altman. The committee is tasked with making recommendations to the board on safety measures and security decisions for OpenAI projects and operations.
The committee’s first task is to evaluate and further develop the company’s processes and safeguards over the next 90 days, after which its recommendations will be reviewed by the board before being shared with the public.
This move follows the resignation of a senior OpenAI safety executive and the disbanding of its “superalignment” safety oversight team. Cybersecurity expert Ilia Kolochenko expresses skepticism about how the change will benefit society, emphasizing that while making AI models safe is essential, safety alone does not guarantee their accuracy, reliability, fairness, transparency, or non-discrimination.