Crafting an AI Policy That Safeguards Data Without Stifling Productivity

November 7, 2023 at 01:08PM

Chief information security officers (CISOs) are facing challenges in the evolving threat landscape with the rise of AI. While some companies are tempted to ban AI due to concerns about bias, lack of transparency, security, and data privacy, AI brings undeniable benefits. CISOs should establish a corporate AI policy that embraces the advantages of AI while setting boundaries to mitigate risks. The policy should involve collaboration across the organization, establish ground rules for responsible AI use, create a process for evaluating use cases, and highlight success stories to encourage compliance.

AI can fuel creativity and improve efficiency in the workplace, but it also raises legitimate concerns about bias, security, transparency, and data privacy. Rather than banning the technology outright, a more effective approach is a comprehensive corporate AI policy that lets employees capture the benefits within clear boundaries that mitigate risk.

The article highlights the internal threat posed by AI, particularly with the rise of large language models (LLMs) and generative AI (GenAI) technology. Employees may inadvertently submit proprietary information, source code, or regulated customer data to AI tools such as chatbots, potentially violating data handling laws. The retention of user inputs by LLM providers also raises security concerns. Enterprises can mitigate these risks by adopting enterprise-focused GenAI licenses that do not ingest inputs as training data and by discouraging the use of non-enterprise licenses on personal devices.
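One lightweight mitigation along these lines is screening text for sensitive content before it ever leaves the enterprise boundary. The sketch below is a minimal illustration, not a substitute for a real data loss prevention (DLP) product; the pattern names and regexes are assumptions chosen for the example, and real deployments would need far more robust detection.

```python
import re

# Hypothetical patterns a company might flag before text is sent to an
# external AI tool. Real DLP tooling would use much richer detection.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text

prompt = "Email alice@example.com about key sk_abcdef1234567890XYZ"
print(redact(prompt))
```

A filter like this could sit in a browser extension or an internal proxy in front of approved AI tools, logging hits so the security team can see what employees are attempting to share.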

To create an effective AI policy, the article suggests the following steps:

1. Make policy development a companywide effort: Since AI will impact every business area, involve key stakeholders across the organization in policy development. This collaborative approach helps identify potential risks, determine the company’s risk tolerance, and find the middle ground for the AI policy.

2. Establish general ground rules: Define responsible and irresponsible behaviors related to AI usage. For example, uploading source code or sensitive data to LLMs that retain training rights is universally undesirable, while using enterprise licenses, validating outputs, and safeguarding company IP are universally good practices.

3. Create an ongoing process for case-by-case decisions: Develop a straightforward process for employees to submit AI use cases for evaluation and approval. This allows for flexibility in addressing specific requests and modifying the policy accordingly.
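The case-by-case intake process in step 3 can be sketched as a simple data model: each request records who is asking, which tool, for what purpose, and how sensitive the data is. The class names, fields, and the single evaluation rule below are illustrative assumptions, not part of the article.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    SUBMITTED = "submitted"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class AIUseCaseRequest:
    requester: str
    tool: str
    purpose: str
    data_classification: str  # e.g. "public", "internal", "regulated"
    status: Status = Status.SUBMITTED
    notes: list[str] = field(default_factory=list)

def evaluate(req: AIUseCaseRequest) -> AIUseCaseRequest:
    # Illustrative ground rule: regulated customer data never goes to
    # external AI tools; everything else is approved for review purposes.
    if req.data_classification == "regulated":
        req.status = Status.REJECTED
        req.notes.append("Regulated data may not leave the enterprise boundary.")
    else:
        req.status = Status.APPROVED
    return req

req = evaluate(
    AIUseCaseRequest("j.doe", "ChatGPT Enterprise",
                     "summarize support tickets", "internal")
)
print(req.status.value)
```

In practice the evaluation step would be a human review board rather than a single rule, but capturing requests in a structured form makes the decisions auditable and lets recurring patterns feed back into the written policy.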

Additionally, it is important to champion success stories so that employees take the AI policy seriously. Highlighting concrete use cases and real-life wins demonstrates the benefits of AI and showcases strategies for using it safely and efficiently.

While the field of AI continues to evolve, a well-defined policy that incorporates best practices from across the company and offers flexibility for updates will empower employees to leverage AI’s benefits while minimizing risk exposure.
