Acceptable Use Policy for Generative AI: Where to Start? 

Generative artificial intelligence (AI) refers to prediction algorithms that can be leveraged to create virtually any type of content, be it text, code, images, audio, or video – think ChatGPT, Bing Chat, Bard, Stable Diffusion, Midjourney, and DALL-E, for example. With the emergence of generative AI-as-a-Service – which has lowered barriers to entry – generative AI is spreading to most business units: marketing and sales, customer operations, software engineering, and R&D. As a result, organizations of all sizes are starting to use these tools for a wide range of use cases.

Still, generative AI, in its current state, comes with a variety of risks that range from security threats to misinformation, deception, discrimination, and, more broadly, noncompliance: violations of confidentiality obligations, of intellectual property rights such as trade secrets and copyrights, and of privacy and data protection obligations and rights.

The concerns around generative AI are warranted. According to the 2024 State of Data Security Report, which surveyed 700 data professionals across industries and geographies, 88% said employees at their organization are using AI, regardless of whether the company has officially adopted it and put a policy in place to dictate its use. Yet, just 50% say their data security strategy is keeping up with the pace of AI evolution, which introduces significant risk of sensitive information being exposed or misused.

Generative AI should therefore be governed by internal policies, just like any third party or in-house tool that processes protected information. But governing generative AI usage requires going beyond traditional security obligations: it requires setting rules for human oversight and review, as well as transparency when content produced for public consumption or decision-making is supported by generative AI.

What Is an Acceptable Use Policy?

An Acceptable Use Policy (AUP) is one policy within a broader information security program, along with data classification, data retention, and incident response policies, among others. It usually sets rules for using an organization’s resources, and in particular IT-related resources, including hardware, software, and networks.

An AUP defines acceptable behavior and prohibits unwanted behavior, with a view to protecting the organization’s assets while ensuring an effective work environment. Typically, the organization will require its employees to acknowledge the AUP before being granted access to IT resources, and detected violations will lead to disciplinary action. AUPs are thus the natural home for governing generative AI usage.

Just like any other policy, an AUP will only be effective if accompanied by education and training, so that employees familiarize themselves with the newly adopted rules and understand how to comply with them in practice.

A Warning About AI Regulations

It is crucial to track new developments, as lawsuits have been brought against a variety of generative AI providers and, in response, lawmakers are switching from a reactive to a proactive stance.

Lawsuits against generative AI providers make it clear that the risk surface is complex and multifaceted, involving multiple legal pitfalls such as intellectual property violations, as well as privacy and data protection violations. For example, the lawsuit against GitHub, Microsoft, and OpenAI centering on GitHub Copilot’s ability to transform commands written in plain English into computer code, and the lawsuit against Stability AI, whose Stable Diffusion model generates images from users’ text prompts, both raise copyright issues. Several lawsuits against OpenAI concerning ChatGPT and a lawsuit against Google related to Bard’s data-scraping practices also raise privacy and data protection issues.

Regulators have started, or are preparing, to regulate certain practices. The recently adopted US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, for example, mandates the issuance of a series of guidelines and recommendations for the Federal Government and its agencies, with a view to advancing the responsible and secure use of generative AI. Notably, blanket bans are discouraged:

“Agencies should instead limit access, as necessary, to specific generative AI services based on specific risk assessments; establish guidelines and limitations on the appropriate use of generative AI; and, with appropriate safeguards in place, provide their personnel and programs with access to secure and reliable generative AI capabilities, at least for the purposes of experimentation and routine tasks that carry a low risk of impacting Americans’ rights.”

Key Steps for Building an AUP

There are five key steps an organization should follow to build an AUP targeting generative AI usage.

1. Define the AUP Scope

An AUP is, in principle, applicable to all of an organization’s employees. There is no reason why the section on generative AI should be treated differently.

It is essential to precisely define the generative AI tooling that is in scope, ideally with examples, so that employees recognize when the rules apply to them. Generative AI tooling can be used in a variety of contexts, e.g., by engineering and research teams to produce code, by marketing teams to produce content, and by all teams to generate meeting notes.

2. Set Clear Acceptable Terms of Use

Set clear acceptable use terms so that there is no ambiguity when employees find themselves in questionable scenarios. Acceptable terms of use should cover:

  1. The list of legitimate business purposes for which generative AI tooling can be used.
  2. A process for vetting AI tools for legitimate business purposes, involving the competent team(s), such as security and privacy. The vetting process should require these teams to perform due diligence on the purposes for which the generated content will be used, the types of data inserted into prompts, who can access or reuse prompts and results and why, intellectual property clearance for the training, deployment, and improvement phases, the legal bases for those phases when personal data is at stake, the operationalization of data subject rights, and the impact of the practice supported by the tool on fundamental rights. For an overview of risks associated with foundation models, which include some generative AI models, read our article on AI risks. One challenge with the generative AI tooling currently on the market is that AI providers are struggling to identify a lawful legal basis under data protection laws, which jeopardizes the tools’ lawfulness.
  3. A requirement to maintain an inventory of vetted generative AI tools (a sketch of what such an inventory could look like follows this list).
  4. Rules for protecting the authentication credentials used to access generative AI tooling, and for the retention and disclosure of prompt history and individual prompts.
  5. The list of categories and classes of data held by the organization that may be entered into generative AI tooling for each use case. The AUP should align with the internal Data Classification Policy on this point.
  6. Rules setting forth the oversight and review process, in particular when generative AI tooling is used to produce content that is shared externally, or is used to support decision-making that may impact individuals. A variety of teams should be involved, including privacy and ethics teams.
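
To make the vetting requirement and the inventory of approved tools more concrete, below is a minimal sketch of how such an inventory could be represented and checked programmatically. The field names, the classification labels, and the is_use_approved helper are illustrative assumptions, not features of any particular product or standard.

```python
from dataclasses import dataclass
from datetime import date

# Data classes ordered from least to most sensitive (labels are assumptions;
# align them with your internal Data Classification Policy).
CLASSIFICATION_ORDER = ["Public", "Internal", "Confidential", "Strictly Confidential"]

@dataclass
class ApprovedAITool:
    """One entry in the inventory of vetted generative AI tools (illustrative)."""
    name: str                     # e.g., a specific SaaS offering and plan
    vendor: str
    approved_purposes: list[str]  # legitimate business purposes listed in the AUP
    max_data_classification: str  # highest data class allowed in prompts
    vetted_by: list[str]          # teams that signed off (security, privacy, legal)
    review_date: date             # when the approval must be re-assessed

# Hypothetical inventory maintained by the IT department.
INVENTORY = [
    ApprovedAITool(
        name="ExampleLLM SaaS",
        vendor="Example Vendor",
        approved_purposes=["drafting marketing copy", "summarizing meeting notes"],
        max_data_classification="Internal",
        vetted_by=["Security", "Privacy", "Legal"],
        review_date=date(2025, 1, 1),
    ),
]

def is_use_approved(tool_name: str, purpose: str, data_classification: str) -> bool:
    """Check a requested use against the inventory before access is granted."""
    for tool in INVENTORY:
        if (
            tool.name == tool_name
            and purpose in tool.approved_purposes
            and CLASSIFICATION_ORDER.index(data_classification)
            <= CLASSIFICATION_ORDER.index(tool.max_data_classification)
        ):
            return True
    return False

print(is_use_approved("ExampleLLM SaaS", "summarizing meeting notes", "Internal"))      # True
print(is_use_approved("ExampleLLM SaaS", "summarizing meeting notes", "Confidential"))  # False
```

In practice, a check like this would sit behind whatever access-request workflow the organization already runs; the point is simply that the inventory and data classification rules in the AUP can be enforced mechanically rather than left to memory.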

3. Blacklist Prohibited Practices

It may seem redundant, but prohibited practices should be spelled out explicitly for the sake of clarity. By way of example:

  1. Content produced with generative AI tooling should never be publicly released without adding a specific acknowledgement.
  2. Prompts and query results should not be disclosed to third parties, except for authorized service providers.
  3. Information classified as ‘Strictly Confidential’ should never be inserted within prompts.
  4. Generative AI tooling should not be accessed with personal email accounts (a sketch of how rules 3 and 4 could be checked automatically follows this list).
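
To illustrate how prohibitions 3 and 4 could be backed by tooling rather than left to policy text alone, here is a minimal, hypothetical pre-submission check. The CONFIDENTIAL_MARKERS patterns and the corporate domain are assumptions for the sketch; a real deployment would rely on the organization’s own data classification and DLP tooling.

```python
import re

# Hypothetical markers for content classified as 'Strictly Confidential';
# a real deployment would rely on the organization's classification/DLP tooling.
CONFIDENTIAL_MARKERS = [
    r"STRICTLY CONFIDENTIAL",   # an explicit classification label pasted with the text
    r"\b\d{3}-\d{2}-\d{4}\b",   # an SSN-like pattern, as one example of sensitive data
]

CORPORATE_DOMAIN = "example.com"  # assumed corporate email domain

def check_prompt(prompt: str, user_email: str) -> list[str]:
    """Return the AUP violations detected before the prompt is sent to the AI tool."""
    violations = []
    if not user_email.lower().endswith("@" + CORPORATE_DOMAIN):
        violations.append("generative AI tooling must not be accessed with personal emails")
    for pattern in CONFIDENTIAL_MARKERS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            violations.append("prompt appears to contain 'Strictly Confidential' information")
            break
    return violations

# Example: both rules trip, so the prompt would be blocked and the user warned.
print(check_prompt("Summarize this STRICTLY CONFIDENTIAL merger memo", "jane@gmail.com"))
```

Pattern matching alone will of course miss most sensitive content; the sketch only shows where such a guardrail fits, not how to detect confidential data reliably.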

4. Allocate Responsibilities and Acknowledge Rights

Allocate responsibilities in relation to the role performed by the employee. By way of example, the following could be specified:

  1. All employees must comply with a predetermined list of standards governing the use of generative AI systems in all cases: professionalism, reflective thinking, respect for others, and compliance with applicable laws, including equality, privacy, and data protection laws.
  2. All employees are encouraged to work collaboratively and discuss concerns/questions related to the use of generative AI tooling as soon as they arise.
  3. All employees have a duty to report violations of this policy.
  4. Line Managers should be made responsible for ensuring that their teams are aware of and comply with the AUP.
  5. The IT department is responsible for managing, consolidating, and/or overseeing the approved list of generative AI systems to ensure that only authorized tools are used within the organization.
  6. The Compliance team, with the assistance of the Legal team, is responsible for handling complaints related to violations of the AUP.
  7. The Compliance team, with the assistance of the Legal team, is responsible for adapting and amending the AUP to meet new applicable legal and ethical requirements.
  8. Employees should be informed about their rights, in particular privacy and data protection rights, when generative AI tooling is introduced within the organization, e.g., the rights to object, access, correct, and delete. One major problem with B2B generative AI-as-a-Service offerings is that they don’t always allow users to exercise their rights at the individual level. Even when they do, they are often limited in their ability to fulfill data subject requests, such as deletion requests.
  9. Employees should be informed about the implications of not adhering to the AUP, and about changes made to the policy.
  10. A timeframe for reviewing the policy should be established, e.g., on an annual basis, and employees should be given an opportunity to provide feedback about the policy to ensure they buy into it.

5. Set an Incident Reporting Process

To detect potential misuse of generative AI tooling as early as possible, employees should be informed about the Incident Reporting Process, the description of which could live in another policy. By way of example, the following considerations could be addressed within the policy (a sketch of what an individual report could capture follows the list):

  1. Employees should be encouraged to discuss concerns/questions with competent teams (IT, privacy, ethics) at any point in time following a particular process.
  2. Employees must report violations of this policy to the Compliance department following a particular process.
  3. All reports of suspected violations or incidents will be investigated promptly and thoroughly, within a predefined time frame.
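
As an illustration only, the sketch below shows what an individual report could capture when an employee flags a suspected violation. The field names are assumptions, and the actual workflow belongs to the organization’s broader incident response policy.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class GenAIIncidentReport:
    """Hypothetical record of a suspected AUP violation involving generative AI."""
    reported_by: str          # employee raising the concern
    tool_name: str            # generative AI tool involved
    description: str          # what happened, e.g., confidential data pasted into a prompt
    data_classification: str  # highest data class potentially exposed
    reported_at: datetime = field(default_factory=datetime.now)
    escalated_to: str = "Compliance"  # team that owns the investigation

report = GenAIIncidentReport(
    reported_by="j.doe",
    tool_name="ExampleLLM SaaS",
    description="Customer contract pasted into a prompt by mistake",
    data_classification="Confidential",
)
print(report)
```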

Getting Started

With these five steps, the process of building an AUP that addresses generative AI becomes clearer, but also more urgent. Generative AI tools and LLMs are evolving quickly, as underscored by the half of data professionals who say their data security strategy is failing to keep pace. To ensure that the use of generative AI tools does not become unmanageable, and thus highly risky, organizations must prioritize the introduction of an AUP.

Learn More About AI Security

Check out our other AI blogs.
