As generative artificial intelligence (AI) continues to evolve and permeate various sectors, many businesses have started forming and implementing policies to govern its use. These businesses face crucial decisions about how to integrate these tools, including the handling of training data and prompts, into their operations while managing the associated risks. Companies ranging from creative service firms to hedge funds are increasingly seeking guidance on implementing generative AI policies that address their unique needs. This brief article explores the diverse concerns and risk profiles of different industries, providing insight into why developing an internally aligned generative AI policy is essential for optimizing usage and minimizing potential pitfalls.

Concerns Across Industries

When consulting with clients across various sectors, we have noted distinct requests for generative AI policies tailored to their specific contexts. Here are three representative types of businesses and their principal concerns:

Creative Service Businesses: These firms often express anxiety about potential copyright infringement when generative AI tools produce materials reminiscent of the content on which the tools were trained. The ability of such tools to generate similar, and even identical, written materials or artworks creates a pressing need for policies that safeguard intellectual property rights while still allowing employees to engage in creative ideation with AI tools. These firms typically seek to identify AI tools that provide broad indemnification rights for infringement caused by the tool (offered by many vendors, but frequently subject to caveats and carveouts that leave the customer exposed to significant risk) and to develop clear policies for their employees to follow (often opting for simplified “Dos and Don’ts” lists).

Hedge Funds and Venture Capital: For hedge funds and venture capital funds, heightened regulatory scrutiny raises the stakes of data security risks and the ingestion of material nonpublic information (MNPI) when using generative AI. The risk of inadvertently disclosing confidential or personal information increases when an AI tool is not locally hosted or run on a private instance, or when MNPI is used to train the tool. A generative AI policy for financial services firms must include provisions that ensure adequate cybersecurity measures are in place and restrict the input of confidential information into AI tools. Venture capital firms may also want to establish a process for ensuring that their portfolio companies use AI tools responsibly. In general, policies that include a pathway for approving new AI tools and clear diligence objectives for assessing vendors are key to empowering employees while setting clear benchmarks for vendors.

Software Solution Businesses: Companies that develop or integrate software solutions view generative AI as a tool for innovation. However, they must address issues surrounding the provenance and ownership of data used for training and prompts, user data security, infringement risk, open-source license compliance, and the ethical implications of AI-generated solutions. Any policy expected to apply firm-wide must consider each department’s concerns, from marketing to product development. Because many of these businesses rely on funding from sophisticated investors (such as the venture capital firms mentioned above), they must stay at the forefront by adopting a policy that addresses responsible AI tool usage; investors have come to expect such reassurances in funding rounds.

Crafting an Internally Focused Policy

An effective generative AI policy should be internally facing, guiding employees on acceptable usage while considering the diverse needs of various stakeholders. For instance, a creative services firm might allow AI tools for ideation but prohibit using generative AI to create client deliverables. Conversely, a hedge fund may limit AI tool usage solely to tools pre-approved by its legal and technology departments based on security assessments, and only for specifically approved use cases. Businesses with preexisting policies that address some of these concerns can amend those policies to account for the nuances of AI tool usage, but a more comprehensive approach, adopting a distinct, standalone AI policy that points employees to a single resource, would consolidate the issues and provide more use-case-specific guidance.

When formulating a policy, it is critical to ensure practicality. A policy that is too restrictive could inadvertently push employees to circumvent it, rendering the guidance ineffective. To mitigate this risk, organizations should engage their teams in discussions about potential uses for generative AI tools and the data needed to support them. Establishing a framework for employee-driven inquiries into new tools encourages compliance and fosters an environment of accountability.

Legal counsel can assist in identifying the best approach to policy development. Whether a business adopts a comprehensive long-form policy, a simple “Dos and Don’ts” list, or a hybrid model, management must ensure that employees clearly understand what is expected of them. Businesses must also inform departments subject to specific AI regulations (such as state laws that restrict AI usage in connection with employment-related decision making) of those regulations and identify the proper person to contact with questions about implementation.

Conclusion: Tailor Your Approach

Not every business will benefit from a generative AI policy. But no firm will benefit from a policy that is unnecessarily restrictive or does not comply with laws applicable to its business. Organizations must critically assess their needs and risks before adopting any standardized ‘one-size-fits-all’ policy.

Organizations should also consult with legal counsel to determine whether a generative AI policy is necessary and what guardrails to establish. A policy that promotes innovation and creativity, without unnecessary restrictions that stifle initiative, can achieve the dual goals of empowering employees and increasing efficiency.

A generative AI policy tailored to fit specific business needs enables companies to better navigate the evolving AI landscape, harness the benefits these tools offer, and protect their interests.

Bryan Sterba is a partner in Lowenstein Sandler’s emerging companies and venture capital group who advises clients on leveraging emerging technologies to achieve their business objectives, as well as on day-to-day intellectual property and commercial contract matters.

Mark Kesslen is chair of the firm's intellectual property group. He devotes his practice to clients engaged in creating businesses, launching new products, and conducting M&A and venture capital transactions.

Reprinted with permission from the December 3, 2024 edition of Legaltech News © 2024 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or asset-and-logo-licensing@alm.com.
