In November 2023, Responsible Innovation Labs (RIL), a coalition of technology industry founders and investors focused on responsible innovation, published a set of guidelines for AI development and investment titled “Voluntary Responsible AI Commitments for Startups and Investors.” The accompanying “Responsible AI Protocol” offers expanded implementation guidance to help founders and funders adhere to the commitments.

RIL’s commitments encourage governance processes for AI development, transparency in how AI systems are built, forecasting of the risks and benefits of AI technologies, testing to ensure product safety, and ongoing improvements to promote efficacy and mitigate risk. RIL’s protocol explores each commitment in greater depth, offering a step-by-step implementation guide and directing readers to additional resources. The commitments are aimed at startups, venture capital firms, and other early-stage investors. To date, over 100 venture capital funds and others in the technology industry have signed the commitments.

Implications for Due Diligence

RIL’s commitments have the potential to reshape the diligence landscape for AI startups. Investors who have signed on will be looking for philosophical alignment with the commitments, clear forecasts of a new technology’s risks and benefits, detailed auditing and testing protocols that ensure safety and mitigate risk, evidence of governance structures monitoring AI use and development, and red-teaming (ethical hacking) that validates the safety of the product’s design. Investors will also expect companies to devote resources not only to their AI products but also to the explanations and supporting materials behind those products.

Industry Action Precedes AI Regulation

The RIL commitments follow the Biden administration’s Voluntary AI Commitments (July 2023), agreed to by seven leading AI companies, and President Biden’s executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 2023), both of which are informative but impose no mandates. While the effects of the executive order will take time to develop, the RIL commitments and other industry-driven self-regulatory measures may end up fundamentally shaping AI industry norms around development and investment.

Commitment signatories can help improve safety and efficiency throughout the AI industry by pushing non-signatories in the market to adopt similar self-regulatory approaches. For now, businesses that have implemented or are considering AI programs should seek the advice of qualified counsel to ensure that AI usage, policies, and procedures are tailored to business objectives, closely monitored, and flexible enough to adapt as the technological and legal landscapes continue to evolve.