Organizations Embrace Generative AI Amid Legal Risks

As the year of generative artificial intelligence comes to a close, a recent survey found that more than half of organizations have generative AI tools in the pilot or production stage.

Given the clear productivity benefits, it could be considered irresponsible not to leverage generative AI for certain business functions. Yet, the procurement and licensing of third-party generative AI technology presents unique risks that must be considered and, in some cases, mitigated.

Even in the absence of new laws regulating AI, contracts that are silent on the allocation of risks and responsibilities for things such as bias testing and ensuring adequate rights in the training data could leave businesses vulnerable to litigation and regulatory enforcement.

There are several underlying reasons why licensing generative AI from third parties is riskier than licensing other technologies, such as software-as-a-service platforms or cloud services.

For one, because the foundational models for generative AI are trained on such vast amounts of data, there are immense due diligence challenges when it comes to ensuring that the vendor has adequate rights to the data; that the data is a true or appropriate representation of the context or intended use of the AI system; and that the AI system’s use of such data doesn’t violate third-party intellectual property rights or applicable privacy laws.

Second, generative AI learns and evolves over time, which requires additional layers of oversight, monitoring, and auditing. Most software is routinely updated; when you license a SaaS product, updates are typically installed automatically. But because of their scale and complexity, generative AI systems may require more frequent maintenance to address data, model, or concept drift.
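To make the monitoring obligation concrete: data drift is often detected by comparing the distribution of live inputs against the distribution the model was validated on. Below is a minimal, illustrative sketch using the population stability index (PSI), a common drift heuristic; the function name, bin count, and thresholds are assumptions for illustration, not part of any particular vendor's tooling.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between two numeric samples.

    Common rule of thumb: PSI < 0.1 suggests little drift,
    0.1-0.25 moderate drift, > 0.25 significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against zero-width range

    def proportions(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[i] += 1
        total = len(values)
        # Floor each proportion so empty buckets don't produce log(0).
        return [max(c / total, 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Baseline inputs the model was validated on (synthetic example).
baseline = [float(i % 10) for i in range(1000)]

# Identical distributions yield a PSI of zero (no drift).
print(population_stability_index(baseline, baseline))

# A shifted live distribution pushes PSI past the drift threshold.
shifted = [v + 3.0 for v in baseline]
print(population_stability_index(baseline, shifted) > 0.25)  # True
```

In a contract, this kind of check is the sort of "ongoing testing and monitoring" responsibility the parties should assign explicitly, including who runs it, how often, and what happens when a threshold is crossed.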

These issues are now playing out in courts and in conversations between regulators and industry leaders. Cases filed this year include claims that companies used data lakes containing unlicensed works to train AI tools, as well as suits filed by several visual artists alleging copyright infringement.


In the absence of federal regulation, the Federal Trade Commission’s letter to OpenAI provides some useful guidance for deploying generative AI. The letter asks the company to explain how it sources and vets its training data, and how it tests whether its models generate false or misleading statements or reveal personally identifiable information.

To that end, businesses looking to license generative AI should consider the following best practices:

Understand the specific use cases and desired outcomes. Given the buzz surrounding generative AI, it can be tempting to rush into an investment. Yet being thoughtful about how your company intends to use it will inform not only which tool you license, but also the contract terms you’ll want to make sure are included in the agreement.

For instance, if you’re using generative AI technology for voice authentication and fraud prevention, you’ll want to ensure that the vendor complies with biometric privacy laws — and that there are contractual remedies in the event of noncompliance.

Similarly, if generative AI technology is being used for data analytics, you’ll want to ensure that any personal or confidential information put into the AI system won’t be shared with the vendor — or any other users of the AI system — or become part of the training data set of the foundational model.

Conduct thorough due diligence. Due diligence processes must recognize the scale and complexity of generative AI systems, as well as the rapid emergence of AI laws and regulations. Depending on the type of AI system and the proposed application, due diligence areas could include AI governance and oversight, intellectual property rights, data privacy compliance and privacy by design, cybersecurity risks and mitigation, the potential for bias or disparate impacts, and litigation and regulatory enforcement risks.

Assign responsibilities and allocate liability. It’s important for both licensors and licensees of generative AI technology to mitigate risk by entering into a robust written agreement governing the relationship between the parties.


At a minimum, the contract should address restrictions on external data sets and other inputs used within the AI system — including those that may be restricted due to privacy or IP considerations. It also should address requirements around transparency and explainability of the AI system, security and resiliency standards, and responsibility for ongoing testing and monitoring.

The contract should also establish the rights and responsibilities of the parties with respect to the customer’s inputs into the generative AI system. For example, the terms and conditions for Microsoft Corp.’s generative AI services expressly provide that Microsoft doesn’t use the customer’s input to train, retrain, or improve the foundational models used by the generative AI technology, and that Microsoft doesn’t own the customer’s output content.

Apply senior-management-level oversight and governance. Given the complexity of AI systems, governance should start at the highest level within a company, such as the board of directors.

The board (or equivalent governing body) should oversee the effective implementation of policies, procedures, and processes to manage AI risk, including independent reviews and assessments, as well as internal controls and accountability procedures consistent with industry protocols, such as the National Institute of Standards and Technology’s AI Risk Management Framework.

The governance framework should include a written AI policy that establishes guardrails for the use and deployment of AI technology. This could include forming a cross-functional committee of representatives from appropriate disciplines to review and approve use cases.

Moving forward, we can expect more regulation around the use of generative AI, plus new licensing and royalty models (particularly as smaller players in this space gain traction).

Whatever the case, there are steps businesses, vendors, and their counsel can take now to mitigate licensing risks — and put generative AI to good (and legal) use.

