
Existing Enterprise Agreements: A Trojan Horse for AI Risk?

26/07/2023

Locations

United Kingdom

AI functionality is beginning to appear in tools used by organisations, but it is not always being picked up for review by legal and risk functions. As a result, business leaders can remain unaware of the nature and scale of the risks being introduced into their organisations. Don't let existing relationships and implementations become Trojan horses.

A key issue facing business leaders in managing the risks of artificial intelligence is the lack of full visibility of AI usage within their organisations. Many are putting processes in place to capture new potential use cases and projects as they arise, and are seeking information from technology and business teams about what might already be in use, but this misses a critical element.

Organisational Firewall

Organisations have been scrambling to respond to the opportunities, challenges, and risks presented by artificial intelligence since OpenAI's large language model chatbot, ChatGPT, drew worldwide attention upon its launch in November 2022. The pace of development has caught many by surprise, and we're seeing a wide range of maturity levels in AI governance across the business world.

Even as organisations seek to understand the implications of AI, software vendors are embracing its transformative power to enhance their products and services. Whilst organisations will understandably want to take advantage of the ways in which AI systems can revolutionise their operations, they also recognise the importance of approaching a vendor's AI offering with caution.

As a consequence, we're seeing new and existing AI projects placed under careful scrutiny by legal teams and risk management functions. Yet despite this heightened scrutiny and alertness to anything involving artificial intelligence, a substantial gap remains in the organisational firewall, meaning that AI risk is being introduced without review.

A Trojan Horse

Enterprise IT products, from productivity suites to web browsers to ERP and CRM systems, are embedded and integrated systems that go to the very heart of business operations. They are often procured on largely standard terms with a relatively low level of scrutiny, and further engagement with risk management functions on renewal (which may occur automatically) tends to be limited.

However, the vendors of these products are not sitting on their hands when it comes to the use of AI, and we're seeing AI functionality start to appear across a range of tools. The nature of the contractual relationships with product vendors means that this AI functionality is not being picked up for review by legal and risk functions.

The impact is that business leaders will remain unaware of the nature and scale of the risks being introduced into their organisation, and AI tools may be made available that are inconsistent with the risk approaches and policies being introduced by leadership. Existing implementations and contractual relationships can become Trojan horses, introducing risk into the business without alerting the sentries.

A Proportionate Response

It is important to factor in the potential for AI to be introduced through existing vendor relationships without this triggering a legal or risk review under existing processes. The integration of AI into enterprise IT products may result in additional contractual terms, and you should take care that these are not accepted without adequate oversight at the point of renewal. Alternatively, it may be that no new terms are introduced when they really should be.

At the outset, you may have negotiated specific legal and commercial arrangements addressing risk, based on key factors such as the level of control, accountability, and oversight that you and the vendor each have over the services or the output of the product. AI has the potential to transform those key risk factors, and you will want to make sure that the changing risk profile can be managed through the contractual and operational protections you have in place.
 
You should consider what processes the introduction of new functionality into existing tools should trigger, and who should be responsible for triggering them. We'd suggest reviewing vendor roadmaps for insight into when a vendor intends to introduce AI solutions, and working with technology sourcing teams to identify renewal dates and plan for risk review. Technology and business teams need to play their part in the organisation's approach to vendor and contract management, and good lines of communication between those responsible for the tools and those managing risk will be critical.
 
Here at Fieldfisher we're getting a huge volume of questions about how to address AI-related risk. Our lawyers have been tracking the issues for over a decade and are actively engaged with clients in a wide range of sectors on the future of AI, the regulatory landscape, and the risks involved.

If you'd like to talk to us about implementing a proportionate and business-focused approach to managing AI legal and compliance risk, please contact Chris Eastham.


Areas of Expertise

Technology and Data

Related Work Areas

Artificial Intelligence