
AI-Related Lawsuits: How The Stable Diffusion Case Could Set a Legal Precedent


The groundbreaking lawsuit over Stable Diffusion has opened a Pandora’s box of questions about how AI technology should be regulated and what precedents need to be set to do so. It marks the start of a new era of litigation in the realm of AI-related claims, and its implications could be far-reaching and long-lasting: without proper legal regulation, the consequences of AI-related disputes could be severe. Stable Diffusion has created a ripple effect in the legal world that will undoubtedly reverberate for years to come.
 

An Overview

Before we explore the Stable Diffusion lawsuit in detail, let’s pause to discuss what an AI-related lawsuit is and introduce Stable Diffusion, the technology at the heart of the dispute.
 

AI-related lawsuits

As lawyers, we need to understand and stay alert to the legal issues surrounding the use of AI. AI-related lawsuits involve allegations of harm caused by the use of AI, with claims ranging from discrimination by biased algorithms to negligence involving faulty AI technology. Such lawsuits have become increasingly common in recent years, a sign that companies should take extra precautions when dealing with AI technology.

Discrimination is a widely acknowledged legal issue related to AI. For instance, the US Department of Housing and Urban Development charged Facebook in 2019 with allowing advertisers to use its ad-targeting algorithm to exclude specific groups on the basis of race. Additionally, COMPAS, a risk-assessment algorithm used by some US courts to estimate defendants’ likelihood of recidivism, has been criticized for replicating gender and ethnic biases in its scores.

The lawsuit Rana v. Amazon et al. shows how AI-related litigation can also involve negligence claims tied to faulty AI systems. Here, the plaintiff accused Amazon of vicarious liability for a car crash caused by its delivery driver’s carelessness, raising the question of how far a company’s responsibility extends when AI-driven logistics systems direct a driver’s work.

It’s vital for businesses to know and understand the legal implications of utilizing AI technology. Companies should take measures to reduce potential AI risks, such as routinely auditing algorithms to make sure they don’t contain bias and consistently testing AI programs to guarantee they work as intended. Companies must also understand applicable privacy laws, such as HIPAA, and regulations such as the Fair Credit Reporting Act, which governs how consumer data may be used in automated decisions about individuals.
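As one illustration of what a routine bias audit might look like in practice, the following sketch compares a model’s selection rates across groups and applies the “four-fifths” rule of thumb used in US employment-discrimination analysis. The data, group labels, and threshold here are hypothetical; a real audit requires proper statistical and legal review.

```python
# Minimal sketch of a routine bias audit for a binary decision system.
# Groups, decisions, and the 80% threshold are illustrative only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs; returns rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (protected-group label, model decision).
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

rates = selection_rates(sample)
ratio = disparate_impact(rates)
print(rates, ratio)
if ratio < 0.8:  # "four-fifths" rule of thumb
    print("Potential disparate impact: investigate further.")
```

A check like this is only a first-pass screen; a low ratio does not prove unlawful discrimination, and a passing ratio does not rule it out.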
 

What is Stable Diffusion?

Stable Diffusion is a deep-learning text-to-image model released in 2022. It is best known for generating detailed images from textual descriptions, but it can also perform tasks such as inpainting, outpainting, and text-guided image-to-image translation. Technically, it is a latent diffusion model, a type of deep generative neural network developed by the CompVis group at LMU Munich.

The model works through noise: during training, noise is progressively added to an image, and the network learns to reverse that process, removing the noise step by step until a realistic image matching the text prompt remains. A variational autoencoder (VAE) compresses the image from pixel space into a smaller latent space, and noise is iteratively added to this latent representation during forward diffusion. A U-Net with a ResNet backbone then denoises the output of the forward diffusion to recover the latent representation. Finally, a text-understanding component encodes the prompt into numeric form, and an attention mechanism lets the learned text concepts steer the denoising. At generation time, the model starts from random noise and denoises it iteratively until the configured number of steps is complete.
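To make this concrete, here is a minimal sketch of generating an image with the openly released Stable Diffusion weights via Hugging Face’s diffusers library. This is not part of the legal record; the checkpoint name, prompt, and parameter values are illustrative, and a GPU is assumed.

```python
# Minimal text-to-image sketch using the diffusers library.
# Checkpoint, prompt, and parameters are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # public checkpoint (assumed available)
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # diffusion sampling is impractical on CPU

image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=50,  # number of iterative denoising steps
    guidance_scale=7.5,      # how strongly the prompt steers denoising
).images[0]
image.save("lighthouse.png")
```

Each of the 50 steps removes a little of the initial random noise in latent space, which is precisely the iterative denoising process described above.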
 

The dispute

The Joseph Saveri Law Firm and lawyer Matthew Butterick have filed a lawsuit against Stability AI and Midjourney on behalf of three artists, accusing the companies of unlawfully accessing and using copyrighted images to train their software. The suit, filed in US federal court in California, alleges that the defendants scraped billions of works from the internet and used them to create derivative works without obtaining authorization. Separately, Getty Images has initiated legal action against Stability AI before the High Court of Justice in London, claiming copyright infringement and breach of its terms of service, including through web scraping.

Getty has indicated that it is not seeking financial compensation or a halt to AI art tools; rather, it hopes to establish a legal standard more favorable to Getty Images and its contributors. The allegations made in the suits still need to be proven, and numerous legal experts expect that these questions will indeed have to be resolved in court. The cases mark an escalation in the ongoing conflict between AI businesses and content creators over credit, compensation, and the future direction of the creative industries.
 

Consequences for Future Disputes

The ruling in the Stable Diffusion case may shape future lawsuits concerning AI. It could serve as a reference point in cases where a company fails to properly assess the risks of releasing a new AI product, and it could lend support to claims involving biased AI algorithms, where a corporation is held accountable for the prejudice caused by its technology.

The consequences of using faulty AI technology are far-reaching and potentially severe. Companies are ultimately responsible for properly testing their AI products before they hit the market, and neglect of that responsibility could give rise to rulings that set powerful legal precedents. Furthermore, the ethical implications of using AI technology raise a host of legal questions that are here to stay. Lawyers and start-ups should be vigilant about the compliance of their AI products, because while the future of this technology is bright, the potential for legal disputes is equally rife.


Dennis Hillemann is a specialist in administrative law and a partner in our Hamburg office. He has recently published and lectured on the use of AI in the public sector. In addition, he advises companies and public authorities on digitalization issues.
 
