On 4 November 2025, the High Court of England and Wales delivered its decision in the long-awaited case between Getty Images and Stability AI (concerning the latter’s image-generation model Stable Diffusion), marking the first major UK judgment on whether the use of copyrighted visual works to train a generative AI model constitutes copyright infringement.
What was at stake
Getty had advanced a multi-pronged claim: primary copyright infringement, database-right infringement, secondary copyright infringement, trade mark infringement (relating to watermarks), and passing off. However, prior to judgment the primary copyright and database-right claims were withdrawn: Getty accepted that it could not show that the relevant acts of reproduction or storage had occurred within the UK jurisdiction, and that Stability AI had blocked the prompts that were generating the allegedly infringing outputs (para. 9). As a result, the Court was asked to decide only the secondary copyright claims (whether the AI model itself is an "infringing copy" under UK law) and the related trade mark and passing off claims.
The Court’s reasoning and outcome
On copyright, Mrs Justice Joanna Smith DBE found that the statutory term "article" in the Copyright, Designs and Patents Act 1988 (CDPA) can, as a matter of construction, embrace intangible objects (e.g., electronic storage, cloud-based artefacts), with reference to s.17 of the CDPA (see the approach to statutory construction at para. 562), holding that:
‘I consider that an article, which must be an infringing copy, is capable of being an electronic copy stored in intangible form. Standing back, I agree with Getty Images that if the word “article” were construed as only covering tangible articles, this would deprive authors of protection in circumstances where the copy is itself electronic and it is then dealt with electronically. Not only would that be inconsistent with the words of the statute, but it would also be inconsistent with the general scheme of copyright protection which is to reward authors for their creative efforts’ (para 590).
However, this finding did not suffice to render the AI model an infringing copy: the "copy" must embody a reproduction of the original work, or of a substantial part of it. The Court accepted expert evidence (see paras. 3-7) explaining that Stable Diffusion does not store or reproduce the photographs on which it was trained: its weights encode statistical patterns, not stored images, and inference (the generation of images) does not require use of the training data itself. Stable Diffusion is a diffusion model, and not all AI models are diffusion models; some models memorise works and some do not. Here, Getty could not assert as a fact that the weights included a copy of any copyright work, i.e. there was no evidence of memorisation (para. 559). What makes the case the more interesting is that the decision of Mrs Justice Smith DBE will therefore apply to diffusion models (and to models that behave like Stable Diffusion), but not necessarily to other AI architectures. Accordingly, the secondary copyright infringement claim was dismissed.
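To make the Court's technical premise concrete, here is a minimal, purely illustrative sketch (emphatically not Stable Diffusion's actual code; the toy "network" and all names are assumptions for illustration) of why generation from a diffusion-style model involves only fixed numeric weights and random noise, never the training images themselves:

```python
import numpy as np

# Toy illustration: a "trained" diffusion-style model is just a fixed set of
# numeric parameters. Generation starts from pure noise and repeatedly applies
# those parameters -- no training image is read or copied at any point.

rng = np.random.default_rng(0)

# Hypothetical learned parameters: statistical patterns, not stored pictures.
weights = rng.normal(size=(64, 64))

def denoise_step(x, t):
    """One reverse-diffusion step: predict and subtract a fraction of the noise."""
    predicted_noise = np.tanh(weights @ x)   # stand-in for a neural network
    return x - (1.0 / t) * predicted_noise

x = rng.normal(size=64)          # start from random noise, not from any photo
for t in range(50, 0, -1):       # iteratively denoise toward an "image"
    x = denoise_step(x, t)

# The only inputs to generation were the noise vector and the frozen weights.
```

On this (simplified) picture, the legal question becomes whether those frozen weights can ever be said to "embody" a copy of a training work, which is precisely where the evidence on memorisation comes in.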
On the issue of memorisation, notwithstanding the different facts, technology and legislation at issue, the Munich Regional Court (case no. 42 O 14139/24, GEMA v OpenAI) took a different view on whether an AI system can be said to store and reproduce content. A 2023 paper by Nicholas Carlini et al., showing that some training images can be extracted from diffusion models, quickly became a favourite reference in copyright cases. What is usually ignored is that Carlini's own work shows such memorised outputs to be extremely rare. The misuse became so widespread that Carlini published a blog post explaining why his privacy research should not be applied to copyright.
More recent studies reinforce this. Cooper et al. found that memorisation of books is minimal, with only a few outliers, and other research, such as Somepalli et al. on images, shows that memorisation occurs mostly when the same data appears repeatedly (i.e., is duplicated) in the training set.
The bottom line: memorisation exists, but it is uncommon, especially for text, and it can be reduced through proper training practices such as deduplication. Citing Carlini's 2023 paper as blanket evidence of copyright infringement is therefore misleading. Copyright law cares about reproduction, not the theoretical possibility of memorisation; without proof that a model actually outputs copyrighted material, claims of infringement remain speculation.
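For readers curious what "testing for memorisation" means in practice, the studies cited above broadly work by generating many samples and checking whether any are near-duplicates of training items. The following is a hedged sketch of that idea only; the function names and the 0.95 threshold are illustrative assumptions, not the protocol of any particular paper:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def count_memorised(generated, training_set, threshold=0.95):
    """Count generated samples whose closest training item exceeds the threshold."""
    hits = 0
    for g in generated:
        best = max(cosine_sim(g, t) for t in training_set)
        if best >= threshold:
            hits += 1
    return hits

# Stand-in feature vectors (real studies compare embeddings of images or text).
rng = np.random.default_rng(1)
training_set = [rng.normal(size=128) for _ in range(1000)]
generated = [rng.normal(size=128) for _ in range(100)]

print(count_memorised(generated, training_set))  # typically 0: close matches are rare
```

The design point matters legally: such a test produces evidence about specific outputs, which is exactly the kind of proof of reproduction that was absent in the Getty case (para. 559).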
On the trade mark front, the Court accepted that some early versions of Stable Diffusion — under certain prompts — had produced outputs bearing visible watermarks associated with Getty (or its iStock subsidiary). That gave rise to a narrow finding of trade mark infringement under the Trade Marks Act 1994. Nonetheless, this part of the ruling was described as “historic and extremely limited in scope”, applying only to a small number of early outputs; no infringement was found for later or current model versions (still, the finding illustrates the tangible risk for AI deployers).
Thus, the judgment represents a partial win for Getty (on trade marks) — and a far more substantial win for Stability AI (on copyright).
Key takeaways
- The Court’s recognition that an “article” may be intangible is in itself a significant development. It indicates that UK courts are willing to adapt copyright statutory language to new digital realities (e.g., electronic storage, cloud hosting).
- Crucially, the judgment emphasises that not all generative-AI models are alike — the conclusion that Stable Diffusion is not infringing depends on particular facts about how diffusion models work (i.e., they do not store training images). The Court left open the possibility that different model architectures (e.g., ones that memorise or embed actual images) could yield different outcomes.
- Because Getty withdrew the primary (training-based) infringement claim — due to an inability to establish that reproduction/storage occurred in the UK — the Court avoided deciding whether training itself (in the UK) would infringe. That remains untested.
- For rights-holders, the judgment is arguably disappointing: generative-AI developers may continue training and deploying models (even across jurisdictions) without consent, so long as those models do not store reproductions of copyrighted works in a recognisable form. This outcome underscores, once again, the “gap” between current copyright law and the realities of AI.
- The trade mark finding — albeit narrow — signals that rights-holders can still target AI-generated content bearing protected marks or watermark-style elements.
Conclusion
Getty Images v Stability AI does not herald a wholesale rejection of generative-AI development under UK copyright law — but neither does it close the door on future rights-holders’ claims. Rather, it draws a line: an AI model is not automatically an “infringing copy” simply because it was trained on copyrighted works; what matters is how the model encodes or stores information, and what a “copy” means in the context of machine learning.
Moving a step further: licensing. Should rights-holders seek licences for the training itself, or for the display of outputs? Developers will most likely object to the first (if one follows the Getty ruling, training did not constitute a reproduction); as to the second, in Germany it would probably be argued that the display itself flows from training, which entails elements of memorisation (!), potentially leading to conflicting approaches across jurisdictions.
For now, the law remains uncertain. Until either (a) other cases (involving different model architectures or domestic training) produce different outcomes, or (b) the legislature intervenes, generative AI in the UK will continue to operate in a somewhat ambiguous space.