Over the summer of 2025, the much-watched case Bartz v. Anthropic took an unexpected turn: the parties reached a proposed settlement for US$1.5 billion. This development raises a number of legal, practical, and doctrinal questions — especially for authors, publishers, and institutions grappling with generative AI. But first, some background information on the litigation.
The lawsuit was brought by authors alleging that Anthropic had used pirated copies of books (drawn from sites such as LibGen and PiLiMi) to train its models, without obtaining permission. In June 2025, Judge Alsup ruled on summary judgment that using lawfully acquired books without permission to train AI was fair use, but he denied Anthropic’s request for summary judgment as to the pirated copies, finding that the piracy was not fair use. Judge Alsup scheduled a trial for the 1st of December 2025 to determine Anthropic’s potential liability for the piracy.
The district court certified a class comprising rightsholders (authors and publishers) of books that Anthropic had obtained from piracy sites, subject to eligibility rules (e.g. the book must be registered with the U.S. Copyright Office, have an ISBN or ASIN, and meet timing criteria). More specifically, the work must have been registered within three months of publication, or within five years of publication and before the download date of the 10th of August 2022.
The settlement agreement was preliminarily approved by Judge Alsup on the 25th of September 2025, for US$1.5 billion, to be paid over time in several installments. The settlement suggests that, after fees and deductions, rightsholders can expect approximately US$3,000 per work (though that sum must be divided among co-authors, publishers, or other rightsholders). For example, for “trade and university press” works, the default is a 50/50 division between the author side and the publisher side. A claimant may, however, override that split by providing contract documentation or a good-faith alternative split.
Maria A. Pallante, President and CEO of the Association of American Publishers, commented:
‘This settlement is a major step in the right direction in holding AI developers accountable for reckless and unabashed infringement. Piracy is an astonishingly poor decision for a tech company, and—as the settlement figure demonstrates—an expensive one. The law should not reward AI companies that profit by stealing.’
Dan Conway, CEO of the Publishers Association in London said:
‘This is a significant moment in the ongoing global debate around copyright and AI. For the first time, a major AI company has agreed to pay publishers and authors for using pirated material in its training data. This is a clear win for rightsholders with copyright registered in the US and – we hope – a signal to AI developers worldwide that they must license works and establish a functioning market for AI training. It’s important that we all recognise that this outcome is fundamentally limited, however, in that it is only past use of pirated book content for training that is covered. The outcome does not establish a clear precedent that all copyright-protected content must be licensed and paid for in AI training, either in the US or globally.
The US court-approved process for claimants will be published shortly and we, along with other UK bodies, will signpost to this when it becomes available. The first step for UK publishers is to verify which of their works are included and work with their industry associations on next steps. We will be supporting our members with this, including through direct engagement with US Class Counsel and written guidance.
Overall, this is a step in the right direction in the US, but the bigger battle for saving copyright in the world of AI domestically and globally is still very much on. Here in the UK that means the government needs to turn its mind to the AI Bill and show political leadership around ensuring AI firms are transparent about the content they have used. The data which has enabled this settlement demonstrates that granular transparency is technically possible, despite the lobby of big tech to the contrary.’
As a next step, authors must search the Works List once it is published (expected by 2 October 2025), submit a Claim Form by the deadline (23 March 2026), and decide whether to opt out.
The court will hold a final approval hearing in April 2026. If approved, distributions are expected to be calculated by June 2026, with payments to follow (though delays are possible, especially in the event of appeals).
The Anthropic settlement is a pivotal moment in the evolving interface between generative AI and copyright. While it does not resolve all legal controversies, it demonstrates that authors and publishers are not powerless and that AI firms may be compelled to negotiate or litigate for access to copyrighted data. Legal scholars will watch this case closely as a reference point for future AI-copyright conflicts.