ChatGPT Wrote Game Of Thrones Fanfic, And Now George RR Martin Is Suing
George RR Martin is waging one of the highest-stakes copyright battles in decades. In October 2025, a New York federal judge denied OpenAI's request to dismiss Martin's claim that ChatGPT violated the copyrights of his books and their television adaptations when it produced outlines for Game of Thrones sequels without his consent. The lawsuit consolidates similar claims from other major authors, including journalist Ta-Nehisi Coates, comedian Sarah Silverman, and fiction giants Jonathan Franzen, David Baldacci, Jodi Picoult, and John Grisham. The ruling came a few weeks after plaintiffs secured internal Slack messages in which OpenAI employees discussed deleting datasets of pirated books. That revelation could help establish whether OpenAI's infringement was willful, which would raise the statutory ceiling from $30,000 to $150,000 per work (the ordinary range runs from $750 to $30,000). For context, OpenAI's various court cases concern tens of millions of artistic and journalistic works.
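To get a feel for those stakes, here is a back-of-the-envelope sketch of statutory damages exposure under 17 U.S.C. § 504(c). The work count is a hypothetical round number chosen for illustration, not a figure from the litigation:

```python
# Rough statutory damages exposure per 17 U.S.C. § 504(c).
# WORKS is a hypothetical illustration, not a number from the case.
WORKS = 10_000_000              # hypothetical count of infringed works
MIN_PER_WORK = 750              # statutory minimum per work
WILLFUL_MAX_PER_WORK = 150_000  # per-work maximum for willful infringement

floor_exposure = WORKS * MIN_PER_WORK
willful_ceiling = WORKS * WILLFUL_MAX_PER_WORK

print(f"Floor:           ${floor_exposure:,}")   # $7,500,000,000
print(f"Willful ceiling: ${willful_ceiling:,}")  # $1,500,000,000,000
```

Even at the statutory minimum, a ten-million-work finding would run into the billions, which is why the willfulness question looms so large.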
The case is one of a litany of copyright battles facing AI companies this year. In suits against Meta, Anthropic, and OpenAI, plaintiffs claimed that the AI giants scraped pirated book repositories, such as LibGen and Bibliotik, to build their training datasets. In September, Anthropic paid a $1.5 billion settlement to a group of authors over its use of those bootlegged repositories to train its chatbot, Claude. In March, a judge ruled that OpenAI and Microsoft could not block a lawsuit from a group of news organizations, including the New York Times, alleging that their articles were used to train the companies' models. AI "slop" video apps, like Meta's Vibes, OpenAI's Sora, and Character.AI's Feed, have also faced backlash for reproducing protected characters. Taken together, the outcomes of these cases will carry major implications for the world's fastest-growing industry.
Fair use?
At the center of Martin's lawsuit is "fair use," a legal doctrine with 19th-century roots that permits limited use of copyrighted material for activities such as criticism, news reporting, or education. Courts weigh four factors: the purpose and character of the use (commercial or nonprofit), the nature of the copyrighted work, the amount copied, and the effect on the market for the original. Critically, fair use is an affirmative defense, so the burden of proof falls on the alleged infringer, not the copyright holder, meaning AI companies must prove that their uses qualify. The plaintiffs, for their part, assert that the companies copied their works to produce substantially similar products.
Despite being codified in Section 107 of the Copyright Act of 1976, the doctrine's application has historically been mixed, both limiting and enabling the reuse of copyrighted material. In June 2025, for instance, a federal judge issued a bifurcated ruling holding that Anthropic's use of books to train its AI chatbot, Claude, was sufficiently transformative to qualify as fair use, while also allowing plaintiffs to take Anthropic to trial over the company's pirated training library, the exposure that led to Anthropic's $1.5 billion settlement. Conversely, in a 2025 case in which Thomson Reuters alleged that research firm Ross Intelligence used content from its legal search engine to train a competing AI tool, the court rejected the claim that Ross's use of the copyrighted material was sufficiently transformative, noting that the infringing product competed financially with Thomson Reuters. Together, the two cases point to a murky legal environment for both AI companies and authors as they fight over the role of published works in training AI models.
The case
In their complaint, Martin and his co-plaintiffs allege that OpenAI violated copyright in three ways: by using their works to train its large language models, by generating outputs that infringed the plaintiffs' copyrights, and by torrenting books from illegal shadow libraries. Critically, the authors need to prove only one of these claims to receive full damages.
OpenAI argued that ChatGPT's outputs were transformative enough not to violate copyright protections. In his October 27, 2025, ruling, however, U.S. District Judge Sidney Stein rejected that argument at the dismissal stage, noting that the outputs were "substantially similar to plaintiffs' works." To illustrate the point, Stein highlighted two examples tied to Martin's famed series, stating that a reasonable jury could find that ChatGPT's summaries and sequel ideas violated copyright law "by parroting the plot, characters, and themes of the original." Stein also rejected OpenAI's bid to keep the plaintiffs from adding piracy allegations to the docket, allowing the claim over OpenAI's possession of pirated works to proceed regardless of how those works were used. Importantly, the decision did not resolve the question of fair use.
The consequences of these cases extend far beyond any financial restitution, as they stand to set legal precedent defining the status of AI outputs and the libraries that enable them. In doing so, they also strike at the "ask forgiveness, not permission" mindset common in an industry that prioritizes growth above all else, as even companies that once claimed not to train on unlicensed material are now coming under legal fire for the practice. What effect such lawsuits will have on the quality, scale, and applications of AI models, let alone on those whose work was used to train them, remains to be seen. But one thing is certain: ChatGPT should think twice before writing its next Game of Thrones spin-off.