Amid Multiple Lawsuits, OpenAI Says Creating A.I. Would Be ‘Impossible’ Without Copyrighted Material
The A.I. developer is starting to hit back at its critics in the hopes of defending its business model.
Jo Lawson-Tancred
OpenAI, the organization behind ChatGPT and text-to-image generator DALL-E, has made a controversial plea to be given legal access to copyrighted material in the service of developing A.I. The company has recently been inundated with copyright infringement lawsuits from the New York Times as well as authors like George R.R. Martin, Jodi Picoult, and Jonathan Franzen.
“Because copyright today covers virtually every sort of human expression—including blog posts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading A.I. models without using copyrighted materials,” the company claimed to the U.K.’s House of Lords. “Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide A.I. systems that meet the needs of today’s citizens.”
In order to “train” A.I. systems, companies like OpenAI have scraped vast amounts of data, including copyrighted material, from the internet in a process known as “text and data mining.” Towards the end of last year, a document circulated online that outlined how the company Midjourney had built a database of artists to train its own text-to-image generator. A lengthy list of creators whose work had been used for this purpose included major contemporary artists like Anish Kapoor, Gerhard Richter, Damien Hirst, Banksy, and Yayoi Kusama, among many others.
In November, the list was submitted as evidence in an ongoing class action lawsuit against Midjourney, DeviantArt, and Stability A.I., which was first filed last January. On October 30, a federal judge sided with the A.I. companies, but the artists’ lawyers have since doubled down on their efforts, filing an amended complaint and adding seven new plaintiffs.
In the face of legal action, OpenAI has sought out partnerships with potential data providers, most notably striking a content licensing deal with the German media company Axel Springer. It remains to be seen whether a similar deal could be struck with an individual artist or a gallery. OpenAI has also touted a new opt-out option for creators who don’t want their work included in future datasets, although this has been met with some skepticism. More recently, OpenAI has begun speaking out in its own defense, yesterday publishing a blog post arguing that the NYT lawsuit is “without merit.”
News of OpenAI’s claim that the development of A.I. would be “impossible” without access to copyrighted material has enraged online commentators. “Rough Translation: We won’t get fabulously rich if you don’t let us steal, so please don’t make stealing a crime!,” wrote renowned A.I. skeptic Gary Marcus. “Sure Netflix might pay billions a year in licensing fees, but *we* shouldn’t have to!”
The claim was submitted as written evidence to the House of Lords’ Communications and Digital Committee inquiry into large language models, the highly sophisticated but data-hungry neural networks that power tools like DALL-E and ChatGPT. In early 2023, the U.K. government proposed a new copyright exemption that would have allowed A.I. developers to freely use copyrighted material for training purposes, but it retracted the proposal in February after facing considerable backlash from the creative industries. In the U.K., copyrighted material can currently be mined only for non-commercial research purposes. Some believe that more lenient laws would attract A.I. developers and support the government’s plans to make the U.K. a global A.I. superpower by 2030.
Meanwhile, the U.S. venture capital firm Andreessen Horowitz, known for backing Big Tech, also warned U.K. ministers that “over-regulation could lead to the West falling behind in areas such as cybersecurity, intelligence, and warfare.”
“If the U.K. leads the West by standing at the forefront of A.I., we can drive global norms that prioritize these democratic values in the emerging A.I.-driven world,” it added in its own submission to the inquiry.