Image created with A.I. using DreamUp. Courtesy of DeviantArt DreamUp.

Since launching in 2000, DeviantArt has become one of the internet’s most popular platforms for artists to upload and share work. Then, in a single day, the company did much to destroy two decades’ worth of goodwill and community building.

The uproar arrived with the rollout of DreamUp, an A.I. image generation tool built on Stable Diffusion that DeviantArt incorporated into its platform on November 9. Upon its release, every single piece of art on the platform was, by default, available to be scraped into A.I. datasets. Deviants, as the platform calls its users, were furious, a sentiment further aggravated by the requirement that they opt out of the A.I. datasets one work at a time, with the platform claiming more expedient options were too complicated and technical to implement. Many of DeviantArt's more than 60 million registered members have thousands of works uploaded.

Less than 12 hours later, criticism forced a backtrack, with the company announcing in a statement, “We heard community feedback and now all deviations are automatically labeled as not authorized for use in A.I. datasets.” Many Deviants, however, were neither convinced nor satisfied by the U-turn.

“DeviantArt’s release of DreamUp was completely tone-deaf and smells of desperation to stay relevant in the machine-learning gold rush,” Steven Zapata, a professional illustrator and DeviantArt user, told Artnet News. “To launch their ‘protections’ on an opt-in basis was clearly misaligned with the values of their community, as evidenced by the immediate backlash.”

The platform has long billed itself as a conscientious protector of artists. Last year, it released Protect Protocol, a tool that scans nine blockchains for NFT art infringements and theft; DreamUp itself was labeled “safe and fair,” partly on account of the “noimageai” directives it offers users as they upload work. The perception of hypocrisy made matters worse.

“Man At a Crossroads,” courtesy of Steven Zapata.

“Ignoring the ethical concerns around A.I. and not waiting for the legal landscape to settle sent a message that they didn’t care what was good for artists,” Logan Preshaw, an Australia-based visual development artist, told Artnet News. “With their NFT theft protection, they seemed to say, ‘We are with you.’ But here, they seem to be saying, ‘We are more than happy to personally benefit from the exploitation of your work.’”

Even with DeviantArt changing the terms of how DreamUp operates, many creators argue the problem begins with Stable Diffusion, a model trained without permission on the work of many artists from across the internet. As Ian Fey, an artist who deactivated his DeviantArt account out of frustration, told Artnet News over email, “Art was already scraped into it. It’s already inherently unethical and non-consensual, and they aim to profit off it.”

A further, and perhaps unsolvable, problem for DeviantArt rests with its inability to force other machine-learning companies to honor the safeguards it has created for Deviants. Organizations such as Stability AI, LAION, and Google operate on a generous interpretation of text-and-data-mining exemptions, which leaves DeviantArt unable to truly protect its creators without launching litigation.

“I am concerned about the potential for [A.I. generation tools] to negatively impact the artist community,” Jeff Gluck, an IP lawyer at Gluck Law Firm, told Artnet News. “Eventually, we will see new legislation around the use of A.I. Platforms will have to navigate a balance between enterprise and real creators who view it as a threat to their livelihoods and intellectual property protection.”

For the countless creators impacted by DreamUp, such a legal battle cannot come soon enough.