Don’t Believe the ‘Resistance-Is-Futile’ Hype When It Comes to A.I. and Art

Our columnist looks at how a recent, precedent-setting legal case might lead us to rethink narratives about tech's world-devouring promises.

A multiple-exposure photo illustration of GPT-4 and DALL-E developer OpenAI's logo transposed against screens depicting the company's product. (Photo by Jakub Porzycki/NurPhoto via Getty Images)

Every week, Artnet News brings you The Gray Market. The column decodes important stories from the previous week—and offers unparalleled insight into the inner workings of the art industry in the process.

This week, trying not to make the same mistake twice…
 

IS THIS THE END? 

If you’ve paid even passing attention to the pivot point between art and tech since Covid, then you know that we’re now being told for the second time in two years that we are at the advent of a technology about to change everything about how we live, work, and interact with images. From early 2021 until roughly mid-2022, crypto (including NFTs) was the supposed hinge innovation. But as crypto prices crashed, NFTs withered into niche collectibles, and FTX went down in pixelated flames, artificial intelligence—meaning text-to-image generators and advanced chatbots—took over the public consciousness and the art-tech discourse.

As A.I. sucks up more and more of the oxygen in the media ecosystem every day, however, I find myself increasingly frustrated by the art world’s (and the wider world’s) willingness to repeat the mistakes we made just two years ago, the last time we were presented with a supposed society-reshaping innovation. A couple of key interactions between art, policy, and the law, though, help clarify the core problem with the A.I. discourse and offer a window onto our best hopes for a solution.

Let’s start with a salvo from the opposing side of the A.I. turf war. Last Wednesday, the nonprofit Future of Life Institute published an open letter signed by more than 1,000 figures in tech, science, politics, and other fields calling for an immediate pause of at least six months on the development of A.I. systems more powerful than any released to date, by whatever means necessary. “This pause should be public and verifiable, and include all key actors,” the letter states. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

The letter was triggered by a shared belief that “recent months have seen A.I. labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.” 

The original signers of the letter include Elon Musk, who co-founded (then later exited) DALL-E and GPT-4 creator OpenAI; Apple cofounder Steve Wozniak; and, in the clearest expression of how dire this cohort believes the consequences could be, the scientist presiding over the organization that updates the Doomsday Clock.  

On one hand, the signatories have a point. Tech companies, led by Microsoft-backed OpenAI, have been pumping out ever more advanced image generators and algorithmic chatbots at a furious clip, and the technology’s improvement from version to version has been as staggering as it has been troubling. As a visual reference, here’s a tweet comparing the outputs for the identical prompt “tasty burgers” in Midjourney versions 2–5:  

Meanwhile, on the chatbot front, the recently released GPT-4 scores in the 90th percentile on the bar exam. BuzzFeed, in a complete reversal of what CEO Jonah Peretti pledged to do with the technology, recently began publishing travel articles written almost entirely by A.I., with some light editing by salespeople and product managers (not journalists). Chatbots from competing companies have started offering up each other’s errors and “hallucinations” to users as facts. And in my favorite example, an algorithm recently managed to trick a human into solving a CAPTCHA on its behalf by posing as a blind person, so that it could access systems specifically designed not to be accessed by machines.

So yeah, things are getting spooky.  

This is even more disturbing because the colossal business upside of winning the race to A.I. dominance is incentivizing more and more human decision-makers to jettison their own handpicked oversight professionals and press forward with maximum ruthlessness at exactly the time when doing so has become more dangerous than ever. In recent weeks, Microsoft, Meta, Google, Amazon, and Twitter have all sacked members of their divisions focused on so-called “responsible A.I.” (also known as ethical A.I.), the employees tasked with alerting their employers when, why, and how to curb their products for the good of humanity.

For a sense of the commercial stakes, Microsoft told investors on a call announcing the integration of OpenAI’s chatbot tech into its Bing search engine that every one percent of the search market it could reclaim from Google would be worth an additional $2 billion in annual revenue, per Platformer. That’s just the value of one percent for one use case of one product. Try to imagine what the winners stand to gain from a greater share of all the other potential use cases, too.

Microsoft’s responsible A.I. cuts are particularly noteworthy for this column. Last year, the company’s “ethics and society” department made multiple recommendations to protect human artists’ rights by restricting the capabilities of the Bing Image Creator, a text-to-image generator built on top of DALL-E. The recommendations included denying the use of any living artist’s name as a prompt and/or responding to such prompts by serving up an actual marketplace to buy the artist’s work instead. The team also red-flagged OpenAI’s decision to change its terms of service to grant users “full ownership rights” over any images they generated via DALL-E. (As of my writing, the bounds of copyright in A.I.-assisted or A.I.-generated works are still very much up for debate at the U.S. Copyright Office and in the courts.) 

Neither strategy was implemented in Bing Image Creator, employees told the Verge, and the product went live in test markets in October 2022. DALL-E’s terms of service supposedly still grant users full ownership rights over their creations. Then, this January, Microsoft dissolved its ethics and society division.

(Microsoft said Bing Image Creator was modified before launch to address concerns raised by the departed team members, that the elimination of the ethics department amounted to fewer than 10 job cuts, and that it still employs hundreds of staffers focused on responsible A.I.) 

Reading all this makes the Future of Life Institute’s letter sound awfully reasonable—and also awfully unlikely to spur the change its signatories want to see in the world. Humanity doesn’t have a great track record of bowing to prudence when billions of dollars and incredible power are up for grabs. 

Combine that dynamic with the increasingly astonishing capabilities of this technology, and it feels like A.I. has brought us to the type of actual wedge moment in history that blockchain never could. Worse, listening to the dominant narrative makes it feel as if we’re totally unprepared to deal with a technology that we quite literally cannot understand, even if all we care about is art and image-making.

But what if we’re wrong about all of this? Or more to the point, what if the excitement and peril around A.I. are crowding out our best options for what to do about it? I actually think there’s a reasonable argument that this is true—and that, by buying into the dominant narrative, we’re once again playing right into the hands of the people who stand to benefit most from a world re-molded around a shiny new technology.

Installation view of Hito Steyerl’s Drill at the Park Avenue Armory. Image: Ben Davis.

CONTROL THE NARRATIVE, CONTROL THE MARKETPLACE 

One of the strangest aspects of the A.I. conversation is this: in a 2022 survey of A.I. experts, the median estimate of the chance that poor handling of the technology could lead to “human extinction or similarly permanent and severe disempowerment of the human species” came in at 10 percent. That’s higher than the share of pedestrian traffic fatalities linked to speeding in 2020 (8.6 percent). And yet, as Ezra Klein pointed out, the experts estimating this not-so-small chance that A.I. also stands for “Armageddon Imminent” are in many cases the same experts dedicating their lives to developing the technology as fast as possible!

I don’t doubt that many, if not most, of the respondents to that survey were being sincere. But it’s also vital to recognize how self-serving the narrative of apocalyptic A.I. is for the companies and people directly involved in creating, developing, marketing, and selling it to everyone else. Asked by my colleague Kate Brown about the public release of A.I. tools like DALL-E and GPT-4, the artist Hito Steyerl put it this way: “It’s a great P.R. move by the big corporations. The more people talk and obsess over it, the more the corporations profit.”

What’s more, the doomsaying-for-dollars strategy comes straight out of the same raggedy playbook that has been used by practically every other technological innovator in the history of civilization. 

In September 2021, journalist Joseph Bernstein went deep on a key fallacy undermining the now-rampant debate about fake news, misinformation, and disinformation channeled through social media: basically, there is still almost no legitimate evidence that Facebook, Twitter, YouTube, or any other platform is actually very effective at changing users’ minds about anything. This is partly because analyzing influence is still what he calls a “soft science” at best, and at worst, peddling it may be just as much an exercise in sales as any corporate or political campaign that such research could be used to shape.

On the corporate front, Tim Hwang, an attorney and former policy czar at Google, revealed in his book Subprime Attention Crisis just how difficult online attention is to quantify, direct, and therefore monetize; in it, he states that most online ads only capture the attention of existing customers, making much of the business “an expensive way of attracting users who would have purchased anyway.” On the darker side of the influence-peddling moon, Bernstein notes that there are still no standardized or even remotely rigorous definitions of the most basic terms in the field of study, ranging from “misinformation” and “disinformation,” to “clickbait” and “conspiracy theory,” with the upshot being that their practical meanings all too often seem to distill down to anything the evaluator disagrees with.  

So, it might not surprise you to hear that the promises and fears around social media’s near-hypnotic influence today mirror the promises and fears around banner advertising on websites before it, and T.V. spots before that, and radio before that, and on and on backward through time, probably all the way to the first town crier.  

But to hear that even the studies we do have today actually show that, say, the Cambridge Analytica scandal had essentially no measurable impact on Facebook users, let alone on the outcome of the 2016 presidential election? That wouldn’t just be surprising. It would also be downright disastrous to Facebook’s central business model, i.e., the premise that buying ads on its platform is the magical shepherd’s staff that will guide consumers wherever an advertiser wants them to go.

Which probably explains why, over the course of a single year, Mark Zuckerberg’s stance changed from it being “a pretty crazy idea” that Facebook pushed Donald Trump over the finish line to the presidency, to a sober pledge that the company would undertake serious measures to reform the toxic information ecosystem allegedly wreaking havoc on its platform and the populace.  

Facebook CEO Mark Zuckerberg arrives to testify before a combined Senate Judiciary and Commerce committee hearing in the Hart Senate Office Building on Capitol Hill, April 10, 2018, in Washington, D.C. Photo by Chip Somodevilla/Getty Images.

Last week, tech journalist Max Read invoked Bernstein’s piece to warn us that the same thing is happening again with the dominant narrative about the new generation of artificial intelligence products: 

By the same token, if you are trying to sell A.I. systems (or secure funding for research), it’s better to predict total imminent A.I. apocalypse than it is to shrug your shoulders and say you don’t really know what effects A.I. will have on the world, but that those effects will probably be complicated and inconclusive, occur over a long timeline, and depend to a large degree on social, political, and economic conditions out of any one A.I. company’s control. 

Read goes on to imply that the biggest problem here is misdirection: by controlling the narrative around A.I. (let’s summarize it as “god-like power and mystery = doomsday potential = hair-on-fire urgency to invest time, attention, and money into perfecting this tech to prevent disaster”), its developers increase the public’s interest in their products and simultaneously escape scrutiny of other major imbalances that have positioned them to succeed: imbalances with more achievable but less sexy solutions, such as fixing a regressive U.S. tax structure that has allowed wealthy individuals and corporations to pour obscene fortunes into speculative investments like A.I. for decades.

In fact, Bernstein’s full accounting of the context conveniently kept out of sight by the bogeyman of social-media misinformation also helps clarify just how many longstanding but addressable external factors have to align before any technology can attain an aura of disruption:  

In the United States, that context includes an idiosyncratic electoral process and a two-party system that has asymmetrically polarized toward a nativist, rhetorically anti-elite right wing. It also includes a libertarian social ethic, a “paranoid style,” an “indigenous American berserk,” a deeply irresponsible national broadcast media, disappearing local news, an entertainment industry that glorifies violence, a bloated military, massive income inequality, a history of brutal and intractable racism that has time and again shattered class consciousness, conspiratorial habits of mind, and themes of world-historical declension and redemption. The specific American situation was creating specific kinds of people long before the advent of tech platforms. 

None of this is to say that A.I. may not actually be a world-changing technology, or that its impacts may not be felt on a societal level much faster than those of other, earlier technologies. It is to say, however, that the dominant narrative around A.I.—both inside and outside the arts—tends to treat these algorithms, first, as if they (and they alone) are the force we must contend with to right civilization; second, as if our only hope of harnessing them for good is to scrap our existing civic, social, aesthetic, cultural, legal, and regulatory priors so that we can remake society from square one in response to the specific, allegedly novel traits of these man-made yet alien intelligences; and third, as if the technology is already speeding so far ahead of us that we might as well just give up on trying to counteract it and hope for the best.

I think this line of reasoning is reductive at best and psychologically harmful at worst. Reinforcing my feeling is a recent legal breakthrough concerning NFTs, which, of course, were supposed to upend art production, consumption, and sales much as we’re now being told A.I. image generators will.

Kevin McCoy’s Quantum (2014)–the first artwork ever minted–at Sotheby’s. (Photo by Tristan Fewings/Getty Images for Sotheby’s)

‘QUANTUM’ COMPUTING 

In March, a U.S. district court dismissed a lawsuit brought against Sotheby’s and the artist Kevin McCoy, generally regarded as the co-inventor of the NFT, by an anonymous plaintiff asserting ownership of McCoy’s work Quantum (2014). Quantum, billed as the first-ever NFT, sold for nearly $1.5 million in Sotheby’s “Natively Digital” sale in 2021. The plaintiff argued that they had exploited a programming-level loophole in an early blockchain to nab the rights to the work, which McCoy had allegedly abandoned through negligence, poor practice, or both.

The judge’s precedent-setting opinion, however, stated that the plaintiff’s case “demonstrated nothing more than an attempt to exploit open questions of ownership in the still-developing NFT field to lay claim to the profits of a legitimate artist and creator.” Crucially, though, he didn’t reach this conclusion because of some wholly new, blockchain-specific reference point. He did it by applying long-established precedents about analog property rights to the nascent use case of crypto collectibles. 

“One of the takeaways of the Quantum case is that ‘code is not law’–law is law. Basic principles of law shouldn’t change because of a particular code at issue,” said William Charron, co-chair of Pryor Cashman’s art law practice and the leader of McCoy’s litigation team (along with partners Robert deBrauwere and Megan Noh).  

This principle offers a corrective to the default perspective almost invariably held by the developers of potent new technologies, whether they happen to be based on the blockchain, machine learning, or earlier, now-familiar advances like the world wide web and e-commerce. For these actors, controlling the narrative is a means to dual ends—namely, placing their products at the center of culture while also declaring them so novel that the old rules can’t possibly be used to rein them in.  

But it’s nonsense almost every time. If it weren’t, we would basically have to start society over every time some new invention came out of the gate. As Charron put it: “I don’t agree that the law will never be able to catch up with our changing technology. I think the law will always be two steps ahead of the technology. People just need to respect its presence and apply it to the technology under review.” 

Again, this framework doesn’t invalidate the possibility that many U.S. regulators aren’t yet mentally equipped to wrestle with technology as advanced and arcane as A.I. (If you watched the clips of the recent congressional hearings on TikTok, it’s hard to imagine that they are currently mentally equipped to regulate electric toothbrushes.) Still, it does invalidate the dominant narrative’s implication that we, collectively, have no choice but to start from zero when evaluating how A.I. fits into life in 2023. 

Instead, what’s so hard about protecting art, culture, and society in an age of A.I. is pinning down the core principles and priorities that all the noise around the technology distracts us from. How much do we value human creativity in the first place, and why? To what extent should we incentivize people (especially young people) to pursue careers in the arts? How should the labor market and the commercial market be restructured or regulated to enable that outcome?

My opinion is that, here in the spring of 2023, we’re overdue for a meaningful, critical, collective rethink about how to answer those questions. That would have been true even without the advent of high-fidelity text-to-image generators, hyper-literate chatbots, and other A.I. tools. Their appearance does sharpen the urgency to re-evaluate our priors. But it’s not because A.I. is so completely new; it’s because A.I. is in so many ways the next logical step along the path we’ve been traveling for too long without looking up to ask whether we’ve lost our way.  

That’s all for this week. ‘Til next time, remember: we have to be active participants in our own rescue.  

