Painting of a seated man and woman wearing overalls. Image: Herndon Dryhurst Studio

You enter a cool, dark room containing a grandiose musical instrument with gilt panelling. Washed over with a sense of calm, you are transported by the sound of distant voices that ebb and flow in harmony. The scene described could have happened centuries ago and, perhaps, it is this very timelessness that makes it oddly affecting. Yet, peer closer, and the “organ” in question is stuffed with whirring GPUs. As it turns out, this haunting choral performance is the work of A.I.

With their latest exhibition, “The Call,” at Serpentine in London’s Kensington Gardens, Berlin-based artist duo Holly Herndon and Mat Dryhurst have set out to disprove some of our assumptions about A.I.: for example, that it only outputs speedy, low-quality pastiches of human efforts, or that it will be used to create impenetrable screen-based works that are only of interest to diehard techies.

Installation view of “The Call” by Holly Herndon and Mat Dryhurst with sub at Serpentine in London, 2024. Photo: © Leon Chew.

Since the arrival of text-to-image generators like DALL-E and Midjourney, A.I. has had a bad rap in the art world. Tapping out prompts to make a picture proved to be a controversial way to claim the title of “artist.” Worse still, some of these outputs looked suspiciously similar to the style of established artists who had never consented to their work being used as training data. Are we one day going to be replaced?

For several years, Herndon and Dryhurst have been brave advocates for A.I. amid the waves of frustration and fear it has inspired. In 2022, their company Spawning released tools like haveibeentrained.com to help artists protect their intellectual property from Big Tech. Their broader aim, however, is to have us reconsider A.I. not as a competitor, but as an exciting invitation to new ways of working. To this end, they are creating their own generative model trained only on images in the public domain. This will then be fine-tuned by individual users using copyrighted data, for which the creator is fairly compensated through subscription fees.

Their show “The Call” is the culmination of months of research that saw Herndon and Dryhurst travel to meet choirs across the U.K. and collect a unique dataset to train their custom A.I. model. It welcomes both the passing viewer, intrigued by its polyphonic sound, and the more curious visitor, who is ready to dive into the wealth of ideas woven throughout the beautifully simple yet ornate installation by Niklas Bildstein Zaar of sub.

At the show’s opening, the artists sat down to delve deeper into “The Call.”

Why did you decide on the approach of working with regional choirs? 

Holly Herndon: The choir is a great metaphor for A.I. In hymnal history, musical languages developed over centuries with countless contributors. The human voice is a beautiful gray area between the individual and the group because you learn language and dialect through mimicking those around you. But you also perform your own voice through the agency of your body. Group vocalization [for] group coordination is one of the earliest human technologies. We’re trying not to see A.I. as an alien other, but rather as something that is from us and is just a very sophisticated way of humans coordinating. It’s essentially us in aggregate.

We’ve always used consensual data, so it was important to have choirs contribute to the dataset rather than just scraping audio from the internet. We wrote our songbook in a flexible way so it can be interpreted differently. This creates a richer and more unique dataset.

Mat Dryhurst: A.I. is inherently participatory, it’s inherently recombinatory, and art is bigger than medium, it’s biography, it’s intention, it’s context.

Holly Herndon and Mat Dryhurst conducting a recording session with London Contemporary Voices in London, 2024. Courtesy: Foreign Body Productions

But how exactly does the A.I. element enhance what the choirs of humans are able to achieve on their own? 

HH: We combined the choral dataset with our own personal archive, so all of this stems from the albums we’ve created over the years. We were able to combine, in a very strange way, our mutated sound with the choirs to develop a new kind of polyphonic music that this model enabled. As a composer, I would never have written this kind of polyphony on paper because some of the decisions are quite strange, but then you hear them… Of course, you always have to audition the material and then you choose the best parts and place them together. For us, it was creating a new musical language.

You’ve said that not only is the output of your A.I. systems the artwork, but every part of the process including creating the dataset and training the model. Could you elaborate on that? 

HH: Most people approaching this subject are thinking about publicly available models like Midjourney or ChatGPT. Often these will give you something averaged. They’re a great averaging of everything on the internet, which is very useful if you want to understand what a bottle looks like. But if you want to define the bottle of your own world, then you need to train a bespoke model on your own particularities and your own dataset.

MD: The dataset is contextualized in a particular way. You can set with precision what went in, how it’s tagged, how it’s weighted, how people interact with it. These are all creative decisions. Right now, it’s maybe in the domain of nerds and super laborious to actually impose authorship on this, but that’s a matter of time. We’re trying to set a precedent. Infinite media is coming, so what kind of tools or approaches exist for an artist to be able to assert authorship over such a vast amount of context?

Installation view of “The Call” by Holly Herndon and Mat Dryhurst with sub at Serpentine in London, 2024. Photo: © Leon Chew.

Much talk around A.I. and art has to do with the use of copyrighted material to train models. Do we need to change how we think about A.I. and intellectual property [I.P.]? 

MD: There are a few angles here. One is that, as an artist, depending on what you’re interested in, there’s a lot you can do to make models your own or make the process of integrating these tools your own. Another is that there’s a lot of work that needs to be done to [redress] the odd I.P. hiccups endemic to this technology, but that’s not a reason to dismiss it. Three: don’t confuse the medium with the art. Don’t think that just because there’s a piece of software that can produce a bunch of cool pictures that that actually compensates for the role of an artist. That does everyone a disservice. It’s a bit of a self-own to concede that point. More broadly, though, we’re of the position that the only way out is through.

HH: But we’ve also advocated for an opt-out option because…

MD: We haven’t advocated for it! We built it!

HH: Exactly! Because we think people should have that option. But we also are building a public A.I. model because we think if you have a public domain model, you can still have a powerful tool without infringing on other people’s property.

MD: The public domain is an opportunity to reach infinity. When your work becomes public domain, it becomes infinitely mutable by anybody. It becomes plastic, which is actually a gorgeous concept and much older than I.P. We invented the idea of an individual—and of an individual owning something—so there’s no reason that, when it comes to I.P., there isn’t a lot of room for imagination. Nobody’s in charge. This is a new thing and people are confused and figuring it out. This is a time for ideas.

It makes more sense to conceive of a model as a collective accomplishment that could distribute collective bounties or collective profits. That’s very native to this technology. It’s only because we are dealing with old [I.P.] law that we’re running into these issues.

HH: Information wants to be free and information wants to be expensive, and people always forget the second half of the saying.

Holly Herndon & Mat Dryhurst, xhairymutantx, Embedding Study 2, produced for the Whitney Biennial 2024 in New York. Photo: Ashley Reese.

So, what are the stakes here? What is the ideal outcome and what if we fail to adapt to A.I.? 

HH: I love the idea of people coming together and collaborating to create beautiful datasets. That’s my utopian vision.

MD: Artists are people who create distinction in the time in which they live and I don’t think that changes now. You can create distinction in this time by scratching something on the wall and rejecting A.I. entirely and that’s legitimate too. But art is beyond any medium.

HH: But even if you scratch the wall, you are scratching the wall within the context of an economy in a world that is moving towards A.I. You can never escape it.

MD: I do spend a lot of time in policy discussions, and I’m trying to be as transparent as possible. Those who profess that this is not a big deal, that A.I. companies are a fad and it will all go away? You’re not best served listening to them. My feeling is this is bigger than the internet. That’s not to frighten people, because I think art is far more resilient than people say, but I do have a bee in my bonnet about the suggestion that this stuff is going to disappear. We haven’t seen anything yet. My advice is to take it seriously.

The topic of A.I. and intellectual property law is covered in greater detail in my book A.I. and the Art Market (Lund Humphries), now available for pre-order. “Holly Herndon and Mat Dryhurst: The Call” is on view at Serpentine North Gallery in London until February 2, 2025.