Hito Steyerl on Why NFTs and A.I. Image Generators Are Really Just ‘Onboarding Tools’ for Tech Conglomerates
"This is the Future" has just opened at the Portland Art Museum.
"This is the Future" has just opened at the Portland Art Museum.
Kate Brown
It’s hard to keep track of all the overlapping technological, environmental, and political crises to worry about in 2023. As a guide to such a moment, few figures in art are better equipped than Hito Steyerl, whose imaginative and unsparing work looks directly at all these spheres, and maps how they connect.
Despite the complexity of her subject matter and research-intensive process, the German filmmaker’s works are enthralling and often manifest as highly ambitious, immersive architectural environments. No wonder, then, that her work has gained such a global following, with her largest-ever retrospective, “I Will Survive,” ending a European tour at the Stedelijk Museum in Amsterdam last year.
On the occasion of her new exhibition, “This is the Future,” at the Portland Art Museum in Oregon, Artnet News Europe editor Kate Brown spoke to Steyerl about the implications of artificial intelligence, the metaverse, crypto, and an increasingly imperiled natural world for humanity.
Note: This interview has been edited for clarity. To hear the full audio version, tune into the episode on the Art Angle podcast.
I understand that you do not readily describe yourself as an artist. Why is that?
There is no real reason to do it. I don’t mind if someone wants to call me an artist, but this wouldn’t be the first description that comes to mind when it comes to what I’m doing.
“Filmmaker” is obviously more appropriate—documentary is a foundational genre for you. Could you speak a little bit about your relationship to filmmaking?
That’s my foundational practice and it’s never fully gone away. This is where I start, from an intimate relation to film history. I just don’t have the same relation to art history.
How is it then that your work ended up in the art world?
The documentary film industry had turned mainly to travelogues and food porn. If you wanted to do content-based documentary or even socially inclined documentary, the industry just wasn’t there. So somehow—I think this was more or less post-Documenta 10 with Catherine David—there was some renewed interest in documentary forms in the art field, and I sort of slipped in with that.
What is your relationship to ideas of evidence and truth when you’re also working with elements of poetry, surrealism, and sci-fi in your films? I am curious about your thoughts as to how truth functions in the context of documentary in 2023—in a world where there is so much flattening out of any sense of a single truth about anything.
I don’t think truth is something that can be fully captured by any form of media. Hannah Arendt once said that you could capture “moments of truth.” So this is more or less what I’m hoping to weave in with other elements. But the full truth is never accessible to any documentary rendition, because it just doesn’t have enough dimensions and perspectives.
There’s also the “fake news” angle, which states that nothing at all is true, or all things are equally untrue, which is not factual either. You have things which are closer to reality or to things that happened historically, and you have things that have nothing to do with [reality] whatsoever, and there’s a huge difference between the two.
Your exhibition “This is the Future” opened at the Portland Art Museum last week. Alongside a work by the same name, there’s also a video installation called Power Plants. In terms of truth, both works are speculative, presenting technologies that don’t exist yet. I was watching them in Berlin in 2019 after they were presented at the Venice Biennale, a pre-pandemic moment that feels long ago. It’s interesting that they’re being shown again because there’s something uncanny about the A.I. that you present in these technologies, especially given that in the past six months we’ve seen five different artificial intelligence bots come forward, from Midjourney to ChatGPT. I wonder how you are re-reading your work right now given these developments.
I’m mainly rereading it through the lens of the earthquake in Turkey and Syria, because the story I’m telling partly takes place in a Turkish prison, and everyone who worked on that film has been massively affected by the earthquake right now.
On the other hand, there may be a reason why the work is still being shown even though the technology is outdated. This particular algorithm, which is a next-frame prediction algorithm, was so difficult to work with, and such a pain in the ass, that no one really deployed it. There isn’t a lot of imagery in the art field that used this specific visual effect. It isn’t as ubiquitous as, let’s say, the DALL-E aesthetic or certain types of StyleGAN aesthetics, which were very much used—in the case of DALL-E to the point of nausea. I think that’s a style that’s almost foreclosed to artists because it’s just absolutely overused.
I think what is also quite interesting about OpenAI is the sudden public panic that has ensued since it went mainstream. I know artists like yourself have been thinking about these technologies for a long time. It really does seem like in the past two months it all became a mainstream concern. I wonder what you make of the anxiety.
It’s a great PR move by the big corporations. The more people talk and obsess over it, the more the corporations profit. For me, these renderings—I call them “statistical renderings”—they are the NFTs of 2022, right?
In 2021, we had NFTs. In 2022, we have statistical renderings. [These companies] onboard people into new technological environments; with NFTs, people learned how to use crypto wallets, ledgers, and MetaMask, and learned all this jargon. With the renderings, we have basically the same phenomenon.
They are onboarding tools into these huge cloud infrastructures that companies like Microsoft are now rolling out, backed by these large-scale computing facilities like Azure, for example. Companies try to establish some kind of quasi-monopoly over these services and try to draft people to basically buy into their services or become dependent on them. That’s the stage we’re at. The renderings are basically the sprinklings over the cake of technological dependency.
It’s operating with this myth that it’s participatory. I can just log into ChatGPT and feel like I have agency in this technological moment. But one wonders how quickly this will become futile when they close the doors again, and it becomes obviously hegemonic, and content is just dictated to us. I know you’ve spoken about this in other instances, about the idea of a shifting public in the face of these emergent machines.
The public is being captured. Again, it was already captured in Web 2.0 within these app silos, on social media. And Web3, which is now being realized through all these machine learning applications, will basically create different silos, which are more software-based. So you won’t be able to get any edition of the Adobe suite, let’s say, without integrated machine learning services that you have to pay extra rent for. Basically, you will be forced to subscribe to a lot of different services, which you actually really don’t need, but you have to pay for—I think that’s the business model more or less.
Have you been exploring them for your own work?
I’ve played around with them, but then I started asking myself, “Wow, what the fuck am I actually doing here? Do I really want these renderings?” Most of them look quite crappy. So what I’m actually doing is thinking a lot about them and trying to describe the images themselves. And thinking about the beginning of statistics as being rooted in eugenics and the obsession with breeding and survival of the fittest.
I wonder how imagery as such will change when it becomes thoroughly statistical instead of representational, when you don’t need any more outside input. You just need basically all your data, which are somehow organized in a statistical, latent space. So how does the relation to reality change? How does the relation to truth change? How are those tools also tied in with a huge infrastructure, which produces a lot of carbon emissions and actively heats the climate? All of these are questions I’m trying to think through now.
It’s interesting to think about a point where there’s no more outer world needing to be inputted. To come back to something you said earlier, in talking about the earthquake, it made me think of something you said in a recent interview about the over-emphasis on the online sphere. The fact is that being online is not a given; it is not going to be everyone’s reality. Could you expand?
Relating to the earlier part of your question—no, these renderings do not relate to reality. They relate to the totality of crap online. So that’s basically their field of reference, right? Just scrape everything online and that’s your new reality. And that’s the field of reference for these statistical renderings.
And then I’ve seen in the past months in many different places the reality of power cuts. We cannot take energy supply for granted, nor can we take the internet for granted. There are many different situations in which these technologies fail or are blocked, for example, by autocracies, in riots, or by the fact that there is devastation of some kind.
A reality in which internet is not accessible is already here. You know, even in quite mundane situations, you can’t imagine how many times conversations with people in the U.S. failed because there had been a weather event and the internet had gone down. There are so many reasons why the digital environment we are all trained to take for granted as our immediate reality might suddenly no longer be available.
To enjoy the rest of this interview, you can tune into it on the Art Angle.