A.I. Art Experts Dismiss Fears That the Technology Will Destroy Humanity. But It Could Make Culture Really Boring
Four A.I. experts weigh in on the future of creativity in the age of artificial intelligence.
Experts in the use of artificial intelligence in the arts believe that fears of human extinction at the hands of A.I. may be “inflated.” But they do worry that developments in the technology could lead to cultural stagnation and other risks.
Artnet News interviewed four prominent A.I. researchers to get their views after a statement was published by the Center for A.I. Safety, in which dozens of scientists and public figures called for global governments and institutions to prioritize “mitigating the risk of extinction from A.I.”
“A lot of the hype surrounding fears that ‘the robots are coming to kill us all’ is inflated and not particularly nuanced,” said Tina Tallon, a researcher and music composer who serves as an assistant professor of A.I. and the arts at the University of Florida.
She said scientists, politicians, and the media have only been thinking about the effects of A.I. in recent decades, but that artists have been considering the impacts of human automation since ancient times. Tales of human automata are present within Judaism and in Greek and Roman mythology, among other literature throughout history.
Because artists have been ahead of the curve, policymakers and developers should turn to artists to help “imagine some of the ways out of the problems we’ll be faced with,” she said.
Gathering data, manipulating opinion, and the question of sentience
Ahmed Elgammal, director of the Art and A.I. lab at Rutgers University in New Jersey and the founder of Playform A.I., one of the earliest generative A.I. platforms, admitted that A.I. may entail broader dangers, for instance through the ability of such systems to gather information and manipulate opinion.
“The danger that really will happen, which has started happening now, is when these systems can execute things on the internet and run codes. That’s where it becomes hard and really becomes dangerous,” Elgammal said.
Elgammal added that A.I. systems “can easily render false information.”
“That potential harm, which can be seen in the last few years in social media and ‘fake news,’ can affect Western democracies — and now these systems can even create blogs and write things that become major threats,” Elgammal said.
Heidi Boisvert, a multi-disciplinary artist and academic researcher specializing in the neurobiological and socio-cultural effects of media and technology, explained that the idea of A.I. emerged as far back as 1947, as an outgrowth of discussions on cybernetics, a theory of communication and control in animals and machines.
“The potential for A.I. to take over cognitive faculties and various ways to personalize media and world views based on their own biological signatures, that is dangerous, essentially stripping us of human autonomy,” she said.
A related question is that of machine sentience: Will A.I. ever be able to think and feel like a human? Audrey Kim, the curator of the Misalignment Museum in San Francisco, said there’s “so much disagreement right now about sentience” and what humanity means.
Kim gave the example of her dog, which she has trained to use buttons at home to communicate with her, such as telling her when he needs food or wants to go out.
“I view consciousness as a gradient and I think there are ways to increase consciousness and also to decrease it. Obviously, someone in a coma is in a different gradient of consciousness than someone in a conversation with us right now,” Kim said.
“It’s fascinating to see this dog, I’m sort of maximizing his consciousness by giving him the means to explore the world … When we talk about sentience, or what it means to not be a robot, there are so many core existential questions even before we talk about introducing A.I. to that.”
Elgammal believes A.I. lacks consciousness and likely always will. Instead he worries about how people might manipulate others into believing it does.
“Scientists will be able to develop a mirage of consciousness but not real consciousness. Future A.I. systems will be very good at giving you the impression of having consciousness, but they will never have it. This is the good news,” Elgammal said.
“But with this impression of consciousness comes the damage that these tools can be used to manipulate people because they will think that it’s conscious and has the ability to create text or visual media.”
Boisvert speculated that the way “powerbrokers” could go about doing this is by running the “massive amounts of data” being collected about people daily through A.I. models to target them.
“The data is the gold and how we’re using the data to control human behavior, whether it’s Spotify preferences or something as devious as getting people to vote in a particular way, but changing people’s world views is one of the greatest potential dangers,” Boisvert said.
Kim noted that the societal benefits of A.I. might outweigh perceived threats. She cited its use in “identifying respiratory signals over the phone,” which can then be used to dispatch first responders to help people.
Boisvert cautioned that “there is a lot of hype about where we are in terms of generalized A.I.,” and that development is “not that far along.”
“We still have these radicalized and sci-fi views of humanity being completely extinct. There’s a lot of fearmongering happening. But there are many potential positive uses of the technology.”
Cultural threats and the changing labor landscape
In discussing how A.I. will impact the production, consumption, and appreciation of culture, each of the experts interviewed by Artnet News expressed a cautious optimism about the use of A.I. as a tool for artists.
“One thought is that A.I. is going to create a feedback loop where the models will be trained on data sets that include art that has been filtered for increasing human engagement,” said Tallon, adding that such feedback loops could limit human expression because A.I. models would filter out weird, unpopular, or experimental art.
An additional issue with generative A.I., Tallon said, is that new models are now being trained on A.I.-generated materials which creates another feedback loop. Kim called it “inbreeding of art.”
Elgammal, stressing the same concern, suggested that human expression would then stagnate because A.I. will keep generating the same types of images.
“If you go to Midjourney today, the things it creates are amazing – but they are all the same. You can tell it’s something made by Midjourney. That’s stagnation,” Elgammal said. “Infinite ideas every day come from human minds. A.I. on the other hand must recycle ideas and basically recycle in a way that looks new but in fact it is not. That’s how A.I., by construction, is supposed to work.”
Policymakers and A.I. developers will need to think about how these models exist within capitalist structures, Tallon said, adding that “it will be interesting to see how A.I. adapts to those temporal dynamics of trendsetting and creation and consumption.”
Tallon gave the example of a tool that was recently created and promoted for songwriters and musicians that has a ChatGPT-like interface, which she said would only allow users to do “very specific types of manipulations.”
“The people creating these tools are not artists. They might be amateur artists or musicians, but their livelihood does not rely on creating art. At the end of the day, many of them are not familiar with the reality of what artists do on a daily basis,” Tallon said.
In applying the new tool to their process, songwriters cannot use pitches outside the Western 12-tone scale or uncommon time signatures.
Tallon suggested that, though such tools do “help facilitate the process for those who do not have the same training or skill sets,” their limitations are a detriment to artistic development in the commercial space.
Kim, who called herself both “pro-human” and very much a “tech optimist,” appeared to dismiss Tallon’s concerns about monopolies in A.I. technology and the effects on cultural stagnation.
“There’s a new A.I. that is pop-y because it was trained on all this pop stuff, that’s fine. So then there’s this opportunity for an emo-trained model. I don’t think this is a zero-sum scarcity thing,” Kim said.
“Just because there’s an A.I. model that is specifically trained on this type of art or music doesn’t mean there can’t be an A.I. model that’s trained on another type. For me it’s not a zero-sum question.”
Elgammal said the biggest questions and threats in the artistic landscape have to do with changes in creative labor dynamics – potentially devaluing expression in fields such as advertising and graphic design.
“At the bottom line, these systems can be very helpful as partners to humans and create tools for humans and create ingenuity. A.I. can generate images but A.I. cannot generate art. Art is a human thing. Artists make art,” he said.
Tallon fears that the use of A.I. in the commercial arts will make human practitioners less employable.
“It used to be you could get a job with a graphic design firm or ad firm and you’re set. For visual artists, that was a stable source of employment. I don’t think that’s necessarily the case now and I do think a lot of that work is going to be delegated to A.I.”
However, Tallon remained optimistic that, as A.I. becomes more common, humans will begin to “really value” art made by other humans – seeking to commission work because they value the process of collaborating with another human.
“I think those kinds of connections will always be at the heart of what we do as artists,” Tallon said.
Kim added that shifts throughout art history have been recorded whenever new technology is developed, such as increased shutter speeds leading to developments in candid photography and street photography.
“That’s why artists all the time need to innovate and push for new things that affect us. Artists using A.I. have the ability to push through and create something novel,” Elgammal said.
“The same thing happened before when cameras came around. Artists at the time thought, ‘what’s the point of art?’ But that didn’t happen; the camera didn’t kill art. Yes, cameras took the jobs of artists who made portraits. But it opened lots of opportunities for art as well. Photography became an art form, and art also advanced away from figurative work.”
Copyright, transparency and what it means to be human
Elgammal and Boisvert in particular urged companies and groups creating A.I. models to make their research more transparent, and to foster broader discussion of the positives and negatives of using open-source code in product development as a way to combat threats.
“Most of these problems happen when big companies hide what’s behind their A.I. models and abilities and that makes it harder for researchers in A.I. to assess the power and potential of these systems beyond what’s just in the media,” Elgammal said.
Tallon agreed, noting that OpenAI, the creator of the chatbot ChatGPT, was “cagey” about the data set when it released GPT-4.
Kim also pointed out that “the huge limiting factors” of computing power and access to the chips necessary to run the algorithms create a lack of equity in the A.I. space.
“The tech could be open source as much as possible but if you don’t have the means to use it because it’s all owned by one corporation, it doesn’t matter,” she said.
“Having healthy competition and lots of alternatives allows for more freedom across people and is more democratic. I am very pro-democracy. … We don’t want a single company to have all the power of A.I. That would be a terrifying world.”
Recently, Tallon helped prepare remarks for speakers at a meeting with the U.S. Copyright Office relating to A.I. and the music industry. She noted that only two of the 20 panelists were musicians.
“I don’t know if there’s a lot the copyright office can do. The model has been built and you can’t unbuild it. You can’t just take one thing out, so you would essentially have to knock down the building and these companies are not going to knock down the building,” Tallon said.
Tallon added that the U.S. Copyright Office panel talked particularly about voice cloning and how “everyone and their mom” is making covers, such as songs in which Britney Spears sings like Rihanna.
“The problem with this in terms of copyright law is that you’re not recreating something that has been made. You’re creating something new. Someone’s voice is not something that can be copyrighted as of right now,” Tallon said.
Kim called such copyright concerns an “oversimplification” and noted that there are broader issues of copyright and ownership at play that pose challenges even without the introduction of A.I.
“There’s the question of how things are structured for IP and copyright ownership, that’s so flawed right now, which we saw with Taylor Swift’s masters,” Kim said.
“So talking about A.I. doesn’t eclipse all of those things that are still a topic without the trendiness of A.I. It’s not A.I. that is introducing a lot of these questions because a lot of these issues have predated A.I.”
Still, Tallon said people “can copyright things humans make” but “can’t copyright the things that make us human.”
“When you look at an artwork in a gallery, you connect with it because of the experience of the artist who made it,” Elgammal said. “Yes, A.I. can create stories but this creation will just be re-rendering what it has been trained on.”
Kim compared the use of A.I. technology to the use of assistants, a practice that has existed in art throughout the centuries.
“We know that even people like Rembrandt had a lot of apprentices and would tell them to like ‘make a person dressed in red’ and he would oversee it but he wasn’t the person making that thing,” Kim said. “So, how many unnamed people were contributing to the masterworks we attribute to Rembrandt?”
Education is key, experts agree
Boisvert stressed the importance of education in combating threats, a sentiment shared by Kim. The Misalignment Museum, of which she is curator, is a new institution in the process of developing a permanent space to display art that she hopes will increase knowledge about A.I., including “the incredible opportunities for good and the possibility of destruction.”
Kim said the museum recently gave tours to groups during a temporary exhibition in San Francisco’s Mission District, and that the topic of A.I. seems to “resonate” with society across demographics.
“In the descriptions for the pieces themselves, we also added things like ChatGPT, Google Vision API, things that people aren’t considering mediums for art,” Kim said.
Kim particularly noted one piece that was shown, an A.I.-generated conversation between the German filmmaker Werner Herzog and the Slovenian philosopher Slavoj Žižek in a “never-ending discussion.”
“Part of the purpose of that exhibit is that it then helps raise awareness about how believable and quite easy it is to create deepfakes even in their own personalities,” Kim said. “This has huge implications for society.”