Art & Tech
Scientific Journal ‘Nature’ Says No to Publishing A.I.-Generated Images and Videos, Calling Out Their Lack of ‘Integrity’
The journal, however, will allow text created using large language models, provided their use is documented.
Richard Whiddington
One of the world’s oldest and most read scientific journals, Nature, has emphatically rejected publishing images, illustrations, and videos created in any way using generative artificial intelligence.
In a statement released on June 7, Nature’s editorial board said it had arrived at the decision after months of deliberation, arguing that generative A.I. undermines basic tenets of scientific integrity, including transparency, verification, and attribution. It also cited consent, privacy, and copyright infringement as major factors.
“The process of publishing—as far as both science and art are concerned—is underpinned by a shared commitment to integrity,” Nature wrote in a statement. “As researchers, editors, and publishers, we all need to know the sources of data and images, so that these can be verified as accurate and true. Existing generative A.I. tools do not provide access to their sources so that such verification can happen.”
From now on, photographers, artists, and filmmakers commissioned by Nature will be required to confirm that their work has not been created or enhanced using generative A.I. The journal will, however, still allow such images in articles specifically about A.I.
Editorial: Why Nature will not allow the use of generative AI in images and video https://t.co/jKYz1xV8O1
— nature (@Nature) June 7, 2023
In a notable concession, Nature will allow authors to include text created using large language models, such as ChatGPT. Any such use must be documented in a paper’s methodology or acknowledgement section. Authors must also provide sources for all data generated through A.I. prompts, which is not necessarily straightforward given that large language models have been known to invent sources. Finally, a large language model will not be accepted as an author.
The 153-year-old publication’s decision contrasts somewhat with one made by thousands of scientific journals in January to ban the use of large language models, most explicitly ChatGPT, in published papers. That move was led by the journals Science, Springer-Nature, and Elsevier, largely on the grounds that large language models cannot sign the accountability form required of authors and that their errors would seep into the literature.
The ban does, however, echo the latest call for publishers to restrict their use of A.I.-generated images. In May, thousands of journalists and artists signed an open letter, launched by artist and activist Molly Crabapple and the Center for Artistic Inquiry and Reporting, demanding that newsrooms choose human illustrators over “vampirical” A.I. image generators and stay true to the founding principles of journalism.
Nature touched upon this sense of change in its statement, acknowledging that while the technology holds great promise, it is upending long-established conventions in science, art, and publishing. “These conventions have, in some cases, taken centuries to develop,” Nature’s editors wrote. “If we’re not careful in our handling of A.I., all of these gains are at risk of unraveling.”