Art historians have long puzzled over the Mona Lisa’s beguiling smile, wondering what, if anything, it reveals about the sitter. This week, a freaky viral video clip that brought her to life raised even more questions.
The video was one of several released this week by researchers from Samsung’s AI Center and the Skolkovo Institute of Science and Technology, both located in Moscow. The clips accompanied a paper revealing new techniques for making moving images from static pictures—a technology that’s ripe for the spread of insidious fake video.
Using metadata mined from image banks, the algorithm breaks down the “facial landmarks” of a portrait, then maps a custom physiognomy onto it.
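The researchers’ actual system is a learned neural network, but the core geometric idea in that description—detecting a portrait’s facial landmarks, then fitting another face’s landmarks onto them—can be illustrated with a classical Procrustes (similarity-transform) alignment. This is a minimal, hypothetical sketch of that one step, not the researchers’ implementation; the function name and the choice of a least-squares similarity fit are assumptions for illustration.

```python
import numpy as np

def align_landmarks(src, dst):
    """Fit a similarity transform (scale, rotation, translation) that maps
    one set of 2-D facial landmarks (src) onto another (dst).

    Hypothetical illustration: a least-squares Procrustes/Umeyama fit,
    not the method from the Samsung/Skoltech paper.
    src, dst: (N, 2) arrays of landmark coordinates.
    """
    # Center both landmark sets on their means.
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean

    # Optimal rotation comes from the SVD of the cross-covariance matrix.
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.ones(2)
    if np.linalg.det(U @ Vt) < 0:  # guard against a reflection solution
        d[-1] = -1.0
    R = U @ np.diag(d) @ Vt

    # Optimal isotropic scale and translation follow in closed form.
    scale = (S * d).sum() / (src_c ** 2).sum()
    t = dst_mean - scale * (R @ src_mean)
    return scale, R, t
```

With the transform in hand, `scale * (R @ x) + t` carries each source landmark onto the target face—the “mapping a custom physiognomy onto it” that the description refers to, in its simplest rigid form.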
The more pictures the algorithm has to work with, the more realistic the resulting video is. Yet the Mona Lisa example showed that even one source image is enough to generate uncanny effects.
Vermeer’s Girl with a Pearl Earring and Ivan Kramskoy’s Portrait of an Unknown Woman received the same treatment, as did photos of Albert Einstein and Salvador Dalí. (It’s not the first time Dalí has been brought back to life using AI this year.)
The researchers refer to these moving images as “talking head models,” but most others know them as “deepfakes”—images or videos in which one face is mapped atop another.
Deepfakes have come up repeatedly in recent news cycles, and have been connected to several instances of fake news and phony celebrity porn scandals. They came back into the public conversation this month when a video of House Speaker Nancy Pelosi, digitally altered to make her appear drunk, circulated online and was viewed millions of times.