The backlash against AI-generated art is easy to understand, motivated by ethical concerns about automated plagiarism, fears about human creativity being replaced by robotic output, and arguments that the art itself just doesn’t look very good.
If anything, this negative reaction is being encouraged by the very people trying to promote AI art on social media. Case in point: the recent spate of images using Adobe’s Generative Fill to extend iconic paintings and movie stills and “see what’s going on outside the frame,” a concept that might as well have been invented to annoy film buffs and art historians.
Two generative-fill Twitter threads have stirred a wave of controversy over the past few days, one starting with an expanded version of the Mona Lisa, and the other with a famous shot from Reservoir Dogs.
People have experimented with earlier versions of this idea over the past few months, with several AI TikTokers expanding paintings like Girl with a Pearl Earring to predict (rather implausible) backdrops. But with the launch of Adobe Firefly last week, AI creators have access to a more polished version of this technology.
On a practical, everyday level, generative fill can be used to fix photos by seamlessly replacing unwanted material (e.g. crowds of strangers obscuring the backdrop of a selfie) with imagery that fits the photo’s aesthetic. However, a lot of AI creators quickly gravitated to the same experiments with famous paintings and movies, posting expanded images that immediately sparked criticism from fans of the original works.
One of the key criticisms aimed at AI-generated art is that it is, by definition, uncreative. Like ChatGPT, generative fill can only work with what it’s given. These expanded paintings predict extra material that looks similar to the original… but the people who love and understand those paintings argue that the expanded versions look worse and add nothing of value.
Whether we’re talking about Van Gogh’s Starry Night or a scene from Lawrence of Arabia, the original image was framed that way for a reason. Adobe Firefly isn’t “revealing” anything by generating a wider view of the landscape behind the Mona Lisa, a painting whose popularity is built on the subtle expression on its subject’s face.
Where is all the weird AI art?
AI content generators are partially guided by the user’s intentions, and it feels telling that the most visible forms of AI-generated art are so unimaginative.
The past few months have seen AI creators post a deluge of ersatz Wes Anderson videos and auto-generated Harry Potter fanart, along with (predictably) porn depicting big-boobed women with anime-adjacent facial features.
AI image generators lend themselves to this kind of material, but there’s also a visible lack of creativity among the humans operating this technology. The AI art community seems to be obsessed with derivative imagery, with little sign of people thinking outside the box to make anything weird or funny or unpredictable. (Ironically, one of the few entertaining examples is someone making fun of the expanded paintings thread.)
It’s human nature to find unexpected ways to make art using the tools available, and on social media that translates to memes and fanfic and fake Scorsese movies and overwrought TikTok Succession edits. All of this stuff takes inspiration from pre-existing material, yet it feels fundamentally different from the way AI creators generate their content—including the way that content is presented.
“Ever wonder what the rest of the Mona Lisa looks like?” Well, no. There is no “rest” of the Mona Lisa, and even as a thought experiment, the resulting answer is pretty boring. The main question people ask about that painting is what the Mona Lisa herself is thinking—and that isn’t something you can explain using Adobe Firefly.