Artificial intelligence is making a lot of people angry this week. “AI art” has been trending on Twitter for several days now, community members on the popular artist platform ArtStation are staging anti-AI art protests, and the tech’s biggest advocates have wasted no time in pushing back against the wave of outrage. Hell, even Beeple chimed in with a fantastic visual born of the whole debacle.
But this week’s madness is only the symptomatic culmination of several months of technological developments and the widespread dissemination of AI-assisted artistic tools. The pressure has been building, and it has now broken through the surface. The resulting rush of noise that has dominated online spaces in the last few days has, if nothing else, revealed the true nature of the arguments of those who find AI art an unconscionable assault on “real” artists and even humanity itself. The only problem is those arguments don’t stand up to scrutiny. Instead, they point to a much deeper-seated, philosophical concern.
The case against AI art
Two main critiques of AI art tools emerge when you sift through all the social media static of the past week. The first is the most easily dismissed: it claims that AI art programs mash or stitch existing images together to create something new. This is simply not how the technology works. These models do not store or look up the images they were trained on. Instead, they “learn” statistical patterns from them, in ways that are not entirely dissimilar to how the brain learns, and then generate new images from scratch, starting from random noise. The process AI art programs use to create images is much more akin to construction than it is to collage.
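To make the construction-not-collage point concrete, here is a minimal, heavily simplified sketch of the idea behind diffusion models, the technique underlying tools like Stable Diffusion. Everything in it is an illustrative assumption rather than anyone’s actual code: the “images” are random stand-in vectors, the “denoiser” is a tiny toy network instead of a large U-Net, and a single noise level stands in for the real multi-step noise schedule. What it shows is the shape of the process: training only teaches the model to predict the noise added to images, and generation then starts from pure noise and removes it step by step.

```python
import torch
import torch.nn as nn

# Toy "denoiser": real diffusion models use large U-Nets, not a tiny MLP.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical stand-ins for flattened training images.
images = torch.randn(1000, 64)

# Training: the model never stores or indexes the images. It only learns
# to predict the noise that was added to them.
for step in range(200):
    batch = images[torch.randint(0, len(images), (32,))]
    noise = torch.randn_like(batch)
    noisy = batch + noise                          # corrupt the image
    loss = ((model(noisy) - noise) ** 2).mean()    # predict the corruption
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Generation: start from pure noise and repeatedly remove the predicted
# noise. Nothing here looks up, crops, or pastes a training image.
with torch.no_grad():
    x = torch.randn(1, 64)
    for _ in range(50):
        x = x - 0.1 * model(x)
```

The point of the sketch is structural: the only thing the model retains after training is a set of learned weights, which is why describing the output as a pasted-together collage of stored images misdescribes the mechanism, even if the consent question about the training data remains.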
On its face, the second claim carries a far graver and more essential concern. AI art programs are trained on billions of images scraped from the internet. MidJourney, DALL-E, and Stable Diffusion don’t discriminate in their data gathering; the images used to train these models include artists’ creations and copyrighted works. The ethical breach, critics claim, is that this was done without those artists’ consent or knowledge. There’s some validity to that critique, and this could be a circumstance in which technology is simply outpacing our ability to use it ethically.
But there is also a far deeper, more emotional concern, one that gets at the nature of art itself. The idea that programs can now do what previously only humans could, taking in vast amounts of data in the form of influences, images, and artistic traditions and turning it into an output, touches the most sensitive of existential nerves. It’s plausible that much of the critics’ stated concern about a breach of ethics is really this emotional reaction in disguise, and that it is crowding out the deeper, more intellectual debate. It is genuinely shocking that a machine can now engage in this seemingly sacred and uniquely human activity alongside us; to many, that activity has arguably always felt sacred.
That’s not to belittle anyone who does feel this way. Such existential dread is entirely understandable, and it’s doubtful that anyone is wholly immune to it. Even the world’s greatest AI advocates, researchers, and technological philosophers have at times felt an unraveling pull at the thought of machines matching and outpacing human ability. At no time is this feeling more poignant than when technology touches on what some call the sacred realm of the soul. Even the non-religious are quick to argue that there is something ineffable about us, some spark or spirit that no algorithm, no matter how highly trained, could ever encroach upon.
AI art is no different from human art
But to argue that AI art programs are unethical because they draw from artists’ work out in the world betrays a misunderstanding of creative endeavors and a denial of human nature. An illustrator or a painter who creates an image does so by pulling from countless influences, including the images they’ve seen over their lifetime. They might have chanced upon those images and traditions in a museum, in a book, at university, or online. And as technology increasingly dominates our lives, it’s ever more likely that artists draw their inspiration from other people’s work found on the internet.
Who would argue that they need consent from those artists to create? Plagiarism, cry the detractors of AI art tools, as if it were a knock-down argument against the technology. Yes: if someone builds and trains an AI art model specifically on one artist’s work, that’s plagiarism. But such conduct was a problem long before anyone even conceived of building these tools. To claim that AI art programs encourage plagiarism is no different from claiming that buying a guitar encourages people to rip off existing musical works.
There are several other pernicious suggestions that underlie the anti-AI art claims proliferating online recently. Some of the more shameful ones imply that the people using these programs are somehow unworthy of possessing a tool that lets them create. The subtle but specious claim amounts to little more than this: only those who have dedicated their careers and lives to art are worthy of experimenting with such technology creatively.
These claims are half-hearted concessions to so-called “legitimate” uses of artificial intelligence in creative endeavors, only to pull the rug out from under anyone deemed unworthy of the title of “artist.” Real artists who use AI as a tool in their work, they say, are fundamentally different from (and, of course, less morally suspect than) the average plebeian who dares to use prompt-based AI programs to explore and create something new.
To many non-artists, that argument can appear weak and even insulting. The question of artistic authority and authorship has been contested for a long time; novels like William Gaddis’s The Recognitions directly confront the problem of “frauds, counterfeits, and fakery” in art, and their conclusions about originality often carry an unmistakable air of inevitability. From an economic standpoint, it would also be difficult to convince willing buyers of high-minded ideas about the irreducibility of human subjectivity. Suffice it to say that, to most people in the space, a defense of human-only art will appear arrogant. Worse still, the art world has long practiced a kind of gatekeeping that hinders genuine artistic talent, despite several generations pushing back against it.
In short, the abundance of human artists gleefully adopting a negative position on AI art in recent weeks is discouraging to those involved in AI-generated art. But the debate is a lively one.
“Creation is our best weapon,” read one Twitter post from this week’s flare-up, featuring a hand-drawn soldier in the style of a Spartan warrior. The soldier’s shield mimics the now-popular anti-AI symbol making the rounds on social media this week. The post has more than 30,000 likes. It’s a shame so many people view the dynamic between artists and AI art tools as a literal fight. It might feel that way now, but reveling in and mythologizing that position is probably not the best tack for their case, right or wrong.
The future isn’t going away
AI art tools are helping to democratize art. Rather than siloing themselves off as a sacred class of citizens who are the sole keepers of the truth, beauty, and meaning of artistic expression, artists could benefit from welcoming and encouraging the technology. Imagine the entire artistic community endorsing, engaging with, and advancing AI art.
One of the more valid and upsetting critiques making the rounds this week revolves around the idea that people will use these tools to usher in a new era of lewd or pornographic deepfakes of anyone whose face has graced the internet. This is indeed a problem. While programs like MidJourney claim they automatically block text inputs that are explicitly violent or border on “adult content,” users have already found clever ways around this, carefully crafting their prompts without setting off any moderation alarm bells. Spend enough time on MidJourney’s Discord, and you’ll see plenty of people iterating on uncannily detailed images of both women and men in near-nude and hyper-sexualized forms. It’s a real concern, but not an unprecedented one.
Just like artistic plagiarism, this issue is not unique to AI art tools. Deepfakes have been around since the late 1990s, and plagiarism is arguably as old as humanity itself. Technological developments that make it easier for society to achieve amazing things inherently make it easier for us to do terrible things. That is more a reflection of the people behind the tools than of the tools themselves. Nor does it constitute a reason to do away with the technological advance altogether.
Technological breakthroughs aren’t going away anytime soon, and neither are AI art tools. The ethical concerns raised by so many of their detractors have their place in a larger conversation about how we, as a society, should move forward with these tools fairly and intentionally. But the straw-man arguments so often trotted out against them in bad faith have no place in that conversation.
Few people are arguing against transparency and disclosure when it comes to using these tools. Fewer still would claim that these tools raise no issues deserving serious consideration and discussion. But fear-fueled backlash against AI art and the people who use and advocate for it gets us nowhere. It’s also telling that many AI art critics oppose the very concept of the blockchain and NFTs, which is, logically speaking, a totally separate issue.
However, the state of the debate on AI art isn’t especially surprising. History is replete with new technologies disrupting established systems and subsequently facing fierce opposition. So long as humans are human, that’s likely to remain the case. But the degree and severity of that pushback need not be the same every time. Artists are, purportedly, in the best position to view novelty with nuance. The trick is wanting to.