If you’ve had your ear to the ground for the past couple of years, you’ll have heard at least some of the rumbles of debate over the ethics and impact of AI art. You may have even heard the names of some tools used to create AI art, like Midjourney, Stable Diffusion, and DALL-E. But you may also be wondering why these tools have spawned such strong opinions in the news, on social media, and even among people you know. After all, haven’t we been having the “robots will take our jobs” discussion for decades now?
The hook behind these publicly available AI tools is that they can take wildly specific prompts and unflinchingly depict them, like an artist working on commission who doesn’t care if you want a life-size painting of Mario and Luigi eating a barbecued Toad, just as long as they get paid. Except, of course, many of these tools do it for free. Many people are using tools like DALL-E to generate memeable images on social media, but others saw the commercial potential behind AI tools, and it wasn’t long before an artist entered a piece of AI-generated art (made using Midjourney) into a competition — and won, causing outrage and concern for the art industry.
And yes, there are AI-generated video games, too. They’re not exactly good, but the use of AI to create games and art is a potential harbinger of doom for many developers and artists worried about their livelihood. We spoke to a handful of these creators to find out what the general consensus and mood are in the games industry towards AI art, and whether we should be worried that robots really will make us obsolete — or worried about something worse entirely.
What do developers and artists think about AI art?
For Ole Ivar Rudi, the Art Director on Teslagrad and Teslagrad 2, the situation surrounding AI art is somewhat of a monkey’s paw. “I’m a bit on the fence,” he tells me over Twitter DM. “On one level, I totally see the appeal and think it’s super fascinating… [but] the data sets are largely built from unethically sourced material, including the work of illustrators who certainly don’t want their work being used as input in this way, and this worries me a lot.”
There’s just something inherently interesting about throwing a coin in the wishing well or rubbing an oil lamp and asking for something
He does, however, admit that the results have their merits. “There’s just something inherently interesting about throwing a coin in the wishing well or rubbing an oil lamp and asking for something (Conan the Barbarian riding a lawnmower! A werewolf ordering French fries!) and then getting an unpredictable, distorted-by-the-whims-of-the-machine version of what you imagined in your mind as you typed your prompt.”
Martin Hollis, a game designer known for his role as the director of GoldenEye 007, agrees that the value of AI art is, to borrow a phrase from the 2000s, its ability to produce results that are just so random. “Most of the most valuable images I have seen are valuable to me because they are funny,” he says. “Part of the humour does derive from the lack of skill or understanding from the AI… for example, many AIs have trouble drawing hands.”
And that’s funny — in the same way Botnik’s “AI” predictive keyboard scripts are funny, because they go to places that make no sense, even if the grammar is technically correct.
“Mario is a fictional jerk. He is a Norwegian carpenter who mistreats women.”
– An excerpt from “Mario Wikipedia Page”, by Botnik
On the more professional side of things, Karla Ortiz, an award-winning concept artist whose clients include Marvel, HBO, Universal Studios and Wizards of The Coast, thinks that AI art could have its place. “I could see some very interesting use cases for AI,” she tells me in an email. “I would say it would be great for finding references, creating mood boards, heck, it may even be good for assisting art restoration!”
But Ortiz’s hope for the future of AI art is heavily tempered by its flaws. Her main problem with AI art is that it is exploitative by nature, since it draws from a large library of uncredited source images. AI tools can only have a place in the art industry, she says, “if [they] were ethically built with public domain works only, with the express consent and compensation of artists’ data, and legal purchase of photo sets.” That is, of course, not the case as it stands right now.
Does AI training data infringe on copyrights?
Ortiz describes the current incarnations of AI art, like DALL-E and Midjourney, as “really more similar to a calculator” or even a “hyper advanced photo mixer.” They have no subjectivity, and can only make decisions based on their programming.
This leads to an issue at the core of algorithmically-generated art: It can only learn by copying. AI is not able to be creative on its own — you have to teach it, using a library of training data. This can be a literal library of books to teach an AI how to write, or a repository of music, art, and descriptions to teach an AI what is considered “good”, or at least “right”.
Even AI companies agree that current AI models copy copyrighted data
The way machine learning works means that a larger library is preferred, because more training data results in a more nuanced, comprehensive understanding of “art”. And the largest library available to us is… the internet, a place in which ownership is often disrespected, and anything posted without a watermark is often considered fair game (and sometimes, people crop out the watermark anyway).
What happens then is that the AI extrapolates from that data. As Ortiz puts it, “the software makes a random guess of what an acceptable image is based on the original images it has been trained on.” Without strict supervision and careful selection of the training data, there will inevitably be copyrighted material in there, and this isn’t even a secret, says Ortiz. “Even AI companies agree that current AI models copy copyrighted data!”
Of course, the creators of AI generation tools are aware that borrowing copyrighted media for their training data could cause trouble. Ortiz highlights AI music generation tool Harmonai’s own statement on the subject, which claims to use only copyright-free music in its training data, as proof that this issue is well known to the companies making these kinds of AI:
“Because diffusion models are prone to memorization and overfitting, releasing a model trained on copyrighted data could potentially result in legal issues… keeping any kind of copyrighted material out of training data was a must.”
In machine learning, something is “overfitted” when it sticks too rigidly to its training data — like a child who “reads” “Tom went to the store” aloud from the first page of a book, even when that first page actually contains the author and publisher information: the child has simply memorised the book, and doesn’t actually know how to read yet. As Ortiz explains, this means that AI companies “admit their AI models cannot escape plagiarizing artists’ work.”
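To make that failure mode concrete, here is a deliberately silly sketch of a “model” that has overfitted completely. It is entirely hypothetical and nothing like how a real diffusion model works: it simply stores its training pairs and parrots the closest one back verbatim, just like the child who has memorised the book.

```python
# Toy illustration of memorisation/overfitting: a "model" that is just a
# lookup table over its training data. It can only reproduce training
# examples verbatim -- it never generalises to anything new.

def train(pairs):
    """'Training' here is literally storing the (prompt, image) pairs."""
    return dict(pairs)

def generate(model, prompt):
    """Return the stored output for the most similar known prompt,
    i.e. reproduce a training example verbatim."""
    best = max(model, key=lambda known: len(set(known.split()) & set(prompt.split())))
    return model[best]

model = train([
    ("a clock at noon", "clock_image_1.png"),
    ("a cat in a hat", "cat_image.png"),
])

# A prompt the model has seen comes back exactly...
print(generate(model, "a clock at noon"))      # prints clock_image_1.png
# ...and an unseen prompt still returns a memorised image verbatim.
print(generate(model, "a clock at midnight"))  # prints clock_image_1.png
```

The prompt names and filenames are made up for illustration; the point is only that a system which memorises its inputs can never output anything that isn’t, at bottom, a copy.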
DALL-E’s training data, for example, is described in one of OpenAI’s blog posts as “hundreds of millions of captioned images from the internet”, and the engineers discovered that repeated images in that data — multiple photos of the same clock at different times, for example — would lead to the results “reproducing training images verbatim.” To avoid, or at least minimise, this risk, they created an extra algorithm for “deduplication”, detecting and removing repeated or similar images, which led to almost a quarter of the dataset being removed.
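OpenAI hasn’t published that deduplication step as code (their real pipeline clusters learned image embeddings across hundreds of millions of images), but the general shape of it, hash every image and drop any image whose hash is too close to one already kept, can be sketched in a few lines. The tiny pixel lists, the average-hash function and the distance threshold below are all illustrative assumptions, not OpenAI’s actual method:

```python
# Toy sketch of near-duplicate detection via a perceptual "average hash".
# Illustrative only: real pipelines operate on learned embeddings, not
# raw pixels, and at a vastly larger scale.

def average_hash(pixels):
    """Hash a greyscale image (a list of 0-255 ints) into a tuple of bits:
    1 if the pixel is brighter than the image's mean, else 0."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(h1, h2):
    """Count the bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def deduplicate(images, threshold=1):
    """Keep an image only if its hash isn't within `threshold` bits
    of an already-kept image's hash."""
    kept, hashes = [], []
    for img in images:
        h = average_hash(img)
        if all(hamming(h, prev) > threshold for prev in hashes):
            kept.append(img)
            hashes.append(h)
    return kept

# The same clock photographed at two "times": near-identical pixel grids.
clock_noon = [200, 200, 40, 40, 200, 200, 40, 40, 30]
clock_one  = [200, 200, 40, 45, 200, 200, 40, 40, 30]
sunset     = [250, 180, 120, 90, 60, 40, 20, 10, 5]

unique = deduplicate([clock_noon, clock_one, sunset])
print(len(unique))  # prints 2: the two clock photos collapse into one
```

The design choice mirrors the one OpenAI describes: near-duplicates need to be caught even when they aren’t pixel-for-pixel identical, which is why the comparison is a distance on hashes rather than an equality check.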
Even after that, DALL-E’s engineers at OpenAI aren’t sure that they fixed the problem of what they call “memorization”. “While deduplication is a good first step towards preventing memorization, it does not tell us everything there is to learn about why or how models like DALL·E 2 memorize training data,” they conclude at the end of the blog. To put it more simply: Right now, there’s no surefire way to stop an AI from reproducing copyrighted images, as OpenAI themselves admit in their “Risks and Limitations” document.
So, who owns the art?
It is impossible for users to know whether copyright data and/or private data was utilized in generation processes
This unregulated use of source images brings up a number of issues, not least of which is the fact that it’s a legal risk for companies to use the technology. There is also a lack of transparency on the client-facing side, as many AI tools do not have their training data made public. “Even if a company sets strict guidelines to avoid utilizing the name of any kind of copyrighted material as a prompt, due to how AI models are trained and generate imagery, it is impossible for users to know whether copyright data and/or private data was utilized in generation processes,” says Ortiz.
So, who owns the copyright to an AI-generated image that has used an unidentifiable number of potentially copyrighted images to generate something new? That’s a debate that rages on. A recent paper called “Who owns the copyright in AI-generated art?”, by Alain Godement and Arthur Roberts, a trademark attorney and a specialist in software and patents respectively, is unable to provide a concrete answer. This is at least in part because it is unclear who the author of the image even is: the creator of the software? The curator of the training data? Or the user who came up with the prompt?
They state that the answer will “hopefully be resolved in the next few years,” but that until then, disputes should be “assessed on a case-by-case basis.” Rather than answers, they provide advice to those who are interested in AI art: First, avoid using an artist’s name in the prompt, to avoid any obvious cases of plagiarism. Second, be aware of “what you can and cannot do” with any particular AI tool, by making sure to read the terms of service and licensing agreements.
So, we may not have answers yet, but Roberts and Godement’s paper has made one thing clear: The law surrounding AI art and copyright ownership is murky at best.
Who benefits, and who loses out?
Copyright issues aside, is AI art an actual threat to anyone’s career in particular? That’s hard to say. The technology doesn’t seem to be in a place where it can be openly and legally used as a creation tool. But not everyone is fastidious about legality.
Hollis sees the use of AI in professional art creation as somewhat of an inevitability. “It seems [likely that] there will be minor usage of the technology in a few subdisciplines in the industry,” he tells me, saying that there could be a “very minor genre of games which are made using AI art,” but that these will look like they were made using AI art, and thus sit in a category all of their own. “There’s really no prospect of fewer people being needed to make video games – the numbers just go up every year.”
There is growing consensus that at the very least we’ll have some job loss, especially in entry level jobs
Ortiz considers AI art a nascent threat to concept artists in particular, but more than anything else, to newcomers to the trade. “There is growing consensus that at the very least we’ll have some job loss, especially in entry level jobs,” she says, and while people of her experience and expertise may not be personally threatened, the loss of junior roles could have repercussions on the whole industry.
“Those entry level jobs are pivotal to the overall health of our creative workforce ecosystem, and to the livelihoods of so many artists,” Ortiz says, noting that the loss would be especially significant in reducing accessibility to the industry. “These entry level jobs are especially important to artists who do not come from wealthy backgrounds.”
“Automation replacing workers tends to only benefit the people who already have too much money,” agrees Rudi. “With how poorly just about everyone else is doing these days economically, I’m definitely feeling a bit uneasy about things that [move] that needle further.”
But it’s worse than even that, argues Ortiz, because at least the production lines didn’t literally steal from the workers. “Unlike past technological advancements that displaced workers, these AI technologies utilize artist’s own data to potentially displace those same artists.”
Rudi agrees, envisioning a more specific future scenario. “I’m definitely worried that […] some people who would normally hire an artist they like for commissions (or in the video game world, concept art) will be perfectly happy with a warts-and-all computer generated pastiche of that particular artist’s style instead.”
In fact, one particular area where AI art could feasibly be used is in creating Pokémon designs. Several AI Pokémon generators exist, from Max Woolf’s tweaked version of ruDALL-E, which you can use yourself in his Buzzfeed quiz that generates a unique Pokémon for you, to Lambda Labs’ Stable Diffusion-trained generator, which lets you input any text you want — an IKEA desk, Boris Johnson, a half-finished sandwich — and it’ll turn it into a Pokémon.
You can see the training data in the results — an arm of a Gardevoir here, the shape of a Chansey there, plus Ken Sugimori’s trademark style — which just goes to show that these AIs aren’t creating anything unique so much as image-bashing. And although a tool like this certainly wouldn’t put industry veterans like Sugimori out of work, it could replace more junior Pokémon concept designers. After all, Pokémon designs are iterative — there are always evolutions to design, or regional variants, or new forms, and taking something and tweaking it is what AI generation tools excel at.
When a program is mass producing art in the style of another artist […] that needs to be judged as parasitic, damaging and socially unacceptable
Hollis notes that “stealing” is somewhat of a relative term in the art world. “Is it stealing for a human to learn from other artists’ work?” he asks. “We have built up a complex system of ethics around the use of other people’s work in the world of art. At one end we have pure fraud, tapering into shameless imitation and then plagiarism and homage. At the other end, astonishing originality.”
Of course, that doesn’t mean that AI art is at the “originality” end, and Hollis is quick to acknowledge that some uses of the technology are unpleasant. “Naturally when a program is mass producing art in the style of another artist and undermining their livelihood or their legacy, that needs to be judged as parasitic, damaging and socially unacceptable – otherwise we will be doomed to looking at these rehashed microwave dinners of actual artist’s handiwork for at least the medium term.”
Ortiz takes this even further, pointing to one egregious use of AI technology, in which “users take and degrade the work of the recently passed for their own purposes, without permission and disrespecting the wishes of their family.” Following the sudden and tragic passing of respected illustrator Kim Jung Gi in early October, it was just days before someone plugged his art into an AI generator as an “homage” and asked for credit, sparking outrage from fans and friends alike, who considered it an insult to his art and his memory. You cannot, after all, replace a human with an algorithm — but that doesn’t mean that people won’t try.
Where will AI art take us?
Between the ethics and legality of AI art generation tools using copyrighted data in their training models, and the moral implications of what that means for a user — and, indeed, how they choose to use it — it seems like AI art will struggle to find a firm footing in the eyes of many. But just because some choose to boycott the technology, or at the very least, view it with open suspicion, that doesn’t mean that everyone feels the same.
For many, AI art is just a tool to make highly specific images with disturbing numbers of eyes, pretty anime ladies with gigantic chests, or random mash-ups of pop culture references, to garner likes on social media — and that’s all it is. Not a systematic dismantling of an important industry, or an unethical and non-consensual use of artists’ work. Most people do not know how AI works, after all; they just want to join in on a trend, and the accessibility and low cost of AI art generation tools feeds into that. Perhaps these people would never have commissioned an artist to draw “Pikachu on a date with a swarm of bees in the style of Picasso” in the first place.
But for others, especially those who might be potentially impacted by AI art, the responses are mixed. Some see its application as a tool for humour, others see it as a potentially helpful tool for sparking creativity — but it seems like everyone can agree that the technology leans too heavily on the side of plagiarism, although some disagree about how serious that is.
You can’t really argue that the art is ‘boring’ right now because everyone is talking about it
Hollis thinks it may all just be a passing fad. “I don’t think it really matters if AI artists are ‘good’ or ‘bad’,” he argues. “They are interesting. You can’t really argue that the art is ‘boring’ right now because everyone is talking about it. Give it six months, then it will be ‘boring’ until the next step change and improvement in technology.” The current status of AI art as a hot-button topic is its novelty, he says. “When it stops being novel, then it will have to survive on its merits, which look questionable to me.”
Ortiz’s scepticism about the technology is tempered by a small flicker of hope. “I could see some very interesting use cases for AI,” she agrees, especially in her line of work, where AI art could be useful for references and mood boards. But the technology itself needs to be rebuilt from the ground up for her — and many other artists — to feel comfortable about its use. “These tools are really interesting,” she says. “They just need to be built ethically, and companies who thrive off unethical tools need to be held accountable.”
What is your take on AI art? Is it a dangerous tool in the wrong hands? A useful way of generating creative concepts? A threat to the industry? A fun way of making silly pictures? Or something else entirely? As always, tell us your thoughts and feelings in the comments section.