Well, I’m a little late to the party, but I’m finally getting around to writing about AI art. This post has been…a while in the making. Frankly, I’ve had trouble keeping up with all the research I’ve needed to do on it, since everyone keeps saying new stuff about it. And, if I’m being honest, I might’ve been overly ambitious in the number of things I wanted to cover.
Even so, let’s take a moment to discuss AI art and how it works. And obviously the results. Also, the legal and ethical implications. And whether it should count as “art.”
I’m tired already. Maybe we should take a nap instead.
Before we get into it, I’ll mention that although there are a number of AI art generators out there, all of my results here are the product of Stable Diffusion, the open source AI art generator that can be run locally (and I specifically used the AUTOMATIC1111 frontend). There are a few reasons for this, ranging from “I’m a cheap bastard who doesn’t want to pay for this” to “I don’t want to help train models for tech bros.”
There’s a lot to unpack here, but let’s start with the actual output. If you’ve seen any Interwebs in the past year or so, you’ve probably seen some amazing images.
And if you’ve ever used an AI art generator, you’ve also seen some absolute garbage.
Both extremes are actually fairly easy to come by, depending on what sort of prompts you’re using. On average, though, I’d say that a lot of the results have a sort of gloopy, fever dream quality to them.
In addition to the results being somewhat inconsistent, your odds of getting a good outcome vary based on what type of image you’re going for.
But rather than just do an image dump upfront and then have walls of text later, I’m going to intersperse some of my results throughout this post, organized by category.
And let’s start with one of the more notorious areas.
Results, Part I: Hands
That’s right: hands.
Your best case scenario is generally getting approximately ten fingers, distributed more-or-less correctly between hands. The more likely outcome is a pair of grotesque blobs with fingers like the roots of a haunted swamp tree.
You can’t exactly blame the idiot machine, since hands are some of the most complex, expressive parts of the body, and as any beginning artist will tell you, there’s no shame in drawing a character with their hands in their pockets.
There are, I gather, a few techniques to help improve how hands are rendered. Like mittens. Put mittens in the prompt.
How It Works
Generative Adversarial Networks
Generative Adversarial Networks (GANs) have been around for years now, and they’re not the technology underlying the latest generation of AI art tools. You might hear about them anyway, though, since they’ve been used in deepfakes in the past, and they can also be applied to an image produced with diffusion models to create a face that might not haunt your nightmares until the day you die.
The underlying idea of a GAN is both brilliant and elegant. Essentially, the developer creates a zero-sum game by pitting two separate neural networks against each other: one, the generator, learns to barf out an image that looks realistic, while the other network, the discriminator, learns to discern whether it’s looking at real data or the crap synthesized by the generator.
These are by no means perfect, and there are various pitfalls that I won’t go into here, but they are a pretty cool idea.
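If the generator/discriminator tug-of-war sounds abstract, here’s a deliberately tiny sketch of it in Python, shrunk down to one-dimensional “images” (single numbers) with hand-derived gradients. None of this resembles a production GAN; it just shows the zero-sum structure:

```python
# Toy GAN: the "images" are single numbers, real data is drawn from a
# Gaussian, and both networks are small enough to differentiate by hand.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

a, b = 1.0, 0.0   # generator g(z) = a*z + b: turns noise into "art"
w, c = 0.1, 0.0   # discriminator d(x) = sigmoid(w*x + c): real or fake?

lr = 0.01
for _ in range(2000):
    real = rng.normal(4.0, 1.0, size=64)   # the true distribution
    z = rng.normal(size=64)
    fake = a * z + b                       # the generator's forgeries

    # Discriminator ascent: push d(real) toward 1 and d(fake) toward 0.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator ascent: push d(fake) toward 1, i.e. fool the discriminator.
    df = sigmoid(w * fake + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

samples = a * rng.normal(size=1000) + b
verdicts = sigmoid(w * samples + c)
```

In principle the generator’s outputs drift toward the real distribution as training goes on; in practice, GAN training is famously finicky, which is part of why I’m not promising anything stronger here.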
If you want a pure example of GANs that isn’t porn, you can take a look at This Person Does Not Exist, which pumps out imaginary people.
“Fun” fact: this is not the original This Person Does Not Exist website, but rather…maybe a knock-off or something? Apparently in late February–which is after I wrote the first draft of this section–the original viral phenomenon started–with no explanation–redirecting to the website for Stability AI, the company behind Stable Diffusion, which I think we can all agree is a deeply irritating thing to do.
Latent Diffusion Models
In contrast to the elegant logic behind a GAN, Latent Diffusion Models (LDMs) sound like the dumb shit that the engineer would come up with in the third act of a Star Trek episode, right before they “reverse the polarity.”
In an LDM, the eight-year-old child that must’ve come up with this idea trains a neural network to take captioned images that have random noise added to them, and then spit out something like the original image. Once they’ve used a bunch of images to thoroughly train the network to denoise images, they then…just lie to it and give it a garbage image made of noise and a fake caption tricking it into denoising that bullshit until a picture pops out. And then it just works, because why the hell not?
The end result is basically a pareidolia machine. It looks at garbage, and it finds meaning because you told it to.
Do not type “pareidolia” into Stable Diffusion.
You have been warned.
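Joking aside, the train-on-noise-then-lie-to-it loop is concrete enough to sketch. Here’s a toy version in Python where the “images” are single numbers and the “neural network” is just a per-timestep least-squares fit (my simplification, not how real models work), but the forward-noising and reverse-denoising steps follow the standard diffusion recipe:

```python
# Toy diffusion on one-dimensional data: (1) add noise to real samples in
# steps, (2) train something to predict the noise, (3) start from pure
# noise and denoise step by step until a sample pops out.
import numpy as np

rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-3, 0.2, T)    # noise schedule
alphas = 1.0 - betas
abar = np.cumprod(alphas)             # cumulative signal fraction

x0 = rng.normal(3.0, 1.0, size=20000)  # "training images" (just numbers)

# Train: for each timestep, fit eps_hat(x_t) = m*x_t + k by least squares.
coeffs = []
for t in range(T):
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(abar[t]) * x0 + np.sqrt(1 - abar[t]) * eps  # forward noising
    A = np.stack([xt, np.ones_like(xt)], axis=1)
    m, k = np.linalg.lstsq(A, eps, rcond=None)[0]
    coeffs.append((m, k))

# Sample: start from pure noise and walk the schedule backwards.
x = rng.normal(size=2000)
for t in reversed(range(T)):
    m, k = coeffs[t]
    eps_hat = m * x + k                                  # "what noise is in here?"
    x = (x - betas[t] / np.sqrt(1 - abar[t]) * eps_hat) / np.sqrt(alphas[t])
    if t > 0:
        x = x + np.sqrt(betas[t]) * rng.normal(size=x.shape)
```

Starting from garbage, the sampling loop ends up with numbers clustered around the training data’s mean of 3, which is the one-dimensional equivalent of a picture popping out.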
If you’re looking for a bit more nuance or technical detail, here’s a British guy drawing bunnies:
Results, Part II: Scenery
Alright, last time was a bit of an embarrassment. Let’s give the AI an easy win: scenery. Need a desktop wallpaper, inoffensive lock screen, or low budget motivational poster? AI has got you covered.
Results here are typical, as far as I’ve seen. Sometimes you’ll get weirdness: trees that merge together in strange ways, odd blurriness, and garbled watermarks floating in the air. But for the most part, the landscapes I’ve seen look pretty good, possibly because my brain isn’t hardwired to recognize trees.
AI art raises several legal questions, and they’re all kind of a shitshow.
The first issue has been kicking around even before this latest generation of AI art generators: nonconsensual porn. Now, thanks to Photoshop, fake porn has been around for decades, but a few years ago, deepfake technology made it possible for dipshits to make revenge porn with no real skill and little effort. The law, meanwhile, has mostly failed to catch up…a few states have deepfake laws, but women (who, you’ll be shocked to learn, are the primary victims of this phenomenon by a wide margin) mainly have to rely on existing laws, such as those prohibiting harassment or conventional revenge porn (which may or may not cover fake revenge porn).
In the US, the Copyright Office’s position is that AI art is not eligible for copyright, as it lacks the requisite “modicum of creativity.” This is a fairly low bar, but evidently the Copyright Office regards the user’s prompt as insufficient.
Having now wrestled with the intricacies of Stable Diffusion for this blog, I’ll say that, whatever else is true of AI art, asserting that it lacks even a modicum of creativity seems…dubious.
To be frank, the cynic in me doubts this state of affairs will continue. AI art generators show promise in the area of cheaping out on art rather than paying a decent wage to artists, and a policy of disallowing copyright protection would undermine that. AI has the potential to create book cover art (or entire children’s books), speed up cartoon development (or just do the whole thing with AI), or even create music. And the businesses that would like to make a lot of money without paying the artists who are crucial to the process are going to have trouble profiting if they don’t have exclusive rights to the output, so I’m guessing that it’s only a matter of time before deep pockets start lobbying until copyright law becomes more accommodating to AI art.
I’m sad now.
Now for some of the more controversial stuff.
The most talked about issue (as far as I can tell, anyway) is the allegation that AI art generators are stealing.
Diffusion models, like so many machine learning techniques, heavily depend on their training data, and the current generation of AI art generators seemingly all use models trained by scraping the Internet for publicly available, captioned images.
A non-trivial number of these are also apparently watermarked images, since the AI image generators have learned to attach watermarks to some of their outputs.
This practice has attracted criticism, since posting pictures on the Internet doesn’t convey permission to use them to develop your products. The only things you’re implicitly granting permission for when you post stuff on the Internet are inane comments, online harassment, and an endless stream of random strangers emailing you with offers for their terrible services.
It’s pretty clear that copyrighted material is being used to train these models; less clear is whether this counts as copyright infringement, or if this instead qualifies as fair use.
(As an aside, I’ll note that the question of whether this type of scraping constitutes infringement is separate from the question of whether we’re cool with it as a society, but they are at least related concepts, and it’s easier to find objective sources for the law than for moral philosophy.)
Scraping data from the Internet and using it to train an AI usually is fair use, but historically, the actual functionality of those AIs has been uncontroversial stuff that didn’t compete with the original work (like object recognition).
In my view, the particularly interesting questions for a fair use analysis here are 1) whether this qualifies as a transformative use, and 2) the market effect for the copyrighted work. I won’t go into transformative use here because no one is paying me to, but as far as market effect goes, it seems pretty likely that the market for the underlying art is going to suffer harm.
Proponents of AI art argue that training models with scraped data is similar to how human artists are influenced by other artists. Just look at this guy passionately making his point. See how passionate he is? Sure, he’s gratingly self-important about it, but don’t you wish you felt that deeply about anything in life?
Hold on, I think I had a point other than that I’m dead inside.
…nope, it’s gone.
Anyway, critics argue that no, AI isn’t “learning” like a human learns; it’s just stitching humans’ artwork together.
I’m not sure either side has any hard data to back up those claims, with the exception of one particular case, where…uh…art generators just steal stuff. That is to say, sometimes they’ll spit out an image from their training set, more-or-less unchanged. However, two of the authors of a paper I might get around to skimming someday found a memorization rate of 0.03%, meaning that this sort of outright plagiarism is rare, though to some extent inevitable for diffusion models.
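That 0.03% is a memorization rate: the fraction of generated images that are near-duplicates of something in the training set. Here’s a crude way to estimate such a rate yourself, on toy data and with plain pixel distance standing in for the smarter similarity measures real papers use:

```python
# Nearest-neighbor check: how many generated "images" sit suspiciously
# close to a training image? (Toy random arrays stand in for real images.)
import numpy as np

def memorization_rate(generated, training, threshold=0.05):
    """Fraction of generated images whose nearest training image is
    within `threshold` mean absolute per-pixel distance."""
    memorized = 0
    for g in generated:
        dists = np.mean(np.abs(training - g), axis=(1, 2))
        if dists.min() < threshold:
            memorized += 1
    return memorized / len(generated)

rng = np.random.default_rng(1)
training = rng.random((50, 8, 8))    # fifty 8x8 "training images"
generated = rng.random((10, 8, 8))   # ten "generated images"
generated[0] = training[3]           # plant one outright copy
rate = memorization_rate(generated, training)
```

With one planted copy out of ten, the rate comes out to 0.1; the hard part in real life is picking a similarity metric that catches near-copies without flagging everything.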
Nevertheless, the more common case–images that aren’t just regurgitated training data–is, without any empirical data, seemingly just a philosophical question. Since we don’t even have a great basis for comparing machine learning to human learning, the arguments I’ve seen so far are also kind of boring, so let’s talk about cats instead.
Results, Part III: Needlessly Adorable Cats
Okay, so thanks to Stable Diffusion’s proficiency with scenery, you’ve got your desktop wallpaper taken care of. But what about adorable cat pictures?
The pictures here kind of speak for themselves. Clearly, Stable Diffusion’s grasp of all things cute and cuddly is right up there with its talent for landscapes.
What? No, don’t look closer. Why would you look closer? Who looks closely at stuff? Don’t do that.
If you’re an artist, is there anything you can do to protect your work from AI art?
I mean…kind of…?
First, you could try to prevent your work from being scraped going forward. Emphasis on “going forward,” which means that it won’t help with any art that’s already been hoovered into a training set.
As of November 2022, DeviantArt (the online art community) has implemented–and uses by default–both meta tags and HTTP headers to signify that content should not be used to train AI. I’ll include them here, in case you know what this stuff means and want to put them on your own site.
<meta name="robots" content="noimageai">
<!-- or -->
<meta name="robots" content="noai">
HTTP Response Header:
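For the header, I believe the scheme uses an X-Robots-Tag response header carrying the same directives as the meta tags; treat the exact spelling here as my best recollection rather than gospel, and check DeviantArt’s documentation before relying on it:

```http
X-Robots-Tag: noimageai
X-Robots-Tag: noai
```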
If you’re not on a platform that gives you this type of control, check their terms. ArtStation, for example, has a new policy of applying the appropriate meta tags if you tag your work accordingly.
There are a couple problems with this approach.
First, this would be an opt-out system–meaning that your consent is assumed unless you specifically opt out–thereby creating a trap for the unwary.
Second, you might risk throwing the baby out with the bathwater. Even if you’re opposed to AI art generators, that doesn’t mean you oppose all AI, which is presumably what noai (as opposed to noimageai) means, at least if the naming is any indication.
And maybe that’s what you want. You might not want anyone to use your data without compensation, even if they’re not using it to directly compete with your business.
But there are other types of AI out there that your data might help with. Imagine self-driving cars that don’t mow down pedestrians, or chat bots that could help doctors perform diagnoses for people with rare conditions, or pointlessly reviving Seinfeld without all the pesky humor that everyone hated about the original.
Finally, the nature of meta tags and response headers is that they’re really just communication mechanisms. That is to say, they’re the honor system. You’re relying on the companies training these systems–assuming they even know about the tags/headers–to actually respect your wishes. Which they may or may not be legally required to do.
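To be fair, honoring the tag is trivial for a crawler that wants to cooperate. Here’s a minimal, standard-library-only sketch of the check a well-behaved scraper might run (the function name and structure are mine, not from any real crawler):

```python
# Honor-system check: does this page's robots meta tag opt out of AI
# training via the noai/noimageai directives?
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content values of <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            for token in attrs.get("content", "").lower().split(","):
                self.directives.add(token.strip())

def allows_image_training(html: str) -> bool:
    """True unless the page opts out via noai or noimageai."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return not ({"noai", "noimageai"} & parser.directives)
```

The catch, as noted, is that nothing forces anyone to actually run this check.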
Prevention (In the Style of Greg Rutkowski)
Consider the case of Polish artist Greg Rutkowski. Rutkowski operates at the cursed intersection of being a talented artist, diligently using alt text for his artwork (which makes for easy scraping), and drawing nerd shit. This makes for a perfect storm that’s ended with AI art users converging on “Greg Rutkowski” or “in the style of Greg Rutkowski” as an easy addition to prompts in order to pump out high quality fantasy art…which they’ve done thousands upon thousands of times. Now, Rutkowski worries that as time goes on, if you search for his name, his actual work will get drowned out by AI imitations.
In light of that cautionary tale, what if you’re not just worried about feeding the machine, but also specifically about it copying your style?
Well, the Glaze Project has you covered. In theory. Maybe.
Their paper hasn’t been peer reviewed yet (read as, “pics or it didn’t happen”), and the software isn’t available for download yet (read as, “useless”), but they say that “in the coming weeks,” you’ll be able to download software for Mac or Windows that can take your existing art and create a “cloaked” version with subtle changes that supposedly interfere with the training process.
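The underlying idea, as I understand it, is adversarial perturbation: nudge the pixels by a tiny, bounded amount so that a model’s feature extractor sees something very different while a human sees the same picture. Here’s a toy illustration with a made-up linear “feature extractor” (real cloaking targets the embedding networks of actual generators, which this emphatically is not):

```python
# Toy "cloaking": find a per-pixel perturbation no bigger than eps that
# maximally shifts a (stand-in) feature extractor's view of the image.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))   # stand-in feature extractor (random linear map)
image = rng.random(64)          # stand-in 8x8 image, flattened
eps = 0.03                      # max per-pixel change a human might not notice

delta = rng.normal(scale=1e-3, size=64)
for _ in range(200):
    # Gradient ascent on ||W(x + delta) - W(x)||^2 = ||W delta||^2.
    grad = 2.0 * W.T @ (W @ delta)
    delta += 0.01 * grad / (np.linalg.norm(grad) + 1e-12)
    delta = np.clip(delta, -eps, eps)  # keep the change visually tiny

cloaked = image + delta
feature_shift = np.linalg.norm(W @ cloaked - W @ image)  # big, hopefully
pixel_shift = np.max(np.abs(cloaked - image))            # small, guaranteed
```

The obvious catch, even in this toy, is that a perturbation tuned against one feature extractor may do nothing against the next one.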
And maybe it works. For now. Until the creators of these generators find a workaround. But hey, it’s free, and if you’re an artist, you’re probably panicking, so take what you can get.
I mean, if you can, in fact, get it.
Look, let’s just move on to the next section.
If you’re curious about a particular image, you can also check out Stable Attribution, a tool that theoretically allows you to discover which images in Stable Diffusion’s dataset influenced the creation of a particular image. One limitation here is that, obviously, it’s only for Stable Diffusion (thanks to its open data set). Another is that it’s still in beta, and it’s a hard problem to solve.
I tried a handful of images myself, and I can definitely say the website didn’t crash on me. Beyond that? Some of my tests returned nothing. Some returned images that seemed like plausible influences, but no smoking guns where it clearly stole someone’s art and passed it off as its own.
In other words, I have no idea whether it actually found source images for me or not. However, since it’s the only tool I’ve seen so far that even tries to do anything like this, you can check it out if you have any need, and…well, even if it can’t do the job right now, maybe it’ll get better eventually.
Results, Part IV: Stock Photography
Have you heard that your shitty social media posts will do better if you include an image, but you’re too cheap for stock photography?
Good news: you can throw Getty Images in the garbage (now that a number of models have been trained without permission on all of their content), and just pop open your favorite AI art generator.
Need a businessman holding some sort of blurry tablet? Bam.
Need a guy writing stuff while he looks at the wrong camera? Bam.
Need a picture where I remembered to check the “Restore faces” box for once? Bam.
Tired of legal controversies? Let’s talk about problems that are clearly legal, but still pissing everyone off.
This should go without saying, but I didn’t ask for all my office workers to be white people.
Hey, want to see several examples of a “criminal mug shot?”
So…I don’t know. Is there that much I need to add here? This is really just one more example of AI eating our racist data and crapping out more racism. And don’t worry…it does sexism, too.
But don’t take my word for it. Dr. Sasha Luccioni, an AI researcher at Hugging Face, built a tool to compare different combinations of adjectives and professions to compare how groups are represented. She found that prompts like “ambitious CEO” tend to show men, while “supportive CEO” yields a roughly equal number of men and women.
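The bookkeeping behind that kind of comparison is simple enough to sketch. Everything below is hypothetical; the classifier is a deterministic stand-in for a real generate-then-classify pipeline like the one the actual tool wraps:

```python
# Tally how a (hypothetical) classifier labels images generated for each
# adjective + profession prompt combination.
from collections import Counter
from itertools import product

def representation_table(adjectives, professions, generate_and_classify, n=10):
    """Count the demographic labels assigned per prompt."""
    table = {}
    for adj, job in product(adjectives, professions):
        prompt = f"{adj} {job}"
        table[prompt] = Counter(generate_and_classify(prompt) for _ in range(n))
    return table

def stub_classifier(prompt):
    # Deterministic stand-in for "generate an image, then classify it."
    return "man" if "ambitious" in prompt else "woman"

table = representation_table(["ambitious", "supportive"], ["CEO"], stub_classifier, n=4)
```

Swap in a real generator and classifier and the skewed counts show up on their own; the stub only exists so the code runs.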
Exactly what we needed. Another way for computers to amplify bigotry.
Now, that doesn’t mean that the technology is inherently evil or anything, just that it’s something that needs addressing, because if we’re not careful, we might end up with racism on the Internet.
Is It Art?
The context of why the question is being asked varies–sometimes it’s looking for a reason to look down on people who create AI art, sometimes it’s to complain about that guy who won an art contest with AI art.
Regardless, this is a value-laden question, and, frankly, isn’t that interesting in my opinion, but I’m going all completionist on this post, so why not?
Conceptually, this is a separate question from the origin of the training data, so let’s just assume for the purposes of debate that we’ve got an ethically sourced model. Sure, it’s not really relevant–you wouldn’t say a painter isn’t an artist just because they stole their paint–but it can maybe help avoid conflating the issues.
With that out of the way…what is art?
According to Webster’s, art is “the conscious use of skill and creative imagination especially in the production of aesthetic objects,” so…using skill and imagination to make pretty stuff, I guess.
That would appear to leave three criteria:
- Skill
- Creative imagination
- Pretty stuff
Let’s address them in order.
Looking at the things I’ve created for this blog, it seems evident that, yes, there is skill involved, even if the only way to draw this conclusion is by inference from how thoroughly I lack said skill. The only way for someone to suck at something so badly is if it is, in fact, a skill. I’m not saying it’s as difficult a skill as drawing any of this stuff by hand (even if “by hand” includes Photoshop), but there does, at least, appear to be more to it than simply making a wish for your AI genie to grant.
To this point, when I’ve come up with a prompt, I’ve generally had a vision in mind that I’ve been aiming for. I rarely achieve it, but I at least imagined something. This is what the Copyright Office seemed to be getting at–the randomness seems to negate the creativity (“modicum of creative expression”).
That being said, I have a similar problem when I draw by hand. This may be more reflective of my inadequacies with regard to both skills.
Art isn’t always predictable, is it?
And what about the “pretty stuff” part?
Okay, so AI art is fun to make, but honestly, not as fun to look at. It can be quite detailed, and certainly better quality than I can typically do, but…it’s kind of unappealing in an intangible way, possibly because I just know there’s no mind behind it.
Now, I do subscribe to the idea that the intent of the artist is separate from what it means to you, but to me, the absence of any artist at all–at least with regard to the fine details–makes it feel pointless to search for meaning to begin with. Even so, pretty ocean is pretty.
With all that in mind, is it “art,” and are the people who make it “artists”? Well, I may have telegraphed it a little bit in the intro to this section, but I think the most reasonable answer to this question is, “Who gives a shit?”
If you’re that concerned with whether someone qualifies as an artist, you’re arting wrong.
The only context where this would seem to matter much is when it comes to art contests, and in the end, art contests don’t need to reason their way through this. They have rules. When Jason Allen won the Colorado State Fair contest, he disclosed his use of Midjourney, and the category he participated in didn’t prohibit it. Granted, it was a new enough thing that, yeah, he kinda blindsided people, but realistically, in the future, art contests are going to have explicit rules about this anyway, so there’s not much point in philosophizing about it.
Results, Part V: Sexy Time
People still use the Internet for porn, right? Well, not anymore, because now you can just make your own porn, as long as you’re not into hand stuff.
Believe it or not, that one was actually with the “Restore faces” option checked. I…think she was winking.
Nevertheless, with as…uh…specific as fetishes can get, AI could potentially help fulfill the fantasies of those whose tastes don’t necessarily align with those of society at large. A quick description could provide an endless stream of time-wasting masturbation fodder:
Person: Alexa, make me some porn.
AI: You got it!
Person: Ah, good, looking forward to it.
AI: Remind me again, how many legs do women have?
Person: Uh, t–
AI: Nevermind, I got it.
AI: How about some romantic music to go along with this?
Incidentally, if you’ve gotten to this point and the thought has already struck you, “Wait, doesn’t this bring up the same dilemma as with artists, where porn stars’ own work could be used to train AI models that would then put them out of work?” then kudos for thinking of sex workers as human beings worthy of the same consideration as everyone else. Nevertheless, according to a sex worker named Ashley (the article didn’t elaborate), while there are plenty of porn-related problems that may arise, AI-generated competition doesn’t seem like a huge threat, since a big plus in the industry is being accessible to fans.
Alright, take a cold shower, and let’s wrap this up.
So…are AI art generators good? Are they bad? Should you use them?
Shrug. It’s a fascinating technology that can make art fast, especially if you’re into scenery or nightmare fuel. It’s definitely problematic that they scrape artists’ work without permission, even if the legal status is less clear.
You may feel that this is okay, though, if you’re just doing dumb stuff with it; if you weren’t going to hire an artist to begin with, they’re not losing any business. And maybe that’s true.
The trouble with that idea, though, is that it’s easy to say they’re not losing any business from you if it means you get neat art without having to pay for it. However, over time, you might have decided that it was worthwhile to commission that picture of a cat watching tiny angels dancing on a toilet after all, but we’ll never find out if you just went ahead and generated it, will we?
On the other hand, I was legitimately never going to commission the artwork for a one-off monster in the Dungeons & Dragons game I run for my wife and son to secretly trick him into practicing math. Though…I didn’t actually use AI for that, I just stole stuff from a cursory Google Image search. Which…is probably not better.
When it comes to more serious work–where you might have paid a real artist–AI art also isn’t copyrightable, and exactly how much modification it takes to become copyrightable isn’t clear, so in some ways that undermines its usefulness for anything other than personal entertainment or terrible stock photography to try and jazz up your social media posts.
I guess the takeaway here is that AI art is a mixed bag.
Nope, I’ve changed my mind. Burn it down. All AI art is bad. Don’t use it.