Debunking AI Art FUD and Superstitions
A significant amount of Fear, Uncertainty and Doubt is spreading in the artist community around AI Art. I realize that convincing somebody they are wrong is a tough challenge (nigh impossible), but I figured I’d try to at least blow some of the smoke out of the room.
I’ll approach this in a FAQ manner:
- AI “steals” other people’s art and puts it into new images
- AI had to steal art to build its models
- If my art is used to train an AI, that means it’ll be re-used in AI-generated art, stealing it from me
- Nobody owns AI art, making copyright unclear
- I didn’t authorize the AI to read art from my website
- AI can re-generate the same images it was trained on! It must be stealing them?
Let’s walk through each of these in detail:
1. AI “steals” other people’s art and puts it into new images — FALSE
AI no more steals art than a student does by studying other people's art and drawing something inspired by it or in the same style.
AI does not take “parts” of people’s art and assemble them into new pictures. It, instead, uses a synthesis or “diffusion” process to generate something matching the prompts it’s been given.
Somebody once argued, "If that's true, then why can't you see the progress of it doing this? At least with an artist working in Illustrator, I can see the steps they went through to get to the final piece."
This is best explained by example. The following series of images came from a prompt to get ideas for a book cover on Midjourney:
> cover for dystopian post-apocalyptic book titled "Atom Bomb Baby" about a teenage girl coping with the death of her family
As you can see, it works in iterations, each pass refining the image further.
Also, note the text. Some people have commented that it must be stealing from people's art because, occasionally, you will see it add signatures to the art. This is because the AI has noticed that reference art sometimes carries signatures, so it generates similar-looking marks. But it doesn't understand the actual words; it just knows that shapes of this type tend to appear in certain places.
It’s fascinating, to be honest.
The key here is that the AI generates art by diffusion. It doesn’t take pieces of other art to make a new picture.
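For the technically curious, here's a toy sketch of the diffusion idea in Python. The "denoiser" below is a hypothetical stand-in for the trained neural network (real systems use a large network conditioned on the text prompt), but the shape of the loop is the point: you start from pure noise and refine it, pass by pass, with no stored artwork pasted in anywhere.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def denoise_step(image, step, total_steps):
    """Hypothetical stand-in for the trained network: nudge the image a
    little closer to 'what the prompt wants'. In a real model this nudge
    is predicted by a neural network, conditioned on the text prompt."""
    target = np.full_like(image, 0.5)        # placeholder for the prompt's target
    blend = 1.0 / (total_steps - step)       # later passes commit harder
    return image + blend * (target - image)

steps = 50
image = rng.normal(size=(64, 64, 3))          # start from pure random noise
for step in range(steps):
    image = denoise_step(image, step, steps)  # each pass refines the image

# Nothing in this loop consults a database of stored artwork; the final
# image is synthesized from noise, one refinement pass at a time.
print(round(float(image.mean()), 3))
```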
2. AI had to steal art to build its models — FALSE
Stealing art would imply it was taken without permission.
This is best answered with a question:
If a person browses the internet, scanning different artists’ public portfolios and using these for inspiration, is that person stealing the art they see?
No, they are not.
All of the art used in AI models was available publicly, made so by the artists.
As this industry evolves, platforms are starting to let artists mark their work as available for AI training or not (ArtStation, for example).
The reality, however, is that if, as an artist, you don’t want people or AI to see your art, then don’t make it publicly visible.
There is a concept in US privacy law called the "reasonable expectation of privacy," and I expect a similar concept will extend into this space over time. The idea is that if you are in a public place, such as walking down a sidewalk, you have no expectation of privacy: you may be seen and even recorded, and nobody needs your explicit permission, because the space is public. Conversely, if you are in a private place, you do have an expectation of privacy, so if somebody hides a camera there, they are breaking the law.
Similarly, if you post your art online where it is publicly visible, you should expect that it will be seen. What somebody does after seeing it is up to them. Does your art inspire them to create something else? That is beyond your control.
However, if they copy your art and reproduce it verbatim, that is a clear copyright violation.
Fortunately, that’s not what happens with AI.
But, the argument goes, the AI had to “copy” the art to process it as part of the model.
Yes. In the same manner, every internet browser must make a copy of your art to make it visible in the browser. There is an assumption that copies are made of your art simply as part of the process of making it visible.
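If you want to see how literal that is, here's a minimal Python sketch of what every browser does before it can render an image (the URL is a stand-in for any publicly posted artwork):

```python
# "Viewing" an image on the web is an HTTP GET: the bytes are copied to
# the viewer's machine before anything can be rendered. Every browser
# does exactly this. (The URL here is a placeholder for a public image.)
from urllib.request import urlopen

with urlopen("https://example.com/") as response:
    data = response.read()            # a full copy now exists in memory
print(f"Received a {len(data)}-byte copy, just by 'looking'")
```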
Consider this: if billions of pieces of art were stored in the model verbatim, you'd expect the model file to be gigantic, on the order of hundreds of terabytes.

In reality, it's only a few gigabytes.

That's because it stores no image data at all. Instead, it's a mathematical model whose weights distill the essence of the art it ingested into abstractions, loosely similar to what a human brain does. If you go to a museum and study the works of Rembrandt, you don't remember the paintings pixel for pixel (true photographic memory is, at best, vanishingly rare). The AI does something analogous: it associates the essence of pictures with the concepts people have attached to them, so that later, when generating something new through diffusion, it can draw on those abstractions for "inspiration" (as it were).
3. If my art is used to train an AI, that means it'll be re-used in AI-generated art, stealing it from me — FALSE
See point #2.
To add a little more, should you, as an artist, be afraid of your art going into an AI model?
Are you afraid to let other artists look at your art? If the answer is yes, then I’d ask you to consider why that is. And if the answer is no, then I’d suggest the same should be true for AI.
4. Nobody owns AI art, making copyright unclear — FALSE
AI is just a tool. The "uncertainty" of ownership is no more in question than it is for art drawn in Photoshop or a digital photo taken with a camera. Adobe and Nikon have no more claim to the artist's work than the AI model does. It's a tool, and the person using the tool owns the created result.
5. I didn’t authorize the AI to read art from my website — likely FALSE
This is where it gets really interesting.
If you post your art on a website that doesn't require a login and acceptance of Terms of Service, then you effectively DID authorize the use, because it's a public image, accessible without authentication. There is already case law being explored around this idea, so I won't go too deep here, but look at the lawsuit `hiQ Labs v. LinkedIn` for more details.
The gist of the problem is that if you make your media accessible without authentication, then even if you think you have applied Terms of Service that restrict general use of your media, you effectively haven't, because there is no non-repudiable way of assuring that the reader actually saw the ToS.
There is an expectation that by providing your media on a website for free, unauthenticated viewing, it will be "generally used/consumed" within common expectations. Those expectations are hard to quantify, so it's easier to just state what you can't do without a license: notably, reprinting or reusing the images in new presentations (different websites, print publications, etc.).
So, by making your media accessible for free and without authentication, you put it into a nebulous state that may well allow somebody to consume it differently than you intended (such as with an AI).
If you want to clearly prohibit this, it’s pretty simple: Authenticate access to your media and declare this prohibition in your terms of use.
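As a concrete (if minimal) sketch of what that can look like, here's a stdlib-only Python server that refuses to serve your files to anyone (browser or crawler) who doesn't present credentials. The username and password are placeholders, and a real deployment would sit behind HTTPS or use your host's built-in access controls; this just illustrates the gate:

```python
# Minimal sketch: gate all file access behind HTTP Basic Auth.
# Credentials are placeholders; a real site would also serve over HTTPS
# and present its Terms of Service at the point of login.
import base64
from http.server import HTTPServer, SimpleHTTPRequestHandler

EXPECTED = "Basic " + base64.b64encode(b"artist:secret").decode()

class AuthHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("Authorization") != EXPECTED:
            self.send_response(401)  # refuse: no valid credentials presented
            self.send_header("WWW-Authenticate", 'Basic realm="portfolio"')
            self.end_headers()
            return
        super().do_GET()  # authenticated: serve files from the current directory

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), AuthHandler).serve_forever()
```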
In the future I expect this idea of general use will be clarified, but until then, the best option is just to authenticate.
6. AI can re-generate the same images it was trained on! It must be stealing them? — MISLEADING
There are articles where people and researchers have managed to get an AI to re-generate something remotely similar to one of the images it was trained on, and they use this as a damning argument against the abstraction of the models.
However, look at this in a larger context:
- They spent SIGNIFICANT effort to get the AI engines to re-generate these images, trying over, and over, and over, re-crafting prompts and even, in some cases, using the original image as a reference! This isn't something that happens accidentally in regular use. Furthermore, it's really no different from a human looking at a piece of art and then re-drawing the same thing from memory; a human can also recreate something very similar. See the arguments above.
- The AI engine creators really do not like this latent memorization and work to remove it wherever possible! The likelihood of it happening is extremely low, and it's getting lower as the models are refined.
—
I hope this helped clear things up. I see no moral ambiguity in this burgeoning space, just a growing need for clarity.
If anything, these tools should help artists elevate their work. They're invaluable for thumbnailing, or for getting unstuck on a piece.
—
Also, if you enjoy reading dystopian fiction, take a moment to consider the book Atom Bomb Baby.