Explore the complex intersection of AI-generated content and copyright laws, and discover key considerations for businesses utilizing generative AI. Learn about privacy, intellectual property, and practical usage guidelines to mitigate risks when working with AI.
Key Insights
- Understand that AI-generated artwork and content currently cannot be copyrighted in the United States because copyright requires human authorship; this impacts businesses, for example, publishers avoiding AI-generated art for book jackets to protect their intellectual property.
- Be mindful of confidentiality and data security risks, highlighted by Samsung's incident where confidential company code inputted into AI tools was inadvertently exposed, stressing the importance of using business accounts with training turned off.
- Consider ethical implications and potential legal challenges of AI trained on copyrighted materials, such as Midjourney facing backlash for using Getty Images without permission, and Adobe Firefly's approach of ethically sourcing training data to ensure commercially usable imagery.
Note: These materials offer prospective students a preview of how our classes are structured. Students enrolled in this course will receive access to the full set of materials, including video lectures, project-based assignments, and instructor feedback.
This is a lesson preview only. For the full lesson, purchase the course here.
So what are the things we need to consider, the essential downsides? First of all, we've already talked about this: AI can be wrong. I don't think we need to keep going over that, but you must always verify the answers. They can be right, they can be very wrong, or they can be somewhere in between, but it's up to you to verify them.
As far as the copyright and legal side goes, first of all, I'm not a lawyer, and I don't pretend to be a lawyer, but I want to talk about some general ideas.
If you need legal advice, you should seek counsel. I am not your legal counsel. But what are some things to think about? First of all, copyright.
So this image here was generated by AI. Looks like a painting, doesn't it? It was not created by a human painter; it was created by AI.
In general, most governments around the world, and the United States government in particular, have ruled that human authorship is required for copyright. If a human didn't create it, it's not copyrightable. Actually, there was a case, not about AI: a monkey took a picture with a photographer's camera, and the photographer tried to say the photo was his, but then PETA got involved and argued, no, it was the monkey's photo, so the monkey should get the money.
And the photographer was like, I was there, it was my camera, it's my photo. So it became this whole debate of, well, technically, a human didn't author it. You get into interesting debates with AI too, because one could argue that you wrote the prompt that generated the art. So was a human involved? Yes. But did a human create it? No.
So as of right now, and this is always subject to change, because technology always precedes law and we're still waiting for the law to catch up, AI-generated art cannot be copyrighted, because it was not created by a human. Now, that might not be a concern for you, or it might be a huge problem for you. For example, I was talking to publishers who create artwork for their book jackets, and to them, a book jacket is the visual identity of the book. A lot of design goes into those jackets and the art that represents the book, and they definitely want to copyright that.
They don't want somebody else taking that art, doing whatever they want with it, and diluting the brand of that book. They very much want to copyright it, so they will not use AI-generated art on their book covers. That's important to them.
Now, if I'm writing a blog post and I want to create some visuals for it, do I have to copyright every single one of those? Probably not. It depends. I'm not saying you wouldn't want to, but maybe that's not important to me.
So just because you can't copyright it doesn't mean you can't use it. It just means you don't own it, so somebody else could technically take it and do something with it, and since you don't own it, nothing prevents them from doing that. There was a book, a children's book if I remember right, where the author was trying to copyright the book; all the art was created with AI, but the writing was her own.
And the response was, we'll let you copyright the writing, but not the art, because you didn't create the art. So, just something to consider: as of right now, you can't copyright that stuff.
So, would I use AI to generate the logo for my company? Absolutely not, because I'd definitely want to copyright the logo. Imagine another company being able to use the same logo because you can't copyright it, since it was generated with AI. That would not fly whatsoever. Now, we talked about this before, but in terms of confidentiality: if you put confidential information into an AI tool and it trains on that data, for example if you're using a paid or Plus account and you didn't turn training off, then that information could potentially be absorbed into the general knowledge of the AI and exposed to somebody else.
And I think it was Samsung, actually, where employees were taking source code and putting it into something like ChatGPT, and company secrets were being exposed, because: oh, that's how you wrote it? Well, I could write that for somebody else. So companies now have to treat this as something important. For example, if they're not providing AI accounts and are letting people bring their own, how are they making sure those people are securing those accounts? If the guidance is, oh, just use free ChatGPT, that doesn't default to having training off; it defaults to having training on. Or, oh, you have a paid account? Okay, just use your paid account. But if you have a business account, and everyone is under your company team, those team accounts default to training being off. So we know for sure that if we give people a company account, training is going to be off by default.
So, just something to consider. Also, if you are under a nondisclosure agreement, be careful that you're not putting that material into an AI and training it to know something it shouldn't know. And depending on how the AI is built, some messages might be stored on that company's servers and might be accessible to its employees, and that alone might violate the terms of your nondisclosure agreement.
You just have to look at the data retention policies of whatever company you're using, whether it's OpenAI or somebody else. I've linked out to some resources here so you can read more if that's important to you. But whether you're using ChatGPT or any other generative AI, you need to consider whether and how they train on your data. Microsoft Copilot, for example, because it's targeted at businesses, never trains on any of your data. It's a blanket policy; they just don't do it, not even on a free account.
But Microsoft is catering more toward a business audience, and they know that's important, so they don't even want the appearance of possibly training on your data.
When it comes to ownership, say of an article written with AI, think about who owns that article. Since you technically can't copyright it, is that important to you? Would anybody even know you wrote it with AI? It raises interesting publishing questions too, because if AI is trained on published articles and then starts training on AI-written articles, the AI is essentially training on itself. And philosophically: humans wrote all of this information, and now we get no financial benefit from it, yet ChatGPT can absorb it, Google can absorb it, and just start giving it to others, and we're not making any money off it. We generated this content, so going forward, what's the incentive to keep creating it when AI is just going to take it and give nothing back? They're not going to send people to our websites unless they link out, which maybe they do and maybe they don't. AI is, in a way, replacing websites, because it gives the answers directly. So it's an interesting question: are people going to keep writing things on their own? Do we write just to teach the AI? Will there be people whose job is just teaching AI things? I don't know. It's interesting to think about, though. Imagine an AI that basically has all of human knowledge and that you can ask questions of. Could be interesting, could be scary, who knows.
So speaking of training, what if that AI was trained on copyrighted information, copyrighted images for example, and it regurgitates some of that into your product, and then you use it? For example, not ChatGPT, but I think it was Midjourney that was sued. Midjourney does image generation, and they were very early on in advanced image generation.
I'm pretty sure it was Midjourney that trained on Getty Images, and their AI was actually outputting images with a Getty Images watermark on them, because of course it was mixing and matching what it had learned: oh, there's a watermark, let's put that in there. The output wasn't an actual Getty image, but the model had obviously been trained on them, because the argument was, how would our Getty Images watermark end up in your images unless you trained on our copyrighted images? We did not give you permission, so you're violating our copyright. What if that output got put into your product? A lot of this, as far as I know, has not been litigated or fully thought through, but I just raise the question: what if? I don't know.
For the most part, when we're generating images, as we'll soon see, they are completely new fabrications of things that have not existed before. But could part of an image be so similar to something else that the artist comes back and says, hey, that's really similar to something I did, and I think you stole that from me? Yeah, and there's been quite a lot of that in the contemporary art world. Even the image we just showed must have been trained on the conventions of 17th-century painting.
Right, yeah, these models are going to be trained on image styles, painting styles, photography. I haven't seen a lot of lawsuits on this particular topic yet, but again, the law always follows technology, so when you're on the bleeding edge of technology, you never know. I'm just here to point out the questions, not to say you shouldn't use these things, but these are still things you should be considering.
You mentioned something earlier about this, right, that a lot of people are using AI for generating content, but more like an initial draft that starts with AI and then... Right, yeah, if you start with AI as a first draft, and then you go back and refine it and change it, you're transforming it into your own work. So then it becomes yours, right? Now, if you go in and say, oh, I changed one word, so it's mine?
No, I'm sorry, that's generally not transformative enough. But at some point it does become yours, because you've put enough of your own work into it. Or, if you're worried about letting it do the work, use it to generate outlines for you to write from; use it for inspiration, outlines, ideas, and concepting, but not the actual doing of it, if you're afraid of those issues. Also, AI tools in general will have some sort of licensing agreement: things you're allowed to do and things you're not allowed to do.
So you always want to read the terms of use for any AI. For example, can you use those images commercially? One of the things Adobe does with Adobe Firefly, their image generation tool, is say: we've ethically trained on images and graphics that people have given us permission to use.
The people who granted permission opted in; they said this was okay. We trained only on content from people who allowed it. We didn't just go out and take work from people who hadn't allowed it, and so we're producing imagery that you can use commercially.
Because it was trained on content from people who gave permission, you know the images it creates come from material they have permission to use. Not all image generators will do that. Also, for example, Adobe Firefly is not going to create an image of a known person.
You can't name some celebrity, because again, they have rules about that sort of thing. Not all tools do that. Some don't have those constraints and will just let you do anything.
And just because a tool lets you do it doesn't mean it's legal. For example, if you use another image generator that allows you to create an image of a known celebrity and say, look, this person endorses me, they can come and sue you, as will probably happen with Taylor Swift suing Donald Trump, because he used an AI-generated image saying, look, Taylor Swift supports me. And she's like, no, I don't.
And so, I mean, she probably can't sue while he's president, but once he's done being president, she might sue him: you used an AI image to indicate that I supported you, and I didn't. That's not something you're legally allowed to do. Now, ChatGPT avoids that issue because it just doesn't let you use real people.
You can't upload an image and say, use this person and make a photo of them doing this. It will make a person who might be similar: oh, that's a guy, so I'll make another guy, maybe with a similar hairstyle. But you're not going to confuse the two and think it's that person, because the face is going to be different.
We will have to wait and see how some of these things get litigated, but always consider what you're doing with AI. At least ChatGPT has guidelines that help you stay within legal bounds, unlike some other AIs, which are not restricted. But it's still something to be mindful of, thinking about how we're using what we generate with ChatGPT.