Responses to “AI’s Take on AI vs Copyright”
This approach to the blog post was really interesting! I enjoyed seeing what AI itself had to say about these timely issues. I found it most striking that even AI admitted users must be careful to check that AI doesn't infringe on copyright, which shifts all of the responsibility from the program to the user.
Isn’t it interesting that ChatGPT said it cites its sources, yet the liability falls on the user? Like you said, I also haven’t seen it cite real sources. The idea that it creates original content doesn’t seem entirely true, either: it draws on other sources to generate its content without citing them. Its content isn’t really original, because it can’t come up with new ideas.
I also don’t believe we’re likely to overuse AI to the point of “lackadaisical stupor,” as you put it — not just because AI technology isn’t reliable enough yet, but because even as creative culture changes, the human need to create still exists. In his day, Sousa worried about the loss of amateur creativity. The creative landscape is very different now, and I’m sure the number of amateur creatives has fluctuated over time, but there are still many, many people who create (whether music, visual art, or something else) simply because they want to.
It’s interesting to see ChatGPT contradict itself in its first and fourth points. Such is life.
I read Lessig as arguing something different about Sousa’s thoughts on copyright and culture than what you suggest in your second paragraph, Elijah. I’ll try to tease out the difference during our class today.
I also think the arguments ChatGPT forwarded about copyright and LLMs are rather interesting. In my experience, ChatGPT does an awful job of citing sources, and as many of our blog posts have shown, it often gets things wrong even when given a specific source to pull from. I can ask ChatGPT about something for which information doesn’t exist or that is outside the scope of its knowledge, and 99% of the time it will just make things up instead of saying the question is beyond its scope. I also find point 2, fair and transformative use, interesting. If you currently Google something like “are AI language models fair use?”, the first hit, from the US Patent and Trademark Office, says they are — but actually clicking through reveals that this claim comes from a submission to the USPTO by OpenAI, the creators of ChatGPT, arguing the point, not a position the USPTO has adopted. Conversely, recent court cases have ruled in ways that undermine OpenAI’s argument on fair and transformative use, such as Andy Warhol Foundation v. Goldsmith (decided 2023), which raised the bar for how transformative a work must be to qualify as fair use when the purpose is commercial, as AI models often are.
I think you make an interesting point about the use, and possible overuse, of AI. With time, the use of AI will become more standardized, and fewer people will fear that it is being used in place of real creativity and work. Hopefully it will then be viewed as a tool, as it was intended to be.