Type B: First ChatGPT, Then Me


Prompt: Can you write a 250-word blog post about the copyright issues that Lawrence Lessig covers in his book Remix, including a commentary on how Lessig might respond to the use of AI LLMs? Then can you explain how LLMs work with copyright?

Lawrence Lessig’s influential book, Remix, provides a thought-provoking exploration of the copyright landscape in the digital age, challenging our understanding of intellectual property and creative expression. A central theme in “Remix” is the tension between traditional copyright laws and the evolving nature of the internet and digital culture.

Lessig argues that the rigid application of copyright can stifle creativity, particularly in the context of remix culture. He emphasizes the importance of balance between protecting artists’ rights and promoting a culture of innovation. In a world where individuals can effortlessly remix and reinterpret existing works, strict copyright enforcement can be counterproductive, limiting the creative potential of the digital generation.

Considering the rise of AI-Language Models (LLMs), Lessig might recognize their potential to further democratize creative expression. LLMs can assist content creators, offering tools to generate and reimagine text. They align with the remix culture’s ethos of reinterpreting and building upon existing works. However, their use also raises questions about authorship, originality, and copyright.

In the realm of copyright, LLMs generate content that can be considered original, as it doesn’t directly copy or replicate copyrighted materials. They operate under the principles of fair use and transformative use, creating new content from existing knowledge, which is generally allowed within the bounds of copyright law.

Here’s how LLMs work with copyright:

  1. Generating Original Content: LLMs are trained on vast datasets that include text from various sources, but they don’t replicate or directly copy specific copyrighted materials. When they generate text, it’s typically considered an original work produced by the AI.
  2. Fair Use and Transformative Use: When LLMs provide information, they often do so in the context of creating new content, explanations, or summaries. This falls under the “fair use” doctrine in copyright law, which allows for limited use of copyrighted material for purposes like commentary, criticism, news reporting, and education. Transformative use, where the original content is modified and repurposed, is generally seen as fair use as well.
  3. Citing Sources: LLMs are designed to provide information and, when appropriate, cite the source of their knowledge. This practice acknowledges the original creators of the information and promotes the ethical use of knowledge.
  4. Plagiarism Checks: Users must be cautious when using LLM-generated content to ensure it doesn’t infringe on copyright. If you’re creating content for publication, it’s advisable to run it through plagiarism detection tools to ensure it doesn’t resemble or directly copy copyrighted material.
  5. Legal Responsibility: Ultimately, the responsibility for ensuring that any content generated by LLMs complies with copyright laws falls on the user. If you have questions or concerns about copyright issues in your usage of AI-generated content, it’s advisable to consult with a legal expert.

In the evolving landscape of digital creativity, Remix continues to offer vital insights, and the use of AI LLMs adds a new layer to the conversation about copyright. These tools can empower individuals to engage in creative processes, provided that they do so with respect for copyright laws and an understanding of how to navigate the intricate web of intellectual property in the digital age.

My Response:

In the chapter “Cultures of Our Past,” Lessig discusses the American musician John Philip Sousa’s fears about the state of intellectual property in America. Lessig summarizes Sousa’s concern that the existing culture would be replaced by “a culture less practiced in performance, or amateur creativity, and more comfortable (think: couch) with simple consumption.”

Sousa believed that if anyone were able to use anything made by anyone, the level of creativity among the population would decrease. If he was worried about this in 1906, what would his reaction be to LLMs like ChatGPT, which don’t create in the same way humans do but simply mix given information into entertaining packages?

If we were to take things to extremes, I think that overusing AI to generate text may give way to a world in which Sousa’s fears are realized: a world where humans lounge in a lackadaisical stupor. However, I don’t think that’s a likely outcome. For the moment, at least, LLMs like ChatGPT simply aren’t good enough to replace actual human creators.

For example, in the section above, I asked ChatGPT to explain how LLMs work with copyright law. But even in the list it gave me, I see flaws. In point three, for example, it claims that LLMs cite the original authors of the works they use, but I have rarely found this to be the case. Instead, I’ve seen that LLMs will often fabricate citations, pointing to either bogus works or completely fake quotations.

Image by DALL-E given the prompt: “A photo of a robot writing a paper”





Responses to “AI’s Take on AI vs Copyright”

  1. Shelby Moore

    This approach to the blog post was super interesting! I really enjoyed seeing what AI itself had to say about these super timely issues. I found it most interesting that even AI admitted that users must be careful to check that AI doesn’t infringe on copyright, which shifts all of the responsibility from the program to the user.

  2. Elizabeth Thomas

    Isn’t it interesting that ChatGPT said that it cites its sources, but that the liability falls on the user? Like you said, I also haven’t seen it cite real sources. Also, the idea that it is creating original content doesn’t seem completely true if it is using other sources to create its content without citing them. Its content isn’t really original because it can’t come up with new ideas.

  3. Catherine Eyre

    I also don’t believe that we’re likely to overuse AI to the point of “lackadaisical stupor,” as you put it. It’s not just that AI technology isn’t reliable enough yet; even as the creative culture changes, the human need to create still exists. In his day, Sousa was worried about the loss of amateur creativity. Now the creative landscape is very different, and I’m sure there have been fluctuations in the number of amateur creatives over time, but there are still many, many people who create (whether that be music, visual art, or something else) simply because they want to.

  4. Brian Croxall

    It’s interesting to see ChatGPT contradict itself in its first and fourth points. Such is life.

    I read Lessig as arguing something different about Sousa’s thoughts on copyright and culture than what you suggest in your second paragraph, Elijah. I’ll try to tease out the difference during our class today.

  5. Garrett Mast

    I also think the arguments that ChatGPT forwarded about copyright and LLMs are rather interesting. In my experience, ChatGPT does an awful job of citing sources and, as seen in many of our blog posts, often gets things wrong even when given a specific source to pull from. I can ask ChatGPT about something for which information doesn’t exist or wouldn’t be within the scope of its knowledge, and 99% of the time it will just make up random things instead of saying that’s outside its scope. I also think point two, fair and transformative use, is interesting. If you currently Google something like “are AI language models fair use?”, the first hit, from the US Patent and Trademark Office, says they are; however, actually clicking on the source reveals that this claim comes from a submission by OpenAI, the creators of ChatGPT, arguing the point to the USPTO, which has not endorsed it. Conversely, recent court cases have actually ruled in ways that undermine OpenAI’s argument on fair and transformative use, such as Andy Warhol Foundation v. Goldsmith (decided in 2023), which raised the bar for how transformative a work must be to qualify as fair use when the purpose is commercial, as AI models often are.

  6. Traci Bullock

    I think you make an interesting point about the use, and possible overuse, of AI. I think that with time the use of AI will become more standardized, and fewer people will be fearful of it being used in place of real creativity and work. Hopefully then it will be viewed as a tool, as it was intended to be.