this post was submitted on 10 Jul 2023
414 points (94.6% liked)

Technology

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (2 children)

My understanding is that copyright applies to reproductions of the work, which this is not. If I write a summary of a copyrighted summary of a copyrighted work, am I in violation of either copyright because I created a new derivative summary?

[–] [email protected] 4 points 1 year ago

Aren't summaries and reviews covered under fair use? Otherwise newspapers would have been violating copyright for hundreds of years.

[–] [email protected] 4 points 1 year ago (2 children)

Not a lawyer, so I can't be sure. To my understanding, a summary of a work is not a violation of copyright because the summary is transformative (it serves a completely different purpose from the original work). But you probably can't copy someone else's summary, because now you are making a derivative that serves the same purpose as the original.

So here are the issues with LLMs in this regard:

  • LLMs have been shown to produce verbatim or almost-verbatim copies of their training data
  • LLMs can't trace where their output came from, so they can't tell the user whether the output closely matches an existing work or, if it does, what license that work is distributed under
  • You can argue that, by its nature, an LLM only ever produces derivative works of its training data, even when they are not the verbatim or almost-verbatim copies I already mentioned
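The first two points above are about detecting near-verbatim reuse, which is at least partly mechanical. As a minimal sketch (my own illustration, not anything the commenters proposed), here is one way to flag suspiciously close overlap between a model's output and a reference text using word n-gram overlap; the function name and the threshold choice are assumptions for the example:

```python
def ngram_overlap(candidate: str, reference: str, n: int = 5) -> float:
    """Fraction of word n-grams in `candidate` that also occur in `reference`.

    A score near 1.0 suggests near-verbatim reuse; a score near 0.0
    suggests the texts merely share vocabulary, not phrasing.
    """
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    cand = ngrams(candidate)
    if not cand:
        return 0.0  # candidate too short to contain any n-gram
    return len(cand & ngrams(reference)) / len(cand)


# Identical phrasing scores 1.0; unrelated phrasing scores 0.0.
sentence = "the quick brown fox jumps over the lazy dog"
print(ngram_overlap(sentence, sentence))                      # 1.0
print(ngram_overlap("totally unrelated words appear in this text", sentence))  # 0.0
```

Real plagiarism and license scanners are far more sophisticated (stemming, fuzzy matching, suffix structures), but even this toy version shows the check is tractable when you *have* the training corpus, which is exactly what an LLM user doesn't have.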
[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

LLMs have been shown to produce verbatim or almost-verbatim copies of their training data

That's either overfitting, which means the training went wrong, or plain chance. Gazillions of bonkers court cases over "did the artist at some point in their life hear a particular melody" come to mind. Great. Now that that's flanked with allegations of eidetic memory, we have reached peak capitalism.

[–] [email protected] 1 points 1 year ago

Don't all three of those points apply to humans?