this post was submitted on 29 Sep 2023
438 points (93.5% liked)

Technology

Authors using a new tool to search a list of 183,000 books used to train AI are furious to find their works on the list.

[–] [email protected] 25 points 1 year ago (4 children)

I hope they can at least get compensated.

[–] [email protected] 7 points 1 year ago (2 children)

So where can I check to see if my book was used? I published a book.

[–] [email protected] 3 points 1 year ago (1 children)

Did you ever comment on Reddit before 2015? If so, your copyrighted material was used to train the modern LLMs even if your published book wasn't used at all.

[–] [email protected] 1 points 1 year ago

Yes, I did; my Reddit account is almost 11 years old. But I was talking about my novel, which was never on Reddit.

[–] [email protected] 2 points 1 year ago

The database is here. You'll have to sign up for a free trial if you're not a subscriber to The Atlantic already. https://www.theatlantic.com/technology/archive/2023/09/books3-database-generative-ai-training-copyright-infringement/675363/

[–] [email protected] 2 points 1 year ago

What about my Reddit history?

Arguably there's more of my text there that was used to train these LLMs than most authors in that list.

The comment elsewhere in this thread about models built on broad public data needing to be public in turn is a salient one.

IP laws were designed to foster innovation, not hold it back.

I'd much rather see a world where we have open-access models, trained broadly, accelerating us toward greener pastures than one where book publishers get a few extra cents from less capable closed models. That closed path just means it takes longer to reach the heyday where LLMs can do things like review the past 20 years of cancer research to identify promising trends for allocating future resources.

OpenAI should probably be dinged for downloading copyrighted media, the same way any average user gets sued when caught doing the same.

But the popular arguments these days for treating training itself as infringement are ass-backwards, and a slippery slope to a far more dystopian future than the alternative.

[–] [email protected] -1 points 1 year ago (2 children)

They were compensated when the company using the book purchased the book. You can't tell me what to do with the words written in the book once I've purchased it, nor do you own the ideas or things I come up with as a result of your words in your book. Of course, this argument only holds up if they purchased the book. If it was "stolen," then they are entitled to the $24.95 their book costs.

[–] [email protected] 2 points 1 year ago (1 children)

That's the thing -- they weren't.

The case has two prongs.

One is that training the AI on copyrighted material is somehow infringement, which is total BS and a dangerous path for the world to go down.

The other is that copyrighted material was illegally downloaded by OpenAI, which is pretty much an open-and-shut case: they didn't buy copies of 100k books, they basically torrented them.

And because of ridiculous IP laws bought by industry lobbyists at the dawn of the digital age, statutory damages can run up to $150,000 per book for willful infringement, not $24.95.

Had they purchased them, these cases would very likely be headed for the dumpster heap.

That said, there's a certain irony to Lemmy having pirate subs as one of the most popular while also generally being aggressively pro-enforcement on IP infringement.

[–] [email protected] -1 points 1 year ago (1 children)

Training AI on copyrighted material is infringement and I’ll die on that hill. It’s use of copyrighted material to create a commercial product. Doesn’t get any more clear cut than that.

I know as an artist/musician/photographer I’d rather not put my creations out there at all if it means some corporation is going to be able to steal it.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

Courts look at how the party claiming fair use is using the copyrighted work, and are more likely to find that nonprofit educational and noncommercial uses are fair.

This does not mean, however, that all nonprofit education and noncommercial uses are fair and all commercial uses are not fair; instead, courts will balance the purpose and character of the use against the other factors below.

Additionally, “transformative” uses are more likely to be considered fair. Transformative uses are those that add something new, with a further purpose or different character, and do not substitute for the original use of the work.

You can stand wherever you like on any hill you'd like, but the question of nonprofit vs. commercial use is only one part of determining fair use. Where your stance is going to have serious trouble is that the result of the training is extremely transformed from the training data, with an entirely different purpose and character, and the model cannot even reproduce any of the works used in training in their entirety. And where models can reproduce works in part, that's likely not even the direct result of the work itself being used in training, but of additional reinforcement from secondary uses and quotations of the reproducible passages in question.

And don't worry. Within about a year or so (by the time any legal decision gets finalized or new legislation is passed) no one is going to care about 'stealing' your or anyone else's creations, as training is almost certainly moving towards using primarily synthetic data and curated content creation to balance out edge cases.

Use of preexisting works was a stepping stone hack that acted like jumper cables starting the engine. Now that it's running, there's a rapidly diminishing need for the other engine.

Edit: And you'd have a very hard time convincing me that Stable Diffusion using Studio Ghibli movies to train a neural network that can produce new and different images in that style is infringement, while Wieden+Kennedy commercially making money off of producing this ad is not.

[–] [email protected] 1 points 1 year ago

Good point. I guess this aspect is much different from the AI Art scene, where the producers of the dataset are usually not compensated for their drawings.