this post was submitted on 28 Jun 2024
580 points (95.0% liked)

Technology
[–] [email protected] 34 points 1 month ago (4 children)
[–] [email protected] 49 points 1 month ago* (last edited 1 month ago) (3 children)

"Copying is theft" has been the corporations' argument for ages, but when they want our data and information to integrate into their businesses, suddenly they have the right to it.

If copying is not theft, then we have the right to copy their software and AI models as well, since they're available on the open web.

They got themselves into quite a contradiction.

[–] [email protected] 6 points 1 month ago

You realize that half of Lemmy is tying itself in inconsistent logical knots trying to escape the reverse conundrum?

Copying isn't stealing and never was. Our IP system that artificially restricts information has never made sense in the digital age, and yet now everyone is on here cheering copyright on.

[–] [email protected] 4 points 1 month ago

You wouldn't download a car!

[–] [email protected] 18 points 1 month ago* (last edited 1 month ago) (3 children)

Yeah, I'm not a fan of AI, but I'm generally of the view that anything posted on the internet, visible without a login, is fair game for indexing by a search engine, snapshotting for a backup (like the Internet Archive's Wayback Machine), or running user extensions on (including ad blockers). Is training an AI model all that different?

[–] [email protected] 6 points 1 month ago (1 children)

You can't be for piracy but against LLMs, for the same reason.

And I think most of the people on Lemmy are for piracy.

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago) (1 children)

I'm not in favor of piracy or LLMs. I'm also not a fan of copyright as it exists today (I think we should go back to the 1790 US definition of copyright).

I think a lot of people here on lemmy who are "in favor of piracy" just hate our current copyright system, and that's quite understandable and I totally agree with them. Having a work protected for your entire lifetime sucks.

[–] [email protected] 5 points 1 month ago (1 children)

The problem with copyright has nothing to do with term limits. Those exacerbate the problem, but the fundamental problem with copyright and IP law is that it is a system of artificial scarcity where there is no need for one.

Rather than reward creators when their information is used, we ham-fistedly try to prevent others from using that information so that people sometimes have to pay to use it.

Capitalism is flat-out the wrong system for distributing digital information, because as soon as information is digitized it is effectively infinitely abundant, which sends its value to $0.

[–] [email protected] 2 points 1 month ago

Copyright is not a capitalist idea, it's collectivist. See copyright in the Soviet Union, the initial bill of which was passed in 1925, right near the start of the USSR.

A pure capitalist system would have no copyright, and works would instead be protected through exclusivity (i.e., paywalls) and DRM. Copyright is intended to promote sharing by providing a period of exclusivity (a temporary monopoly on a work). Whether it achieves those goals is certainly up for debate.

Long terms go against any benefit to society that copyright might have. I think it does have a benefit, but that benefit is pretty limited and should probably only last 10-15 years. I think eliminating copyright entirely would leave most people worse off and probably mostly benefit large orgs that can afford expensive DRM schemes in much the same way that our current copyright duration disproportionately benefits large orgs.

[–] [email protected] 3 points 1 month ago (2 children)

Yes, it kind of is. A search engine just looks for keywords and links, and that's all it retains after crawling a site. It's not producing any derivative works, it's merely looking up an index of keywords to find matches.

An LLM can essentially reproduce a work, and the whole point is to generate derivative works. So by its very nature, it runs into copyright issues. Whether a particular generated result violates copyright depends on the license of the works it's based on and how much of those works it uses. So it's complicated, but there's very much a copyright argument there.

[–] [email protected] 7 points 1 month ago (1 children)

My brain also takes information and creates derivative works from it.

Shit, am I also a data thief?

[–] [email protected] 2 points 1 month ago

That depends: do you copy verbatim, or do you process and understand concepts and then create new works based on that understanding? If you copy verbatim, that's plagiarism and you're a thief. If you create your own answer, it's not.

Current AI doesn't actually "understand" anything, and "learning" is just ingesting input data. If you ask it a question, it's not understanding anything; it just matches your query to the parts of the training data that fit, regurgitates a mix of them, and usually omits the sources. That's it.

It's a tricky line in journalism, since so much of it is borrowed, and it's likewise tricky with AI. But the main difference, IMO, is attribution: good journalists cite sources, and AI rarely does.

[–] [email protected] 6 points 1 month ago (1 children)

An LLM can essentially reproduce a work, and the whole point is to generate derivative works. So by its very nature, it runs into copyright issues.

Derivative works are not copyright infringement. If LLMs are spitting out exact copies, or near-enough-to-exact copies, that’s one thing. But as you said, the whole point is to generate derivative works.

[–] [email protected] 2 points 1 month ago

Derivative works are not copyright infringement

They absolutely are, unless it's covered by "fair use." A "derivative work" doesn't mean you created something that's inspired by a work, but that you've modified the work and then distributed the modified version.

[–] [email protected] 3 points 1 month ago

None of those things replace that content, though.

Look, I dunno if this is legally a copyrights issue, but as a society, I think a lot of people have decided they're willing to yield to social media and search engine indexers, but not to AI training, you know? The same way I might consent to eating a mango but not a banana.

[–] [email protected] 18 points 1 month ago

Issue is power imbalance.

There's a clear difference between a guy in his basement on his personal computer sampling music the original musicians almost never saw a single penny from, and a megacorp trying to drive creative professionals out of the industry in the hope that it can then hike up the prices for its generative AI software.

[–] [email protected] 6 points 1 month ago (1 children)

Didn't you hear? We stan draconian IP laws now because AI bad.

[–] [email protected] 14 points 1 month ago* (last edited 1 month ago) (3 children)

Is it that, or is it that the laws are selectively applied to little guys and ignored once you make enough money? It certainly looks that way. Once you've achieved a level of "fuck you money," it doesn't matter how unscrupulously you got there. I'm not sure letting the big guys get away with it while the little guys still get fucked over is as big of a win as you think it is.


Examples:

The Pirate Bay: Only made enough money to run the site and keep the admins living a middle-class lifestyle.

VERDICT: Bad, wrong, and evil. Must be put in jail.

OpenAI: Claims to be a non-profit, then spins off a for-profit wing. Makes a mint in a deal with Microsoft.

VERDICT: Only the goodest of good people and we must allow them to continue doing so.


The IP laws are stupid, but letting fucking rich twats get away with breaking them while regular people still get fucked by the same rules is kind of a fucking stupid-ass hill to die on.

But sure, if we allow the giant companies to do it, SOMEHOW the same rules will "trickle down" to regular people. I think I've heard that story before... No, they only make exceptions for people who can basically print money. They'll still fuck you and me six ways to Sunday for the same.

I mean, the guys who ran Jetflicks, a pirate streaming site, are being hit with potential 48-year sentences. Longer than for a lot of way more serious fucking crimes. I've literally seen murderers get half that.

But yeah, somehow, the same rules will end up being applied to us? My ass. They're literally jailing people for it right now. If that wasn't the case, maybe this argument would have legs.

But AI companies? Totes okay, bro.

[–] [email protected] 6 points 1 month ago

The laws are currently the same for everyone when it comes to what you can use to train an AI. I, as an individual, can use whatever public-facing data I wish to build or fine-tune AI models, same as Microsoft.

If we make copyright laws even stronger, the only ones getting locked out of the game are the little guys. Microsoft, Google, and company can afford to pay ridiculous prices for datasets. What they don't own mainly comes from aggregators like Reddit, Getty, Instagram, and Stack.

Boosting copyright laws would essentially kill all legal forms of open-source AI. It would force the open-source scene to go underground as a pirate network and lead to the scenario you mentioned.

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago)

Yes, it is a travesty that people are being hounded for sharing information, but the solution to that isn't to lock up information tighter by restricting access to the open web, or to say "if you download something we put up to be freely accessed and then use it in a way we don't like, you owe us."

The solution to bad laws being applied unevenly isn't to apply the bad laws to everyone equally; it's to get rid of the bad laws.

[–] [email protected] 1 points 1 month ago

letting fucking rich twats get away with it

That's law in general...