this post was submitted on 02 Nov 2023
218 points (100.0% liked)
Technology
you are viewing a single comment's thread
Yeah it is. Even assuming fair use analysis applies, fair use is largely a question of how much a work is transformed, and (a billion images) -> AI model is just about the most transformative use case out there.
And that assumes any of this matters when they're literally not copying the original work (barring overfitting). It's a public internet download; the "copy" is made by Facebook or whoever you uploaded the image to.
The model doesn't contain the original artwork or parts of it. Stable Diffusion literally retains about one byte of model weight per image of training data.
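The "one byte per image" figure is a back-of-envelope ratio, not something the comment spells out. A rough sketch of the arithmetic, using approximate public figures (Stable Diffusion v1's UNet is reported at roughly 860M parameters, trained on the roughly 2.3B-image LAION dataset; both counts are assumptions here, not exact values):

```python
# Back-of-envelope check of the "about one byte per training image" claim.
# All counts below are rough public figures, not exact.

unet_params = 860_000_000        # ~860M parameters in the SD v1 UNet (approx.)
bytes_per_param = 2              # fp16 weights: 2 bytes per parameter
training_images = 2_300_000_000  # ~2.3B images in LAION-2B (approx.)

model_bytes = unet_params * bytes_per_param
bytes_per_image = model_bytes / training_images
print(f"~{bytes_per_image:.2f} bytes of model weight per training image")
```

Even doubling the parameter count or using fp32 weights keeps the ratio in the low single digits of bytes per image, which is the point being made: far too little capacity to store the images themselves.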
I've never understood why so many people from the more techbro political alignment find this argument so convincing.
It doesn't really matter whether the original data is present in the model or whether it was reduced to such an abstract form that we can no longer find it. The model can only exist because the original data was used to make it, and it was used without a proper license. It doesn't matter how effective or how lossy your compression is; mere compression is not transformation and does not wash away copyright.
The argument that it is in some way transformative is more relevant. But it's also got a pretty heavy snort of "thinking like a cop" in it, fundamentally. Yes, the law protects transformative works, so if we only care what the written rules of the law say, then if we can demonstrate that what the AI does is transformative, the copyright issues go away. But this isn't a slam dunk argument that there's nothing wrong with what an AI does, even if we grant it is transformative. It may simply be proving that the copyright law we have fails to protect artists in the new era of AI.
In a truly ideal world, we wouldn't have copyright. At all. All these things would be available and offered freely to everyone. All works would be public domain. And artists who contributed to the useful arts and sciences would be well-fed, happy, and thriving. But we don't live in that ideal world, so instead we have copyright law. The alternative is that artists cannot earn a living on their works.
Yeah it does. One of the arguments people make is that AI models are just a form of compression, and as a result distributing the model is akin to distributing all the component parts. This fact invalidates that argument.
If we change the law to make it illegal, then it's illegal.
The number of bytes per image doesn't necessarily mean there's no copying of the original data. There are examples of some images being "compressed" (lossily) by Stable Diffusion; in that case the images were specifically sought out, but I think it does show that overfitting is an issue, even if the model is small enough that it can't be overfitting on every image.
Overfitting is an issue for the images that were overfit. But note that in that article, those images mostly appeared many times in the dataset.
People who own the rights to one of those images have a valid argument. Everyone else doesn't.