this post was submitted on 09 Aug 2023
379 points (100.0% liked)

Technology

In its submission to the Australian government’s review of the regulatory framework around AI, Google said that copyright law should be altered to allow for generative AI systems to scrape the internet.

[–] [email protected] 116 points 1 year ago (2 children)

I agree with Google, only I'd go a step further and say any AI model trained on public data should likewise be public for all, with its data sources public as well. Can't have it both ways, Google.

[–] [email protected] 49 points 1 year ago

To be fair, Google releases a lot of models as open source: https://huggingface.co/google

Using public content to create public models is also fine in my book.

But since it's Google I'm also sure they are doing a lot of shady stuff behind closed doors.

[–] [email protected] 58 points 1 year ago (2 children)

Copyright law already allows generative AI systems to scrape the internet. You need to change the law to forbid something; it isn't forbidden by default. Currently, if something is published publicly, then it can be read and learned from by anyone (or anything) that can see it. Copyright law only prevents making copies of it, which a large language model does not do when trained on it.

[–] [email protected] 35 points 1 year ago (9 children)

A lot of licensing prevents or constrains creating and monetizing derivative works. The question is, for example: if you train an AI on GPL code, does the output of the model constitute a derivative work?

If yes, GitHub Copilot is illegal, as it produces code that would have to comply with multiple conflicting license requirements. If no, I can write some simple AI that is "trained" to regurgitate its output on a prompt, run a leaked copy of Windows through it, then go around selling Binbows, and MSFT can't do anything about it.

The truth is somewhere between the two. This is just piracy, which has always been a gray area because of the difficulty of prosecuting it: previously because the perpetrators were many and hard to find, now because the perpetrators are billion-dollar companies with expensive lawyer teams.
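The "simple AI that regurgitates" half of that dilemma is easy to make literal. A toy sketch (purely illustrative; the class name and strings are invented) of a "model" whose training is memorization and whose inference is verbatim playback:

```python
# A "model" that demonstrates the reductio: training stores the input
# verbatim, and generation ignores the prompt and replays the corpus.
class RegurgitatorAI:
    def __init__(self):
        self.corpus = []

    def train(self, text):
        # "Training" is just storing the input verbatim.
        self.corpus.append(text)

    def generate(self, prompt):
        # "Inference" ignores the prompt entirely and replays the corpus.
        return "\n".join(self.corpus)

model = RegurgitatorAI()
model.train('10 PRINT "definitely not leaked source code"')
print(model.generate("write me an OS"))  # emits the training data verbatim
```

If feeding material "through an AI" laundered its license, this program would launder anything, which is why the output, not the process, has to be what's judged.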

[–] [email protected] 22 points 1 year ago (2 children)

The question is for example if you train an AI on GPL code, does the output of the model constitute a derivative work?

This question is completely independent of whether the code was generated by an AI or a human. You compare code A with code B, and if the judge and jury agree that code A is a derivative work of code B then you win the case. If the two bodies of work don't have sufficient similarities then they aren't derivative.

If no, I can write some simple AI that is “trained” to regurgitate its output on a prompt

You've reinvented copy-and-paste, not an "AI." AIs are deliberately designed to not copy-and-paste. What would be the point of one that did? Nobody wants that.

Filtering the code through something you call an AI isn't going to have any impact on whether you get sued. If the resulting code looks like copyrighted code, then you're in trouble. If it doesn't look like copyrighted code then you're fine.
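The "compare code A with code B" step even has a crude mechanical analogue in Python's standard library (a toy textual-similarity proxy; the `similarity` helper and sample strings are invented for illustration, and nothing like a real legal substantial-similarity analysis):

```python
from difflib import SequenceMatcher

def similarity(code_a: str, code_b: str) -> float:
    """Rough textual similarity in [0, 1]; a toy stand-in for the
    'does A look like B' comparison, not a legal test."""
    return SequenceMatcher(None, code_a, code_b).ratio()

original = "int add(int a, int b) { return a + b; }"
verbatim = "int add(int a, int b) { return a + b; }"
fresh    = "def greet(name): return 'hello ' + name"

print(similarity(original, verbatim))  # 1.0 for identical text
print(similarity(original, fresh) < similarity(original, verbatim))  # True
```

The hard part a judge and jury supply, and this toy does not, is deciding where on that 0-to-1 scale "derivative" begins.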

[–] [email protected] 13 points 1 year ago (1 children)

AIs are deliberately designed to not copy-and-paste.

AI is a marketing term, not a technical one. You can call anything "AI", but it's usually predictive models that get called that.

AIs are deliberately designed to not copy-and-paste. What would be the point of one that did? Nobody wants that.

For example, if the powers that be decided that licenses don't apply once you feed material through an "AI", and failed to define AI, you could say you wrote this awesome OS using an AI that you trained exclusively on Microsoft's proprietary code. Their licenses and copyright wouldn't apply to AI training data, so you could sell that new code your AI just created.

It doesn't even have to be 100% identical to Windows source code. What if it's just 80%? 50%? 20%? 5%? Where is the bar where the author can claim "that's my code!"?

Just to compare: Wine, the project that set out to reimplement the Win32 APIs for use on Linux (the work that has made it into macOS as well now), deliberately would not accept help from anyone who had ever seen any Microsoft source code, for fear of being sued. The bar was that high when it was a small FOSS organization doing it. It was 0%, proven beyond a doubt.

Now that Microsoft is the author, it's not a problem when GitHub Copilot spits out GPL code word for word, ironically together with its license.

[–] [email protected] 10 points 1 year ago

If the resulting code looks like copyrighted code, then you’re in trouble. If it doesn’t look like copyrighted code then you’re fine.

^^ Very much this.

Loads of people are treating the process of AI creating works as either violating copyright or not. But that is not how copyright works. It applies to the output of a process, not the process itself. If someone ends up writing something that happens to be a copy of something they read before, that is a violation of copyright law. If someone uses various works and creates something new and unique, then that is not a violation. It does not (at this point in time, at least) matter whether that someone is a real person or an AI.

AI can violate copyright on one work and not on another. Each case is independent and would need to be litigated separately. But AI can produce so much content so quickly that it creates a real problem for a case-by-case analysis of copyright infringement. So it is quite likely the laws will need to change to account for this, and will likely need to treat AI works differently from human-created works. Which is a very hard thing to actually deal with.

Now, one could also argue the model itself is a violation of copyright. But that IMO is a stretch - a model is nothing like the original work and the copyright law also does not cover this case. It would need to be taken to court to really decide on if this is allowed or not.

Personally, I don't think the conversation should be about what the laws currently allow (they were not designed for this), but about what the laws should allow, so we can steer towards a better future. Lots of artists are expressing their distaste for AI models being trained on their works; if enough people do this, laws can be crafted to back up this view.

[–] [email protected] 10 points 1 year ago (14 children)

An AI model is a derivative work of its training data and thus a copyright violation if the training data is copyrighted.

[–] [email protected] 8 points 1 year ago (1 children)

Derivative works are only copyright violations when they replicate substantial portions of the original without changes.

The entirety of human civilization is derivative works. Derivative works aren't infringement.

[–] [email protected] 49 points 1 year ago* (last edited 1 year ago) (19 children)

It’s not turning copyright law on its head; in fact, asserting that copyright needs to be expanded to cover training a data set IS turning it on its head. This is not a reproduction of the original work, it's learning about that work and making a transformative use of it. A generative work using a trained dataset isn't copying the original; it's learning about the relationships that original has to the other pieces in the data set.

[–] [email protected] 15 points 1 year ago (3 children)

This is artificial pseudointelligence, not a person. It doesn't learn about or transform anything.

[–] [email protected] 9 points 1 year ago* (last edited 1 year ago)

I'm not the one anthropomorphising the technology here.

[–] [email protected] 13 points 1 year ago (11 children)

The lines between learning and copying are being blurred with AI. Imagine if you could replay a movie any time you like in your head just from watching it once. Current copyright law wasn’t written with that in mind. It’s going to be interesting how this goes.

[–] [email protected] 11 points 1 year ago (8 children)

Imagine being able to recall the important parts of a movie, its overall feel, and its significant themes and attributes after only watching it one time.

That's significantly closer to what current AI models do. It's not copyright infringement that there are significant chunks of some movies that I can play back in my head precisely. First because memory being owned by someone else is a horrifying thought, and second because it's not a distributable copy.

[–] [email protected] 8 points 1 year ago (2 children)

The thought of human memory being owned is horrifying. We're talking about AI. This is a paradigm shift. New laws are inevitable. Do we want AI to be able to replicate small creators' work and ruin their chances at profitability? If we aren't careful, we are looking at yet another extinction wave where only the richest, who can afford the AI, can make anything. I don't think it's hyperbole to be concerned.

[–] [email protected] 47 points 1 year ago (3 children)

To be honest, I'm fine with it in isolation. Copyright is bullshit, and the internet is a quasi-socialist utopia where information (an infinitely copyable resource, which thus has infinite supply and zero value under capitalist economics) is free and humanity can collaborate as a species. The problem is that companies like Google are parasites that take and don't give back, or even make life actively worse for everyone else. The demand for compensation isn't so much because people deserve compensation for IP per se; it's an implicit understanding of the inherent unfairness of Google claiming ownership of other people's information while hoarding it, and the wealth it generates, with no compensation for the people who actually made that wealth. "If you're going to steal from us, at least pay us a fraction of the wealth like a normal capitalist."

If they made the models open source then it'd at least be debatable, though still suss, since there's a huge push for companies to replace all cognitive labor with AI whether or not it's even ready for that (which itself is only a problem insofar as people need to work to live; professionally created media is art insofar as humans make it for a purpose, but corporations only care about it as media/content, so AI fits the bill perfectly). Corporations are artificial metaintelligences with misaligned terminal goals, so this is a match made in superhell. There's a nonzero chance corporations might actually replace all human employees and even shareholders and just become their own version of Skynet.

Really what I'm saying is we should eat the rich, burn down the googleplex, and take back the means of production.

[–] Ubermeisters 12 points 1 year ago (1 children)

Okay, so I took back the means of production, but it says it's on a subscription basis now

[–] [email protected] 10 points 1 year ago (1 children)

That's late-stage capitalism for you – even revolution comes with a subscription fee

[–] [email protected] 10 points 1 year ago (1 children)

Or, if it was some non-profit doing the work for the good of everyone :')

[–] [email protected] 9 points 1 year ago

If only there were some kind of open AI research lab lmao. In all seriousness Anthropic is pretty close to that, though it appears to be a public benefit corporation rather than a nonprofit. Luckily the open source community in general is really picking up the slack even without a centralized organization, I wouldn't be surprised if we get something like the Linux Foundation eventually.

[–] [email protected] 40 points 1 year ago* (last edited 1 year ago) (8 children)

Copyright law is gaslighting at this point. Piracy being extremely illegal but then this kind of shit being allowed by default is insane.

We really are living under the boot of the ruling classes.

[–] [email protected] 35 points 1 year ago* (last edited 1 year ago) (2 children)

Can we get some young politicians elected who have degrees in IT? Boomers don't understand technology; that's why these companies keep screwing the people.

[–] [email protected] 15 points 1 year ago (2 children)

It's because they're corrupt, and young people are just as susceptible to lobbyists' bribes, unfortunately. The gerontocracy doesn't make things better though, that's for sure.

[–] [email protected] 10 points 1 year ago* (last edited 1 year ago) (1 children)

True but that doesn't mean it wouldn't be better to have politicians who have a better understanding of the systems they're legislating. "People can be bribed" isn't a good excuse to not change anything.

[–] [email protected] 28 points 1 year ago (16 children)

Personally I’d rather stop posting creative endeavours entirely than simply let it be stolen and regurgitated by every single company who’s built a thing on the internet.

[–] [email protected] 18 points 1 year ago (3 children)

I just take comfort in the fact that my art will never be good enough for a generative AI to steal.

[–] [email protected] 27 points 1 year ago (1 children)

Books will start needing to add a robots.txt page to the back of the book

[–] [email protected] 21 points 1 year ago (1 children)

Which will be ignored by search engines, as is tradition?

[–] [email protected] 9 points 1 year ago

… which was the style at the time.
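Joking aside, for web content the mechanism already exists: the major AI crawlers publish user-agent tokens that a site's robots.txt can refuse. GPTBot (OpenAI's crawler) and Google-Extended (Google's AI-training token) are real tokens; the file below is a hypothetical example:

```
# robots.txt: refuse AI-training crawlers, allow everything else
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
```

Whether a given crawler actually honors it is, as the reply above suggests, another question entirely.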

[–] [email protected] 26 points 1 year ago (8 children)

OK, so I shall create a new thread, because I was harassed. Why bother publishing anything original if it's just going to be subsumed by these corporations? Why bother being an original human being, with thoughts to share that are significant to the world, if in the end they're just something to be sucked up and exploited? I'm pretty smart. Keeping my thoughts to myself.

[–] [email protected] 25 points 1 year ago

With each day I hate the internet and these fucking companies even more.

[–] [email protected] 20 points 1 year ago (1 children)

Google can go suck on a lemon!

[–] [email protected] 18 points 1 year ago (3 children)

Worth considering that this is already the law in the EU. Specifically, Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market has exceptions for text and data mining.

Article 3 has a very broad exception for scientific research: "Member States shall provide for an exception to the rights provided for in Article 5(a) and Article 7(1) of Directive 96/9/EC, Article 2 of Directive 2001/29/EC, and Article 15(1) of this Directive for reproductions and extractions made by research organisations and cultural heritage institutions in order to carry out, for the purposes of scientific research, text and data mining of works or other subject matter to which they have lawful access." There is no opt-out clause to this.

Article 4 has a narrower exception for text and data mining in general: "Member States shall provide for an exception or limitation to the rights provided for in Article 5(a) and Article 7(1) of Directive 96/9/EC, Article 2 of Directive 2001/29/EC, Article 4(1)(a) and (b) of Directive 2009/24/EC and Article 15(1) of this Directive for reproductions and extractions of lawfully accessible works and other subject matter for the purposes of text and data mining." This one's narrower because it also provides that, "The exception or limitation provided for in paragraph 1 shall apply on condition that the use of works and other subject matter referred to in that paragraph has not been expressly reserved by their rightholders in an appropriate manner, such as machine-readable means in the case of content made publicly available online."

So, effectively, this means scientific research can data mine freely, without rights holders being able to opt out, while other uses of data mining, such as commercial applications, can proceed provided there has not been an opt-out through machine-readable means.
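In practice, a common machine-readable reservation is robots.txt. A minimal sketch of how an opt-out-respecting miner could check it, using Python's standard-library parser (the robots.txt content, crawler choice, and URL are hypothetical; richer TDM-reservation protocols also exist):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt in which a rights holder reserves TDM use by
# disallowing an AI-training crawler token while leaving the site open
# to everyone else.
robots_lines = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_lines)

# An opt-out-respecting miner would skip this site for training...
print(rp.can_fetch("GPTBot", "https://example.com/article"))       # False
# ...while ordinary access remains permitted.
print(rp.can_fetch("SomeBrowser", "https://example.com/article"))  # True
```

Note the asymmetry the directive creates: the check above is only legally required of the commercial miner under Article 4; an Article 3 research use could ignore the reservation entirely.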

[–] [email protected] 31 points 1 year ago (9 children)

I think the key problem with a lot of the models right now is that they were developed for "research", without the rights holders having the option to opt out when the models were switched to for-profit. The portfolio and gallery websites, from which the bulk of the artwork came, didn't even have opt-out options until a couple of months ago. Artists were therefore considered to have opted in to their work being used commercially, because they were never presented with the option to opt out.

So at the bare minimum, a mechanism needs to be provided for retroactively removing works that would have been opted out of commercial usage if the option had been available and the rights holders had been informed about the commercial intentions of the project. I would favour a complete rebuild of the models that only draws from works that are either in the public domain or whose rights holders have explicitly opted in to their work being used for commercial models.

Basically, you can't deny rights holders the ability to opt out, and then say "hey, it's not our fault that you didn't opt out, now we can use your stuff to profit ourselves".

[–] [email protected] 8 points 1 year ago (1 children)

Common sense would surely say that becoming a for-profit company or whatever they did would mean they've breached that law. I assume they figured out a way around it or I've misunderstood something though.

[–] [email protected] 10 points 1 year ago

I think they just blatantly ignored the law, to be honest. The UK's copyright law is similar: "fair dealing" allows use for research purposes (so the data scrapes were legal when they were for research), but fair dealing explicitly does not apply when the purpose is commercial in nature and intended to compete with the rights holder. The common-sense interpretation is that as soon as the AI models became commercial and were being promoted as a replacement for human-made work, they were intended to be for-profit competition to the rights holders.

If we get to a point where opt outs have full legal weight, I still expect the AI companies to use the data "for research" and then ship the model as a commercial enterprise without any attempt to strip out the works that were only valid to use for research.

[–] [email protected] 13 points 1 year ago* (last edited 1 year ago)

This is like the beginning of The Hitchhiker's Guide to the Galaxy, where the responsibility is put on the main character to go down to the planning office basement and discover the posted notice that his house is about to be demolished. No, Google, you don't get to dictate that people come to your dark-pattern website to tell you you're not allowed to use their content. Disapproval is implied until people OPT IN! It's a good thing Google changed its motto from "Don't Be Evil", or we'd have quite the conundrum.
