this post was submitted on 09 Jan 2024
527 points (98.2% liked)

‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says::Pressure grows on artificial intelligence firms over the content used to train their products

[–] [email protected] 11 points 8 months ago (6 children)

Every work is protected by copyright, unless stated otherwise by the author.
If you want to create a capable system, you want real data and you want a wide range of it, including data that is rarely considered to be a protected work, despite being one.
I can guarantee you that you're going to have a pretty hard time finding a dataset with diverse data containing things like napkin doodles or bathroom stall writing that's compiled with permission of every copyright holder involved.

[–] [email protected] 31 points 8 months ago (1 children)

How hard it is doesn't matter. If you can't compensate people for using their work, or exclude work that people don't want used, you just don’t get that data.

There's plenty of stuff in the public domain.

[–] [email protected] -3 points 8 months ago (1 children)

And are artists being compensated fairly now?

[–] [email protected] 1 points 8 months ago (1 children)

Previous wrongs don't make this instance right.

[–] [email protected] 24 points 8 months ago

Sounds like an OpenAI problem and not an us problem.

[–] [email protected] 17 points 8 months ago* (last edited 8 months ago)

I never said it was going to be easy - and clearly that is why OpenAI didn't bother.

If they want to advocate for changes to copyright law then I'm all ears, but let's not pretend they actually have any interest in that.

[–] [email protected] 7 points 8 months ago

I can guarantee you that you're going to have a pretty hard time finding a dataset with diverse data containing things like napkin doodles or bathroom stall writing that's compiled with permission of every copyright holder involved.

You make this sound like a bad thing.

[–] [email protected] 5 points 8 months ago (1 children)

And why is that a bad thing?

Why are you entitled to other people’s work, just because “it’s hard to find data”?

[–] [email protected] -3 points 8 months ago (1 children)

Why are you entitled to other people's work?

Do you really think you've never consumed data that was not intended for you? Never used copyrighted works or their elements in your own works?

Re-purposing other people's work is literally what humanity has been doing for far longer than the term "license" existed.

If the original inventor of the fire drill didn't want others to use it and barred them from creating a fire bow, arguing it's "plagiarism" and "a tool that's intended to replace me", we wouldn't have a civilization.

If artists could bar other artists from creating music or art based on theirs, we wouldn't have such a thing as "genres". There are genres of music that are almost entirely based around sampling, and many, many popular samples were never explicitly allowed or licensed to anyone. Listen to the hundred most popular tracks of the last 50 years, and I guarantee you, a dozen or more would contain the Amen break, for example.

Whatever you do with data, whether you consume it yourself or train a machine learning model on it, you either disregard a large number of copyright restrictions and use all of it, or exist in an informational vacuum.

[–] [email protected] 2 points 8 months ago (1 children)

People do not consume and process data the same way an AI model does. Therefore how humans learn doesn’t matter, because AIs don’t learn. This isn’t repurposing work, it’s using work in a way the copyright holder doesn’t allow, just as copyright holders are allowed to prohibit commercial use.

[–] [email protected] -3 points 8 months ago* (last edited 8 months ago) (1 children)

It's called "machine learning", not "AI", and it's called that for a reason.

"AI" models are, essentially, solvers for mathematical systems that we humans cannot describe, and cannot create solvers for ourselves, due to their complexity.

For example, a calculator for pure numbers is a pretty simple device all the logic of which can be designed by a human directly. For the device to be useful, however, the creator will have to analyze mathematical works of other people (to figure out how math works to begin with) and to test their creation against them. That is, they'd run formulas derived and solved by other people to verify that the results are correct.

With "AI", instead of designing all the logic manually, we create a system which can end up in a finite, yet still near-infinite, number of states, each of which defines behavior different from the others. By slowly tuning the model using existing data and checking its performance, we (ideally) end up with a solver for some incredibly complex system, such as language or images.

If we were training a regular calculator this way, we might feed it things like "2+2=4", "3x3=9", "10/5=2", etc.

If, after we're done, the model can only solve those three expressions - we have failed. The model didn't learn the mathematical system, it just memorized the examples. That's called overfitting and that's what every single "AI" company in the world is trying to avoid. (And to do so, they need a lot of diverse data)

Of course, if instead of those expressions the training set consisted of the Portrait of Dora Maar, the Mona Lisa, and the Girl with a Pearl Earring, the model would only generate those three paintings.

However, if the training was successful, we can ask the model to solve 3x10/5+2 - an expression it has never seen before - and it'd give us the correct result - 8. Or, in the case of paintings, if we ask for a "Portrait of Mona Lisa with a Pearl Earring", it would give us a brand new image that contains elements and styles of the three paintings from the training set merged into a new one.
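The memorization-versus-learning point above can be sketched with a toy example (purely illustrative, not how any real model is built): the training pairs all follow the hidden rule y = 2x. A lookup table that memorizes the pairs - overfitting taken to the extreme - fails on an input it has never seen, while a single-weight model tuned by gradient descent recovers the rule itself and generalizes.

```python
# Toy sketch of memorization vs. learning (illustrative only).
# Every training example follows the hidden rule y = 2 * x.
training_data = [(1, 2), (3, 6), (5, 10)]

# Extreme "overfitting": a lookup table memorizes the examples
# perfectly but knows nothing about inputs it hasn't seen.
memorized = dict(training_data)
print(memorized.get(10))  # None: pure memorization fails on x = 10

# A one-weight model y = w * x, tuned by gradient descent on the
# squared error, learns the underlying rule instead.
w = 0.0
lr = 0.01
for _ in range(1000):
    for x, y in training_data:
        grad = 2 * (w * x - y) * x  # derivative of (w*x - y)^2 w.r.t. w
        w -= lr * grad

print(round(w * 10))  # unseen input x = 10 -> 20, since w converges to 2
```

Failing the unseen input is what the comment calls overfitting; succeeding means the model captured the system behind the examples, which is why diverse training data matters.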

Of course, the architecture of a machine learning model and the architecture of the human brain don't match, but the things both can do are quite similar. Creating new works based on existing ones is not, by any means, a new invention. Here's a picture that merges elements of "Fear and Loathing in Las Vegas" and "My Little Pony", for example.

The major difference is that the skills and knowledge of individual humans necessary to do things like that cannot be transferred or lent to other people. Machine learning models can be. This tech is probably the closest we'll ever be to being able to share skills and knowledge "telepathically", so to say.

[–] [email protected] 3 points 8 months ago

I’m well aware of how machine learning works. I did 90% of the work for a degree in exactly it. I’ve written semi-basic neural networks from scratch, and am familiar with terminology around training and how the process works.

Humans learn, process, and most importantly, transform data in a different manner than machines. The sum totality of the human existence each individual goes through means there is a transformation based on that existence that can’t be replicated by machines.

A human can replicate other styles, as you show with your example, but that doesn’t mean that is the total extent of new creation. It’s been proven in many cases that civilizations create art in isolation, not needing to draw from any previous art to create new ideas. That’s the human element that can’t be replicated in anything less than true General AI with real intelligence.

Machine learning models such as the LLMs/generative AI of today are statistically based on what they have seen before. While they don’t store the data, they do often replicate it in their outputs. That shows that the models that exist now are not creating new ideas, rather mixing up what they already have.