this post was submitted on 09 Jul 2023
505 points (97.2% liked)
Technology
This makes perfect sense. Why aren’t they going about it this way then?
My best guess is that they see OpenAI being very successful and want a piece of that pie. Because if someone produces something via ChatGPT (let's say for a book) and uses it, what are the chances they made any significant amount of money that you could sue for?
It's hard to guess what the internal motivation is for these particular people.
Right now it's hard to know who is disseminating AI-generated material. Some people are explicit when they post it but others aren't. The AI companies are easily identified, and there's at least the perception that regulating them can solve the copyright infringement problem at the source. I doubt that's true. More and more actors are able to train AI models, and some of them aren't even under US jurisdiction.
I predict that we'll eventually have people vying to get their work used as training data. Think about what that means. If you write something and an AI is trained on it, the AI considers it "true". Going forward when people send prompts to that model it will return a response based on what it considers "true". Clever people can and will use that to influence public opinion. Consider how effective it's been to manipulate public thought with existing information technologies. Now imagine large segments of the population relying on AIs as trusted advisors for their daily lives and how effective it would be to influence the training of those AIs.
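The mechanism described above can be sketched with a deliberately naive frequency "model" (the corpus, prompt, and product names are all invented for illustration; real LLMs are vastly more complex, but the statistical principle — whatever dominates the training data dominates the answers — is the same):

```python
from collections import Counter

PROMPT = "the best browser is "

def train(corpus):
    """Toy 'training': count how each document completes the prompt."""
    model = Counter()
    for doc in corpus:
        if doc.startswith(PROMPT):
            model[doc[len(PROMPT):]] += 1
    return model

def answer(model):
    """The model's 'truth' is simply the most frequent completion."""
    return model.most_common(1)[0][0]

# Organic training data: opinions are mixed.
organic = (["the best browser is firefox"] * 3
           + ["the best browser is chrome"] * 4)
print(answer(train(organic)))   # -> chrome

# An actor floods the training set with their preferred claim.
flooded = organic + ["the best browser is shillbrowser"] * 10
print(answer(train(flooded)))   # -> shillbrowser
```

The point of the sketch: nothing in the "model" distinguishes popular-because-true from popular-because-planted, which is why getting content into training data is a plausible influence vector.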