this post was submitted on 29 Sep 2023
438 points (93.5% liked)
Authors using a new tool to search a list of 183,000 books used to train AI are furious to find their works on the list.

[–] [email protected] -3 points 1 year ago (1 children)

Yeah, that’s just flat-out wrong.

Hallucinations happen when there are gaps in the training data and the model is just statistically picking whatever is most likely to come next. The output becomes incomprehensible when the model breaks down and doesn’t know where to go. However, the model doesn’t see a difference between hallucinated nonsense and a coherent sentence; they’re exactly the same to it.

The model does not learn or understand anything. It statistically knows what the next word is. It doesn’t need to have seen something before to know that. It doesn’t understand what it’s outputting, it’s just outputting a long string that is gibberish to it.
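To make the “statistically knows what the next word is” idea concrete, here is a minimal toy sketch. It uses bigram counts over a tiny made-up corpus; a real LLM uses a neural network over billions of tokens, so this only illustrates the bare notion of picking the most likely continuation, not how GPT-style models actually work.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus -- purely illustrative, not real training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Return the statistically most likely next word after `word`.
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" follows "the" most often in this corpus
```

In this toy, the model has no notion of what “the” or “cat” mean; it only tracks which token tends to follow which, which is the point the comment is making.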

I have formal training in AI and 90%+ of what I see people claiming AI can do is a complete misunderstanding of the tech.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

I have formal training in AI

Then why do you keep talking such bullshit? You sound like you never even tried ChatGPT.

It statistically knows what the next word is.

Yes, that's understanding. What do you think your brain does differently? Please define whatever weird definition you have of "understand".

You are aware of Emergent World Representations? Or have a listen to what Ilya Sutskever has to say on the topic, one of the people behind GPT-4 and AlexNet.

It doesn’t understand what it’s outputting, it’s just outputting a long string that is gibberish to it.

Which is obviously nonsense, as I can ask it questions about its output. It can find mistakes in its own output and all that. It obviously understands what it is doing.