this post was submitted on 21 May 2024
Technology
You'd be surprised at the level of unthinking hatred around them. But even setting that aside, I've often seen it said that LLMs have no internal model of what they are talking about because they are just next-word generators. This research quite clearly contradicts that interpretation.
You used both phrases in this thread, but those are two very different things. It's a stretch to say this research supports the latter.
Yes, LLMs are still next-token generators. That is a descriptive statement about how they operate. They also have embedded knowledge that allows them to sometimes generate meaningful text.
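For what it's worth, the "next-token generator" loop being described can be sketched in a few lines. This is a toy illustration, not anything like a real LLM: a hand-made bigram table stands in for the trained model, and greedy selection stands in for sampling. The point is just that generation is one token at a time, each conditioned on what came before.

```python
# Toy stand-in for a trained model: made-up bigram probabilities.
BIGRAM = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def next_token(tokens):
    """Greedily pick the most probable next token given the context."""
    probs = BIGRAM.get(tokens[-1], {})
    if not probs:
        return None  # no known continuation
    return max(probs, key=probs.get)

def generate(prompt, max_new=5):
    """Autoregressive loop: append one predicted token at a time."""
    tokens = prompt.split()
    for _ in range(max_new):
        tok = next_token(tokens)
        if tok is None:
            break
        tokens.append(tok)
    return " ".join(tokens)

print(generate("the"))  # → the cat sat down
```

A real model replaces the lookup table with a neural network that computes a probability distribution over its whole vocabulary, but the outer loop is the same shape: predict one token, append it, repeat.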