this post was submitted on 25 Jul 2024
1142 points (98.5% liked)

memes

[–] [email protected] 6 points 5 months ago (1 children)

The layman's explanation of how an LLM works is that it tries to predict the most likely word, or sequence of words, to follow from the last. This is all based on the training set, which is compiled into a big bucket of probabilities. All text input influences those internal probabilities, which in turn generate the likely output. This is also why these things are error-prone: it's really just hyper-sophisticated predictive text, doing its best to "play the odds."
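To make the "big bucket of probabilities" idea concrete, here's a toy sketch in Python: a bigram word-count table that predicts the next word from how often it followed the previous one in the training text. This is a drastic simplification for illustration only (the training text, function names, and table structure are all made up here); a real LLM uses a neural network over tokens, not a literal count table.

```python
import random
from collections import defaultdict

# Toy "big bucket of probabilities": count how often each word
# follows each other word in a tiny training text.
training_text = "the cat sat on the mat and the cat ran"

counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict_next(word, rng):
    """Pick the next word, weighted by how often it followed `word`."""
    options = counts[word]                    # e.g. counts["the"] == {"cat": 2, "mat": 1}
    choices, weights = zip(*options.items())
    return rng.choices(choices, weights=weights)[0]

rng = random.Random(0)
print(predict_next("the", rng))  # "cat" or "mat", weighted 2:1 toward "cat"
```

The error-proneness falls out of this picture too: the model isn't checking facts, it's just sampling whatever word the probabilities favor.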

You can also view an LLM as one fiendishly massive if/else statement that chews on text tokens. There's also some random seeding thrown in for more variation in output, but these things are 100% repeatable if you use the same seed every time; it's just compiled logic.

[–] [email protected] 3 points 5 months ago (1 children)

Hehe best illustration. "big bucket of probabilities" ...hell yeah

[–] [email protected] 3 points 5 months ago

Yup. I had this in my head at the time: