[–] [email protected] 32 points 5 months ago (1 children)

LLMs are text prediction engines: they predict what comes after the previous text. They were trained on a large corpus of raw, unfiltered internet, because that's the only thing available with enough data (there is no good training set), then fine-tuned on smaller samples of hand-written, curated question/answer-format "as an AI assistant boyscout" text. When the previous text gets too weird for the hand-curated stuff to be relevant to its predictions, it essentially reverts to raw internet. The most likely text to follow weird, poorly written horror copypasta is more weird, poorly written horror copypasta, so that's what it predicts; then it's fed its own output and told to predict what comes next, and it spirals into more of the same.
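
To make the feedback loop concrete, here's a toy sketch in plain Python. It uses a bigram lookup table instead of a neural network, and every name and the tiny "training text" in it are made up for illustration; the point is only the shape of the loop: predict the next token from the previous text, append it, feed the whole thing back in, repeat.

```python
import random
from collections import defaultdict

# Toy stand-in for a language model: a bigram table built from a tiny
# "training corpus". Not how a real LLM works internally, but the
# generation loop below is the same autoregressive pattern.
corpus = "the model predicts the next word and the next word feeds the model".split()

# Record which word follows which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def predict_next(word: str) -> str:
    # Pick a continuation seen in training; if this word was never seen,
    # fall back to anything from the corpus (the "revert to raw internet"
    # behaviour, in miniature).
    candidates = follows.get(word)
    return random.choice(candidates) if candidates else random.choice(corpus)

def generate(prompt: str, n: int = 10) -> str:
    tokens = prompt.split()
    for _ in range(n):
        # The model's own previous output is appended to the context and
        # fed back in, so whatever the text has drifted into conditions
        # every later prediction.
        tokens.append(predict_next(tokens[-1]))
    return " ".join(tokens)

print(generate("the model"))
```

Run it a few times and the output drifts into whatever patterns the training text happens to contain, which is the small-scale version of the spiral described above.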

[–] [email protected] 17 points 5 months ago (1 children)

The scary thing about LLMs isn't them "thinking", it's them being a reflection of everything we've said.

[–] [email protected] 6 points 5 months ago

A Social Narcissus