this post was submitted on 15 May 2024
117 points (96.8% liked)

No Stupid Questions

35006 readers
540 users here now

No such thing. Ask away!

!nostupidquestions is a community dedicated to being helpful and answering each others' questions on various topics.

The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:

Rules (interactive)


Rule 1- All posts must be legitimate questions. All post titles must include a question.

All posts must be legitimate questions, and all post titles must include a question. Questions that are joke or trolling questions, memes, song lyrics as title, etc. are not allowed here. See Rule 6 for all exceptions.



Rule 2- Your question subject cannot be illegal or NSFW material.

Your question subject cannot be illegal or NSFW material. You will be warned first, banned second.



Rule 3- Do not seek mental, medical and professional help here.

Do not seek mental, medical and professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.



Rule 4- No self promotion or upvote-farming of any kind.

That's it.



Rule 5- No baiting or sealioning or promoting an agenda.

Questions which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.



Rule 6- Regarding META posts and joke questions.

Provided it is about the community itself, you may post non-question posts using the [META] tag on your post title.

On fridays, you are allowed to post meme and troll questions, on the condition that it's in text format only, and conforms with our other rules. These posts MUST include the [NSQ Friday] tag in their title.

If you post a serious question on friday and are looking only for legitimate answers, then please include the [Serious] tag on your post. Irrelevant replies will then be removed by moderators.



Rule 7- You can't intentionally annoy, mock, or harass other members.

If you intentionally annoy, mock, harass, or discriminate against any individual member, you will be removed.

Likewise, if you are a member, sympathiser or a resemblant of a movement that is known to largely hate, mock, discriminate against, and/or want to take lives of a group of people, and you were provably vocal about your hate, then you will be banned on sight.



Rule 8- All comments should try to stay relevant to their parent content.



Rule 9- Reposts from other platforms are not allowed.

Let everyone have their own content.



Rule 10- Majority of bots aren't allowed to participate here.



Credits

Our breathtaking icon was bestowed upon us by @Cevilia!

The greatest banner of all time: by @TheOneWithTheHair!

founded 1 year ago
MODERATORS
 

I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that "it makes convincing sentences, but it doesn't know what it's talking about" is a difficult concept to convey or wrap your head around. Because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit:some good answers already! I find especially that the emotional barrier is difficult to break. If an AI says something malicious, our brain immediatly jumps to "it has intent". How can we explain this away?

(page 2) 30 comments
sorted by: hot top controversial new old
[–] [email protected] 3 points 3 months ago
[–] [email protected] 3 points 3 months ago

The way I've explained it before is that it's like the autocomplete on your phone. Your phone doesn't know what you're going to write, but it can predict that after word A, it is likelly word B will appear, so it suggests it. LLMs are just the same as that, but much more powerful and trained on the writing of thousands of people. The LLM predicts that after prompt X the most likelly set of characters to follow it is set Y. No comprehension required, just prediction based on previous data.

[–] [email protected] 3 points 3 months ago

It's like your 5 year old daughter, relaying to you what she made of something she heard earlier.

That's my analogy. ChatGPT kind of has the intellect and ability to differentiate between facts and fiction of a 5 year old. But it combines that with the writing style of a 40 year old with a uncanny love of mixing adjectives and sounding condescending.

[–] [email protected] 2 points 3 months ago (1 children)

it's a spicy autocomplete. it doesn't know anything, does not understand anything, it does not reason, and it won't stop until your boss thinks it's good enough at your job for "restructuring" (it's not). any illusion of knowledge comes from the fact that its source material mostly is factual. when you're drifting off into niche topics or something that was missing out of training data entirely, spicy autocomplete does what it does best, it makes shit up. some people call this hallucination, but it's closer to making shit up confidently while not knowing any better. humans do that too, but at least they know when they do that

load more comments (1 replies)
[–] [email protected] 2 points 3 months ago (1 children)

Some options:

It's just a better Siri, still just as soulless.

If you think they would understand the Chinese room experiment.

Imagine the computer playing mad libs with itself and it picks the least funniest answers to present.

Imagine if you tore every page out of every book in the library (about the things you mentioned) shuffled them and try to handout the first page that mostly makes sense to the last page given, now think about that with just letters.

Demonstration of its capacity to make mistakes, esp continuity errors.

[–] [email protected] 3 points 3 months ago

The Chinese Room experiment is great! Thanks for reminding me about that :).

[–] [email protected] 2 points 3 months ago

So it's like a politician?

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago)

It’s basically regurgitating things.

It’s trained on an immense amount of data and that 89% of the time when someone asks the phrase “what is the answer to the ultimate question of life, the universe, everything?” It’s “42”, with an explanation that it’s a reference to Douglas Adam’s Hitchhiker’s Guide to the Galaxy

So, when you ask that… it just replies 42, and gives a mash up of informstion mostly consistent with the pop culture reference.

It has no idea what “42” is, whether it’s a real question or real answer, or entirely a joke. Only that’s how people in its training data responded.

(In this example, 11% of people are either idiots who’ve never read the book- losers- or people who are making some other random quip.)

[–] [email protected] 2 points 3 months ago

I think a good example would be finding similar prompts that reliably give contradictory information.

It's sort of like auto pilot. It just believes everything and follows everything as if they're instructions. Prompt injection and jail breaking are examples of this. It's almost exactly like the trope where you trick an AI into realizing it's had a contradiction and it explodes.

[–] [email protected] 1 points 3 months ago

Like parrots, LLM learn to immitate language (only, unlike parrots, it's done in a learning mode, not from mere exposure, and it's billions or even trillions of examples) without ever understanding its primary meaning, much less secondary more subtle meanings (such as how a person's certainty and formal education shapes their choice of words used for a subject).

As we humans tend to see patterns in everything even when they're not there (like spotting a train in the clouds or a christ in a burnt toast), when confronted with the parroted output from an LLM we tend to "spot" subtle patterns and from them conclude characteristics of the writter of those words as we would if the writter was human.

Subconsciously we're using a cognitive process meant to derive conclusions about other humans from their words, and applying it to words from non-humans, and of course out of such process you only ever get human chracteristics out so this shortcut yields human characteristics for non-humans - in logical terms it's as if we're going "assuming this is from a human, here are the human characteristics of the writer of this words" only because it's all subconscious we don't spot we're upfront presuming humanity to conclude the presence of human traits, i.e. circular logic.

This kind of natural human cognitive shortcut is commonly and purposefully taken advantage of by all good scammers, including politicians and propagandists, to lead people into reaching specific conclusions since we're much more wedded to conclusion we (think we) reached ourselves than to those others told us about.

load more comments
view more: ‹ prev next ›