this post was submitted on 28 Jun 2024
934 points (98.7% liked)

Science Memes

Welcome to c/science_memes @ Mander.xyz!

[–] [email protected] 26 points 4 months ago* (last edited 4 months ago) (3 children)

This is something I already mentioned previously. LLMs have no way of fact-checking, no measure of truth or falsity built in. During training, they presumably accept every piece of text as true. This is very different from how our minds work. When faced with a piece of text, we have many ways to deal with it, ranging from accepting it as it is, to going on the internet to verify it, to actually designing and conducting experiments to prove or disprove the claim. So, yeah, what ChatGPT outputs is probably bullshit.

Of course, the solution would be to train ChatGPT on text labelled with some measure of truth. But LLMs need so much data that labelling it all would be extremely slow and expensive, and the fast-moving world of AI would suddenly screech almost to a halt, which would be unacceptable to the investors.
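[Editor's note: a toy illustration of the "accepts every piece of text as true" point above. This is not how ChatGPT actually works (a real LLM is a large neural network), but the principle is the same: a minimal bigram model trained on a mix of true and false statements predicts by frequency, with no notion of truth at all.]

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in the
# training text. Note the corpus mixes a false claim with a true one;
# the model ingests both with no way to tell them apart.
corpus = "the moon is made of cheese . the moon is made of rock .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Greedy prediction: whichever continuation was most frequent in
    # training wins. Frequency, not truth, decides the output.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # continues with "moon"
print(predict_next("is"))    # continues with "made"
```

Labelling every training sentence as true or false, as suggested above, would mean a human judgement per sentence, which is exactly the slow, expensive step the comment describes.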

[–] [email protected] 5 points 4 months ago

It's even more than just "accepting everything as true": the machines have no concept of truth. The machine doesn't think. It's a combination of three processes: a prediction algorithm for the next word, an algorithm that checks grammar and sentence structure, and at least one algorithm to police the other two for problematic statements.

Clearly the problem is with that last step, but the solution would require a human or a general intelligence, meaning the current models in use will never progress beyond this point.

[–] [email protected] 1 points 4 months ago (1 children)

This is very different from how our minds work.

Children's minds work similarly.

[–] [email protected] 5 points 4 months ago (1 children)

Why do you even think that? Children don’t ask questions? Don’t try to find answers?

[–] [email protected] 0 points 4 months ago (1 children)

Sure they do. But they also trust adults a lot. Children try to find answers only when they have stimuli other than humans telling them things; if that stimulus is missing, they will believe the adult. The environments that AI "grow up" in are different, but from a mental perspective they are very similar.

How many times have you heard the story of someone hearing something false from a family member and holding it close to their heart for years?

[–] [email protected] 1 points 4 months ago

Now that I think about it, children develop critical thinking at around the age of 10. Perhaps you are right. But the question remains: will LLMs develop such critical thinking on their own, or are we still missing something?

[–] [email protected] 1 points 4 months ago (1 children)

Your statement that LLMs have no way of fact-checking is not 100% correct, as developers have found ways to ground LLMs, e.g. by prepending context pulled from "real time" sources of truth (e.g. search engines). This data is then incorporated into the prompt as context. Obviously this is kind of cheating and not baked into the LLM itself, but it can be pretty accurate for a lot of use cases.
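[Editor's note: the grounding approach described above is usually called retrieval-augmented generation. A minimal sketch, with a naive keyword-overlap retriever standing in for a real search engine or vector index; all function names here are hypothetical.]

```python
def retrieve_context(query, knowledge_base):
    # Stand-in retriever: score each document by how many words it
    # shares with the query, and return the best match. A production
    # system would use a search engine or embedding similarity instead.
    scored = [
        (len(set(query.lower().split()) & set(doc.lower().split())), doc)
        for doc in knowledge_base
    ]
    return max(scored)[1]

def build_grounded_prompt(query, knowledge_base):
    # Prepend the retrieved "source of truth" so the model answers
    # from the supplied context rather than from its training data.
    context = retrieve_context(query, knowledge_base)
    return (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {query}"
    )

kb = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is 8849 metres tall.",
]
prompt = build_grounded_prompt("How tall is the Eiffel Tower?", kb)
print(prompt)
```

The final prompt, not the model's weights, carries the factual claim, which is why the comment above calls this "kind of cheating": the LLM itself is unchanged, and the accuracy of the answer now depends on the accuracy of the retrieved source.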

[–] [email protected] 2 points 4 months ago (1 children)

Is using authoritative sources foolproof? For example, is everything written on Wikipedia factually correct? I don't believe so, unless I actually check it. And what about Reddit or Stack Overflow? Can they be considered factually correct? To some extent, yes. But not completely. That is why most of these LLMs give such arbitrary answers. They extrapolate from information they have no way of knowing or understanding.

[–] [email protected] 0 points 4 months ago

I don’t quite understand what you mean by "extrapolate". LLMs have no model of what information or truth is. However, factual information can be passed into the context, the way Bing does it.