this post was submitted on 22 Apr 2024
21 points (76.9% liked)

Asklemmy


A "natural language query" search engine is what I need sometimes.

Edit: directly reachable with the !ai bang

[–] [email protected] -1 points 4 months ago* (last edited 4 months ago) (1 children)

Every current LLM is built this way, so it is a hard and fast rule.

No, that is a trend, not a rule, and I would argue the former doesn't even hold 100% of the time. In my experience, Claude seems designed to be more conversational and factual, not strictly entertaining.

And right now it's incredibly foolish to believe what an LLM tells you. They lie, like a lot.

I never said you should believe everything an LLM says. Of course a critical mind is important, but one can't just assume every answer is wrong simply because it came from an LLM, either. Especially at this stage of LLM development, while the technology is still maturing, still in its infancy.

I’m only talking about current iterations. No one here knows what the next iterations will be so we can’t comment on it.

Generally, the more a technology matures out of its infancy, the better it becomes at the job it's designed for. If an AI is designed to be entertaining, then yes, it will get better at that over time; but likewise if it's designed for factual accuracy. And I've already said what I think about the current state of development in that regard.

Therefore, I think it's a reasonable assumption that as time goes on, the frequency of hallucinations will go down. We're still working out the kinks, as it is.

[–] [email protected] 2 points 4 months ago* (last edited 4 months ago) (1 children)

Rule or trend, whichever word you use, it's semantics at this point. And your experience is irrelevant to the facts of how all current LLMs are built. They are all built the same way. We have proof they are all built the same way.

If you talk to someone and you know they lie to you 10% of the time, would you ever take anything they say at face value?

We can sit down and speculate all day about what could be but that has no bearing on what is which is the entire point of this discussion.

[–] [email protected] -1 points 4 months ago* (last edited 4 months ago) (1 children)

Rule or trend, whatever word you use is semantics at this point.

Hardly. There is a very clear distinction between a rule and a trend.

And your experience is irrelevant to the facts of how all current LLMs are built. They are all built the same way. We have proof they are all built the same way.

They are not all built the same, though. Claude, for instance, is built with a framework of values called "Constitutional AI". It's not perfect, as the developers even state, but it is a genuine step in the right direction compared to many of its contemporaries in the AI space.
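For context on what that framework means in practice: Constitutional AI has the model critique and revise its own drafts against a written list of principles during training. Here is a toy sketch of that critique-then-revise loop. Every name, principle, and check below is a hypothetical illustration, not Anthropic's actual code; the real system uses an LLM as its own critic rather than keyword rules.

```python
# Toy sketch of the "critique and revise" loop behind Constitutional AI.
# Hypothetical illustration only: the real method runs during training,
# with an LLM judging drafts against natural-language principles.

# A stand-in "constitution": (principle name, check function) pairs.
PRINCIPLES = [
    ("avoid insults", lambda text: "idiot" not in text.lower()),
]

def critique(draft: str) -> list[str]:
    """Return the names of the principles the draft violates."""
    return [name for name, check in PRINCIPLES if not check(draft)]

def revise(draft: str, violations: list[str]) -> str:
    """Stand-in for an LLM rewriting its own draft; here we just redact."""
    if "avoid insults" in violations:
        draft = draft.replace("idiot", "person")
    return draft

def constitutional_pass(draft: str) -> str:
    """One critique-then-revise step; returns the draft unchanged if clean."""
    violations = critique(draft)
    return revise(draft, violations) if violations else draft

print(constitutional_pass("You idiot, the answer is 4."))
```

The design point is that the model's outputs are shaped by an explicit, inspectable list of values rather than solely by what raters found engaging, which is why it's a meaningful architectural difference rather than mere branding.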

If you talk to someone and you know they lie to you 10% of the time, would you ever take anything they say at face value?

Humans are not tools that can be improved upon; they are sentient beings with conscious choice. LLMs are the former, not the latter.

They are not the 1:1 comparison you claim.

[–] [email protected] 1 points 4 months ago (1 children)

You are wrong and tiresome. Goodbye.

[–] [email protected] 1 points 4 months ago* (last edited 4 months ago)

And yet I've provided sources for each and every one of my assertions, while you have not.

Good day.