this post was submitted on 17 May 2024

Technology

[–] [email protected] 15 points 3 months ago (5 children)

Why do tech journalists keep using the businesses' language about AI, such as "hallucination", instead of glitching/bugging/breaking?

[–] [email protected] 43 points 3 months ago (2 children)

"Hallucination" refers to a specific bug (the AI confidently BSing), not to all bugs as a whole.

[–] [email protected] 17 points 3 months ago

Honestly, it's the most human you'll ever see it act.

It's got upper management written all over it.

[–] [email protected] -3 points 3 months ago* (last edited 3 months ago) (1 children)

(AI confidently BSing)

Isn't it more accurate to say it's outputting incorrect information from a poorly processed prompt/query?

[–] [email protected] 31 points 3 months ago (1 children)

No, because it's not poorly processing anything. It's not even really a bug. It's doing exactly what it's supposed to do: spit out words in the "shape" of an appropriate response to whatever was just said.
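That "shape over truth" point can be sketched with a toy example. This is a made-up bigram sampler, nowhere near how real LLMs are implemented (they use neural networks over subword tokens), and the word table and counts below are invented for illustration, but the objective is analogous: emit whatever continuation is statistically plausible, with nothing anywhere checking whether the output is true.

```python
import random

# Toy bigram "language model": it only knows which word tends to follow
# which, weighted by made-up counts. Nothing encodes facts.
BIGRAMS = {
    "the":     {"capital": 3, "moon": 1},
    "capital": {"of": 4},
    "of":      {"france": 2, "mars": 2},  # "mars" is plausible in shape, false in fact
    "france":  {"is": 3},
    "mars":    {"is": 3},
    "is":      {"paris": 2, "olympus": 1},
}

def generate(start, max_tokens=6, seed=0):
    """Emit words proportionally to their bigram counts; stop when the
    model has no continuation. Fluent-sounding, truth-blind."""
    rng = random.Random(seed)
    out = [start]
    word = start
    for _ in range(max_tokens):
        nxt = BIGRAMS.get(word)
        if not nxt:
            break
        words = list(nxt)
        weights = [nxt[w] for w in words]
        word = rng.choices(words, weights=weights, k=1)[0]
        out.append(word)
    return " ".join(out)
```

Depending on the sample, this happily produces "the capital of mars is olympus", a perfectly shaped, perfectly confident, perfectly false sentence — which is the behavior being named "hallucination".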

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago)

When I wrote "processing", I meant it in the sense of getting to that "shape" of an appropriate response you describe. If I'd meant this in a conscious sense I would have written, "poorly understood prompt/query", for what it's worth, but I see where you were coming from.

[–] [email protected] 36 points 3 months ago (2 children)

It's not a bug, it's a natural consequence of the methodology. A language model won't always be correct when it doesn't know what it is saying.

[–] [email protected] 9 points 3 months ago (1 children)

Yeah, on further thought, and as I mention in other replies, my view is shifting: the real bug here is how it's often marketed (as a digital assistant/research aid) and, in turn, how it gets used, or how people attempt to use it, per that marketing.

[–] [email protected] 8 points 3 months ago

I agree, it's a massive issue. It's a very complex topic that most people have no way of understanding. It is superb at generating text, and that makes it look smarter than it actually is, which is really dangerous. I think the creators of these models have a responsibility to communicate what these models can and can't do, but unfortunately that is not profitable.

[–] [email protected] 9 points 3 months ago (2 children)

it never knows what it's saying

[–] [email protected] 2 points 3 months ago

That was what I was trying to say, I can see that the wording is ambiguous.

[–] [email protected] -1 points 3 months ago

Oh, at some point it will lol

[–] [email protected] 20 points 3 months ago (1 children)

Because "hallucination" pretty much exactly describes what's happening? All of your suggested terms are less descriptive of what the issue is.

The definition of hallucination:

A hallucination is a perception in the absence of an external stimulus.

In the case of generative AI, it's generating output that doesn't match its training-data "stimulus". Or in other words, false statements, or "facts" that don't exist in reality.

[–] [email protected] 2 points 3 months ago (1 children)

perception

This is the problem I take with the term: there's no perception in this software. It's faulty, misapplied software when one tries to employ it to generate reliable, factual summaries and responses.

[–] [email protected] -1 points 3 months ago* (last edited 3 months ago)

I have adopted the philosophy that human brains might not be as special as we've thought, and that the untrained behavior emerging from LLMs and image generators is so similar to human behaviors that I can't help but think of it as an underdeveloped and handicapped mind.

I hypothesize that a human brain whose only perception of the world is the training data force-fed to it by a computer would have all the same problems the LLMs do right now.

To put it another way: the line that determines what is sentient and what isn't is getting blurrier and blurrier. LLMs surpassed the Turing test a few years ago. We're simulating the level of intelligence of a small animal today.

[–] [email protected] 20 points 3 months ago (1 children)

https://en.m.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

The term "hallucination" originally came from computer researchers working with image-producing AI systems. I think you might be hallucinating yourself 😉

[–] [email protected] 1 points 3 months ago

Fun part is, that article cites a paper mentioning misgivings with the terminology: AI Hallucinations: A Misnomer Worth Clarifying. So at the very least I'm not alone on this.

[–] [email protected] 0 points 3 months ago (1 children)

Ty. As soon as I saw the headline, I knew I wouldn't be finding value in the article.

[–] [email protected] 3 points 3 months ago

It's not a bad article, honestly, I'm just tired of journalists and academics echoing the language of businesses and their marketing. "Hallucination" isn't accurate for this form of AI. These are sophisticated generative text tools, and in my opinion they lack any qualities that justify all this fluffy terminology personifying them.

Also, frankly, I think students have found one of the better applications for large language model AIs, better than many adults, even those trying to deploy them. Students use them to do their homework and generate their papers, which is exactly one of the basic points of these tools. Too many adults act as if these tools, in their present form, should be used as research aids, but their entirely generative basis undermines their reliability for that. It's trying to use the wrong tool for the job.

You don't want any of the generative capacities of a large language model AI for research help; what you'd actually want is whatever text processing it can do to assemble and present accurate output.