this post was submitted on 19 Nov 2024
732 points (98.0% liked)

People Twitter


People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a tweet or similar.
  4. No bullying or international politics.
  5. Be excellent to each other.

founded 1 year ago
[–] [email protected] 16 points 14 hours ago (14 children)

Because in a lot of applications you can bypass hallucinations.

  • getting sources for something
  • as a jumping-off point for a topic
  • to get a second opinion
  • to help argue for or against your position on a topic
  • to get information in a specific format

In all these applications you can bypass hallucinations, because either the task is non-factual, or it's verifiable while prompting, or you will be able to verify it in any of the subsequent tasks.
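The "specific format" case is the easiest one to check mechanically: if you ask for JSON, you can validate the shape of the reply before trusting it. A minimal sketch in Python, where the model reply is a made-up stand-in (not real API output), and `parse_llm_json` is a hypothetical helper name:

```python
import json

# Hypothetical LLM reply after asking for a country's data as JSON.
reply = '{"country": "France", "capital": "Paris", "population_millions": 68}'

def parse_llm_json(text, required_keys):
    """Parse a model reply as JSON and confirm the keys we asked for exist.

    Raises ValueError if the reply is malformed or incomplete.
    """
    data = json.loads(text)  # fails loudly if the reply isn't valid JSON
    missing = [k for k in required_keys if k not in data]
    if missing:
        raise ValueError(f"reply missing keys: {missing}")
    return data

record = parse_llm_json(reply, ["country", "capital", "population_millions"])
print(record["capital"])  # the *format* is now verified; the facts still need checking
```

Note this only verifies structure: whether the population figure is actually right is exactly the part you still have to check yourself.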

Just because it makes shit up sometimes doesn't mean it's useless. Like an idiot friend, you can still ask it for opinions or something and it will definitely start you off somewhere helpful.

[–] [email protected] 19 points 11 hours ago (1 children)

All LLMs are text completion engines, no matter what fancy bells they tack on.

If your task is some kind of text completion, or repetition of text provided in the prompt context, LLMs perform wonderfully.

For everything else, you are wading through territory you could probably cover more easily using other methods.

[–] [email protected] 1 points 2 hours ago

I love the people who are like "I tried to replace Wolfram Alpha with ChatGPT, why is none of the math right?" and blame ChatGPT, when the problem is that all they really needed was a fucking calculator.
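The underlying point: arithmetic is a deterministic task, and a deterministic tool gets it exactly right every time with no prompting. A trivial illustration (my own example, not anything from the thread), using Python's `decimal` module to avoid even binary-float rounding surprises:

```python
from decimal import Decimal

# A calculator-style tool: same input, same exact answer, every time.
total = Decimal("19.99") * 3 + Decimal("4.50")
print(total)  # 64.47
```

An LLM predicting tokens can get this wrong on any given run; `Decimal` cannot.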
