this post was submitted on 02 Oct 2024
335 points (91.6% liked)


Built on unearned hype.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

> LLMs that can spot tumors better than humans can

Are they, though? LLMs specifically? Tumor detection is classically a computer-vision/image-classification task, so that seems like a very strange use case for an LLM.

But yeah, we're mostly in agreement. I wanted to riff a bit because, as a long-time tech worker, I actually do have some bones to pick with the tech itself. The inexactitude of its output, and the "let the prompter beware" approach to dealing with its obvious inadequacies, pisses me off. It seems like the perfect product for the state software is in generally right now: "test in production," "MVP (minimum viable product)," "pre-order the incomplete version." The marketing and finance assholes are nearly fully running the show at this point, and it's evident.

I think the usefulness of this particular technology (LLMs) is very overblown, and I found its very early uses more harmful than helpful (e.g. autocorrect/autocomplete is wrong for me more often than it is right). It has decent applicability in some areas (machine translation, for instance, is pretty good), but the marketing department got hold of it, and so now everything is AI this and AI that.

I think it's basically just another over-hyped technology that will eventually shake out to be used only where it's useful enough to justify its cost. If the company ever has to show a profit, it will either go the surveillance-capitalism ad route or have to charge more per query than the gibberish it generates is really worth. I don't see most people paying for ChatGPT long-term, so they'll probably have to enshittify further beyond their current (already kind of shitty) state.