It's not bullshit. It routinely does stuff we thought might not happen this century. The trick is we don't understand how. At all. We know enough to build it, and from there it's all a magical black box. For this reason it's hard to be certain whether it will keep getting better, although there's no reason it couldn't.
That goes back to the "not knowing how it works" thing. ChatGPT predicts the next token, and has learned other things in order to do that better. There's no obvious way to force it to care whether its output is right or just right-looking, though. Until we solve that problem somehow, it's more of an assistant for someone who can read and understand what it puts out. Kind of like a calculator, but for language.
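For anyone curious what "predicts the next token" means concretely, here's a minimal sketch using GPT-2 via Hugging Face transformers (the model choice is just for illustration, not what ChatGPT actually runs):

```python
# Greedy next-token generation: score every possible next token,
# append the most likely one, repeat. Truth never enters into it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

for _ in range(10):                      # generate 10 tokens
    logits = model(ids).logits           # scores for every candidate next token
    next_id = logits[0, -1].argmax()     # pick the single most likely one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
# The loop only ever asks "what text is likely here?",
# never "is this continuation actually true?"
```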
Honestly, crypto wasn't totally bullshit either. It was a marginally useful idea that turned into a Beanie-Babies-like craze. If you want to buy or sell illegal stuff (which could be bad, or could be something like forbidden information about democracy), it's still king.
Putting some kind of expert system in front of an LLM seems to be working pretty well. Basically, modeling how a human agent would interact with it.
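Rough sketch of what I mean, with everything (function names, the rules, the stubbed model call) made up purely for illustration:

```python
# Hypothetical rule-based (expert-system-style) layer in front of an LLM.
# `ask_llm` stands in for whatever model API you'd actually use.
import re

def ask_llm(prompt: str) -> str:
    # Placeholder for a real model call (hosted API, local model, etc.).
    return "stubbed model output for: " + prompt

RULES = [
    # (pattern, canned_answer) pairs the classical layer handles itself.
    (re.compile(r"\b(what is|what's) 2\s*\+\s*2\b", re.I), "4"),
]

def answer(question: str) -> str:
    # 1. Let hand-written rules short-circuit easy or sensitive cases.
    for pattern, canned in RULES:
        if pattern.search(question):
            return canned
    # 2. Otherwise hand the question to the LLM...
    draft = ask_llm(question)
    # 3. ...and run the output through simple checks before returning it.
    if not draft.strip():
        return "Sorry, no answer."
    return draft
```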
We'll see how that goes, I guess. I'm not involved enough to comment.
I'm guessing the expert system would be a classical algorithm?