this post was submitted on 05 Feb 2024
202 points (84.1% liked)
Asklemmy
Yes, and most likely more of a paradigm shift. Deep learning models are essentially static statistical models. The main issue here isn't the statistical side but the static nature: once trained, the parameters are frozen. For AGI that's a significant hurdle, because as the world evolves, or these models simply run into new circumstances, they fail.
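To make the "static" point concrete, here's a minimal sketch (a hypothetical toy example, not anything from the comment): a model fit once and then frozen keeps making the same predictions even after the "world" it was trained on has changed.

```python
# Toy illustration of a static model failing under distribution shift.

def fit_slope(xs, ys):
    """Least-squares slope through the origin: w = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def mean_abs_error(w, xs, ys):
    """Average absolute prediction error of the frozen model y_hat = w * x."""
    return sum(abs(w * x - y) for x, y in zip(xs, ys)) / len(xs)

# "Training world": y = 2x. The model learns w = 2 and is then frozen.
train_x = [1, 2, 3, 4, 5]
train_y = [2 * x for x in train_x]
w = fit_slope(train_x, train_y)

# The world shifts to y = 3x, but the frozen model cannot adapt.
shifted_y = [3 * x for x in train_x]

print(mean_abs_error(w, train_x, train_y))    # ~0.0 on the training world
print(mean_abs_error(w, train_x, shifted_y))  # error grows after the shift
```

Retraining would fix this particular toy, of course; the point is that the deployed model has no mechanism of its own for noticing or adapting to the change.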
It's largely the reason autonomous vehicles have sorta hit a standstill. It's the last 1% of cases (what if an intersection is out, what if the road is poorly maintained, etc.) that's so hard for these models, because those cases require "thought" and not just input/output.
LLMs have shown that large quantities of data seem to approach some sort of generalized knowledge, though researchers don't necessarily agree on that (https://arxiv.org/abs/2206.07682). So if we can't get more of these emergent abilities, it's unlikely AGI is on the way. But as you said, combining and interweaving these systems may get something close.