this post was submitted on 12 Oct 2023

Technology
you are viewing a single comment's thread
[–] [email protected] 5 points 1 year ago (7 children)

I wonder how many people here actually looked at the article. They're arguing that the ability to do things a system wasn't specifically trained on is a more natural marker of the transition from traditional algorithm to intelligence than human-level performance. Honestly, it's an interesting point: aliens wouldn't use human-level performance as a benchmark, so it must be subjective to us.

[–] [email protected] 4 points 1 year ago (6 children)

I guess the point I have an issue with here is "the ability to do things not specifically trained on". LLMs are still doing just that, and often incorrectly: they basically just try to guess the next words based on the huge dataset they were trained on. You can't actually teach one anything new, or, to put it better, it can't derive conclusions by itself and improve that way. It is not actually intelligent, it's just freakishly good at guessing.
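To make the "guessing the next words" point concrete, here's a deliberately tiny sketch of my own (not from the article): a bigram model that counts which word follows which in a corpus and then "guesses" the most frequent continuation. Real LLMs are vastly more sophisticated, but the core task is the same shape — predict the next token from seen statistics, with no ability to handle something genuinely outside the data.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, how often each other word follows it.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def guess_next(word):
    """Return the most frequently seen next word, or None if never seen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))  # "cat" — it follows "the" most often in the corpus
print(guess_next("dog"))  # None — never seen, so there is nothing to guess from
```

The `None` case is the crude analogue of the limitation above: the model has no mechanism for deriving an answer it has no statistics for, only for interpolating over what it has seen.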

[–] [email protected] 2 points 1 year ago

Heck, sometimes someone comes to me and asks if some system can solve something they just thought of. Sometimes, albeit very rarely, it just works perfectly, no code changes required.

Not going to argue that my code is artificial intelligence, but huge AI models obviously have higher odds of getting something random correct, simply because they encode so many correlations.
