this post was submitted on 13 Oct 2024
198 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 61 points 3 weeks ago (12 children)

Did someone not know this like, pretty much from day one?

Not the idiot executives who blew all their budget on AI and made up for it with mass layoffs - the people actually interested in it. Was it not clear that there was no “reasoning” going on?

[–] [email protected] 37 points 3 weeks ago* (last edited 3 weeks ago) (7 children)

Well, two responses I have seen to the claim that LLMs are not reasoning are:

  1. we are all just stochastic parrots lmao
  2. maybe intelligence is an emergent ability that will show up eventually (disregard the inability to falsify this and the categorical nonsense that is our definition of "emergent").

So I think this research is useful as a response to these, although I think "fuck off, promptfondler" is pretty good too.

[–] [email protected] 21 points 3 weeks ago (1 children)

“Language is a virus from outer space”

[–] [email protected] 28 points 3 weeks ago (1 children)

there’s a lot of people (especially here, but not only here) who have had the insight to see that this is the case, but there’s also been a lot of boosters and promptfondlers (i.e. people with a vested interest) putting out claims that their precious word vomit machines are actually thinking

so while this may confirm a known doubt, rigorous scientific testing (and disproving) of the claims is nonetheless a good thing

[–] [email protected] 16 points 3 weeks ago (35 children)

A lot of people still don't, from what I can gather from some of the comments on "AI" topics. Especially the ones that skew the other way: the "AI" hysteria often comes from people who know fuck all about how the tech works. "Nudifier" apps, other generated images, or explicit chats with bots that portray real or underage people are the most common topics that attract emotionally loaded but highly uninformed demands and outrage. Frankly, the whole "AI" topic in the media is so massively overblown on both fronts, but I guess it is good for traffic and nuance is dead anyway.

[–] [email protected] 30 points 3 weeks ago (2 children)

We suspect this research is likely part of why Apple pulled out of the recent OpenAI funding round at the last minute. 

Perhaps the AI bros “think” by guessing the next word and hoping it’s convincing. They certainly argue like it.

🔥

[–] [email protected] 17 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

This has been said multiple times but I don't think it's possible to internalize because of how fucking bleak it is.

The VC/MBA class thinks all communication can be distilled into saying the precise string of words that triggers the stochastically desired response in the consumer. Conveying ideas or information is not the point. This is why ChatGPT seems like the holy grail to them: it effortlessly^1^ generates mountains of corporate slop that carry no actual meaning. It's all form and no substance, because those people -- their entire existence, the essence of their cursed dark souls -- have no substance.

^1^ batteries not included

[–] [email protected] 14 points 3 weeks ago

The only difference between the average VC and the average Sovereign Citizen is income.

[–] [email protected] 14 points 3 weeks ago

Oh what a sweet, sweet tune to end a Sunday to
