this post was submitted on 24 Jun 2024
technology
Here's a good & readable summary paper to pin your critiques on

[–] [email protected] 11 points 5 months ago (2 children)

it's just as much a statistical accident when the models correspond with reality as when they don't

[–] [email protected] 14 points 5 months ago* (last edited 5 months ago)

I would not necessarily say that is true, and the article summarizes a philosophically interesting reason why:

The basic architecture of these models reveals this: they are designed to come up with a likely continuation of a string of text. It’s reasonable to assume that one way of being a likely continuation of a text is by being true; if humans are roughly more accurate than chance, true sentences will be more likely than false ones.
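The quoted argument can be made concrete with a toy sketch (the corpus and numbers below are hypothetical, purely for illustration): if humans write the true continuation of a prefix more often than chance, then a model that simply emits the most likely continuation will tend to emit the true one.

```python
from collections import Counter

# Hypothetical corpus: humans state the true sum 70% of the time,
# a false one 30% of the time.
corpus = ["2+2=4"] * 70 + ["2+2=5"] * 30

def likely_continuation(prefix, corpus):
    """Return the most frequent continuation of `prefix` in the corpus."""
    continuations = Counter(
        s[len(prefix):] for s in corpus if s.startswith(prefix)
    )
    return continuations.most_common(1)[0][0]

# Because true sentences outnumber false ones, the "likely continuation"
# coincides with the true answer.
print(likely_continuation("2+2=", corpus))  # -> "4"
```

Nothing about the mechanism checks truth directly; the correspondence falls out of the statistics of the training text, which is exactly the article's point.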

[–] [email protected] 5 points 5 months ago (1 children)

Have you actually used ChatGPT? The vast majority of the time it spits out good-enough info. We use it at work frequently to write more tedious code. Ex: it's written approximately 7 trillion querySelector calls for me, and as long as I hand-hold it, it will do a good job.

The biggest problem is when it comes to anything involving human safety. You also have to know that you have to hand-hold it to get it to spit out something that's more or less exactly what you intended. But if you use it to draft a custom cover letter, it's probably gonna do a good enough job, and it's not like anyone is actually reading that shit. It's great at doing basic math that involves a lot of conversions for me. It sure as hell ain't the end-all be-all that every tech company seems to be pushing, but it's sure as hell not wrong 50% of the time.
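For illustration, here is the kind of conversion-heavy arithmetic the comment is describing, and how you might spot-check a model's answer by hand. The specific conversion (fuel economy from mpg to L/100 km) and numbers are hypothetical, not from the comment:

```python
# Standard conversion factors (exact by definition).
MILES_TO_KM = 1.609344
GALLONS_TO_LITERS = 3.785411784

def mpg_to_l_per_100km(mpg: float) -> float:
    """Convert fuel economy from miles per US gallon to liters per 100 km."""
    # First find how many km one liter gets you, then invert per 100 km.
    km_per_liter = mpg * MILES_TO_KM / GALLONS_TO_LITERS
    return 100.0 / km_per_liter

print(round(mpg_to_l_per_100km(30.0), 2))  # roughly 7.84
```

Chaining two or three factors like this is easy to fumble mentally, which is why verifiable, mechanical problems are a comfortable use case.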

[–] [email protected] 6 points 5 months ago* (last edited 5 months ago)

For me it is wrong more than 95% of the time. I stopped using it because it was just a waste of time. I am not doing particularly difficult or esoteric programming work, and it just could not hack it at all. Often the ways it was wrong were quite subtle. And it presents wrong answers with the exact same confidence as it presents right answers.