submitted 6 months ago by [email protected] to c/[email protected]

or is it just bean counters optimizing enshittification and monetization of a previously free product? oh, it's certainly the former bazinga

Unproven hypothesis seeks to explain ChatGPT's seemingly new reluctance to do hard work.

In late November, some ChatGPT users began to notice that ChatGPT-4 was becoming more "lazy," reportedly refusing to do some tasks or returning simplified results. Since then, OpenAI has admitted that it's an issue, but the company isn't sure why. The answer may be what some are calling the "winter break hypothesis." While unproven, the fact that AI researchers are taking it seriously shows how weird the world of AI language models has become.

On Monday, a developer named Rob Lynch announced on X that he had tested GPT-4 Turbo through the API over the weekend and found shorter completions when the model is fed a December date (4,086 characters) than when fed a May date (4,298 characters). Lynch claimed the results were statistically significant.
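Lynch's reported setup can be sketched roughly as follows. This is not his published code: the helper name, prompt wording, model string, and the use of Welch's t-test are all assumptions made for illustration.

```python
from statistics import mean, variance


def welch_t(a, b):
    """Welch's t statistic for two independent samples of lengths."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / (variance(a) / na + variance(b) / nb) ** 0.5


def completion_length(client, date_str, task):
    # Hypothetical helper (needs an OpenAI API key, so it is not run here);
    # uses the openai>=1.0 chat-completions client style.
    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",  # a GPT-4 Turbo preview model of that era
        messages=[
            {"role": "system", "content": f"The current date is {date_str}."},
            {"role": "user", "content": task},
        ],
    )
    return len(resp.choices[0].message.content)


# Given two lists of recorded character counts (may_lengths, dec_lengths),
# a large positive welch_t(may_lengths, dec_lengths) would support the
# claim that December-dated completions run shorter.
```

The statistic is only a sketch of "statistically significant"; Lynch did not say which test he used.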

top 5 comments
[-] [email protected] 21 points 6 months ago

AI researchers are taking it seriously

Half these guys are religious fruitcakes worshipping the mean scary future computer and the other half are pulling their hair out trying to get their colleagues to stop deifying the random number generator.

[-] [email protected] 10 points 6 months ago* (last edited 6 months ago)

found shorter completions when the model is fed a December date (4,086 characters) than when fed a May date (4,298 characters).

Duh, the longer you let it run the more data it has. Why wouldn’t the newer version be better? /s

[-] [email protected] 10 points 6 months ago

Me, shaking, terrified: COMPUTER! I COMMAND YOU!! DO AS I SAY!!

The demon in the box: bugs-no

[-] [email protected] 9 points 6 months ago* (last edited 6 months ago)

Large language model AIs are so volatile and unreliable that a previous random update made one unlearn simple math, the one thing computers are supposed to be good at.

[-] [email protected] 7 points 6 months ago

It's almost like if you feed an algorithm garbage data in, it gets garbage data out, but that couldn't be it, no way, they're techbro geniuses, far too smart to make that mistake!

this post was submitted on 12 Dec 2023
19 points (100.0% liked)

the_dunk_tank


It's the dunk tank.

This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.

Rule 1: All posts must include links to the subject matter, and no identifying information should be redacted.

Rule 2: If your source is a reactionary website, please use archive.is instead of linking directly.

Rule 3: No sectarianism.

Rule 4: TERF/SWERFs Not Welcome

Rule 5: No ableism of any kind (that includes stuff like libt*rd)

Rule 6: Do not post fellow hexbears.

Rule 7: Do not individually target other instances' admins or moderators.

Rule 8: The subject of a post cannot be low-hanging fruit, that is, comments/posts made by a private person that have a low number of upvotes/likes/views. Comments/posts made on other instances that are accessible from hexbear are an exception to this. Posts that do not meet this requirement can be posted to [email protected]

Rule 9: if you post ironic rage bait im going to make a personal visit to your house to make sure you never make this mistake again

founded 3 years ago