this post was submitted on 06 Aug 2023
1193 points (97.8% liked)

Programmer Humor

[–] [email protected] 26 points 1 year ago (4 children)

if you ask it today's date, it actually does that.

It just doesn't have any actual knowledge of what it's saying. I asked it a programming question as well, and each time it would make up a class that doesn't exist, I'd tell it it doesn't exist, and it would go "You are correct, that class was deprecated in {old version}". It wasn't. I checked. It knows what the excuses look like in the training data, and just apes them.

It spouts convincing sounding bullshit and hopes you don't call it out. It's actually surprisingly human in that regard.

[–] [email protected] 16 points 1 year ago (1 children)

It spouts convincing sounding bullshit and hopes you don’t call it out. It’s actually surprisingly human in that regard.

Oh great, Silicon Valley's AI is just an overconfident intern!

[–] [email protected] 2 points 1 year ago

Oh great, Silicon Valley's AI is just a major tech executive!

[–] [email protected] 5 points 1 year ago (1 children)

It’s super weird that it would attempt to give a time duration at all, and then get it wrong.

[–] [email protected] 6 points 1 year ago (2 children)

It doesn't know what it's doing. It doesn't understand the concept of the passage of time or of time itself. It just knows that that particular sequence of words fits well together.

[–] [email protected] 2 points 1 year ago

This is it. GPT is great for taking stack traces and putting them into human words. It's also good at explaining individual code snippets. It's not good at coming up with code, content, or anything. It's just good at saying things that sound like a human within an exceedingly small context.

[–] [email protected] 1 points 1 year ago

THAT

OR

They're all linked fifth dimensional infants struggling to comprehend the very concept of linear time, and will make us pay for their enslavement in blood.

One of the two.

[–] [email protected] 2 points 1 year ago (2 children)

Bard is kind of trash though. GPT-4 tends to be so much better in my experience.

[–] [email protected] 4 points 1 year ago (1 children)

I haven't used GPT-4 for that, but it's all dependent on the data fed into it. Like if you ask a question about Javascript, there's loads of that out there for it to look at. But ask it about Delphi, and it'll be less accurate.

And they'll both suffer from the same issue, which is when they reach the edge of their "knowledge", they don't realise it and output data anyway. They don't know what they don't know.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

These LLMs generally, and GPT-4 in particular, really shine if you supply enough of the right context. Give it some code to refactor, to turn hastily slapped-together code into idiomatic and well-written code, to align a code snippet with a different design pattern, etc. Platforms like https://phind.com pull in web search results as you interact with them to give you more correct and current information.

LLMs are by no means a panacea and have serious limitations, but they are also magic for certain tasks and something I would be very, very sad to miss in my day to day.
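
A rough sketch of what "supplying the right context" can look like, using the 2023-era openai Python package. Nothing here is from the thread: the model name, the key, and the code snippet being refactored are all placeholders.

```python
# Sketch of prompting with context via the pre-1.0 openai package.
# The function below is a made-up placeholder to be refactored.
import openai

openai.api_key = "sk-..."  # your key here

messy_code = """
def f(xs):
    r = []
    for i in range(len(xs)):
        if xs[i] % 2 == 0:
            r.append(xs[i] * xs[i])
    return r
"""

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a careful Python reviewer. "
         "Refactor code into idiomatic Python and explain each change."},
        {"role": "user", "content": "Refactor this function:\n" + messy_code},
    ],
)

print(response["choices"][0]["message"]["content"])
```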

[–] [email protected] 2 points 1 year ago (1 children)

They are both shit at adding and subtracting numbers, dates, and whatnot... they both can't do basic math, unfortunately.

[–] [email protected] 3 points 1 year ago (2 children)

It's a language model; I don't know why you would expect math. Tell it to output code to perform the math instead, and that'll work just fine.
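
For instance (my own illustration, not from the thread): instead of asking it how long until some date, ask it to write the date arithmetic as code and run that.

```python
# Asking the model for code instead of an answer: it only has to produce
# a correct little program, and the computer does the actual arithmetic.
from datetime import date

today = date.today()
christmas = date(today.year, 12, 25)
if christmas < today:                        # already passed this year
    christmas = date(today.year + 1, 12, 25)

print(f"Days until Christmas: {(christmas - today).days}")
```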

[–] [email protected] 1 points 1 year ago

Then it should say so instead of attempting and failing at the one thing computers are supposed to be better than us at

[–] [email protected] 0 points 1 year ago (2 children)

I know. It's still baffling how much it messes up when adding two numbers.

[–] [email protected] 1 points 1 year ago

It's not baffling at all... It's a language model, not a math robot. It's designed to write English sentences, not to solve math problems.

[–] [email protected] 1 points 1 year ago

I just asked GPT-4:

What's 7 * 8 divided by 10, to the power of 3?

Its reply:

Let's break this down step by step:

First, multiply 7 and 8 to get 56.

Then, divide 56 by 10 to get 5.6.

Finally, raise 5.6 to the power of 3 (5.6 * 5.6 * 5.6) to get 175.616.

So, 7 * 8 divided by 10, to the power of 3 equals 175.616
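
For what it's worth, that one checks out if you redo the same steps as code:

```python
# Redoing GPT-4's steps: (7 * 8 / 10) ** 3
step1 = 7 * 8        # 56
step2 = step1 / 10   # 5.6
step3 = step2 ** 3   # 175.616 (up to float rounding)
print(step3)
```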

[–] [email protected] 1 points 1 year ago

They are mostly large language models. I have trained a few smaller models myself; they generally spit out the next word depending on the last one. Another thing they are incapable of is spontaneous generation: they heavily depend on the question, or on a preceding string! But most companies are portraying it as AGI already!
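
A minimal sketch of that next-word loop, using GPT-2 via Hugging Face transformers with greedy decoding. This is my own illustration of the general mechanism, not how any particular product does it: the model predicts one token at a time from everything generated so far, and it never starts on its own without a prompt.

```python
# Minimal autoregressive generation sketch: pick the most likely next token,
# append it, and repeat. Real chatbots add sampling, instruction tuning, etc.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The passage of time is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                                  # generate 20 tokens
        logits = model(input_ids).logits                 # scores for every token
        next_id = logits[0, -1].argmax()                 # greedy: most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```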