rysiek

joined 2 years ago
[–] [email protected] 0 points 1 year ago (5 children)

@lloram239

> But human sensory inputs aren’t special

It's not about sensory inputs, it's about having a model of the world and objects in it and ability to make predictions.

> The important part is that the AI can figure out the pattern in the data it does get and so far AI systems are doing very well.

GPT cannot "figure" anything out. That's the point. It only probabilistically generates text. That's what it does: there is no model of the world behind it, no predictions, no "figuring out".
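To make that concrete, here's a toy sketch of what "probabilistically generating text" means. This is a made-up bigram example, not how GPT is actually implemented (GPT uses a transformer with billions of parameters), but the core loop is the same: estimate P(next token | previous tokens) from training data and sample from that distribution, one token at a time.

```python
import random

# Toy training corpus. A real LLM trains on billions of tokens,
# but the principle is identical: count/learn what tends to follow what.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat ate the fish",
]

# Count bigram transitions: word -> {next_word: count}.
transitions = {}
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        transitions.setdefault(prev, {}).setdefault(nxt, 0)
        transitions[prev][nxt] += 1

def next_word(prev):
    """Sample the next word proportionally to observed counts."""
    options = transitions[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate text one token at a time. Note there is no model of cats,
# mats, or the world anywhere in here -- only word co-occurrence statistics.
word = "the"
output = [word]
for _ in range(5):
    if word not in transitions:
        break
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The output looks superficially sentence-like precisely because it mimics the statistics of the corpus, which is the point being made above about fluency without understanding.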

[–] [email protected] 0 points 1 year ago (1 children)

@jalda

> Circular reasoning. “LLMs are different from human brains because they are different”.

LLMs are different from human brains because human brains are biological organs and LLMs are probability distributions over sequences of words. These are two completely different classes of entities. Like, I don't know how much more different two things *can* even be.

Are you claiming they are literally the same? Are you saying they are functionally the same? What *are* you claiming here, exactly?

[–] [email protected] 0 points 1 year ago (7 children)

@lloram239 great. ChatGPT and other LLMs demonstrably lack the ability to model the world and make predictions based on such models:
https://www.fastcompany.com/90877523/chatgpt-doesnt-know-what-its-saying

Glad we agree they're not intelligent, then!

[–] [email protected] 0 points 1 year ago (3 children)

@jalda

> We do it routinely. It is called Education System.

That relies on human brains being trained. LLMs are not human brains. "Training" them is not the same thing as teaching humans about something. Human brains are way more complicated than just a bunch of weighted correlations.

And if you do want to claim it is in fact the same thing, we're back to square one: please provide proof that it is.

[–] [email protected] 0 points 1 year ago (6 children)

@Barbarian772 it matters because with regard to intelligent beings we have moral obligations, for example.

It also matters because that would be a truly amazing, world-changing thing if we could create intelligence out of thin air, some statistics, and a lot of data.

It's an extremely strong claim, and strong claims demand strong proof. Otherwise they are just hype and hand-waving, which is all the "ChatGPT intelligence" discourse amounts to, in order to "maximize shareholder value".

[–] [email protected] 0 points 1 year ago (8 children)

@Barbarian772 as I said, I don't have to. You are making a claim of equivalence here. The burden of proof is on you.

Otherwise, I get to claim you're an alien from the Betelgeuse system, and if you object, I get to demand you prove you are not.

[–] [email protected] 0 points 1 year ago (9 children)

@Barbarian772 also, I never demanded a definition of intelligence that explicitly excluded "AI". I asked for one that excluded simple calculators but included human beings. The Wikipedia one is good enough for this conversation, and it just so happens that neither ChatGPT nor any other LLM meets it.

[–] [email protected] 0 points 1 year ago (20 children)

@Barbarian772 it was shown over and over and over again that ChatGPT lacks the capacity for abstraction, logic, understanding, self-awareness, reasoning, planning, critical thinking, and problem-solving.

That's partially because it does not have a model of the world, an ontology, it cannot *reason*. It just regurgitates text, probabilistically.

So, glad we established that!

[–] [email protected] 0 points 1 year ago (22 children)

@Barbarian772 no, GPT is not more "intelligent" than any human being, just like a calculator is not more "intelligent" than any human being, even if it can perform certain specific operations faster.

Since you used the term "intelligent", though, I would ask for your definition of it: ideally one that excludes calculators but includes human beings. Without such a clear definition, this is, again, just hand-waving.

I wrote about it in a bit longer form:
https://rys.io/en/165.html

[–] [email protected] 1 points 1 year ago (24 children)

@Barbarian772 I don't have to. It's the ChatGPT people making extremely strong claims about the equivalence of ChatGPT and human intelligence. I merely demand proof of that equivalence, which they are unable to provide, instead using rhetoric, parlor tricks, and a lot of hand-waving to divert and distract from that fact.

[–] [email protected] 0 points 1 year ago (26 children)

@Barbarian772 so? If the cookie tastes sweet, what do I care what sweetening agent is used inside?

@BobKerman3999
