this post was submitted on 29 Jul 2023
@Barbarian772 it was shown over and over and over again that ChatGPT lacks the capacity for abstraction, logic, understanding, self-awareness, reasoning, planning, critical thinking, and problem-solving.
That's partially because it does not have a model of the world, an ontology, it cannot *reason*. It just regurgitates text, probabilistically.
So, glad we established that!
@Barbarian772 also, I never demanded a definition of intelligence that explicitly excluded "AI". I asked for one that excluded simple calculators but included human beings. The Wikipedia one is good enough for this conversation, and it just so happens that neither ChatGPT nor any other LLM meets it.
"Intelligence, at its core, involves the ability to model the world in order to predict and respond effectively to future events."
@lloram239 great. ChatGPT and other LLMs demonstrably lack the ability to model the world and make predictions based on such models:
https://www.fastcompany.com/90877523/chatgpt-doesnt-know-what-its-saying
Glad we agree they're not intelligent, then!
The whole argument of the article is just stupid. So ChatGPT ain't intelligent because it can't see pictures, doesn't have hands, and doesn't have a body? By that logic blind, paralyzed, or amputee humans aren't intelligent either? The thing the article fails to realize is that those are all just sensory inputs. The more sensory inputs the AI gets, the more cross-correlations between them it can figure out. Of course ChatGPT won't be able to do anything clever with sensory inputs it doesn't have, just like a human trying to listen to radio waves with their ears. But human sensory inputs aren't special; they are just what evolution figured out was "good enough" for survival. The important part is that the AI can figure out the patterns in the data it does get, and so far AI systems are doing very well at that.
@lloram239
> But human sensory inputs aren’t special
It's not about sensory inputs, it's about having a model of the world and objects in it and ability to make predictions.
> The important part is that the AI can figure out the pattern in the data it does get and so far AI systems are doing very well.
GPT cannot "figure" anything out. That's the point. It only probabilistically generates text. That's what it does, there is no model of the world behind it, no predictions, no "figuring out".
And how do you think that model gets built? From processing sensory inputs. And yes, language models do build internal models of the world from that.
That nonsense of a claim doesn't get any truer with repetition. Seriously, it's profoundly idiotic given everything ChatGPT can do.
So what? In what way does that limit its ability to reason about the world? Predictions about the world are probabilistic by nature, since the future hasn't happened yet.
@lloram239 ah, so you're down to throwing epithets like "idiotic" around. Clearly a mark of thoughtful and well-reasoned argument.
> Predictions about the world are probabilistic by nature, since the future hasn’t happened yet.
Thing is: GPT doesn't make predictions about the world, it makes predictions about what the next word, phrase, or sentence should be in a text, based on the prompt and the corpus it got "trained" on.
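For what it's worth, the next-word mechanism being argued about can be sketched in a few lines. This is a toy: the vocabulary and probabilities below are invented for illustration; in a real LLM they come from a softmax over the model's output logits for the current context.

```python
import random

# Invented next-token distribution for the context "The cat sat on the".
# In a real model these probabilities are computed, not hard-coded.
next_token_probs = {
    "mat": 0.55,
    "rug": 0.25,
    "roof": 0.15,
    "moon": 0.05,
}

def sample_next_token(probs: dict[str, float], rng: random.Random) -> str:
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
prompt = "The cat sat on the"
print(prompt, sample_next_token(next_token_probs, rng))
```

Both sides of the thread agree this loop is what literally executes; the disagreement is over whether a good next-token distribution implies an internal model of the world.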
I am calling it idiotic because spending just a minute with ChatGPT proves it wrong. Just like the claim that GPT doesn't make predictions about the world:
It's obviously capable of making predictions about the world, frequently giving very detailed and correct answers, which requires a deep understanding of the world. And yes, that ability to predict and understand the world is limited by how much of the world it can perceive through words alone, but that is no different from our ability to understand the world being limited by our perception. Also, as it turns out, there is a surprising amount of stuff you can learn about the world from text alone. There are surprisingly few topics expressible in language that GPT doesn't have an answer to (math calculations being one example, due to the digits getting lost in the tokenization step).
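The tokenization aside is easy to illustrate. BPE-style tokenizers split text into learned subword chunks, so a number like 12345 often reaches the model as chunks rather than individual digits. The greedy longest-match tokenizer and vocabulary below are invented for the sketch; real tokenizers learn their vocabularies from data.

```python
# Toy greedy longest-match tokenizer over an invented subword vocabulary.
VOCAB = {"The", " answer", " is", " 123", "45", " 1", "2", "3", "4", "5"}

def tokenize(text: str) -> list[str]:
    tokens = []
    i = 0
    while i < len(text):
        # Greedily take the longest vocabulary entry matching at position i.
        match = None
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                match = text[i:j]
                break
        if match is None:  # fall back to a single character
            match = text[i]
        tokens.append(match)
        i += len(match)
    return tokens

print(tokenize("The answer is 12345"))
# → ['The', ' answer', ' is', ' 123', '45']
```

The number arrives as " 123" and "45", not five digits, which is one reason arithmetic is a weak spot.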
If you wanna make arguments that GPT isn't intelligent, you have to come up with something better than the same old tired phrases that are trivially debunked by just using it for a minute.
@lloram239 that's really akin to claiming that a mannequin is a human being because it really, really looks like one.
The "predictions about the world" you refer to here are instead predictions about the text. They are not based on a model of the world, they are based on loads and loads of text the model was trained on.
I don't have to prove ChatGPT is not intelligent. That would be proving a negative. The burden of proof is on those claiming that it is intelligent.
For the job of presenting clothes in a shop, it's close enough. The problem domain matters. You can't expect a model that was never trained on a thing to perform well on that thing. Blind people aren't good at drawing pictures either; that doesn't mean they aren't intelligent.
Text that describes the world. What do you think the electrical signals zapping around your brain are? Cats and dogs? The "world" is not what intelligence operates on. Your brain gets sensory information and that's it (see any of Donald Hoffman's talks), just like ChatGPT gets text. All the "intelligence" does is figure out patterns in that data and predict what might come next. More diverse data from different senses of course helps. But as a little bit of playing around with ChatGPT easily shows, quite a lot of our understanding actually does survive being mapped into the domain of language and text.