this post was submitted on 17 Sep 2023

Futurology

[–] [email protected] 2 points 1 year ago (1 children)

I like your definition of these AI tools. It feels broad enough to cover all of the recent accomplishments so many are praising.

Many people aren't able to recognize that the software is just a tool, and even less so as it becomes more autonomous.

[–] [email protected] 1 points 1 year ago (1 children)

I think what gets lost in translation with LLMs (and machine vision and similar ML tech) is that it isn't magic and it isn't emergent behavior. It isn't truly intelligent.

LLMs do a good job of tricking us into thinking they are more than they are. They generate a seemingly appropriate response to input based on training, but it's nothing more than a statistical model of the most likely chain of words in response to another chain of words, built from questions and "good" human responses.

There is no understanding behind it. No higher cognitive process. Just "what words go next based on Q&A training data." Which is why we get well-written answers that are often total bullshit.
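To make the "what words go next" idea concrete, here's a minimal sketch of the same principle at toy scale: a bigram model that picks the statistically most likely next word from a tiny made-up corpus. (Real LLMs use neural networks over subword tokens, not frequency tables; the corpus and function names here are purely illustrative.)

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "training data" (purely illustrative).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram frequency table.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Return the most frequent word seen after `prev` in the corpus."""
    return follows[prev].most_common(1)[0][0]

print(next_word("the"))  # -> "cat", since "the cat" appears most often
```

The model has no idea what a cat is; it only knows that "cat" tends to follow "the" in its data. Scale that up by many orders of magnitude and you get fluent text without any guarantee of truth.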

Even so, the tech could easily upend many writing careers.

[–] [email protected] 2 points 1 year ago

I've had the GPT-3.5 model give me a made-up source for research. Either that, or it told me the source material was related to what I was researching when it wasn't. Regardless, it was one of those BS moments; it's called a hallucination, I think.