
Feel like we've got a lot of tech-savvy people here, so this seems like a good place to ask. Basically, as a dumb guy who reads the news, it seems like everyone who lost their mind (and savings) on crypto just pivoted to AI. On top of that, you've got all these people invested in AI companies running around with flashlights under their chins like "bro, this is so scary how good we made this thing." Seems like bullshit.

I've seen people generating bits of code with it, which seems useful, but idk man. Coming from CNC, I wouldn't just send it with some ChatGPT code. Is it all hype? Is there something actually useful under there?

[–] [email protected] 28 points 1 year ago (4 children)

As a software engineer, I think it is beyond overhyped. I saw it used once in my day job before it was banned: it hallucinated a function in a library that didn't exist outside of feature requests and based its entire solution around it. It cannot replace programmers or creatives while producing consistently equal quality.

I think it's also extremely disingenuous for Large Language Models to be billed as "AI". They do not work like human cognition and are basically just plagiarism engines. They can assemble impressive stuff at a rapid speed but are incapable of completely novel "ideas" - everything that they output is built from a statistical model of existing data.
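
To make "statistical model of existing data" concrete, here's a toy bigram sketch (nothing like a production LLM's architecture, but the same basic principle of sampling from statistics observed in training text):

```python
import random
from collections import defaultdict

# Toy bigram "language model": generation is nothing but sampling
# from word-pair frequencies observed in the training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

word = "the"
output = [word]
for _ in range(6):
    choices = following[word]
    if not choices:  # dead end: this word was never seen with a successor
        break
    word = random.choice(choices)
    output.append(word)

print(" ".join(output))  # e.g. "the cat ate the mat the cat"
```

Everything this emits is recombined training data; it never produces a word it hasn't seen.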

If the hallucination problem could be solved in a local dataset, I could see LLMs as a great tool for interacting with databases and documentation (for a fictional example, see: VIs in Mass Effect). As it is now, however, I feel that it's little more than an impressive parlor trick - one with a lot of future potential that is being almost completely ignored in favor of bludgeoning labor, worsening the human experience, and increasing wealth inequality.
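
To sketch what I mean by a "local dataset": retrieve an answer from your own documents and refuse when nothing matches, rather than generating freely. This is a toy keyword retriever standing in for a real system; the documents and scoring are made up for illustration:

```python
# Minimal retrieval sketch: answer only from a local document set,
# and refuse when nothing relevant is found.
docs = {
    "backup_policy.txt": "Backups run nightly at 02:00 and are kept for 30 days.",
    "vpn_setup.txt": "Connect with the corporate VPN client before accessing internal tools.",
}

def retrieve(question: str) -> str | None:
    """Return the snippet sharing the most words with the question, if any."""
    q_words = set(question.lower().replace("?", " ").split())
    best, best_score = None, 0
    for text in docs.values():
        score = len(q_words & set(text.lower().split()))
        if score > best_score:
            best, best_score = text, score
    return best

snippet = retrieve("How long are backups kept?")
if snippet:
    print(f"From local docs: {snippet}")  # grounded in a real document
else:
    print("No relevant local document found.")  # refuse instead of hallucinating
```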

[–] [email protected] 5 points 1 year ago

Don't ask LLMs how to do something in PowerShell, because there's a good chance they'll tell you to use a module or function that just plain doesn't exist. That said, I used an outline ChatGPT created for a policy document and it did a pretty good job. And if you give it a CompSci 100-level task, it can usually output functional code faster than I can type.
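
One habit that catches this failure mode in any language: before running generated code, check that the module and function it names actually exist. A quick Python sketch (the module and function names are just examples):

```python
import importlib

def really_exists(module_name: str, func_name: str) -> bool:
    """Check that a function an LLM suggested actually exists."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return callable(getattr(module, func_name, None))

print(really_exists("json", "loads"))           # True: real function
print(really_exists("json", "load_magically"))  # False: plausible-sounding hallucination
```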

[–] [email protected] 4 points 1 year ago (1 children)

> They can assemble impressive stuff at a rapid speed but are incapable of completely novel "ideas" - everything that they output is built from a statistical model of existing data.

You just described basically 99.999% of humans as well. If you are arguing for general human intelligence, I'm on board. If you are trying to say humans are somehow different from AI, you have NFC what you're talking about.

[–] [email protected] 6 points 1 year ago

I think we're on a very similar page. I don't mean that human intelligence is in a different category than potential artificial intelligence, or that it is somehow impossible to approximate or achieve (we're just evolutionarily designed, replicating meat-computers). I mean that LLMs are not intelligent and do not comprehend their inputs or datasets; they statistically model them (an important and significant difference). It would make sense to me that they could play a role in the development of AI but, by themselves, they are no more AI than PCRE is a programming language.

[–] [email protected] 3 points 1 year ago

As a non-software engineer, it’s basically magic for programming. Can it handle your workload? Probably not based on your comment. I have, however, coaxed it to write several functional web applications and APIs. I’m sure you can do better, but it’s very empowering for someone that doesn’t have the same level of knowledge.

[–] [email protected] 0 points 1 year ago (1 children)

What you haven't realised yet is that... yes, it has every right to be called AI. They are doing the same thing we do: learning, and then creating thoughts based on those learnings.

I even asked them to make up words that are not related to any language, and they create them: entirely new, never-used words that are not even composites of others. These are creative machines. They might fail at answering some questions, but that is partly why we call it Artificial Intelligence. Nobody is saying it's a machine of truth, just a machine that "learns" and "knows". Sometimes correctly, sometimes wrongly. Just like us.

[–] [email protected] 3 points 1 year ago (1 children)

Incorrect. An LLM COULD be part of a system that implements AI but, by itself, it possesses no intelligence. Claiming otherwise is akin to claiming that the Pythagorean theorem is an AI because it "understands" geometry. Neither actually understands the data it is fed, but both are good at producing results that make it seem that way.
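
To put the analogy in code (a trivial sketch; the point is that correct output doesn't require understanding):

```python
import math

def hypotenuse(a: float, b: float) -> float:
    # Applies the Pythagorean theorem mechanically; nothing here
    # "understands" geometry, yet the answers are always correct.
    return math.sqrt(a**2 + b**2)

print(hypotenuse(3, 4))  # 5.0
```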

Human cognition does not work that way; it is much more complex and squishy. Association of current experiences with remembered experiences is only a fraction of what is going on in a brain related to cognition.

[–] [email protected] 1 points 1 year ago (1 children)

I'm not saying it works exactly like a human inside the black box. I'm just saying it works: it learns and then creates thoughts. And it works.

You talk about how human cognition is more complex and squishy, but nobody really knows how it truly works inside.

All I see is the same kind of black box: a kid trying many, many times to stand up, or to say "papa", until it somehow works, and now the pathway is set up in the brain.

Obviously ChatGPT is just dealing with text. But does that make it NOT intelligent? I think it makes it very text-intelligent. Just add together all the AI pieces we are building and you've got yourself a general AI that will do anything we do.

Yeah, maybe it doesn't work like our brain. But is the human brain's structure the only possible structure for intelligence? I don't think so.

[–] [email protected] 1 points 1 year ago (1 children)

If you consider the amount of text an LLM has to consume to replicate something approaching human-like language, you have to appreciate that there is something else going on with our cognition. LLMs give responses that make statistical sense, but humans can actually understand why one arrangement of words makes sense over another.

[–] [email protected] 2 points 1 year ago (1 children)

Yes, it's inefficient... and OpenAI and Google are losing exactly because of that.

There are open-source models already out there that rival ChatGPT and that you can fine-tune on a 10-year-old laptop in a day.

And this is just the beginning.

Also... maybe we should check how many words of exposure a kid gets throughout their life before they can develop arguments like ChatGPT's... because the thing is, ChatGPT knows way more about many things than any human being ever will. Easily thousands of times more.
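
A rough version of that check, with loose estimates (the daily exposure figure is a ballpark from child-language research, and the 300B-token figure is the commonly cited GPT-3 training size; both are assumptions here):

```python
# Back-of-envelope comparison: words a child hears vs. tokens an LLM trains on.
words_per_day = 15_000                       # rough estimate of speech heard by a child
human_exposure = words_per_day * 365 * 20    # ~110 million words by age 20

gpt3_tokens = 300_000_000_000                # ~300B training tokens (GPT-3 paper figure)

print(f"Human exposure by age 20: ~{human_exposure:,} words")
print(f"An LLM like GPT-3 saw roughly {gpt3_tokens // human_exposure:,}x more text")
```

That works out to a ratio in the low thousands, which cuts both ways: the model has seen vastly more text than any person, but a person needs vastly less of it to speak fluently.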

[–] [email protected] 1 points 1 year ago

> And this is just the beginning.

Absolutely agreed, so long as protections are put in place to defang it as a weapon against labor (if few people have the leisure time or income to support tech development, I see a great danger of stagnation). LLMs do clearly seem to be an important part of advancing towards real AI.