this post was submitted on 22 Jul 2023
166 points (85.8% liked)

Feel like we've got a lot of tech-savvy people here, so it seems like a good place to ask. Basically, as a dumb guy who reads the news, it seems like everyone who lost their mind (and savings) on crypto just pivoted to AI. On top of that, you've got all these people invested in AI companies running around with flashlights under their chins like "bro, this is so scary how good we made this thing." Seems like bullshit.

I've seen people generating bits of code with it, which seems useful, but idk man. Coming from CNC, I don't think I'd just send it with some ChatGPT code. Is it all hype? Is there something actually useful under there?

[–] [email protected] 4 points 1 year ago (1 children)

Admittedly this isn't my main area of expertise, but I have done some machine learning/training stuff myself, and the thing you quickly learn is that machine learning models are lazy, cheating bastards who will take any shortcut they can regardless of what you are trying to get them to do. They are forced to get good at what you train them on but that is all the "effort" they'll put in, and if there's something easy they can do to accomplish that task they'll find it and use it. (Or, to be more precise and less anthropomorphizing, simpler and easier approaches will tend to be more successful than complex and fragile ones, so those are the ones that will shake out as the winners as long as they're sufficient to get top scores at the task.)

There's a probably apocryphal (but stuff exactly like this definitely happens) story of early machine learning where the military was trying to train a model to recognize friendly tanks versus enemy tanks, and they were getting fantastic results. They'd train on pictures of the tanks, get really good numbers on the training set, and they were also getting great numbers on the images that they had kept out of the training set, pictures that the model had never seen before. When they went to deploy it, however, the results were crap, worse than garbage. It turns out, the images for all the friendly tanks were taken on an overcast day, and all the images of enemy tanks were in bright sunlight. The model hadn't learned anything about tanks at all, it had learned to identify the weather. That's way easier and it was enough to get high scores in the training, so that's what it settled on.
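
As a toy sketch of that failure mode (entirely invented data and feature names, nothing to do with the real tank photos), here's a tiny classifier that's given a weak "real" feature plus a spurious one that happens to track the label during training. It aces held-out data that still contains the shortcut, then drops to roughly chance once the shortcut is gone:

```python
# Toy demo of shortcut learning: the model latches onto a spurious feature
# ("the weather") instead of the weak real signal ("the tank").
# Synthetic data only; illustrative, not anyone's actual experiment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, confounded):
    # Feature 0: the "real" signal, only weakly related to the label.
    # Feature 1: the confound. When `confounded` is True it almost copies
    # the label; when False it's pure noise.
    y = rng.integers(0, 2, n)
    real = y + rng.normal(0, 2.0, n)
    confound = y + rng.normal(0, 0.1, n) if confounded else rng.normal(0, 1.0, n)
    return np.column_stack([real, confound]), y

X_train, y_train = make_data(2000, confounded=True)
X_heldout, y_heldout = make_data(500, confounded=True)   # shortcut still present
X_deploy, y_deploy = make_data(500, confounded=False)    # "deployment": shortcut gone

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy (shortcut present):", model.score(X_heldout, y_heldout))
print("deployment accuracy (shortcut gone): ", model.score(X_deploy, y_deploy))
# The first number looks fantastic; the second is barely above chance, because
# the model mostly learned the confound rather than the weak real signal.
```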

When humans approach the task of finishing a sentence, they read the words, turn them into abstract concepts in their minds, manipulate and react to those concepts, then put the resulting thoughts back into words that make sense after the previous words. There's no reason to think a computer is incapable of the same thing, but we aren't training them to do that. We're training them on "what's the next word going to be?" and that's it. You can do that by developing intelligence and learning to turn thoughts into words, but if you're just being graded on predicting one word at a time, you can get results that are nearly as good by just developing a mostly statistical model of likely words without any understanding of the underlying concepts. Training for true intelligence would almost certainly require a training process that the model can only succeed at by developing real thoughts and feelings and analytical skills, and we don't have anything like that yet.
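
To make "a mostly statistical model of likely words" concrete, here's a deliberately dumb sketch: a bigram model that only counts which word tends to follow which in a tiny invented corpus. It produces locally plausible strings with no concepts behind them at all; an LLM is vastly more capable, but the training signal is still that same "guess the next word" objective:

```python
# Minimal next-word predictor built from nothing but co-occurrence counts.
# Tiny invented corpus, purely illustrative.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Take the single most frequent follower; a real language model instead
    # samples from a probability distribution over its whole vocabulary.
    return follows[prev].most_common(1)[0][0]

word, sentence = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    sentence.append(word)

print(" ".join(sentence))
# -> "the cat sat on the cat sat": locally fluent, globally meaningless.
```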

It is going to be hard to know when that line gets crossed, but we're definitely not there yet. Text models, when put to the test with questions that require synthesizing abstract ideas together precisely, quickly fall short. They've got the gist of what's going on, in the same way a programmer can get some stuff done by just searching for everything and copy-pasting what they find, but that approach doesn't scale, and if they never learn what they're doing, they'll get found out when confronted with something that requires actual understanding. Or, for these models, they'll make something up that sounds right but definitely isn't, because even the basic understanding of "is this a real thing or is it fake?" is beyond them; they just "know" that those words are likely, and that's what got them through training.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

I agree with all your examples and experience. Anyone who knows machine learning would, I think. The controversial bit is here:

> Training for true intelligence would almost certainly require a training process that the model can only succeed at by developing real thoughts and feelings and analytical skills, and we don’t have anything like that yet.

Maybe, or maybe not. How do we know we ourselves aren't just very complicated statistical models? Different people will have different answers to that.

Personally, I'd venture that any human concept can be expressed with some finite string of natural language. At least to a philosophical pragmatist, being able to work flawlessly with any finite string of natural language should then be equivalent to perfectly understanding the concepts contained within. LLMs don't do that, but they're getting closer all the time.

Others take a different view of epistemology, one that requires more than just competence, or they dispute that natural language is as expressive as I claim. I'm just some rando, so maybe they have a point, but I do think it's not settled.

[–] [email protected] 1 points 1 year ago (1 children)

I would agree that we are also very complicated statistical models; there's nothing magical going on in the human brain either, just physics, which as far as we know is math that we could figure out eventually. It's a leap of many orders of magnitude in complexity from current machine learning models to human brains, but that's not to say the only way we'll get true artificial intelligence is by accurately simulating a human brain. I'd guess we'll have something that's unambiguously intelligent by any definition well before we're capable of that. It'll be a different approach from the human brain and may think and act in alien or unusual ways, but that can still count.

Where we are now, though, there's really no reason to expect true intelligence to emerge from what we're currently doing. It's a bit like training a mouse to navigate a maze and then wondering whether the mouse is now also capable of helping you navigate your cross-country road trip. "Well, you don't know how it's doing it, maybe it has acquired general navigation intelligence!" It can't be disproven, I guess, but there's no reason to think it picked up any of those skills, because it wasn't trained to do any of that. And although it's maybe a superintelligent mouse packing a ton of brainpower into a tiny little brain, all our experience with mice indicates that their brains aren't big enough for that, regardless of how much you train them. Once we've bred, uh, mice with brains the size of a football, maybe, but not these tiny little mice.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

So I was thinking that that's about all that needs to be discussed, but I do actually have one thing to add. It sounds like you are just fundamentally less impressed with language than I am. I wouldn't buy any hype about a maze-navigating neural net, but I do buy it (with space for doubt) about a natural language AI. The first time I played with a transformer (I think it was GPT-2), I literally thought, "this is 90% of the GAI problem solved, it just needs something for that last 10%." That might sound lame now, but it was just such a fundamental advance on what was around before.

Time will tell, I guess, whether it makes me a sucker like some consumers of past chatbots, or whether there is something fundamentally different this time.

[–] [email protected] 2 points 1 year ago

I hope I don't come across as too cynical about it :) It's pretty amazing, and the things these models can do in, what, a few gigabytes of weights and a beefy GPU are many, many times better than I would've expected if you had outlined the approach for me 2 years ago. But there's also a long history of GAI being just around the corner; we do keep turning corners and making useful progress, but it's always still a ways off after each leap. I remember some people thinking that chess was the pinnacle of human intelligence, requiring creativity and logic to succeed, and when computers blew past humans at chess, it became clear that no, that's still impressive, but you can get good at chess without really getting good at anything else.

It might be possible for an ML model to assemble itself into general intelligence based solely on being fed words like we're doing; it does seem like the data going in contains enough to do that. But getting that last 10% is going to be hard, each percentage point much harder than the last, and it's going to require more rigorous training to stop models from skating by with responses that merely come close when things get technical or precise. I'd expect that we need more breakthroughs in tools or techniques to close that gap.

It's also important to remember that as humans, we're inclined to read consciousness and intent into everything, which is why pretty much every pantheon of gods includes one for thunder and lightning. Chatbots sound human enough that they cross the threshold for people's brains to start gliding over inaccuracies or strange thinking or phrasing, and we also unconsciously help our conversation partner by clarifying or rephrasing things if the other side doesn't seem to be understanding. I suppose this is less true now that they're giving longer responses and remaining coherent, but especially early on, the human was doing more work than they realized keeping the conversation on the rails, and once you started seeing that, it removed a bit of the magic. Chatbots are holding their own better now, but I think they still get more benefit of the doubt than we realize we're giving them.