this post was submitted on 02 Feb 2024
58 points (100.0% liked)

Technology

From the (middle of the) story: The reason CES was so packed with random “AI”-branded products was that sticking those two letters to a new company is seen as something of a talisman, a ritual to bring back the (VC) rainy season.

[–] [email protected] 3 points 9 months ago (1 children)

On the other hand, we’d been trying to do anything useful with natural language since the '50s and had thoroughly failed.

That's really not true. For instance, machine translation and spam detection (document classification) were getting really good by the late 2000s. Image recognition was great beginning the late 2010s.
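(For context: the spam filters of that era were typically plain statistical classifiers, such as naive Bayes over word counts, rather than deep networks. A toy sketch of the idea, with made-up training data:)

```python
from collections import Counter
import math

# Toy naive Bayes spam filter (training data invented for illustration).
spam = ["win money now", "free money offer", "claim your free prize"]
ham = ["meeting at noon", "lunch tomorrow", "project status update"]

def word_counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def class_score(text, counts, n_docs):
    # log P(class) + sum of Laplace-smoothed log P(word | class)
    total = sum(counts.values())
    score = math.log(n_docs / (len(spam) + len(ham)))
    for w in text.split():
        score += math.log((counts[w] + 1) / (total + len(vocab)))
    return score

def classify(text):
    s = class_score(text, spam_counts, len(spam))
    h = class_score(text, ham_counts, len(ham))
    return "spam" if s > h else "ham"

print(classify("free money"))      # spammy words dominate
print(classify("status meeting"))  # hammy words dominate
```

Laplace smoothing (the `+ 1`) just keeps unseen words from zeroing out a class; that's basically the whole trick, and it worked well without anything resembling language understanding.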

What we've seen in the last few years (besides continual incremental improvements in already-existing solutions) is improvement in the application of generative tools. So far the use cases of generative models appear to be violating copyright, cheating on homework, and producing even more search engine spam. It can also be somewhat useful as a search engine so long as you want your answer to be authoritatively worded but don't care if it's true or not.

[–] [email protected] 1 points 9 months ago

In the '50s they thought we would have intelligent robot butlers by the '70s. They had solved more structured problems that seemed hard, like chess, and figured language and simple physical tasks couldn't be much different. They came up with some hacky chatbots and things in the 20th century, but it was all cheap tricks like strategically changing the subject - I talked to those things enough to tell. ChatGPT passes basically every test of short-term language reasoning we can throw at it. It has solved the problem for really basic purposes. It can take your Wendy's order without any fine-tuning.
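(The cheap tricks were mostly pattern matching in the ELIZA mold: reflect the user's words back if a template fits, otherwise change the subject. A toy sketch, with all rules invented for illustration:)

```python
import re

# ELIZA-style chatbot sketch: no understanding, just pattern reflection
# plus canned deflections when nothing matches (rules made up here).
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]
DEFLECTIONS = ["Let's talk about something else.", "Go on.", "Interesting. Why?"]

def respond(text, turn=0):
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())
    # The "cheap trick": when no template matches, change the subject.
    return DEFLECTIONS[turn % len(DEFLECTIONS)]

print(respond("I am tired"))          # Why do you say you are tired?
print(respond("The weather is odd"))  # Let's talk about something else.
```

Talk to something like this for five minutes and the seams show; that's the gap between it and something that can actually hold a Wendy's order together.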

Alright, I'm going to respond to the rest of this in quip-like fashion, since you've touched on a lot of separate-ish points here, but the tone intended is still neutral.

Image recognition was great beginning the late 2010s.

That was literally the same tech we're talking about here, just earlier and with a slightly different structure.

For instance, machine translation and spam detection (document classification) were getting really good by the late 2000s.

You and I have different memories of older machine translation. It could substitute words and a few stock phrases fine, but it often broke down or produced awkward phrasings. It didn't engage with the underlying meaning at all. Spam detection worked well, but it wasn't similarly smart, and IIRC in some cases it was neural nets again.
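(The substitution approach is easy to caricature. A toy sketch with an invented five-word English-to-German lexicon shows why the output read so awkwardly:)

```python
# Word-for-word substitution "translation" sketch (toy lexicon, entries
# invented): it swaps words but ignores grammar and meaning entirely.
LEXICON = {  # one fixed gloss per word, no disambiguation
    "the": "der", "cat": "Katze", "sat": "saß", "on": "auf", "mat": "Matte",
}

def translate(sentence):
    # Each word gets its single dictionary gloss; no reordering,
    # no case/gender agreement, no sense disambiguation.
    return " ".join(LEXICON.get(w, w) for w in sentence.lower().split())

print(translate("The cat sat on the mat"))
# "der Katze saß auf der Matte" - the words are all "translated", but
# "der Katze" should be "die Katze" (nominative feminine), the sort of
# agreement error a fixed gloss can never get right.
```

Statistical phrase tables were smarter than this, but they still had no model of what the sentence meant, which is why they fell apart outside common phrasings.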

violating copyright,

Disagree.

It can also be somewhat useful as a search engine so long as you want your answer to be authoritatively worded but don’t care if it’s true or not.

Or if the answer is easily verifiable, like it has been in my own cases.