[–] [email protected] 5 points 2 months ago (2 children)

I am totally looking forward to AI customer support. The current model of a person reading a scripted response is painful and fucking awful and only rarely leads to a good resolution. I would LOVE an AI support where I could just describe the problem and it gives me answers and only asks relevant follow-up questions. I can't wait.

[–] [email protected] 23 points 2 months ago (1 children)

They're already deployed and they're less than helpful, because LLMs are bullshitting machines.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago) (2 children)

I already use LLMs to troubleshoot issues I'm having, and they're typically better than me punching questions into Google. I admit I've had an LLM hallucinate once while it was trying to solve a problem for me, but the vast majority of the time it has been quite helpful. That's been my experience at least. YMMV.

If you think LLMs suck, I'm guessing you haven't actually used telephone tech support in the past 10 years. That's a version of hell I wish on very few people.

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago) (1 children)

If you think LLMs suck, I'm guessing you haven't actually used telephone tech support in the past 10 years. That's a version of hell I wish on very few people.

I'm specifically claiming that they're bullshit machines, i.e. they generate synthetic text without context or understanding. My experience with search engines and telephone support is way better than anything any LLM has fed me.

There have already been cases where phone operators were replaced with LLMs which gave dangerous advice to anorexic patients.

[–] [email protected] 2 points 2 months ago (1 children)

I understand their limitations, but you're overselling the negative. They're fucking awesome for what they can do, but they have drawbacks that you must be aware of. Just as it's lame to be an AI fanboi, it's equally lame to be an AI luddite.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

It's funny you bring up luddites, since they actually had the right idea about technology like LLMs. They were highly skilled textile workers who opposed the introduction of dangerous mechanical looms that produced low-quality goods but were so easy to use that a child could work them (because the owners wanted to employ children). They only got their bad name as backward anti-technology lunatics afterwards. They were actually concerned about low-quality technology being deployed to weaken workers' rights, cheapen products and make bosses even richer. That's the main issue I have with what's happening with AI.

There's a book by Brian Merchant called "Blood in the Machine" on the topic, if you're interested. He's also been on a bunch of podcasts, if you're not a big reader.

I'm referring to "bullshit" in the way argued in this paper:

Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs.

The technology is neat. I'll give you that. But it's incredibly overhyped.

[–] [email protected] 2 points 2 months ago (1 children)

If all you want is something trivial that's been done by enough people beforehand, it's no surprise that something approaching correct gets parroted back at you.

[–] [email protected] 2 points 2 months ago

That's 99% of what I'm looking for. If I'm figuring something out by myself, I'm not looking it up on the internet.

I'm an engineer and I've found LLMs great for helping me understand an issue. When you read something online, you have to translate what the author is saying into your own way of thinking, and I've found LLMs are much better at re-framing information to match my inner dialog. I often find them more useful than Google searches when trying to find information.

[–] [email protected] 13 points 2 months ago (1 children)

The script doesn't go away when you replace a helpdesk operator with ChatGPT. You just get a script-reading interface without empathy and a severely hindered ability to process novel issues outside its protocol.

The humans you speak to could do exactly what you're asking for, if the business did not handcuff them to a script.

[–] [email protected] 1 points 2 months ago

The script doesn't go away when you replace a helpdesk operator with ChatGPT. You just get a script-reading interface without empathy and a severely hindered ability to process novel issues outside its protocol.

The humans you speak to could do exactly what you’re asking for, if the business did not handcuff them to a script.

But they do handcuff them to a script... at least 1st and 2nd level tech support. That's the point. It's so fucking awful. It's a barrier to keep you from the more highly paid tech support people who may actually be able to answer your questions. First you wait on hold until you're sure it's worth wasting their time on your annoying problem, THEN there's a maze you have to navigate, and then whoops, you just got hung up on... so sorry, start all over! LLMs are (or can be) so much better at this!