this post was submitted on 17 Dec 2024
508 points (93.1% liked)

submitted 1 day ago* (last edited 46 minutes ago) by [email protected] to c/[email protected]
 
[–] [email protected] 50 points 20 hours ago* (last edited 20 hours ago) (2 children)

Ugh. Don’t get me started.

Most people don’t understand that the only thing it does is ‘put words together that usually go together’. It doesn’t know if something is right or wrong, just if it ‘sounds right’.

Now, if you throw in enough data, it’ll kinda sorta make sense with what it writes. But as soon as you try to verify the things it writes, it falls apart.
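To make the "words that usually go together" point concrete, here is a toy sketch (purely illustrative, nothing the commenter wrote): a tiny bigram sampler that picks each next word from co-occurrence counts in a made-up two-sentence corpus. It can recombine its training text into fluent sentences that are simply false, which is the hallucination problem in miniature. Real LLMs are vastly more capable, but the underlying objective is still next-token prediction.

```python
# Toy "put words together that usually go together" demo:
# a bigram sampler over a made-up corpus. It only knows which word
# tends to follow which word; it has no notion of true vs. false.
import random
from collections import defaultdict

corpus = (
    "our city has a big stadium . "
    "that city hosted the olympics in a big stadium ."
).split()

# Count which word follows which.
follows = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word].append(nxt)

def babble(start, max_words=9):
    """Generate fluent-sounding text by always picking a plausible next word."""
    words = [start]
    for _ in range(max_words):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # plausibility, not accuracy
    return " ".join(words)

print(babble("our"))
# Output varies; it might print something like
#   "our city hosted the olympics in a big stadium ."
# which reads fine, and is confidently wrong about a city that never hosted them.
```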

I once asked it to write a small article with a bit of history about my city and five interesting things to visit. In the history bit, it confused two people with similar names who lived 200 years apart. In the ‘things to visit’, it listed two museums by name that are hundreds of miles away. It invented another museum that does not exist. It also happily told me to visit our Olympic stadium. While we do have a stadium, I can assure you we never hosted the Olympics. I’d remember that, as I’m older than said stadium.

The scary bit is: what it wrote was lovely. If you read it, you’d want to visit for sure. You’d have no clue that it was wholly wrong, because it sounds so confident.

AI has its uses. I’ve used it to rewrite a text that I already had, and it does fine with tasks like that, because you give it the correct info to work with.

Use the tool appropriately and it’s handy. Use it inappropriately and it’s a fucking menace to society.

[–] [email protected] 5 points 5 hours ago* (last edited 5 hours ago) (1 children)

I know this is off topic, but every time I see you comment on a thread, all I can see is the Pepsi logo (I use the Sync app, for reference).

[–] [email protected] 5 points 5 hours ago

You know, just for you: I just changed it to the Coca-Cola Santa :D

[–] [email protected] 5 points 15 hours ago (2 children)

I gave it a math problem to illustrate this, and it got it wrong.

If it can’t do that, imagine adding nuance.

[–] [email protected] -1 points 4 hours ago* (last edited 4 hours ago)

YMMV, I guess. I've given it many difficult calculus problems to help me through, and it went well.

[–] [email protected] 7 points 15 hours ago (1 children)

Well, math is not really a language problem, so it's understandable LLMs struggle with it more.

[–] [email protected] 10 points 14 hours ago (1 children)

But it means it’s not “thinking” as the public perceives AI.

[–] [email protected] 3 points 14 hours ago (1 children)

Hmm, yeah, AI never really did think. I can't argue with that.

It's really strange, if I mentally zoom out a bit, that we now have machines that are better at language-based reasoning than logic-based reasoning (like math or coding).

[–] [email protected] 18 points 20 hours ago (1 children)

And then Google to confirm the GPT answer isn't total nonsense.

[–] [email protected] 15 points 17 hours ago (2 children)

I've had people tell me "Of course, I'll verify the info if it's important", which implies that if the question isn't important, they'll just accept whatever ChatGPT gives them. They don't care whether the answer is correct or not; they just want an answer.

[–] [email protected] 1 points 40 minutes ago (1 children)

Well, yeah. I'm not gonna verify how many butts it takes to swarm Mount Everest, because that's not worth my time. The robot's answer is close enough to satisfy my curiosity.

[–] [email protected] 1 points 36 minutes ago

For the curious, I got two responses with different calculations and different answers as a result. So it could take anywhere from 1.5 to 7.5 billion butts to swarm Mount Everest. Again, I'm not checking the math because I got the answer I wanted.

[–] [email protected] 3 points 15 hours ago

That is a valid tactic for programming or how-to questions, provided you know not to unthinkingly drink bleach if it says to.

[–] [email protected] 158 points 1 day ago (10 children)

Meanwhile Google search results:

  • AI summary
  • 2x "sponsored" result
  • AI copy of Stackoverflow
  • AI copy of Geeks4Geeks
  • Geeks4Geeks (with AI article)
  • the thing you actually searched for
  • AI copy of AI copy of stackoverflow
[–] [email protected] 80 points 1 day ago (3 children)

Should we put bets on how long until ChatGPT responds to anything with:

Great question, before I give you a response, let me show you this great video for a new product you’ll definitely want to check out!

[–] [email protected] 5 points 15 hours ago

Nah, it'll be more subtle than that. Just like Brawndo is full of the electrolytes plants crave, responses will be full of the subtle product and brand references marketers crave. And A/B studies performed at massive scale in real time on unwitting users and evaluated with other AIs will help them zero in on the most effective way to pepper those in for each personality type it can differentiate.

[–] sleen 34 points 1 day ago

“Great question, before I give you a response, let me introduce you to Raid: Shadow Legends!”

[–] [email protected] 20 points 1 day ago (4 children)
load more comments (4 replies)
[–] [email protected] 31 points 1 day ago (5 children)

Google search is literally fucking dogshit and the worst it has EVER been. I'm starting to think fucking DuckDuckGo (which relies on Bing) gives better results at this point.

[–] [email protected] 3 points 5 hours ago

I've been using only DuckDuckGo for years now. If I don't find something there, I don't need it.

[–] [email protected] 26 points 1 day ago (13 children)

I have been using Duck for a few years now and I honestly prefer it to Google at this point. I'll sometimes switch to Google if I don't find anything on Duck, but that happens once every three or four months, if that.

[–] sp3tr4l 12 points 1 day ago* (last edited 1 day ago)

We have a new feature, use it!

No, it's broken and stupid, I prefer the old feature.

... Fine!

breaks old feature even harder

[–] [email protected] 5 points 16 hours ago (1 children)

Reject proprietary LLMs, tell people to "just llama it"
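For anyone curious what "just llama it" looks like in practice, here is a minimal sketch of chatting with a locally hosted open-weight model. It assumes Ollama is installed and running and that a model has already been pulled; the Ollama Python client and the "llama3" model name are my assumptions for illustration, not something the commenter specified.

```python
# Minimal "just llama it" sketch: query a locally hosted open-weight model
# through Ollama instead of a proprietary API.
# Assumptions: the Ollama server is running locally and a model named
# "llama3" has already been pulled (the model name is just an example).
from ollama import chat

response = chat(
    model="llama3",
    messages=[{"role": "user", "content": "Who painted the melting clocks?"}],
)
print(response["message"]["content"])  # still worth verifying the answer yourself
```

Self-hosting changes who runs the model, not how much you should trust it; the hallucination caveats elsewhere in the thread still apply.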

[–] [email protected] 16 points 14 hours ago (1 children)
[–] [email protected] 4 points 5 hours ago (1 children)

Top is proprietary LLMs vs. bottom is self-hosted LLMs. Both end with you getting smacked in the face, but one looks far cooler or smarter to do, while the other one is a streamlined web app that gets you there in one step.

[–] [email protected] 0 points 2 hours ago

But when it's open source, nobody gets regularly slain and the planet doesn't get progressively destroyed by mega-conglomerate entities automating class violence.

[–] [email protected] 33 points 1 day ago (3 children)

Did you ChatGPT this title?

[–] [email protected] 7 points 1 day ago (3 children)

"Infinitively" sounds like it could be a music album for a techno band.

[–] [email protected] 9 points 20 hours ago (1 children)

Have they? I don't think I've heard that once, and I work with people who use ChatGPT themselves.

[–] [email protected] 3 points 16 hours ago

I'm with you. Never heard that. Never.

[–] [email protected] 19 points 23 hours ago (2 children)

Last night, we tried to use ChatGPT to identify a book that my wife remembers from her childhood.

It didn’t find the book, but instead gave us a title for a theoretical book that could be written to match her description.

[–] [email protected] 7 points 23 hours ago (1 children)

At least it admitted the book might not exist, instead of hallucinating and telling you when it was written.

[–] [email protected] 7 points 23 hours ago

Maybe it’s trying to motivate me to become a writer.

[–] [email protected] 11 points 22 hours ago (1 children)

How long until ChatGPT starts responding "It's been generally agreed that the answer to your question is to just ask ChatGPT"?

[–] [email protected] 10 points 21 hours ago

I'm somewhat surprised that ChatGPT has never replied with "just Google it, bruh!" considering how often that answer appears in its data set.

[–] [email protected] 11 points 22 hours ago (1 children)

Just call it CGPT for short:

Computer Generated Partial Truths

[–] [email protected] 5 points 21 hours ago (1 children)

Sadly, partial truths are an improvement over some sources these days.

[–] [email protected] 4 points 20 hours ago

Which is still better than “elementary truths that will quickly turn into shit I make up without warning”, which is where ChatGPT is stuck and forever will be.

[–] [email protected] 15 points 1 day ago* (last edited 1 day ago)

Both suck now.

I have to say: look it up online and verify your sources.

[–] [email protected] 24 points 1 day ago (2 children)

GPT's natural language processing is extremely helpful for simple questions that have historically been difficult to Google because they aren't a concise concept.

The type of thing that is easy to ask but hard to create a search query for, like tip-of-my-tongue questions.

[–] [email protected] 29 points 1 day ago (1 children)

Google used to be amazing at this. You could literally search "who dat guy dat paint dem melty clocks" and get the right answer immediately.

[–] [email protected] 12 points 1 day ago

I say, "Just search it." Not interested in being free advertising for Google.

[–] [email protected] 10 points 1 day ago (2 children)

This is entirely Google's fault.
