37
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]

This may be an unpopular opinion... Let me get this straight. We get big tech corporations to read the articles of the web and then summarize, for me the user, the info I'm looking for. Sounds cool, right? Yeah, except why in the everloving duck would I trust Google, Microsoft, Apple or Meta to give me correct info, unbiased and not curated? Past experience shows they will not do the right thing. So why is everyone so OK with what's going on? I just heard that Google may intend to remove sources. Great, so it's basically "trust me bro".

all 26 comments
[-] [email protected] 22 points 2 months ago

LLMs are just autocomplete on steroids. Anyone who claims they're more than that is lying.

If you want uncensored info, run a local model. But most people don't care, or don't even know that's an option. That's just how most people are with tech.
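Just one way to do it, as a rough sketch: local text generation with Hugging Face transformers (the model name below is only an example; tools like ollama or llama.cpp work too):

```python
# Minimal local text generation with Hugging Face transformers.
# The model below is just an example; pick whatever fits your hardware.
from transformers import pipeline

generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
result = generator("Why should you double-check what an LLM tells you?", max_new_tokens=80)
print(result[0]["generated_text"])
```

Once the model weights are downloaded, everything runs on your own machine, so nothing gets sent to a third party.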

[-] [email protected] 5 points 2 months ago* (last edited 2 months ago)

Uncensored AIs are the best. I can ask them all the immature sex questions I want and never get banned.

[-] [email protected] 4 points 2 months ago

I mean, sure, as long as you're keeping the data locally. Otherwise, yikes.

[-] [email protected] 2 points 2 months ago

Which model are you using?

[-] [email protected] 3 points 2 months ago* (last edited 2 months ago)

LLMs are just autocomplete on steroids.

Funny you should say this. I only have anecdotal evidence from me and a few friends, but the general consensus is that autocomplete and predictive text are much worse now than they used to be.

[-] [email protected] 3 points 2 months ago* (last edited 2 months ago)

Because of AI stuff, probably. For these kinds of things, the companies are perfectly happy to advertise unprecedented 99% accuracy rates, when in reality non-AI tools are held to a much higher standard (mainly that they're expected to work). If the code I wrote had a consistent, perpetual 1% failure rate (even after fixing it, multiple times), I'd have been fired long ago.

[-] [email protected] 2 points 2 months ago

If anyone wants a great source on exactly how ChatGPT is essentially autocomplete on steroids, Stephen Wolfram did a great write-up. It's pretty technical. https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
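If the write-up is too long, here's a toy sketch of the core "autocomplete" loop it describes: turn scores for candidate next tokens into probabilities, then sample one. The scores here are made up; a real model computes them from billions of learned parameters.

```python
# Toy illustration of next-token sampling: convert scores to probabilities,
# then pick the next word at random according to those probabilities.
import math
import random

# Made-up scores for what might follow "The distance from the Earth to the..."
scores = {"Moon": 2.3, "Sun": 1.1, "fridge": -1.5}

def softmax(logits):
    m = max(logits.values())
    exps = {w: math.exp(v - m) for w, v in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(scores)
next_word = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(probs, "->", next_word)
```

Repeat that, feeding each chosen word back into the prefix, and you get a paragraph.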

[-] xenspidey 11 points 2 months ago* (last edited 2 months ago)

I use LLMs for two things, mainly. First, to help with small coding tasks that are tedious, or when I just need something to bounce ideas off of (hobbyist coder). Second, for asking questions that Google and the like can't answer, like "if the unit of measure is toothpicks, how far is it from the Earth to the Moon?" Stuff like that, or ballpark approximations of things.

[-] [email protected] 10 points 2 months ago

How are you sure of the correctness of the model's answers? If I tell you the moon is 69.420 toothpicks away from earth, are you going to believe me?

[-] xenspidey 4 points 2 months ago

Sure, maybe it's wrong, but it seems close enough to me.

The distance from Earth to the Moon is approximately 384,400 kilometers, which is about 9,760,000,000 toothpicks laid end to end.
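For what it's worth, that one is easy to sanity-check, assuming a standard toothpick is roughly 6.5 cm (lengths vary):

```python
# Quick sanity check: Earth-Moon distance divided by an assumed toothpick length.
distance_m = 384_400 * 1000      # average Earth-Moon distance in metres
toothpick_m = 0.065              # assumed toothpick length: ~6.5 cm (varies by type)

toothpicks = distance_m / toothpick_m
print(f"{toothpicks:,.0f} toothpicks")   # roughly 5.9 billion
```

That works out to roughly 5.9 billion, so the model's 9.76 billion is the right order of magnitude but would imply a toothpick closer to 4 cm.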

[-] [email protected] 7 points 2 months ago

You're right about Google being trash at answering that.

It just completely ignores the question.

[-] [email protected] 0 points 2 months ago

I don't deny that AI is useful. I used it recently to increase the resolution of a video, and it's awesome. But when it's used to replace info search, art, music... just why?

[-] xenspidey 2 points 2 months ago

I like it for art; I like making wallpapers for my phone, or logos. I have a side business that I'll want a logo for at some point. It makes way more sense to get it close with AI, then give it to an artist to tweak and add the final touches, than to go through all the back and forth and expense of a logo company.

[-] [email protected] 5 points 2 months ago* (last edited 2 months ago)

The only thing I have found actually useful with them is that I can play tabletop RPGs by myself and it's functionally the same as playing with real people. Right down to arguing over the interpretation of the rules.

[-] [email protected] 4 points 2 months ago

I've had this argument with friends a lot recently.

Them: It's so cool that I can just ask ChatGPT to summarise something and get a concise answer, rather than googling a lot for the same thing.

Me: But it gets things wrong all the time.

Them: Oh, I know, so I Google it anyway.

Doesn't make sense to me.

[-] [email protected] 11 points 2 months ago

People like AI because search results are full of SEO spam listicles. Eventually they will make LLMs as ad-riddled as everything else.

[-] [email protected] 3 points 2 months ago

My specific point here was about how this friend doesn't trust the results AND still goes to Google/others to verify, so he's effectively doubled his workload for every search.

[-] [email protected] 0 points 2 months ago

Then why not use an ad-blocker? It's not wise to think you're getting the right information when you can't verify the sources. Like I said, at least for me, the "trust me bro" aspect doesn't cut it.

[-] [email protected] 1 points 2 months ago

Ad blockers won't cut out SEO garbage.

[-] [email protected] 1 points 1 month ago

And the AI will? It will use all websites to give you the info. It doesn’t think, it spins.

[-] [email protected] 1 points 1 month ago

I didn't say that it will, just saying that ad blockers won't block it out.

[-] [email protected] 2 points 2 months ago

This is why I do a lot of my Internet searches with perplexity.ai now. It tells me exactly what it searched to get the answer, and provides inline citations as well as a list of its sources at the end. I've never used it for anything in depth, but in my experience, the answer it gives me is typically consistent with the sources it cites.

[-] [email protected] -1 points 2 months ago

We also get things wrong all the time. Would you double-check info you got from a friend or coworker? Perhaps you should.

[-] [email protected] 1 points 2 months ago

I know how my friends and coworkers are likely to think. An LLM is far less predictable.

[-] [email protected] 3 points 2 months ago

Agreed. Show me your sources; I don't trust your executive summary.
