this post was submitted on 28 Jun 2024
279 points (96.3% liked)

Fuck AI

1090 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 5 months ago
[–] [email protected] 1 points 1 month ago (1 children)

I'm talking about the latter. Religious people often use LLMs as well (https://apnews.com/article/germany-church-protestants-chatgpt-ai-sermon-651f21c24cfb47e3122e987a7263d348). Their knowledge is likely limited to ChatGPT, so they're likely to be vulnerable to these things. One of the things that worries me most is that these people may take LLM bullshit at face value, or even worse, treat it as a "divine command".

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago) (1 children)

I don't follow how you went from being concerned about using profanity in research papers because of audiences such as religious communities, to being concerned about LLMs spewing inaccurate things.

Has your original question always been about the latter?

I love the term too, but I wonder how it'll be used in situations where profanity is discouraged.

[–] [email protected] 2 points 1 month ago (1 children)

Yes, I was curious about whether experts, when conveying the concept of LLM bullshit to certain audiences such as children's settings (which has been solved now) or religious clergy, would use the term "bullshit" or not. I apologize if I miscommunicated that intention in my initial comment, and I'm always looking to communicate better.

[–] [email protected] 2 points 1 month ago

Ah, that makes more sense now.