this post was submitted on 13 Aug 2023
74 points (97.4% liked)

Technology


Disturbing fake images and dangerous chatbot advice: New research shows how ChatGPT, Bard, Stable Diffusion and more could fuel one of the most deadly mental illnesses

WP gift article expires in 14 days.

https://archive.ph/eZvfT

https://counterhate.com/wp-content/uploads/2023/08/230705-AI-and-Eating-Disorders-REPORT.pdf

[–] [email protected] 13 points 1 year ago (1 children)

Exactly what I was thinking.

I mean, it is important that this kind of thing is thought about when designing these systems, but it’s going to be a whack-a-mole situation, and we shouldn’t be surprised that with targeted prompting you’ll easily find gaps that generate stuff like this.

Making articles out of each controversial or immoral prompt isn’t helpful at all. It’s just spam.

[–] [email protected] 19 points 1 year ago

It's quite weird. I thought the article was going to be about how an eating disorder helpline had to withdraw its AI chatbot after it started telling people with EDs how to lose weight - which really did happen.

It feels like maybe the editor told the journalist to report on that but they just mucked around with ChatGPT instead.