1
72
submitted 2 months ago by [email protected] to c/[email protected]

I want to apologize for changing the description without telling people first. After reading arguments about how overhyped AI is, I'm not that frightened by it. It's awful that it hallucinates and that it spews garbage onto YouTube and Facebook, but it won't completely upend society. I'll keep articles on AI hype coming, because they're quite funny, and they give me a sense of ease knowing that, even though blatant lies are easy to tell, actual evidence is way harder to fake.

I also want to account for people who think that there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, Midjourney, or Stable Diffusion. These people, whom I'll call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or about the diminishing returns of deep learning. Maybe one of you could even become a Mod!

Boosters, or people who heavily use AI and see it as a source of good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists over on Reddit and Twitter, and they constantly champion artists losing their jobs. They go against the very purpose of this community. If I see a comment on here saying that AI is "making things good," or cheering on anyone being put out of a job, and the commenter does not retract their statement, said commenter will be permanently banned. FA&FO.

2
24
submitted 3 months ago by [email protected] to c/[email protected]
3
3
For Starters (lemmy.world)
submitted 4 months ago by [email protected] to c/[email protected]

Alright, I just want to clarify that I've never modded a Lemmy community before. I just live by the mantra of "if nobody's doing the right thing, do it yourself." I was also motivated by u/spez's decision to let an unknown AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to attack AI development, and if you have evidence of AIBros being cruel and remorseless, make sure to save it for the people "on the fence." Remember, we don't know that AI is unstoppable. AI takes enormous amounts of energy and circuitry to run. There may very well be an end to this cruelty, and it's up to us to begin that end.

4
263
submitted 10 hours ago by [email protected] to c/[email protected]
5
16
submitted 8 hours ago by [email protected] to c/[email protected]

Not the Goldman Sachs paper, but the analysis of it. It's really worth the read.

6
70
submitted 1 day ago by [email protected] to c/[email protected]

As if beauty pageants with humans weren't awful enough. Let's celebrate simulated women with beauty standards too unrealistic for any real woman to live up to!

7
219
submitted 1 day ago by [email protected] to c/[email protected]

As part of the tech industry's wider push for AI, whether we want it or not, it seems that Google's Gemini AI service is now reading private Drive documents without express user permission, per a report from Kevin Bankston on Twitter embedded below. While Bankston goes on to discuss why this may be a glitch affecting users like him in particular, the utter lack of control he is given over his sensitive, private information is unacceptable for a company of Google's stature, and it does not bode well for privacy amid AI's often-forced rollout.

8
17
submitted 1 day ago by [email protected] to c/[email protected]
9
21
submitted 2 days ago by [email protected] to c/[email protected]

OpenAI is partnering with Los Alamos National Laboratory to study how artificial intelligence can be used to fight against biological threats that could be created by non-experts using AI tools, according to announcements Wednesday by both organizations. The Los Alamos lab, first established in New Mexico during World War II to develop the atomic bomb, called the effort a “first of its kind” study on AI biosecurity and the ways that AI can be used in a lab setting.

The difference between the two statements released Wednesday by OpenAI and the Los Alamos lab is pretty striking. OpenAI’s statement tries to paint the partnership as simply a study on how AI “can be used safely by scientists in laboratory settings to advance bioscientific research.” And yet the Los Alamos lab puts much more emphasis on the fact that previous research “found that ChatGPT-4 provided a mild uplift in providing information that could lead to the creation of biological threats.”

Much of the public discussion around threats posed by AI has centered on the creation of a self-aware entity that could conceivably develop a mind of its own and harm humanity in some way. Some worry that achieving AGI (artificial general intelligence, where the AI can perform advanced reasoning and logic rather than acting as a fancy auto-complete word generator) may lead to a Skynet-style situation. And while many AI boosters like Elon Musk and OpenAI CEO Sam Altman have leaned into this characterization, the more urgent threat to address appears to be making sure people don't use tools like ChatGPT to create bioweapons.

“AI-enabled biological threats could pose a significant risk, but existing work has not assessed how multimodal, frontier models could lower the barrier of entry for non-experts to create a biological threat,” Los Alamos lab said in a statement published on its website.

The different positioning of the two statements likely comes down to the fact that OpenAI may be uncomfortable acknowledging the national security implications of its product potentially being used by terrorists. To put an even finer point on it, the Los Alamos statement uses the terms "threat" or "threats" five times, while the OpenAI statement uses the word just once.

10
48
submitted 2 days ago by [email protected] to c/[email protected]
11
12
submitted 2 days ago by [email protected] to c/[email protected]

I do not recommend reading this article on a full stomach.

12
77
submitted 4 days ago by [email protected] to c/[email protected]

Written by a so-called "Julie Howell," who "loves scouring the internet for delicious, simple, heartwarming recipes that make her look like a MasterChef winner," on a website called "Chef's Resource."

I get the "scouring the internet" part, but the "MasterChef winner" part less so.

13
28
submitted 4 days ago by [email protected] to c/[email protected]

Generative AI is the nuclear bomb of the information age

14
44
submitted 4 days ago by [email protected] to c/[email protected]
15
31
submitted 5 days ago* (last edited 5 days ago) by [email protected] to c/[email protected]
16
16
submitted 6 days ago by [email protected] to c/[email protected]
17
25
submitted 1 week ago by [email protected] to c/[email protected]

cross-posted from: https://discuss.tchncs.de/post/18541227

cross-posted from: https://discuss.tchncs.de/post/18541226

Google’s research focuses on real harm that generative AI is currently causing and could get worse in the future. Namely, that generative AI makes it very easy for anyone to flood the internet with generated text, audio, images, and videos.

18
1230
It isn't worth it (lemmy.world)
submitted 1 week ago by [email protected] to c/[email protected]
19
34
submitted 1 week ago by [email protected] to c/[email protected]
20
119
submitted 1 week ago by [email protected] to c/[email protected]
21
137
submitted 1 week ago by [email protected] to c/[email protected]
22
61
submitted 1 week ago by [email protected] to c/[email protected]
23
25
Honest Government Ad | AI (www.youtube.com)
submitted 2 weeks ago by [email protected] to c/[email protected]

cross-posted from: https://lemmy.world/post/17078489

The Government™ has made an ad about the existential threat that AI poses to humanity, and it’s surprisingly honest and informative

24
1107
submitted 2 weeks ago by [email protected] to c/[email protected]
25
225
One of us (lemmy.world)
submitted 2 weeks ago by [email protected] to c/[email protected]

Fuck AI

909 readers
354 users here now

A place for all those who loathe machine learning to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 4 months ago