this post was submitted on 18 Feb 2024

Technology

[–] [email protected] 3 points 6 months ago* (last edited 6 months ago) (1 children)

You don't need AI for any of that. Determined state actors have been fabricating information and propagandizing the public, mechanical Turk style, for a long long time now. When you can recruit thousands of people as cheap labour to make shit up online, you don't need an LLM.

So no, I don't believe AI represents a new or unique risk at the hands of state actors, and therefore no, I'm not so worried about these technologies landing in the hands of adversaries that I think we should abandon our values or beliefs Just In Case. We've had enough of that already, thank you very much.

And that's ignoring the fact that an adversarial state actor having access to advanced LLMs isn't somehow negated or offset by us having them, too. There's no MAD for generative AI.

[–] [email protected] 1 points 6 months ago (2 children)

I’m not so worried about these technologies landing in the hands of adversaries that I think we should abandon our values or beliefs Just In Case

What beliefs and values would we be abandoning by fighting back against tech that is literally costing people their lives?

[–] [email protected] 1 points 6 months ago* (last edited 6 months ago) (1 children)

Hah I... think we're on the same side?

The original comment was justifying unregulated and unmitigated research into AI on the premise that it's so dangerous that we can't allow adversaries to have the tech unless we have it too.

My claim is that AI is not so existentially risky that holding back its development in our part of the world will somehow put us at risk if an adversarial nation charges ahead.

So no, it's not harmless, but it's also not "shit this is basically like nukes" harmful either. It's just the usual, shitty SV kind of harmful: it will eliminate jobs, increase wealth inequality, destroy the livelihoods of artists, and make the internet a generally worse place to be. And it's more important for us to mitigate those harms, now, than to worry about some future nation state threat that I don't believe actually exists.

(It'll also have lots of positive impact, but that's not what we're talking about here)

[–] [email protected] 1 points 6 months ago

Ah gotcha. I must have misunderstood the flow there. Yeah, definitely seems like we're mostly on the same side