this post was submitted on 11 Nov 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

[–] [email protected] 13 points 1 week ago* (last edited 1 week ago) (10 children)

The job site decided to recommend me an article calling for the removal of most human oversight from military AI on grounds of inefficiency, which is a pressing issue since apparently we're already living in the Culture.

The Strategic Liability of Human Oversight in AI-Driven Military Operations

Conclusion

As AI technology advances, human oversight in military operations, though rooted in ethics and legality, may emerge as a strategic liability in future AI-dominated warfare.

~~Oh unknowable genie of the sketchily curated datasets~~ Claude, come up with an optimal ratio of civilian to enemy combatant deaths that will allow us to bomb that building with the giant red cross that you labeled an enemy stronghold.

[–] [email protected] 13 points 1 week ago (2 children)

So, ethics and legality are strategic liabilities? Jesus fucking Christ, that’s not even sneer-worthy. This guy is completely fucking insane.

[–] [email protected] 11 points 1 week ago* (last edited 1 week ago)

If you've convinced yourself that you'll mostly be fighting the AIs of a rival always-chaotic-evil alien species or their outgroup equivalent, you probably think they are.

Otherwise, I hope shooting first and asking questions later will continue to be frowned upon in polite society, even if it's automated agents doing the shooting.

[–] [email protected] 9 points 1 week ago* (last edited 1 week ago) (1 children)

This is straight-up Hague material right there; all he wants is plausible deniability.

Computer said so 🥺

e: that's a shit take for several reasons, and we have autonomous killers already. it's called air defense (in some modes), because how many civilians are going at mach fuck with an RCS of 0.1 m²? that's no civilian, that's a ballistic missile. also lmao at "speed of decision"

perun video on this topic: https://m.youtube.com/watch?v=tou8ahLZvP4

[–] [email protected] 5 points 1 week ago

Honestly the most surprising and interesting part of that episode of Power(projection)Points with Perun was the idea of simple land mines as autonomous lethal systems.

Once again, the concept isn't as new as they want you to think, moral and regulatory frameworks already exist, and the biggest contribution of the AI component is doing more complicated things than existing mechanisms do, but doing them badly.
