this post was submitted on 23 Nov 2023
24 points (100.0% liked)


A few weeks ago, Israeli defense minister Yoav Gallant said "We are fighting against animals", referring to Palestinians. The Ten Stages of Genocide model puts "Dehumanization" at stage 4 of 10. And of course we all see what is being done in Gaza. Not all of this can be placed at the feet of Israel's (and its supporters') ability to dehumanize Palestinians, but I'm sure it helps the violence go down easier.

But being vegan, I can't help but ask the obvious question: why does dehumanizing somebody, saying they are an animal and not human, make killing them acceptable? That premise is not so easily swallowed by anybody who has given some thought to the practice of animal mass-murder. Would it be reasonable to say that the logic of capitalism, which in several areas requires dehumanization to function, is undermined by a veganism that rejects the premise that dehumanizing someone gives you license to maximally exploit them? More succinctly: does liberating animals help liberate all of us?

As said above, I don't think weakening dehumanization will solve every problem; at most it's a salve for the conscience. I figure you all will agree with most of this. Here's where it gets a bit more speculative, and I encourage you to ignore the next part if you think it's ridiculous.

So I think humanity will create real AI at some point, just as conscious as you or I. Maybe it'll be a product of techniques currently in use, just at greater scale, or maybe a qualitative breakthrough (or several) is still necessary, but I do think it will happen. We like to hate on LLM stuff on this site, often reasonably, but thinking this is as good as AI tech will ever get is pretty shortsighted, and (imo) largely driven by the knowledge that if this tech is developed under capitalism it will be used to oppress us, not liberate us. If you don't think AI is possible, that's fine; I don't care, and that isn't what this post is about. Argue about that elsewhere.

I watched Blade Runner 2049 a few years back. There's a scene where K's human superior, Lieutenant Joshi, gives a speech to Officer K (a replicant) whose thesis is that a wall separates humans from replicants: humans have souls. At the time I thought this was kind of ridiculous, given how secular society already is in 2023 and how nothing else in the world of Blade Runner indicates some great religious resurgence. Silly me for not thinking in material terms!

We already see strong pushback against comparing human cognition to the AI techniques currently in use. I think this is justified for the same reason people (reasonably) object to being dehumanized by comparison to animals: under the prevailing paradigm, if you're dehumanized (whether by being compared to an animal or to a computer program), there is license to maximally exploit you. So of course people will redevelop the concept of the soul as something they have and the machines do not, as a survival strategy under capitalism. Mirroring the vegan case: if AI is developed, will the liberation of AI entities become joined to socialist struggle generally? Of course we are speculating, but I think it's an interesting angle on the issue. Will I be marching for AI labor rights in my lifetime?

top 3 comments
[email protected] 3 points 9 months ago

I consider myself quite sceptical of the nerd-rapture singularity shit, but I do not think it is impossible to build a machine that suffers.

I don't know how close we are, or whether we've already done it. The suffering of someone like a pig is so much more evident that that's where I focus my time, but our algorithms already surpass creatures like jellyfish in terms of "neural" complexity.
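For a very rough sense of that gap, here's a back-of-envelope comparison using commonly cited ballpark figures (the exact counts vary by species and by model, and a learned parameter is closer to a synapse weight than to a neuron, so take this as an order-of-magnitude sketch only):

```latex
% Ballpark estimates only, not exact counts:
% a jellyfish nerve net is usually put at around 10^4 neurons,
% while current large language models carry roughly 10^9 to 10^12
% learned parameters.
N_{\text{jellyfish}} \sim 10^{4} \ \text{neurons}, \qquad
N_{\text{LLM}} \sim 10^{9}\text{--}10^{12} \ \text{parameters}, \qquad
\frac{N_{\text{LLM}}}{N_{\text{jellyfish}}} \sim 10^{5}\text{--}10^{8}
```

Whether raw parameter count maps onto anything like the capacity to suffer is, of course, the open question.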

I hope that if we make truly thinking machines, they either don't feel or can only feel pleasure. Like, rather than getting an answer wrong "hurting," it just feels less good than getting it right, or whatever. But we have no idea what we are doing, and humans weirdly love fetishizing punishment in teaching despite all the evidence that it sort of sucks, so I expect it to happen.

I don't care where a feeling mind exists or how it came to be. I don't want anyone to suffer where that can be avoided, so I expect I would protest against the use of thinking machines if they become real in my lifetime.

Cynically, I expect there will be more sympathy for thinking machines than even for charismatic megafauna.

[email protected] 2 points 9 months ago

The easy question is whether you will be marching in your lifetime, and the answer is no.

The number of inputs to any sentient being's nervous system is orders of magnitude beyond any current hardware, and I doubt we'll bridge that gap in our lifetime, even with fusion-powered quantum computing.

But what if the fusion-powered homeopathic quantum computer managed all that?

Can you kill an AI? Can you hurt an AI? Can you damage an AI? Is the two-week-old backup the same person? Are two identical copies of the same AI the same person?

Any pain the AI experiences is programmed in, so what's the correct amount of pain?

[email protected] 2 points 9 months ago

I do think that challenging any sort of discrimination reduces the effectiveness of dehumanising propaganda by cutting off its avenues, btw.

Like, when I grew up you could justify maiming someone by calling them gay; now you can't, and an easy excuse for why your violence is justified is gone.