
Money wins, every time. They're not concerned with accidentally destroying humanity with an out-of-control, dangerous AI that has decided "humans are the problem." (I mean, that's a little sci-fi anyway; an AGI couldn't "infect" the entire internet as it currently exists.)

However, it's very clear that the OpenAI board was correct about Sam Altman, given how quickly he and many employees bailed to join Microsoft directly. If he was so concerned with safeguarding AGI, why not spin up a new non-profit?

Oh, right, because that was just public-relations horseshit to get his company a head start in the AI space while fear-mongering about an unlikely doomsday scenario.


So, let's review:

  1. The fear-mongering about AGI was always just that. How could an intelligence that requires massive amounts of CPU, RAM, and storage even conceivably leave the confines of its own computing environment? It's not like it can "hop" onto a consumer computer with a fraction of the compute and somehow still run at the same level (see the back-of-envelope sketch after this list). AI doesn't have a "body," and even if it did, it could only affect the world as much as a single body could. All these fears about rogue AGI are total misunderstandings of how computing works.

  2. Sam Altman went for fear-mongering to temper expectations and to scare others away from pursuing AGI themselves. He always knew his end goal was profit, but like all good modern CEOs, he has to position himself as somehow caring about humanity when it's clear he couldn't give a flying fuck about anyone but himself and how much money he makes.

  3. Sam Altman talks shit about Elon Musk, saying he "wants to save the world, but only if he's the one who can save it." I mean, he's not wrong, but he's also projecting a lot here. He's exactly the fucking same: he claimed only he and his non-profit could "safeguard" AGI, and now he's going to work for a private company, because hot damn, he never actually gave a shit about safeguarding AGI to begin with. He's a shit-slinging hypocrite of the highest order.

  4. Last, but certainly not least: Annie Altman, Sam Altman's younger, lesser-known sister, has maintained for a long time that she was sexually abused by her brother. These rich people are all Jeffrey Epstein levels of fucked up, which is probably part of why the Epstein investigation got shoved under the rug. You'd think a company like Microsoft would have already known or vetted this. They do know, they don't care, and they'll only give a shit if the news ends up making a stink about it. That's how corporations work.
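
To put rough numbers on point 1, here's a minimal back-of-envelope sketch in Python. The parameter count is a hypothetical stand-in (frontier model sizes aren't public), and 16 GB is just a typical consumer machine:

```python
# Illustrative arithmetic only; the model size below is an assumption,
# not a published figure.
params = 1_000_000_000_000   # hypothetical 1-trillion-parameter model
bytes_per_param = 2          # fp16 weights, 2 bytes each

weights_gb = params * bytes_per_param / 1024**3
consumer_ram_gb = 16         # a typical consumer machine

print(f"Weights alone: ~{weights_gb:,.0f} GB")
print(f"Fits in {consumer_ram_gb} GB of consumer RAM? {weights_gb <= consumer_ram_gb}")
# Weights alone: ~1,863 GB; nowhere near fitting, even before counting
# activations, the KV cache, or the GPUs needed to actually run inference.
```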

So do other Lemmings agree, or have other thoughts on this?


And one final point for the right-wing cranks: not being able to make an LLM say fucked-up racist things isn't the kind of safeguarding anyone was ever talking about with AGI, so please stop conflating "safeguarding AGI" with "preventing abusive racist assholes from abusing our service." They aren't safeguarding AGI when they prevent you from making GPT-4 spit out racial slurs or other horrible nonsense. They're safeguarding their service from loser-ass chucklefucks like you.

[โ€“] [email protected] 4 points 9 months ago* (last edited 9 months ago) (2 children)

Well, to be fair, from what I've been hearing, one of the big points of contention in the internal battle at OpenAI was safety itself, like some board members being concerned about the "make your own ChatGPT" feature debuting at the dev conference. So at least some people care. Which is more than I would have thought...

I do like the word "chucklefucks", though.

[โ€“] [email protected] 4 points 9 months ago

Totally agree. It looks like the whole fight was the OpenAI board firing Altman over safety concerns, but unexpectedly the whole team sided with him instead.

[โ€“] [email protected] -1 points 9 months ago (1 children)

So at least some people care.

I actually agree. However, the organization turning tail and immediately asking Sam Altman to come back showed how few people there really cared. 500 employees out of 700 thought Altman leaving was worth quitting over, meaning that, sadly, the majority are more worried about "just doing it," fuck the consequences, while the few cooler heads in the room are forced to eat their words and go groveling at the feet of people who don't fucking care.

Pretty sad and disgusting that they couldn't stand by those principles when it mattered. I would have a lot more respect for them if they had just shut the whole thing down and said, "It's clear our employees don't agree with us on safeguarding AGI."

However, I do still think the AGI fears are completely overblown.

[โ€“] [email protected] 1 points 9 months ago

So you think AGI fears are overblown, but you wanted the employees to back the board's coup of the company over those same AGI fears?

Why don't we just trust that the employees are intelligent human beings who have more context and decided Altman was a good boss? Or, at the very least, that he'd be a better boss than whatever clusterfuck comes after him?