SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

51
52

So despite the nitpicking they did of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested by racists. The post doesn't even count Scott Alexander as "racist" (though it does at least note his HBD sympathies) and still identifies a full 8 racists. It mentions a talk discussing the Holocaust as a eugenics event (and an edit was added apologizing for that simplistic framing). The author is painfully careful and apologetic in distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, and so on. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue: complaining that the post uses the word "controversial" in the title, complaining about the usage of the term "racist", complaining that banning racists threatens their freeze peach and the open discourse of ideas, and so on.

53

WE DEMAND A CORRECTION TO uh various minor nitpicks

also we swear we totally didn't get your email

bonus from thread:

I am having a lot of fun on Manifold, but if the team insists on inviting eugenics speakers to conferences, it's probably time for me to leave :-/

What exactly is your objection to people exercising their bodily autonomy to implement voluntary eugenics?

54

It's the Guardian, but it's still a good read. All of Sneerclub's favorite people were involved.

Last weekend, Lighthaven was the venue for the Manifest 2024 conference, which, according to the website, is “hosted by Manifold and Manifund”. Manifold is a startup that runs a prediction market – a forecasting method that was the ostensible topic of the conference.

Prediction markets are a long-held enthusiasm in the EA and rationalism subcultures, and billed guests included personalities like Scott Siskind, AKA Scott Alexander, founder of Slate Star Codex; misogynistic George Mason University economist Robin Hanson; and Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute (Miri).

Billed speakers from the broader tech world included the Substack co-founder Chris Best and Ben Mann, co-founder of AI startup Anthropic. Alongside these guests, however, were advertised a range of more extreme figures.

One, Jonathan Anomaly, published a paper in 2018 entitled Defending Eugenics, which called for a “non-coercive” or “liberal eugenics” to “increase the prevalence of traits that promote individual and social welfare”. The publication triggered an open letter of protest by Australian academics to the journal that published the paper, and protests at the University of Pennsylvania when he commenced working there in 2019. (Anomaly now works at a private institution in Quito, Ecuador, and claims on his website that US universities have been “ideologically captured”.)

Another, Razib Khan, saw his contract as a New York Times opinion writer abruptly withdrawn just one day after his appointment had been announced, following a Gawker report that highlighted his contributions to outlets including the paleoconservative Taki’s Magazine and anti-immigrant website VDare.

The Michigan State University professor Stephen Hsu, another billed guest, resigned as vice-president of research there in 2020 after protests by the MSU Graduate Employees Union and the MSU student association accusing Hsu of promoting scientific racism.

Brian Chau, executive director of the “effective accelerationist” non-profit Alliance for the Future (AFF), was another billed guest. A report last month catalogued Chau’s long history of racist and sexist online commentary, including false claims about George Floyd, and the claim that the US is a “Black supremacist” country. “Effective accelerationists” argue that human problems are best solved by unrestricted technological development.

Another advertised guest, Michael Lai, is emblematic of tech’s new willingness to intervene in Bay Area politics. Lai, an entrepreneur, was one of a slate of “Democrats for Change” candidates who seized control of the powerful Democratic County Central Committee from progressives, who had previously dominated the body that confers endorsements on candidates for local office.

55

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

56

57

Apparently a senior software engineer got fired for questioning the readiness of the product; dude must still be chuckling to himself.

Found the story here https://hachyderm.io/@wesley83/112572728237770554

58
59

Uncritically sharing this article with naive hope. Is this just PR for a game? Probably. Indies deserve as much free press as possible though.

60

Someone I was following on TikTok, whose takes on tech industry bullshit and specifically AI hype I respected, made a video arguing that Roko's basilisk is a serious concern. My apologies to those who have been in this same situation back when I was less sympathetic.

61

Women have two niches in life: looking beautiful and making babies

The first niche will be taken by sexbots

The second by artificial wombs

Society will suddenly realize it doesn't need women and those in power will quickly start replacing and disempowering them.

People like to consider the positive implications of technology but they don't like to consider the negative implications

The smart strategy for women would be to ban ALL of the following:

  • sexbots
  • artificial wombs
  • trans women

Nobody except women should be allowed to look sexy (feminine sexy), look female or bear children.

To some extent TERF women like @jk_rowling are smart enough to realize that it's essential to defend the female monopoly on this stuff. But the average woman just isn't strategic enough to go along with this, and there's (as always) a collective action problem so you get defectors.

62

this time in open letter format! that'll sure do it!

there are "risks", which they are definite about - the risks are not hypothetical, the risks are real! it's totes even had some acknowledgement in other places! totes real defs for sure this time guize

63

In his original post he said:

4: Related, breaking news: A popular Substack claims that COVID didn’t happen at all, and that both “lab leak” and “natural origins” are part of the higher-level conspiracy to distract people from the fact that there was never a virus in the first place.

He later edited the post to add:

I wonder if I could get even more Substack likes if I one-upped them with a theory that lockdowns never even happened, and it was just one of those Berenstein Bear or Mandela Effect things where everyone has a false memory.

So now it's ironic, and therefore not harmful to spread the conspiracy theory to his large audience.

64
65

The highlight for me is coming up with some weird pseudoscience justification for why it’s okay to hit your kids.

66
67

A video interview with the artist John Wild about AI, AGI, eugenics and Silicon Valley TESCREAL cultism. Posting without watching.

68
69
70
71

The article doesn't mention SSC directly, but I think it's pretty obvious where this guy is getting his ideas.

72

includes considerable nonspecific shit-talking of assigned EA enemies, including - horrors! - Timnit Gebru talking about the social issues of the actually-existing AI-industrial complex. also it's not a CASTLE it's a MANOR HOUSE, you fools, you rubes,

73

Koanic Soul was a website on the virtues of craniometry that was popular in early-2010s neoreactionary discourse. It told of how modern humanity is a mix of Cro-Magnon, Neanderthal and Melonhead, each with its own intellect and personality type. And you can tell which is which just by looking at them.

We lost so much (that was well worth losing) when Koanic Soul closed in 2015-ish. Amazing new slurs for unworthy skull shapes ("snake-melon") that you just don’t hear any more.

Anyway, it turns out there are traces still remaining in rssing.com. This is just page 7 of several.

The main site was rambling delusional blog posts - the above link is just some of the RSS feed for the blogs - and a forum filled with our very good friends.

Here's a contemporary review from r/badscience.

(There is a current Substack and a current YouTube channel of the same name; both are unrelated.)

74

dude has another banger today too, again from the bitter 4chan incel memepool but in bigger words: https://www.lesswrong.com/posts/nxmyGYfZaXvKALWGK/lukehmiles-s-shortform#ijhf8stE4Thc9CWXP

75