ChairmanMeow

joined 1 year ago
[–] [email protected] 5 points 1 hour ago

Trump's attempt to get other Muslim countries to make peace with Israel without properly addressing the Palestinian question is something Hamas cited as part of their 'casus belli', the reason they attacked Israel. They feared that if their supposed "allies" made peace, the Palestinian cause would be lost.

Trump didn't really de-escalate tensions; rather, he provoked some (e.g. the embassy move) and ignored other rising tensions because addressing them would have been too difficult. One can easily argue his actions were an indirect cause of the current mess.

[–] [email protected] 4 points 4 hours ago

Historically, when games are delisted they still remain in your library.

[–] [email protected] 2 points 1 day ago

You do get the advantage of easy and, above all, fast placement.

Not sure how this would work out. There are pros and cons, I suppose.

[–] [email protected] 2 points 3 days ago (1 children)

I have no issues connecting to my server when using my local DNS and self-signed certificates with the normal app either, or perhaps I'm misunderstanding you.

[–] [email protected] 3 points 3 days ago* (last edited 3 days ago)

> If producing an AGI is intractable, why does the human meat-brain exist?

Ah, but here we have to get a little pedantic: producing an AGI through currently known methods is intractable.

The human brain is extremely complex and we still don't fully know how it works. We don't know if the way we learn is really analogous to how these AIs learn. We don't really know if the way we think is analogous to how computers "think".

There's also another argument to be made: that an AGI matching the currently agreed-upon definition is impossible. And I mean that in the broadest sense, e.g. humans don't fit the definition either. If that's true, then an AI could perhaps be trained in a tractable amount of time, but this would upend our understanding of human consciousness (perhaps justifiably so). Maybe we're overestimating how special we are.

And then there's the argument you already mentioned: it is intractable, but 60 million years spread over trillions of creatures is long enough. That also suggests AGI is really hard, and that creating one really isn't "around the corner" as some enthusiasts claim. For any practical AGI we'd have to finish training in maybe a couple of years, not millions of years.

And maybe we develop some quantum computing breakthrough that gets us where we need to be. Who knows?

[–] [email protected] 5 points 4 days ago (4 children)

This is a gross misrepresentation of the study.

> That's as shortsighted as the "I think there is a world market for maybe five computers" quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented.

That's not their argument. They're saying that they can prove that machine learning cannot lead to AGI in the foreseeable future.

> Maybe transformers aren't the path to AGI, but there's no reason to think we can't achieve it in general unless you're religious.

They're not talking about achieving it in general; they only claim that no known techniques can bring it about in the near future, contrary to what the AI-hype people claim. Again, they prove this.

> That's a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it, doesn't mean it has any relationship to the real world.

That's not what they did. They set up an extremely optimistic scenario in which someone creates an AGI through known methods (e.g. the trainers have a computer with limitless memory and infinite, perfect training data, they can sample without any bias, current techniques are assumed to eventually produce AGI, the AGI only has to be slightly better than random chance rather than perfect, etc.), and then present a computational proof that even this scenario contradicts established complexity-theoretic results.

Basically, if you can train an AGI through currently known methods, then you have an algorithm that can solve the Perfect-vs-Chance problem in polynomial time. There's a technical explanation in the paper that I'm not going to try to rehash, since it's been too long since I worked on computational proofs, but it seems to check out. And that's a contradiction: we have proof, hard mathematical proof, that Perfect-vs-Chance is NP-hard, so no polynomial-time algorithm for it can exist. Therefore, learning an AGI must also be NP-hard, and since every known AI learning method runs in tractable (polynomial) time, it cannot possibly lead to AGI. It's not a strawman, it's a hard proof of why it's impossible, like proving that pi has infinite decimals or something.
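
To make the shape of that reduction concrete, here's a minimal sketch of the contradiction as I understand it (my own paraphrase from memory, not the paper's actual formalisation, so treat the details as assumptions):

```latex
% Sketch of the impossibility argument (paraphrase; not the paper's notation).
\begin{enumerate}
  \item Optimistic premise: a learning algorithm $L$ runs in polynomial time
        and outputs a model that performs even just slightly better than
        chance on arbitrary human-level behaviour.
  \item Reduction: such an $L$ can be used as a subroutine to decide
        \textsc{Perfect-vs-Chance} in polynomial time.
  \item Known result: \textsc{Perfect-vs-Chance} is NP-hard, so no
        polynomial-time decider for it exists (unless $\mathrm{P}=\mathrm{NP}$).
  \item Contradiction: no such $L$ can exist, i.e.\ learning an AGI with
        tractable methods is itself an intractable (NP-hard) problem.
\end{enumerate}
```

The force of the argument is that step 1 already grants the AI-hype side everything they could ask for, and the contradiction still goes through.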

Ergo, anyone who claims that AGI is around the corner either means "a good AI that can demonstrate some but not all human behaviour" or is bullshitting. We could literally burn up the entire planet for fuel to train an AI and we'd still not end up with an AGI. We need some other breakthrough, e.g. significant advancements in quantum computing perhaps, to even hope to begin work on an AGI. And again, the authors don't just offer a thought experiment; they provide a computational proof.

[–] [email protected] 27 points 6 days ago (1 children)

> This article was amended on 14 September 2023 to add an update to the subheading. As the Guardian reported on 12 September 2023, following the publication of this article, Walter Isaacson retracted the claim in his biography of Elon Musk that the SpaceX CEO had secretly told engineers to switch off Starlink coverage of the Crimean coast.

IIRC Musk didn't switch it off; it wasn't turned on in the first place, and Musk refused to turn it on when the Ukrainian military requested it.

Musk is a shithead but not for this reason.

[–] [email protected] 5 points 6 days ago

I mean, unless you believe life is like a fairytale where one side must necessarily be good and the other must necessarily be evil, one can oppose and condemn two opposing parties in a conflict at the same time.

[–] [email protected] 13 points 1 week ago (5 children)

Ever considered that neither Hezbollah nor Israel seems to care about civilian lives? Are they, perhaps, both fucking terrible?

[–] [email protected] 6 points 1 week ago (2 children)

https://github.com/cheeaun/phanpy?tab=readme-ov-file#easy-way

It seems it's quite literally just a download-and-run kind of deal. Pretty trivial.
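
For the curious: since Phanpy is just a bundle of static files, "run" can be as simple as pointing any static web server at the extracted archive. A minimal sketch, assuming the release was extracted into a local phanpy-dist/ directory (that directory name is my assumption; check the actual release asset on the GitHub page):

```python
# Minimal sketch: serve an extracted Phanpy release (plain static files) locally.
# The "phanpy-dist" directory name is an assumption; use whatever the extracted
# release archive is actually called. Any static file server works equally well.
import functools
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = functools.partial(SimpleHTTPRequestHandler, directory="phanpy-dist")
HTTPServer(("127.0.0.1", 8080), handler).serve_forever()
# Then browse to http://127.0.0.1:8080
```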
