Honestly it's either that or some Rosetta Stone-level translation errors, which sounds like a pretty good risk to take for me.
YourNetworkIsHaunted
So the guys who have been burning almost as much VC money as they have water and electricity in the name of building AGI have announced that they're totally gonna do it this time? Just a few more training runs, man, I swear, this time we're totally gonna turn everyone into paperclips, just let me have a few more runs.
Not gonna lie, "enforcing the line between ketchup and tomato sauce" isn't the sort of thing I'd expect the government to be into, but I guess I'm not mad about it?
Gotta be cheaper than buying new planes which would also have new engines. Generally there needs to be a pretty substantial increase in capability before it's worth retiring an existing platform, especially in a logistics role where you don't get as much benefit from the bleeding edge because nobody's supposed to be shooting at you in the first place.
I think the missing piece here is that the B-52 isn't just a pretty good cargo hauler, it's a pretty good cargo hauler that we don't need to buy a whole new airframe to get. Think of it less as "we're commissioning these B-52s" and more as "hey look, we found a way to use all these B-52s we already had," except this one just keeps working forever.
It's not an exhaustive search technique, but it may be an effective heuristic if anyone is planning The Revolution(tm).
AI could be a viable test for bullshit jobs as described by Graeber. If the disinformatron can effectively do your job then doing it well clearly doesn't matter to anyone.
I mean, doesn't somebody still need to validate that those keys only get to people over 18? Either you have a decentralized authority that's more easily corrupted or subverted or else you have the same privacy concerns at certificate issuance rather than at time of site access.
I mean, the whole point of declaring this era post-truth is that these people have basically opted out of consensus reality.
Why don't they just hire a wizard to cast an anti-tiktok spell over all of Australia instead? It would be just as workable and I know a guy who swears he can do it for cheaper than whatever server costs they're gonna try and push.
Okay, apparently it was my turn to subject myself to this nonsense, and it's pretty obvious what the problem is. As far as citations go I'm gonna go ahead and fall back on "watching how a human toddler learns about the world," which is something I'm sure most AI researchers probably don't have experience with, as it usually involves interacting with a woman at some point.
In the real examples that he provides, the system isn't "picking up the wrong goal" as an agent somehow. Instead it's seeing the wrong pattern: learning "I get a pat on the head for getting to the bottom-right-est corner of the level" rather than "I get a pat on the head when I touch the coin." These are totally equivalent in the training data, so it's not surprising that it goes with the simpler option that doesn't require recognizing "coin" as anything relevant. This failure state is entirely within the realm of existing machine learning techniques and models, because identifying patterns in large amounts of data is the kind of thing they're known to be very good at. But there isn't any kind of instrumental goal being established here so much as the system recognizing that it should reproduce games where it moves in certain ways.
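The "two goals that look identical in training" point can be shown in a few lines. This is a made-up toy sketch (not anyone's actual experiment): a 1-D level where the coin always sits at the right edge during training, so a policy that ignores the coin entirely is indistinguishable from one that actually seeks it, right up until the coin moves.

```python
# Toy illustration of pattern misrecognition: "go right" and "go to the
# coin" earn identical reward on the training distribution, so nothing in
# the training signal distinguishes them. All names here are invented.

def run_episode(policy, coin_x, width=10):
    """Walk an agent from x=0 for `width` steps; reward 1 if it ends on the coin."""
    x = 0
    for _ in range(width):
        x = policy(x, coin_x, width)
    return 1 if x == coin_x else 0

def go_right(x, coin_x, width):
    # The "wrong pattern": head for the bottom-right-est corner, ignore the coin.
    return min(x + 1, width - 1)

def go_to_coin(x, coin_x, width):
    # The intended behavior: step toward the coin, then stay on it.
    return x + (1 if coin_x > x else -1 if coin_x < x else 0)

# Training distribution: the coin is always at the far right edge.
train = [run_episode(p, coin_x=9) for p in (go_right, go_to_coin)]
# Test distribution: the coin moved to the middle of the level.
test = [run_episode(p, coin_x=4) for p in (go_right, go_to_coin)]
```

On the training levels both policies score `[1, 1]`; move the coin and `go_right` scores 0 while `go_to_coin` still scores 1. Nothing about that gap requires the system to "want" anything, it just learned the simpler regularity.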
This is also a failure state that's common in humans learning about the world, so it's easy to see why people think we're on the right track. We had to teach my little one the difference between "Daddy doesn't like music" and "Daddy doesn't like having the Blaze and the Monster Machines theme song shout-sung at him when I'm trying to talk to Mama." The difference comes from the fact that even in a toddler there's enough metacognition and actual thought going on that you can help guide them in the right direction, rather than needing to feed them a whole mess of additional examples and rebuild the underlying pattern.
And the extension of this kind of pattern misrecognition into sci-fi end of the world nonsense is still unwarranted anthropomorphism. Like, we're trying to use evidence that it's too dumb to learn the rules of a video game as evidence that it's going to start engaging in advanced metacognition and secrecy.
They have enough money to make it complex, no matter how simple the underlying issue.