[-] [email protected] 4 points 20 hours ago

There shouldn't be an endless grind, and from what I've seen in other interviews, Larian understands that too. They have a couple of things they still want(ed) to work on and will then move on to their next project(s).

They definitely shipped a complete product last August. So complete that a lot of the industry, or at least a loud minority, was getting upset at the raised standards (lol). I don't see how any consumer could complain.

[-] [email protected] 12 points 2 days ago

Have you tried reading it? It's written so poorly that I really hope no human was involved in this and it's just AI generated garbage.

[-] [email protected] 1 points 6 days ago

My bad, I wasn't precise enough with what I wanted to say. Of course you can confirm (with astronomically high likelihood) that a screenshot of AI Overview is genuine if you get the same result with the same prompt.

What you can't really do is prove the negative. If someone gets an output, replicating their prompt won't necessarily give you the same output, for a multitude of reasons: e.g. it might take everything else Google knows about you into account, Google might have tweaked something in the last few minutes, the stochasticity of the model might lead to a different output, etc.

Also funny you bring up image generation, where this actually works too in some cases. For example, researchers have used the same prompt with multiple different seeds, and if there's a cluster of very similar output images, you can surmise that an image looking very close to that cluster was in the training set.
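A toy sketch of that clustering logic, with random vectors standing in for embeddings of real generated images (the function name, thresholds, and simulated data are all made up for illustration):

```python
import numpy as np

def has_tight_cluster(embeddings, sim_threshold=0.95, min_size=3):
    """Return True if at least `min_size` embeddings (including the
    anchor itself) are pairwise more similar than `sim_threshold`."""
    # Normalize rows so dot products become cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    for row in sims:
        if np.sum(row >= sim_threshold) >= min_size:
            return True
    return False

rng = np.random.default_rng(0)

# A "memorized" prompt: near-identical outputs across seeds.
base = rng.normal(size=64)
memorized = np.stack([base + rng.normal(scale=0.01, size=64) for _ in range(5)])

# A normal prompt: outputs scattered across embedding space.
scattered = rng.normal(size=(5, 64))

print(has_tight_cluster(memorized))  # → True
print(has_tight_cluster(scattered))  # → False
```

In a real setting the embeddings would come from running the generated images through an image encoder, not from simulated vectors.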

[-] [email protected] 4 points 6 days ago* (last edited 6 days ago)

Assuming AI Overview does not cache results, they would be generated at search-time for each user and "search-event" independently. Even recreating the same prompt would not guarantee a similar AI Overview, ~~so there's no way to confirm.~~

Edit: See my comment below for what I actually meant to say

[-] [email protected] 2 points 1 week ago

I was guessing at a duel vs dual joke, but civil war makes more sense.

Although they're essentially the same joke in different clothes.

[-] [email protected] 28 points 1 week ago

Assuming we shrink all spatial dimensions equally: with Z, the diagonal will also shrink, so the two horizontal lines would be closer together and you could no longer fit them onto the original horizontal lines. Only once you shrink the Z far enough that it fits within the line width could you fit it into itself again. X, I, and L all work at any arbitrary amount of shrinking though.
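A toy numeric version of that argument, modeling the Z as two horizontal bars a fixed gap apart (the function name and all dimensions are made up for illustration):

```python
def fits_into_original(scale, bar_gap=10.0, stroke_width=1.0):
    """Can a Z scaled by `scale` (< 1) be placed entirely on the
    original Z's ink?

    The scaled bars sit scale * bar_gap apart, so they can no longer
    both line up with the original bars. The remaining placement is
    hiding the whole scaled glyph inside the width of one stroke.
    """
    glyph_height = bar_gap + stroke_width        # rough overall height
    return scale * glyph_height <= stroke_width  # fits inside one stroke

print(fits_into_original(0.9))   # → False: too big to hide in a stroke
print(fits_into_original(0.05))  # → True: small enough to fit in the line width
```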

[-] [email protected] 6 points 2 weeks ago

Now I don't know if they ever changed anything since launch, but if you judged the max speed by the first flying saddle you got, you didn't actually experience anything close to max speed. Pals whose saddles unlock at a higher level (usually) have a much higher speed when mounted.

[-] [email protected] 13 points 3 weeks ago

So is the example with the dogs/wolves and the example in the OP.

As to how hard they are to resolve: the dog/wolves one might be quite difficult, but for the example in the OP, it wouldn't be hard to feed in all images (during training) with randomly chosen backgrounds, removing the model's ability to draw any conclusions based on the background.
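A minimal numpy sketch of that augmentation idea, using a boolean mask and a noise background as stand-ins (a real pipeline would composite the subject onto actual photos, and the names here are hypothetical):

```python
import numpy as np

def randomize_background(image, mask, rng):
    """Keep subject pixels (where mask is True) and replace every
    background pixel with fresh random noise, so the background
    carries no usable signal for the model."""
    background = rng.uniform(0.0, 1.0, size=image.shape)
    return np.where(mask, image, background)

rng = np.random.default_rng(42)

# Toy 4x4 grayscale "image": the subject occupies the top-left 2x2 corner.
image = np.zeros((4, 4))
image[:2, :2] = 0.5
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True

augmented = randomize_background(image, mask, rng)
# Subject pixels are untouched; background pixels are now noise.
```

Applied with a different random background every epoch, any correlation between background and label is destroyed, so the model can only learn from the subject itself.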

However, this would probably unearth the next issue: the human graders, who were presumably used to create the original training dataset, have their own biases based on race, gender, appearance, etc. This doesn't necessarily mean they were racist/sexist/etc., just that they may struggle to detect certain emotions in certain groups of people. The model would then replicate those issues.

[-] [email protected] 22 points 3 weeks ago

I'm sorry, I think you mean "blasting the pyramids with photons."


Mirodir

joined 9 months ago