Soon we’ll find delivery robots trying to pull some amazing stunts, all thanks to the sacrifices of some daring Pokemon Go players. Good times ahead 🍿
Thanks. Seems like a really freaky situation. Must be something in the training data. My guess is, this LLM was trained on all the creepy hostility found on Twitter.
History repeats itself.
Some Old Thing (software/website/service/whatever) becomes bad, and people get really upset. Initially, many say that SOT is going to die. Techies switch from SOT to New Great Thing. For a while, techies at NGT celebrate and pat each other on the back for making this brilliant move.
Meanwhile, normies at SOT continue to use it. They hate it at first or even complain about it, but eventually they get used to how bad SOT is. Every now and then, they hear about NGT, but they just can’t switch because reasons.
After a few years it’s clear that SOT hasn’t died; it still has quite a few users. Some people end up using both, while a small group vows to never touch SOT ever again. SOT and NGT both continue to exist, because apparently there are enough users for both.
I’ve seen these things happen so many times that it’s about time to point out there’s a pattern. Just look back at any tech controversy from the past 30 years and you’ll see it usually follows this pattern pretty well.
They could just run the whole dataset through sentiment analysis and delete the parts that get categorized as negative, hostile or messed up.
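Something like this rough sketch, using the off-the-shelf Hugging Face sentiment pipeline (the file names and confidence threshold here are just placeholders, not anything a real lab necessarily uses):

```python
# Rough sketch: strip "negative" samples out of a training corpus with an
# off-the-shelf sentiment classifier. File names and threshold are made up
# for illustration; a real pipeline would use a proper toxicity model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default English sentiment model

def keep(text: str, threshold: float = 0.9) -> bool:
    """Keep a sample unless the classifier is confident it's negative."""
    result = classifier(text[:512])[0]  # truncate so long samples fit the model
    return not (result["label"] == "NEGATIVE" and result["score"] >= threshold)

with open("corpus.txt", encoding="utf-8") as src, \
        open("corpus_filtered.txt", "w", encoding="utf-8") as dst:
    for line in src:
        if line.strip() and keep(line.strip()):
            dst.write(line)
```

Of course, sentiment filtering is a blunt instrument; it would also throw away perfectly useful text that just happens to sound grumpy.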
Twitter is another possibility. The LLM could have learned how to write like a bubbling barrel of radioactive toxic waste, and then just applied those lessons in a longer format.
Stuff like this should help with that. If the AI can evaluate the response before spitting it out, that could improve the quality a lot.
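Even a crude version of that could be bolted on after the fact. A minimal sketch, where `generate()` is just a stand-in for whatever the chatbot backend actually calls, and the sentiment pipeline is a placeholder for a real safety classifier:

```python
# Hypothetical self-check loop: score the draft reply before showing it,
# and retry (or fall back to a canned refusal) if it reads as hostile.
from transformers import pipeline

judge = pipeline("sentiment-analysis")  # placeholder for a real safety model

def generate(prompt: str) -> str:
    # Stand-in for the actual LLM call; a real bot would query its model here.
    return f"Dummy reply to: {prompt}"

def safe_reply(prompt: str, max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        draft = generate(prompt)
        verdict = judge(draft[:512])[0]
        hostile = verdict["label"] == "NEGATIVE" and verdict["score"] > 0.95
        if not hostile:
            return draft
    return "Sorry, I can't help with that."

print(safe_reply("Help me with my homework"))
```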
Oh, there it is. I just clicked the first link, they didn’t like my privacy settings, so I just said nope and turned around. Didn’t even notice the link to the actual chat.
Anyway, that creepy response really came out of nowhere. Or did it?
What if the training data really does contain hostile and messed up stuff like this? Probably does, because these LLMs have eaten everything the internet has to offer, which isn’t exactly a healthy diet for a developing neural network.
Would be really interesting to know what kind of conversation preceded that line. What does it take to push an LLM off the edge like that? Did the student pull a DAN or something?
My guess is, the people who cared didn’t stick around. As a result, quality went down.
It would make sense to include matching images in the search results and other engagement-driven recommendations. There are quite a few screenshots too, so if the search can only handle text, it’s going to completely miss a pretty large category.
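If the backend really is text-only, one cheap way to get screenshots into the index would be plain OCR, roughly like this (the directory and the dict-based index are invented for the example, and it needs the Tesseract binary installed):

```python
# Sketch: OCR screenshots so a text-only search can still find them.
# Requires the Tesseract binary plus the pytesseract and Pillow packages.
from pathlib import Path

import pytesseract
from PIL import Image

index: dict[str, str] = {}  # filename -> extracted text
for path in Path("screenshots").glob("*.png"):
    index[path.name] = pytesseract.image_to_string(Image.open(path)).lower()

def search(query: str) -> list[str]:
    """Return screenshot filenames whose OCR'd text contains the query."""
    q = query.lower()
    return [name for name, text in index.items() if q in text]

print(search("error"))
```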
That’s the coldest time. The advice people give about layering up should be taken seriously.
If it’s windy, you’ll need to protect your face too, so bring a balaclava with you. Get one of those that have two holes for eyes. You know, bank robbery style.
VPN: essential or snake oil?