this post was submitted on 19 Nov 2024
85 points (100.0% liked)
Technology
With the sheer volume of training data required, I have a hard time believing that the data sanitation is high quality.
If I had to guess, it's largely filtered through scripts rather than thoroughly vetted by humans. So data sanitation might catch slurs and profanity, but it wouldn't have a way to catch misinformation or a request that the reader stop existing.
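A minimal sketch of what such a script-only pass might look like (the blocklist words here are placeholders, and real pipelines would use much larger curated lists): it drops documents containing banned words, but a hostile sentence with no banned words sails straight through.

```python
import re

# Hypothetical blocklist with placeholder entries; a real pipeline
# would load a large curated list of slurs and profanity.
BLOCKLIST = {"badword", "worseword"}

def passes_blocklist(doc: str) -> bool:
    """Return True if the document contains none of the blocked words."""
    words = set(re.findall(r"[a-z']+", doc.lower()))
    return words.isdisjoint(BLOCKLIST)

# Nothing on the list, so this passes -- the filter can't judge meaning:
passes_blocklist("Please stop existing.")  # -> True
passes_blocklist("this has badword in it")  # -> False
```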
anything containing "die" ought to warrant a human skimming it over at least
I don't disagree, but it is a challenging problem. If you're filtering for "die" then you're going to find diet, indie, diesel, remedied, and just a whole mess of other words.
I'm in the camp where I believe they really should be reading all their inputs. You'll never know what you're feeding the machine otherwise.
However, I have no illusions here: they're cutting corners to save money.
huh? finding only the literal word "die" is a trivial regex; it's something vim users do all the time when editing text files lol
Sure, but underestimating the scope is how you wind up with a Scunthorpe problem
i feel like that's being forced in here. i'm literally just saying that they should scan through any text with the literal word "die" to make sure it's not obviously calling for murder. it's not a complex idea
They could just run the whole dataset through sentiment analysis and delete the parts that get categorized as negative, hostile, or messed up.
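A sketch of that idea, with the caveat that `score_sentiment` below is a toy stand-in (a crude negative-word counter); a real pipeline would plug in an actual sentiment model at that point:

```python
# Toy stand-in for a sentiment model; returns a score in [-1, 0],
# where more negative means more hostile. A production pipeline
# would call a real classifier here instead.
NEGATIVE_WORDS = {"die", "hate", "kill", "worthless"}

def score_sentiment(text: str) -> float:
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in NEGATIVE_WORDS)
    return -hits / len(words)

def filter_dataset(docs, threshold=-0.1):
    """Keep only documents scoring above the hostility threshold."""
    return [d for d in docs if score_sentiment(d) > threshold]

docs = ["I love this remedy", "please just die", "the diesel engine runs"]
filter_dataset(docs)  # -> drops "please just die", keeps the other two
```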