I wonder whether, in theory, one could use a dataset of... everything else: have the AI exclude anything it doesn't recognise, then run those exclusions against a dataset to check whether or not they contain children. There could be an additional layer that runs the exclusions against a dataset of ordinary, legal sexual content.
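To make that concrete, here's a very rough sketch of the shape of that pipeline in Python. Everything in it is assumed for illustration: the embeddings would come from some feature extractor trained only on benign content, and `child_detector` / `nsfw_detector` stand in for classifiers trained on lawful datasets (they're stubbed with random scores here). This is a sketch, not a real moderation system.

```python
# Rough sketch of the "exclude what the model recognises" idea.
# All data and detectors here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for embeddings of the "everything else" reference dataset.
benign_embeddings = rng.normal(size=(10_000, 128))

# Fit an anomaly detector on benign content only: anything it has not
# "recognised" scores as out-of-distribution.
ood_model = IsolationForest(random_state=0).fit(benign_embeddings)

def child_detector(embedding: np.ndarray) -> float:
    """Hypothetical placeholder: probability the image contains a child."""
    return float(rng.uniform())

def nsfw_detector(embedding: np.ndarray) -> float:
    """Hypothetical placeholder: probability the image is sexual content."""
    return float(rng.uniform())

def flag_for_review(embedding: np.ndarray) -> bool:
    # Step 1: drop anything the benign-only model does recognise.
    if ood_model.predict(embedding.reshape(1, -1))[0] == 1:
        return False  # in-distribution: looks like ordinary content
    # Step 2: run the exclusions against the two lawful datasets.
    return child_detector(embedding) > 0.5 and nsfw_detector(embedding) > 0.5

uploads = rng.normal(size=(100, 128))
flagged = [i for i, emb in enumerate(uploads) if flag_for_review(emb)]
print(f"{len(flagged)} of {len(uploads)} uploads flagged for human review")
```

The anomaly-detection step is doing the "exclude what it doesn't recognise" part; IsolationForest is just one off-the-shelf way to get that behaviour, not the only option.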
One issue is that the admins of any site would still want to report any CSAM to the authorities. That could be automated by an AI checker, but you would have to have a lot of faith that the AI was reasonably accurate and not generating many false reports. The workaround I described, which avoids training on datasets of abuse, is unlikely to be especially accurate: fine for the purpose of protecting admins, but it leaves them in an odd spot when it comes to banning a user, especially where the user's livelihood could be affected, for example on platforms selling paid online courses. I'd guess specialist police departments would have to use highly relevant datasets along with human review, but even so, nobody wants to inadvertently clog up that system with false reports.
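On the banning and reporting question, the distinction I'm picturing could be made explicit as a triage policy where the model's output alone is never enough to file a report. The thresholds and action names below are entirely my own invention, just to illustrate the idea:

```python
# Illustrative triage policy only. The thresholds and the split between
# "protect the site" actions and "report to authorities" are assumptions
# for the sake of the example, not an established standard.
from enum import Enum, auto

class Action(Enum):
    NONE = auto()
    QUARANTINE = auto()    # hide the content, protect admins
    HUMAN_REVIEW = auto()  # queue for a trained moderator
    REPORT = auto()        # only ever after human confirmation

def triage(model_score: float, human_confirmed: bool = False) -> Action:
    if human_confirmed:
        return Action.REPORT        # never auto-report on model output alone
    if model_score >= 0.9:
        return Action.HUMAN_REVIEW  # high score: prioritise review
    if model_score >= 0.5:
        return Action.QUARANTINE    # uncertain: err toward hiding it
    return Action.NONE

print(triage(0.95))                        # Action.HUMAN_REVIEW
print(triage(0.95, human_confirmed=True))  # Action.REPORT
```

The point is that an inaccurate workaround like the one above might be good enough for the weaker actions (hiding content, queueing review) while the stronger ones (banning a user, filing a report) stay gated behind a human.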