Hi All,
You may have seen the issues occurring on some lemmy.world communities regarding CSAM spamming. Thankfully it appears they have taken the right steps to reduce these attacks, but there are some issues with the way Lemmy caches images from other servers, and the potential remains for CSAM to make its way onto our image server without us being aware.
Therefore we're taking a balanced approach to the situation, and trying to deal with these issues in the least impactful way possible.
As you read this we're using AI (with thanks to @db0's fantastic lemmy-safety script) to scan our entire image server and automatically delete any possible CSAM. This does come with caveats, in that there will absolutely be false positives (think memes with children in) but this is preferable to nuking the entire image database or stopping people from uploading images altogether.
This won't be a 100% guarantee, but it's far better than doing nothing, and it means CSAM is actively being removed from the server.
We have a pretty good track record locally with account bans (maybe one or two total), which is great. However, if we notice an uptick in spam accounts, we'll look to introduce measures to stop bots that slip past the registration process. For example, ZippyBot can already take over community creation, which would stop any new account from creating communities; only those with a high enough account score would be able to do so.
We don't need (or want) to enable this yet, but just want you all to know we have tools available to help keep this instance safe if we need to use them.
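To illustrate the idea, here's a minimal sketch of what a score-gated community creation check could look like. This is purely hypothetical: the field names, the `MIN_SCORE` threshold, and the `Account` type are all assumptions for illustration, not ZippyBot's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    score: int      # hypothetical net activity score for this account
    age_days: int   # account age; another possible signal a bot gate could use

MIN_SCORE = 50      # assumed threshold; a real deployment would tune this

def may_create_community(account: Account) -> bool:
    """Allow community creation only for accounts above the score threshold."""
    return account.score >= MIN_SCORE

# A fresh spam account is blocked; an established account is allowed.
assert not may_create_community(Account("newbot", score=0, age_days=1))
assert may_create_community(Account("regular", score=120, age_days=200))
```

The point of a gate like this is that it sits entirely on the local instance, so it works even when the spam accounts themselves were registered elsewhere.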
Any questions please let me know.
Thanks all
Demigodrick
When it comes to CSAM, it's important to stop offenders before they get the chance to post and host it on this instance, so reporting isn't really going to work here.
Though I do agree that using score is a bad idea, as it'll foster more of the toxic Reddit culture around differing opinions. On Reddit, low karma actually did punish you: it locked you out of communities, rate-limited you, and even increased your chances of being shadowbanned (yes, you can be shadowbanned for low karma, or at one point you could have been). We really don't want to bring that kind of thing here to Lemmy. Not only does it invite toxicity, it's also less effective here, since votes can't be policed as easily as they can on Reddit, due to the federated nature of the platform and the smaller amount of data admins hold or collect about users.
My concern is that bots will start karma farming. Many communities are just taking off, and it's not inconceivable that a bot could start posting Reddit content.
That's another big concern of mine. As it currently stands, there's no real incentive for upvote farming on Lemmy, but that would change if score were used to gatekeep people and accounts, like karma is on Reddit. And unlike Reddit, you can't really police upvotes and downvotes here, because for accounts on separate servers you won't have access to their emails, IP addresses, or cookies.