Today, the child safety organization Thorn, in partnership with the cloud-based AI solutions provider Hive, announced the release of an AI model designed to flag unknown CSAM at upload time. It is billed as the first AI technology aimed at exposing unreported CSAM at scale.

[–] [email protected] 23 points 1 day ago (3 children)

And will we get that technology to keep the Fediverse and free platforms safe? Probably not. All its predecessors have been reserved for the sole use of the big players, even though populists keep claiming we need total surveillance to keep the children safe...

[–] [email protected] 4 points 23 hours ago (1 children)

IFTAS is already working with Thorn towards this goal. In the meantime, you have access to such technology through my toolset.

[–] [email protected] 2 points 22 hours ago* (last edited 22 hours ago) (1 children)

This one? I've loosely followed your work... Maybe I should try it someday and see how it does on a regular VPS. Thanks for the link to IFTAS. Seems they've curated some useful links... I'll have a look at their articles. Hope they get somewhere with that. At this point, I don't think there is any blocklist accessible to the average Fediverse admin?!

Edit: Thx, saw your other comment with the link to horde-safety.

[–] [email protected] 2 points 22 hours ago (1 children)

Yeah, a normal VPS would be too slow for production use, as a GPU is recommended. But you can plug in any home PC to do it without risk.

[–] [email protected] 1 points 22 hours ago* (last edited 22 hours ago) (1 children)

Do you think this approach would be worth a try for the threaded Fediverse (aka Lemmy)? I mean, your use case is very different. We have some rudimentary image detection in Piefed to flag other kinds of unwanted images. I could experiment with something like https://github.com/monatis/clip.cpp: have it go through the media cache and see if it can do something useful for us. But I don't think it'd be worth the effort unless the whole approach is reasonably accurate and runs in real time on an average VPS.
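
For context, a minimal sketch of the kind of zero-shot CLIP scan over a media cache being discussed, written in Python with the open_clip library rather than clip.cpp for brevity. The label prompts, directory name, and 0.8 threshold are illustrative assumptions, not tuned values:

```python
# pip install open_clip_torch pillow
import pathlib

import torch
from PIL import Image
import open_clip

# Load a small CLIP model; the weights download on first use.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

# Illustrative label prompts; a real deployment would tune these carefully.
labels = ["a harmless everyday photo", "explicit or unsafe imagery"]
with torch.no_grad():
    text_features = model.encode_text(tokenizer(labels))
    text_features /= text_features.norm(dim=-1, keepdim=True)

def unsafe_score(path: pathlib.Path) -> float:
    """Return the probability mass CLIP assigns to the 'unsafe' label."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = model.encode_image(image)
        feats /= feats.norm(dim=-1, keepdim=True)
        probs = (100.0 * feats @ text_features.T).softmax(dim=-1)
    return probs[0, 1].item()

# Walk the media cache and surface anything above a hypothetical threshold.
for p in pathlib.Path("media_cache").glob("**/*.jpg"):
    score = unsafe_score(p)
    if score > 0.8:
        print(f"flag for human review: {p} (score {score:.2f})")
```

Whether something like this meets the real-time-on-a-VPS requirement would come down to model size and batching; it only surfaces candidates for human review, it doesn't make removal decisions.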

[–] [email protected] 3 points 22 hours ago* (last edited 22 hours ago)

This approach was developed precisely for the threaded Fediverse. The initial use case was protecting my own Lemmy instance from CSAM! Check out fedi-safety and pictrs-safety.

[–] [email protected] 14 points 1 day ago (1 children)

I was going to say... Sure would be nice to have this feature in all the open-source AI image generator tools, but you're absolutely right 😩

[–] [email protected] 11 points 1 day ago

Yeah, unless someone at least publishes a set of hashes of known bad content for the general public... I kind of doubt the true intention is preventing CSAM for the benefit of everyone.

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago) (1 children)

If everyone has access to the model, it becomes much easier to find obfuscation methods and validate them, and it turns into an uphill battle. It's unfortunate, but it's an inherent limitation of most safeguards.

[–] [email protected] 2 points 1 day ago

You're probably right. I'm not sure it's a good idea to walk close to the edge with things like this, though. Every update to the detection model could change things and land someone in jail... So I certainly wouldn't play a cat-and-mouse game with something that carries several years of jail time. But then, I don't really know the thought process of the average pedo.

And AI image detection comes with problems anyway. In the article they say it has already detected 6 million pictures, while keeping quiet about the rate of false positives. We know people have gotten into serious trouble over (false) claims. And I wouldn't want to be the Fediverse admin who has to go through thousands of flagged pictures, look at them, and decide which is which, with consequences attached...

Maybe a database of hashes would be the only option. That doesn't detect new pictures, but at the same time it comes without false positives, and you can't draw conclusions from hash values.
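
As an illustration of that hash-based approach, here is a minimal sketch using the Python imagehash library. A perceptual hash (pHash) tolerates re-encoding and resizing at the cost of rare collisions; a distance threshold of 0, or a cryptographic hash like SHA-256, gives the strictly no-false-positive behavior described above. The blocklist entry, file names, and threshold are hypothetical:

```python
# pip install imagehash pillow
from PIL import Image
import imagehash

# Hashes of known bad content -- in practice these would come from a
# shared blocklist; admins never need to handle the images themselves.
blocklist = {
    imagehash.hex_to_hash("8f373714acfcf4d0"),  # hypothetical entry
}

def is_flagged(path: str, max_distance: int = 4) -> bool:
    """Flag an image whose perceptual hash is within max_distance bits
    of any blocklisted hash; 0 means exact hash matching only."""
    h = imagehash.phash(Image.open(path))
    return any(h - bad <= max_distance for bad in blocklist)

print(is_flagged("media_cache/upload.jpg"))
```

The appeal of distributing hashes rather than a model is exactly the point made above: a hash reveals nothing about the image it was computed from, so such a list could be shared with small admins without handing anything useful to bad actors.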