submitted 1 week ago by [email protected] to c/[email protected]

cross-posted from: https://midwest.social/post/14150726

But just as Glaze's userbase is spiking, a bigger priority for the Glaze Project has emerged: protecting users from attacks disabling Glaze's protections—including attack methods exposed in June by online security researchers in Zurich, Switzerland. In a paper published on Arxiv.org without peer review, the Zurich researchers, including Google DeepMind research scientist Nicholas Carlini, claimed that Glaze's protections could be "easily bypassed, leaving artists vulnerable to style mimicry."

top 5 comments
[-] [email protected] 20 points 1 week ago

Google DeepMind research scientist Nicholas Carlini, claimed that Glaze's protections could be "easily bypassed, leaving artists vulnerable to style mimicry."

Remember when tech bros tried to appear cool and benevolent and different from the mean old business tycoons of the past? They never were, but it’s pretty wild how quickly they’ve decided to become just nakedly evil.

[-] [email protected] 6 points 1 week ago

Capitalists gonna capitalist

[-] [email protected] 17 points 1 week ago* (last edited 1 week ago)

The big issue with all these data-poisoning attempts is that they work by adding visible, watermark-like noise to images, trying to get destructive noise associated with training keywords inside what are effectively extremely aggressive de-noising algorithms. In practice, the result has been either to improve the quality of models trained on a dataset containing some poisoned images (because, for whatever reason, feeding more noise into the inscrutable anti-noise black box machine makes it work better), or to be completely wiped out by a single low-strength de-noise pass that cleans the poisoned images.

Like, literally within hours of the poisoning models being made public, preliminary hobbyist testing found that they didn't really do what was claimed (they leave highly visible, distracting watermarks all over the image, and they don't disrupt training as much as advertised, possibly not at all) and that they could be trivially countered as well.
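For a sense of what such a "single low de-noise pass" looks like in practice, here is a minimal sketch using OpenCV's non-local-means filter; the file names and strength values are illustrative assumptions, not anything specific to Glaze or the Zurich paper.

```python
# Rough sketch of a low-strength de-noise pass of the kind described above.
# Assumes opencv-python is installed; file names and parameters are placeholders.
import cv2

def light_denoise(in_path: str, out_path: str) -> None:
    img = cv2.imread(in_path)  # load the (possibly perturbed) image as BGR
    if img is None:
        raise FileNotFoundError(in_path)
    # Non-local-means denoising at low strength: enough to smooth out
    # pixel-level adversarial noise, mild enough to leave the artwork
    # visually close to the original.
    # Arguments: src, dst, h, hColor, templateWindowSize, searchWindowSize.
    cleaned = cv2.fastNlMeansDenoisingColored(img, None, 5, 5, 7, 21)
    cv2.imwrite(out_path, cleaned)

light_denoise("perturbed_art.png", "cleaned_art.png")
```

Non-local means is just one possible choice here; any mild smoothing or re-encoding step that targets fine pixel-level detail works on the same principle.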

[-] [email protected] 14 points 1 week ago* (last edited 1 week ago)

Nothing like porky lecturing us on respecting property rights when shutting down 30-year-old ROMs, yet thinking the IP of poor people should be shared with them, free of charge.

Plus, don't they have anything better to automate? Are you that bereft of ideas that automating away a hobby is your TOP PRIORITY!?

[-] [email protected] 5 points 1 week ago

The trick is to only draw extremely vulgar and obscene images that'd have to be filtered out of any dataset a company could possibly sell

this post was submitted on 05 Jul 2024
50 points (98.1% liked)

art

22242 readers

A community for sharing and discussing art, aesthetics, and music relating to '80s, '90s, and '00s retro microgenres and also art in general now!

If you are unsure if a piece of media is on theme for this community, you can make a post asking if it fits. Discussion posts are encouraged, and particularly interesting topics will get pinned periodically.

No links to a store page or advertising. Links to bandcamps, soundclouds, playlists, etc are fine.

founded 4 years ago