This post was submitted on 21 Jan 2024
30 points (91.7% liked)

Hacker News


There is a discussion on Hacker News, but feel free to comment here as well.

top 4 comments
[–] [email protected] 9 points 10 months ago

I mean.

No.

It makes for good headlines, but ultimately, if you can see and understand what an image is, an AI can too.

Unless it's indiscernible to you too, it can still be used. It's probably best to consider another approach entirely.

[–] [email protected] 6 points 10 months ago (1 child)

Only until the models are built to account for the kind of noise Nightshade adds and to countermand it. It's the same cat-and-mouse game that malware makers play with antivirus makers, or YouTubers with ad blockers. This will only make plagiarist AI stronger, sad to say.

[–] [email protected] 4 points 10 months ago

Exactly. Like, how hard would it be to reverse-engineer the poison and create a reversal tool that applies the exact opposite modifications? Hell, I wouldn't be surprised if it could be defeated by something as simple as a little image compression or noise.
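
A minimal sketch of that "compression or noise" idea, assuming Pillow and NumPy are available: re-encode the image as JPEG and overlay light Gaussian noise, two cheap transforms that disturb fine pixel-level perturbations. The file names and parameter values are illustrative assumptions, and whether this would actually defeat Nightshade is untested here.

```python
import io

import numpy as np
from PIL import Image


def jpeg_roundtrip(img: Image.Image, quality: int = 75) -> Image.Image:
    """Re-encode as JPEG, discarding the high-frequency detail that
    pixel-level perturbations tend to live in."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


def add_gaussian_noise(img: Image.Image, sigma: float = 4.0) -> Image.Image:
    """Overlay light Gaussian pixel noise on top of the image."""
    arr = np.asarray(img).astype(np.float32)
    noisy = arr + np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))


if __name__ == "__main__":
    # "poisoned_sample.png" is a hypothetical input file, not a real artifact.
    img = Image.open("poisoned_sample.png").convert("RGB")
    jpeg_roundtrip(add_gaussian_noise(img)).save("scrubbed_sample.png")
```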

[–] [email protected] 5 points 10 months ago

This is the best summary I could come up with:


University of Chicago boffins this week released Nightshade 1.0, a tool built to punish unscrupulous makers of machine learning models who train their systems on data without getting permission first.

"Nightshade is computed as a multi-objective optimization that minimizes visible changes to the original image," said the team responsible for the project.

Nightshade was developed by University of Chicago doctoral students Shawn Shan, Wenxin Ding, and Josephine Passananti, and professors Heather Zheng and Ben Zhao, some of whom also helped with Glaze.

"Nightshade can provide a powerful tool for content owners to protect their intellectual property against model trainers that disregard or ignore copyright notices, do-not-scrape/crawl directives, and opt-out lists," the authors state in their paper.

The failure to consider the wishes of artwork creators and owners led to a lawsuit filed last year, part of a broader pushback against the permissionless harvesting of data for the benefit of AI businesses.

Matthew Guzdial, assistant professor of computer science at the University of Alberta, said in a social media post, "This is cool and timely work!"


The original article contains 704 words, the summary contains 174 words. Saved 75%. I'm a bot and I'm open source!