255
submitted 1 month ago by [email protected] to c/[email protected]
[-] [email protected] 69 points 1 month ago

Stability AI crashed and burned so fast it's not even funny. Their talent is abandoning ship, and they've even been caught scraping images from Midjourney, which means they probably don't have a proper dataset.

[-] [email protected] 39 points 1 month ago

The model should be capable of much better than this, but they spent a long time censoring the model before release and this is what we got. It straight up forgot most human anatomy.

[-] [email protected] 56 points 1 month ago

There's a reason that artists in training often practice by drawing nudes, even if they don't intend for that to be the main subject of their art. If you don't know what's going on under the clothing you're going to have a hard time drawing humans in general.

[-] [email protected] -1 points 1 month ago

they have plenty of porn created using the AI lol

[-] [email protected] 8 points 1 month ago

This article is about the newest model, SD3 Medium (2B). Previous models such as SD2 and SDXL were also mostly unable to generate nudity, though they managed beach or summer images. The earliest, SD1.5, is the most capable of nudity, especially with the copious fine-tunes focused on that. SD3, though, completely freaks out as soon as it starts generating skin. It's straight up weird. Only winter images with full head-to-toe clothing produce humans at all. It's currently a landscape generator. Even realistic animals are hard for it. Whatever it successfully generates looks quite nice, though. Pretty background wallpapers.

[-] [email protected] 3 points 1 month ago

wtf, they are selling something worse than the last one?

[-] [email protected] 9 points 1 month ago

This sucks. I was really holding out hope that they might chart a better path forward than most of the alternatives.

[-] [email protected] 34 points 1 month ago* (last edited 1 month ago)

Honestly I think that it's models like these that output things that could be called art.

Whenever a model is actually good, it just creates pretty pictures that would have otherwise been painted by a human, whereas this actually creates something unique and novel. Just like real art almost always elicits some kind of emotion, so too do the products of models like these, and I think that's much more interesting than having another generic AI postcard.

Not that I'm happy to see how much SD has fallen though.

[-] [email protected] 15 points 1 month ago

It would be great if the model could produce this beautifully disfigured stuff when the user asked it to. But if it can't follow the user's prompts reasonably, then it's pretty useless as a tool.

[-] [email protected] 5 points 1 month ago

I can see an argument for artists choosing to use chaotic processes they can't really control.

Setting up a canvas and paints and brushes in a particular arrangement in the woods, and letting migratory animals and weather put their mark on the work, and then see what results. That could be art.

And if that can be art, then I guess chaotic, unpredictable AI models can output something that can be art, too.

[-] [email protected] 3 points 1 month ago

I agree, bring on the weird. I don't need accurate, I want hallucinated novelty. This is like people who treat LLMs like a dictionary or search engine and complain about inaccuracy. They don't understand this is to be expected of a synthesized answer.

Hallucination is an essential part of the value these things bring.

[-] [email protected] 34 points 1 month ago

Almost like the issues with repressing sex and nudity are harming the development of intelligence. Just like real life.

[-] [email protected] 6 points 1 month ago

I was going to say this: their new architecture seems to be better than previous ones, they have more compute, and, I'm guessing, more data. The only explanation for this downgrade is that they tried to ban porn. I hadn't read anything about this online at the time; I'm only just learning about it now.

[-] [email protected] 5 points 1 month ago

I see this growing sentiment. Are we on the cusp of a re-examination of this social wound?

[-] [email protected] 26 points 1 month ago

Wow, the pile of limbs in the living room pic genuinely creeped me out.

[-] [email protected] 25 points 1 month ago

"Biblically accurate models"

[-] [email protected] 17 points 1 month ago

Ah, yes. Man made horrors beyond my comprehension.

[-] [email protected] 14 points 1 month ago

? They are all bad at first for the average person using surface-level tools, but SD3 won't have the community to tune it because it is proprietary junk and irrelevant now.

[-] [email protected] 9 points 1 month ago

Would you mind sharing some good alternatives that aren’t proprietary junk?

[-] [email protected] 13 points 1 month ago* (last edited 1 month ago)

I believe pixart sigma is more open. The community hasn’t rallied around it though.

Edit: Fuck yes, pixart is AGPL!

[-] [email protected] 5 points 1 month ago* (last edited 1 month ago)

Now that everyone's no longer waiting in anticipation of SD3 perhaps we'll start seeing diversification of attention to other models.

[-] [email protected] 5 points 1 month ago

In my experience these open models are where the real work is being done. The large supervised models like DALL-E are more flashy, but there's a lot more going on behind the scenes than the model itself, so it's hard to gauge the real progress being made.

[-] [email protected] 10 points 1 month ago

There are a lot of fine-tunes of earlier Stable Diffusion models (SD1.5 and SDXL) that are better than this, and will continue to see refinement for some time yet to come. Those were released with more permissive licenses so they've seen a lot of community work built on them.

[-] [email protected] 6 points 1 month ago

CommonCanvas, the model trained on a CC-licensed-only dataset

[-] [email protected] 2 points 1 month ago

Isn't SD3 still planned for an open release later, though?

[-] [email protected] 6 points 1 month ago

No. I don't think so. The lead researcher left because of it.

[-] [email protected] 1 points 1 month ago

I'm not seeing anything about the lead researcher leaving because of that, just that they're leaving, with expenses far exceeding revenue being a suspected reason.

[-] [email protected] 2 points 1 month ago

SD3 won't have the community to tune it because it is proprietary junk and irrelevant now.

What changed between SDXL and SD3? I’m out of the loop on this one.

[-] [email protected] 5 points 1 month ago

They realized that no matter how much they charged as a one-time fee, the people who got the one-time-fee enterprise license would eventually cost them more in computational costs than the fee. So they switched it to 6000 image generations, which wasn't enough for most of the community that made fixes and trained LoRAs, so none of the "cool" community stuff will work with SD3.

[-] [email protected] 2 points 1 month ago

Have they considered a community-sponsored "group buy" of compute, to just train the model as far as the community will bear? SDXL was so great, surely 100k people could put $5 a month toward making monthly open-source checkpoint improvements happen? I don't see any other financing model working out if the output is open source. It simply can't be financed after publication. And it won't get the community support if it's behind a paywall.

[-] Geologist 1 points 1 month ago

Maybe I’m out of the loop, but I was under the impression people paying for the enterprise tier were largely using the model on their own hardware, and that the removal of this tier was largely just rent-seeking by Stability AI against people improving on their model and selling access to a better version.

Did SD really sell unlimited access to their compute/ image generator for a fixed price? If so that’s just so dumb it’s hard to believe. I only started paying attention to the company recently though, so maybe I’m missing something.

[-] [email protected] 13 points 1 month ago

this is gonna lead to some weird fetishes

[-] [email protected] 10 points 1 month ago

Basically, any time a user prompt homes in on a concept that isn't represented well in the AI model's training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for.

I'm so happy that the correct terminology is finally starting to take off in replacing 'hallucinate.'

[-] [email protected] 9 points 1 month ago

Such results may not be very useful for most people, but that's dope in an accidentally artistic way.

[-] [email protected] 6 points 1 month ago

Also from reddit, with zero irony:

Kudos to Stablility AI for releasing ANOTHER excellent model for FREE.

💀

[-] [email protected] 5 points 1 month ago

The model does have a lot of advantages over sdxl with the right prompting, but it seems to fall apart in prompts with more complex anatomy. Hopefully the community can fix it up once we have working trainers.

[-] [email protected] 5 points 1 month ago

"Laying on grass" is complex?

[-] [email protected] 2 points 1 month ago

Gotta think most SFW pictures of people are portraits. Poses are more advanced, for sure.

[-] [email protected] 4 points 1 month ago

holy yikes! call Cronenberg!

this post was submitted on 12 Jun 2024
255 points (98.1% liked)
