this post was submitted on 25 Jul 2024
1008 points (97.5% liked)
Technology
you are viewing a single comment's thread
All the models I've used that do TTS/RVC and rotoscoping have definitely not produced professional results.
What are you using? Cause if you're a professional, and this is your experience, I'd think you'd want to ask me what I'm using.
Coqui for TTS, RVC UI for matching the TTS to the actor's intonation, and DWPose -> controlnet applied to SDXL for rotoscoping
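For anyone who wants to poke at it, the Coqui stage of that chain is only a few lines. This is just a minimal sketch with a stock pretrained model; the model name, text, and file path are placeholders, and the RVC and DWPose/ControlNet stages aren't shown:

```python
# Minimal Coqui TTS sketch: synthesize a raw line that would then go through
# RVC for intonation matching. Model name and paths are placeholders.
from TTS.api import TTS

# Any pretrained model from TTS().list_models() works here.
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")

tts.tts_to_file(
    text="This line gets replaced in the edit.",
    file_path="raw_tts_line.wav",
)
```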
Full open source, nice! I respect the effort that went into that implementation. I pretty much exclusively use 11 Labs for TTS/RVC, turn up the style, turn down the stability, generate a few, and pick the best. I do find that longer generations tend to lose the thread, so it's better to batch smaller script segments.
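In case it's useful, the "style up, stability down, generate a few" routine against the ElevenLabs REST API looks roughly like this. It's a sketch rather than my exact settings: the voice ID, model ID, and slider values are placeholders, and the field names are as I understand the current API docs:

```python
# Rough sketch of generating several takes from ElevenLabs with high style and
# low stability, then picking the best by ear. Voice ID, model ID, and the
# exact setting values are placeholders.
import requests

API_KEY = "YOUR_XI_API_KEY"       # placeholder
VOICE_ID = "your-voice-id"        # placeholder

def generate_take(text: str, take: int) -> None:
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": API_KEY},
        json={
            "text": text,
            "model_id": "eleven_multilingual_v2",
            "voice_settings": {
                "stability": 0.25,         # turned down for more variation
                "similarity_boost": 0.75,
                "style": 0.8,              # turned up for expressiveness
            },
        },
    )
    resp.raise_for_status()
    with open(f"take_{take}.mp3", "wb") as f:
        f.write(resp.content)

# Generate a few takes and pick the best one by ear.
for i in range(4):
    generate_take("Short line to swap into the edit.", i)
```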
Unless I misunderstand ya, your controlnet setup is for what would be rigging and animation rather than roto. I do agree that while I enjoy the outputs of pretty much all the automated animators, they're not ready for prime time yet. Although I'm about to dive into KREA's new keyframing feature and see if that's any better for that use case.
I was never able to get appreciably better results from 11 Labs than from a (minorly) trained RVC model :/ The long-script problem is something pretty much any text-to-something model suffers from: the longer the context, the worse the cohesion gets.
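The batching approach mentioned above (one generation per sentence or short segment, then stitch the audio back together) might look something like this. It's only a sketch: the Coqui call is a stand-in for whichever TTS/RVC backend you actually use, and the sentence split and pause length are arbitrary:

```python
# Sketch: split a long script into sentence-sized chunks, generate each chunk
# separately (cohesion holds up better on short contexts), then stitch the
# audio back together. The Coqui model here is a stand-in for any backend.
import re
from pydub import AudioSegment
from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")  # placeholder model

script = "First sentence of the script. Second sentence. And a third one."

# Naive sentence split on terminal punctuation.
sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", script) if s.strip()]

stitched = AudioSegment.silent(duration=0)
for i, sentence in enumerate(sentences):
    chunk_path = f"chunk_{i}.wav"
    tts.tts_to_file(text=sentence, file_path=chunk_path)
    stitched += AudioSegment.from_file(chunk_path)
    stitched += AudioSegment.silent(duration=200)  # short pause between lines

stitched.export("full_script.wav", format="wav")
```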
I do rotoscoping with SDXL i2i and controlnet posing together. Without the pose conditioning, I found it tends to smear. Do you just do image2image?
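For a single frame, the i2i-plus-pose setup in diffusers is roughly the following. The model IDs, prompt, and strength values are illustrative guesses rather than my actual settings, and the pose map is assumed to have already been extracted from the frame (e.g. with DWPose):

```python
# Sketch of SDXL img2img with an OpenPose-style ControlNet on one video frame.
# Model IDs, prompt, and strengths are placeholders; the pose map is assumed
# to have been extracted from the frame already.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("frame_0001.png")         # source video frame
pose = Image.open("frame_0001_pose.png")     # pose map for the same frame

result = pipe(
    prompt="clean rotoscoped animation style, flat colors",
    image=frame,                              # i2i keeps the frame's composition
    control_image=pose,                       # pose conditioning limits smearing
    strength=0.5,                             # how far to move from the source
    controlnet_conditioning_scale=0.8,
    num_inference_steps=30,
).images[0]

result.save("frame_0001_styled.png")
```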
The voice library 11labs added includes some really reliable and expressive models. I've only trained a few voice clones, but I find them totally usable for swapping out short lines to avoid having to bring a subject back in to record. I'll fabricate a sentence or two, but for longer-form stuff I only use AI for the rough cuts, then record it practically as a last step, once everything's gone through revision cycles. The "generate a few and chop 'em together" method is fine for short clips, but becomes tedious for longer stuff.
Funnily enough, when I say roto, I really just mean tracing the subject to remove it from the background. Background removal's so baked into things now, I dunno if people even think of it as roto. But I mostly still prefer the Adobe solutions for this: roto brush in After Effects for the AI/manual collaboration. As for roto in the A Scanner Darkly sense, I've played with a few of the video-to-video models, but mostly as a lark for fluff B-roll.
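For the fully automated flavor of that kind of roto (just lifting the subject off the background), the open-source route is about this short. A minimal sketch with the rembg library; file names are placeholders, and it's obviously no substitute for the roto brush when you need per-frame control:

```python
# Minimal sketch of automated background removal on a single frame using the
# open-source rembg library. File names are placeholders.
from PIL import Image
from rembg import remove

frame = Image.open("frame_0001.png")
cutout = remove(frame)                  # RGBA image with transparent background
cutout.save("frame_0001_cutout.png")
```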