As someone who's gone through that recently - brace yourself.
I believe you can actually avoid downtime altogether if you're willing to lose the post thumbnails that would be generated during the migration. Even those might be regenerable afterwards, although I haven't looked into it.
Issue 1 - lemmy-ui will crash and burn if it can't load the site logo. Specifically, the UI will just say "Server error", dev tools will show a 500, nginx will likewise show nothing but a 500, and lemmy-ui will spam its logs about having received an empty array. Apps will keep working fine as they use the API rather than the UI. Because I only found out about this after the migration had already started, I had to resort to setting the site logo to NULL in the database. You might get away with just unsetting the logo in /admin beforehand, but have the query ready just in case. I'm on mobile so I can't check the exact query, but there's a rough sketch below.
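From memory it should be something like this. The container name, database credentials and the site.icon column are assumptions based on the default lemmy docker-compose setup, so double-check against your own schema before running it:

```sh
# Null out the site icon directly in Postgres so lemmy-ui stops trying to
# load an image that isn't reachable during the migration.
# User, database and container names below are the docker-compose defaults.
docker compose exec postgres psql -U lemmy -d lemmy \
  -c "UPDATE site SET icon = NULL;"
```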
Issue 2 - pict-rs is not actually stateless. You MUST save that weird sled database and keep using it after the migration; otherwise you'll see the Server error gremlins come out more and more often. See the sketch below for how I'd copy it over.
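Paths here assume the default docker-compose layout where the pictrs volume is mounted from ./volumes/pictrs and the sled files sit inside it (under sled-repo/ in my setup); the destination host and path are placeholders, so adjust everything for your own deployment:

```sh
# Stop pict-rs first so the sled database isn't being written to mid-copy.
docker compose stop pictrs

# Copy the whole pictrs volume (sled database included) to the new host.
# "newhost" and the destination path are placeholders.
rsync -a ./volumes/pictrs/ newhost:/srv/lemmy/volumes/pictrs/

# On the new host, mount the copied volume into the pictrs container before
# starting it, so it never gets the chance to initialise a fresh, empty sled DB.
```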
Issue 3 - the pict-rs migration code is a bit on the shit side. It tries to handle missing files (why are they missing in the first place?! I haven't deleted anything ffs!), but eventually gives up and stops the migration. I had to restart it enough times to lose count. Luckily it does resume, but it keeps retrying every missing file and never discards them, so the output becomes unreadable. If I did it again I'd just wrap the whole thing in a retry loop (sketch below) instead of babysitting it.
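The pictrs_migrate service name here is a placeholder for whatever migration invocation you're actually running; the loop just restarts it until it finally exits cleanly:

```sh
# Keep rerunning the migration until it exits with status 0.
# Replace "pictrs_migrate" with your actual migration command/service.
until docker compose run --rm pictrs_migrate; do
  echo "migration exited with an error, retrying in 10s..." >&2
  sleep 10
done
```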
Issue 4 - migration is dogshit slow. Took me nearly 4 hours to migrate ~20GB.
There might be something else in your case since you're running a much larger instance, but this is a prime example of just how alpha lemmy+pict-rs really is.