this post was submitted on 25 Sep 2023

I'm just getting started on my first setup. I've got Radarr, Sonarr, Prowlarr, Jellyfin, etc. running in Docker and reading/writing their configs to a 4TB external drive.

I followed a guide to ensure that hardlinks would be used to save disk space.

But what happens when the current drive fills up? What is the process to scale and add more storage?

My current thought process is:

  1. Mount a new drive
  2. Recreate the data folder structure on the new drive
  3. Add the path to the new drive to the jellyfin container
  4. Update existing collections to look at the new location too
  5. Switch (not add) the volume for the *arrs data folder to the new drive

Would that work? It would mean the *arrs no longer have access to the actual downloaded files. But does that matter?

Is there an easier, better way? Some way to abstract away the fact that there will eventually be multiple drives? So I could just add on a new drive and have the setup recognize there is more space for storage without messing with volumes or app configs?
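For concreteness, steps 1–3 of my plan might look roughly like this (the device, mount point, folder layout, and container flags here are hypothetical placeholders, not my actual setup):

```bash
# 1. Mount the new drive (assuming it's already partitioned and formatted, e.g. ext4 on /dev/sdb1)
sudo mkdir -p /mnt/disk2
sudo mount /dev/sdb1 /mnt/disk2   # add a matching /etc/fstab entry so it persists across reboots

# 2. Recreate the data folder structure on the new drive
mkdir -p /mnt/disk2/data/media/{movies,tv}

# 3. Recreate the Jellyfin container with the new path as an additional bind mount
docker stop jellyfin && docker rm jellyfin
docker run -d --name jellyfin \
  -v /mnt/disk1/config/jellyfin:/config \
  -v /mnt/disk1/data/media:/media \
  -v /mnt/disk2/data/media:/media2 \
  jellyfin/jellyfin
```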

top 20 comments
[–] [email protected] 11 points 1 year ago (1 children)

I'm going to be adding more drives to my current basic setup soon, and I think LVM is how I'm going to go. Then I can just extend the filesystem across multiple drives in the future as I need to.
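Roughly, extending an existing LVM setup onto a new drive looks like this (the volume group and LV names are placeholders, and this assumes ext4 on top):

```bash
sudo pvcreate /dev/sdb                                # mark the new drive as an LVM physical volume
sudo vgextend media_vg /dev/sdb                       # add it to the existing volume group
sudo lvextend -l +100%FREE /dev/media_vg/media_lv     # grow the logical volume into the new space
sudo resize2fs /dev/media_vg/media_lv                 # grow the ext4 filesystem to match
```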

[–] [email protected] 2 points 1 year ago (1 children)

Big thanks for this pointer. That seems like the move for me

[–] [email protected] 1 points 1 year ago (1 children)

LVM can be convenient, but in my experience it's a lot more of a headache than it's worth. But by all means give it a try. I think I just had bad luck.

But there is also nothing that stops the arrs from working across multiple drives. I use six, with content across all of them, all detected and manageable in the arrs.

[–] [email protected] 2 points 1 year ago

This is what I do. As soon as a drive is full I create a new default root path in the arrs. Tbf, I’ve only had to do this twice so far.

[–] [email protected] 5 points 1 year ago (1 children)

Add another vdev to my ZFS zpool. No changes to the filesystem or jellyfin.
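For example, adding a new mirror vdev is a one-liner (pool name and devices are placeholders):

```bash
sudo zpool add tank mirror /dev/sdc /dev/sdd   # the extra capacity is available to the pool immediately
```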

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

~~You can't remove drives from a zpool though. So if you start with a small drive and keep adding drives as you fill them up, you'll eventually run out of SATA ports and want to replace the smallest drive. The only way to do that is to create a new zpool and copy all of your data to it, which means you need a second set of drives that's at least as big as the first.~~

Or you could add a PCIe SATA card, if you have a spare PCIe slot. Used cards like the Dell PERC H310 are cheap and reliable and support 8 drives on their own, or >256 with cheap expander cards that can be daisy-chained (and only need power, so they don't use up PCIe slots).

Edit: looks like they added support for removing drives about 5 years ago.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

I prefer M.2 PCIe cards, but same deal, expansion go brrr.

You can also increase the size of a redundant vdev (e.g. raidz2) by replacing the drives one by one with larger ones. I recently used this approach to grow my 4TB vdev to 72TB.
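The replace-in-place approach looks roughly like this (pool and device names are placeholders):

```bash
sudo zpool set autoexpand=on tank           # let the vdev grow once every member has been replaced
sudo zpool replace tank /dev/sdb /dev/sdf   # swap one old drive for a larger one and wait for the resilver
sudo zpool status tank                      # check resilver progress; repeat the replace for each remaining drive
```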

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

I'm using MergerFS, which makes this really easy. I set up a temp mergerfs array with all my disks except the one I want to replace, add the new drive to my first array, then run a command to move all data from the replaced drive to the temp array. The original array mount point doesn't notice the difference. Once it's done, I remove the old disk from my main mergerfs array, add the new one, and delete the "temp" array. Then I can remove the old disk from my Snapraid config and also physically remove it from the server.
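The basic shape of that, in case it helps (the mount points, options, and the rsync step are an illustrative sketch, not my exact commands):

```bash
# Main pool: all data disks merged under one mount point
sudo mergerfs -o category.create=mfs,minfreespace=20G \
  /mnt/disk1:/mnt/disk2:/mnt/disk3 /mnt/storage

# Temp pool: everything except the disk being retired (disk2 in this example)
sudo mergerfs -o category.create=mfs /mnt/disk1:/mnt/disk3 /mnt/temp

# Drain the old disk into the temp pool; /mnt/storage keeps working the whole time
sudo rsync -avP --remove-source-files /mnt/disk2/ /mnt/temp/
```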

If you've got an old PC laying around, you should look into setting up Open Media Vault on it.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

What's the problem?

...the arrs can handle more than one storage folder/drive just fine? You don't need to use hardlinks unless you want to continue to seed forever.

If you don't use hardlinks in the arrs, they won't duplicate the files; they'll move them out of the download folder into the library folder. All the hardlink option does is allow you to continue seeding even after the media has been imported into the library.

The data folder is separate, it only contains library details and metadata, no media files. It should never get big enough to fill up a drive.

My setup downloads media to a temp folder on an SSD, then moves the files onto one of my six drives depending on where I told it to put a series/movie when I added it.
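On the hardlink point, a quick way to check whether the download and library copies are really the same file on disk (paths are made up):

```bash
ls -li /data/torrents/movies/Some.Movie.mkv /data/media/movies/Some.Movie.mkv
# identical inode numbers and a link count of 2 mean it's one copy on disk, not two
```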

[–] [email protected] 2 points 1 year ago (2 children)

When you get a new drive, you could move some of your library to it, like just the movies or tv or whatever. Then you only need to update one library.

Are you using Linux? You could set it up to mount the new drive into the existing file structure. That way you would not have to change any configurations.
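Something along these lines (device, filesystem, and paths are placeholders):

```bash
sudo mkdir /srv/media/movies2             # new folder inside the library path Jellyfin already watches
sudo mount /dev/sdc1 /srv/media/movies2   # the new drive shows up as just another directory
echo '/dev/sdc1 /srv/media/movies2 ext4 defaults 0 2' | sudo tee -a /etc/fstab
# restart the Jellyfin container afterwards so its bind mount picks up the new mount
```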

It might also be handy to configure Jellyfin to save the NFO metadata with the media files so it doesn't have to re-match everything if the path changes.

I definitely had data indexed at /Mount/driveA/movies and when I moved it to driveB it was a bit of a pain.

In the long run you might want to invest in a NAS or something. That way you can just add more drives as needed.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (2 children)

NASes don't do anything you can't set up yourself, and their price-to-benefit ratio is absolute trash. The only reason you should ever buy one is if you are completely tech illiterate.

Otherwise, build one. If that's what you meant, agreed. Having one is absolutely worth it.

[–] [email protected] 1 points 1 year ago (1 children)

Begging my ISP to give me root access to the router they gave me so that I can set up one with a USB-SATA adapter and no additional equipment. (I already use SMB shared folders but they are a mess)

[–] [email protected] 2 points 1 year ago (1 children)

Routers make for terrible NASes.

But you could do what my dad does: he chains his own router after the ISP-provided one, so he has full control of the second one in the chain.

My solution was to buy a router-modem that was compatible with the internet type my ISP provides, and ditch their piece of crap entirely.

[–] [email protected] 1 points 1 year ago

I’m willing to put up with low capacity, no backups and USB 2.0 speeds but thanks for the advice.

[–] [email protected] 1 points 1 year ago

I mean, all I said was they should think about "investing in a NAS". Whether you buy a Synology or build your own TrueNAS box or whatever, it will take more hardware than plugging in more USB drives.

[–] [email protected] 2 points 1 year ago (1 children)

Googling off of this response, I think you're right that a NAS is the best long-term solution. And in terms of a fully scalable system, I saw that I can create a distributed file system across multiple NAS units to scale even further. So thank you.

[–] [email protected] 1 points 1 year ago

Awesome. I upgraded my Jellyfin box from a Mac Mini with a bunch of USB drives attached to a Synology 920+ and I have been really happy with it. I upgraded the RAM on it and it runs Jellyfin along with the *arr containers just fine.

As someone else said you can also build your own if you want. Both solutions will allow for easy scaling in the future as you need.

[–] [email protected] 2 points 1 year ago

I recently did expand my storage. I started with one raid5 array with 4 drives. I just added another drive and grew the array, the LUKS container and the filesystem.
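For anyone curious, the rough sequence for that (array and mapper names are placeholders; take a backup first):

```bash
sudo mdadm --add /dev/md0 /dev/sde            # add the new disk to the array as a spare
sudo mdadm --grow /dev/md0 --raid-devices=5   # reshape the RAID5 from 4 to 5 members (this takes a while)
sudo cryptsetup resize media_crypt            # once the reshape finishes, grow the LUKS mapping
sudo resize2fs /dev/mapper/media_crypt        # then grow the filesystem (ext4 in this example)
```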

[–] [email protected] 2 points 1 year ago

I use btrfs. This allows me to add additional hard drives (of different sizes, too) over time very easily without having to touch any other part of the system.
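For reference, adding a drive to an existing btrfs filesystem is just (device and mount point are placeholders):

```bash
sudo btrfs device add /dev/sdd /mnt/media        # the new space is usable right away
sudo btrfs balance start -dusage=50 /mnt/media   # optional: rebalance so existing data spreads onto the new drive
```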

[–] [email protected] 0 points 1 year ago

I built a 5-bay NAS from old computer parts and put ZFS on it for storing media and LLM models etc.