Always go with software RAID where possible to avoid vendor lock-in.
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Can you elaborate on the scenario this is solving for? Isn't software RAID a performance hit?
It's cheaper, has better visibility into drive health, and copy-on-write (CoW) makes it extremely unlikely a file will be corrupted on power failure (with hardware RAID, you're relying on the battery in the RAID controller for that protection; I suppose you could run CoW on top of a hardware RAID). CoW also helps spread wear on SSDs.
ZFS will heal data if it finds corrupted blocks; I'm not sure a hardware RAID does.
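For what it's worth, that self-healing happens automatically on reads when redundancy exists, and you can force a full pass with a scrub (the pool name `tank` here is a placeholder):

```shell
zpool scrub tank       # read every block in the pool and verify checksums
zpool status -v tank   # shows repaired bytes and lists any files ZFS could not fix
```

A periodic scrub (e.g. monthly via cron or a systemd timer) is the usual way to catch silent corruption early.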
ZFS is the same anywhere and is adjusted via software (as opposed to the Dell PERCs, which I believe require booting into what is essentially a BIOS; I've certainly never had them work through iDRAC), and you don't have to learn each RAID controller's configuration UI (although they're never difficult).
It's also another part that could fail and require a like-for-like replacement. ZFS on plain SATA just needs to be able to access the drive.
I looked into it ages ago, and ZFS on an HBA made much more sense than a $300 used RAID controller.
For me, the inability to reassemble a RAID array on a different server (with a different controller, or none at all) for data recovery is by itself a big "no" to any RAID controller in a home lab.
While it's fun to have "industrial grade" gear, it isn't fun to recover data from such arrays. Also, ZFS is a very good filesystem (imagine fitting 4.8 TB of data on a 4 TB mirrored pool; that's my case with zstd compression), but it doesn't play well with RAID controllers: you'll see slowdowns and frequent data corruption.
Good to know, I appreciate the help! Do you think ZFS is a reasonable alternative to using RAID here?
I've been using ZFS on Proxmox for a couple of years under different workloads (home servers, production at work), and it's very good.
Just tune it as you need :)
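For example, the common knobs are set per dataset; the pool/dataset names are placeholders, and the 8 GiB ARC cap is just an illustrative figure:

```shell
zfs set compression=zstd tank/data   # the zstd compression mentioned above
zfs set atime=off tank/data          # skip access-time writes on every read
zfs get compressratio tank/data      # check how much compression is actually saving

# Cap the ARC so it doesn't fight your VMs for RAM (Linux/Proxmox; takes effect after reboot)
echo "options zfs zfs_arc_max=$((8 * 1024 * 1024 * 1024))" > /etc/modprobe.d/zfs.conf
```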
Thanks a ton! I'm on the Proxmox forums trying to figure out if I should stick with the H330 that came with the server and return/sell the H730 I got, or if I should use the H730. Seems there's a recent thread where they're figuring it out, so I'll get to the bottom of it.
Imo get the H730 if it's financially reasonable. The passthrough is better supported in my experience. You can resell the H330 fairly easily.
Turns out they put the H730 in the server already so I never got an H330. I want to test the SMART data but it looks like the newer firmware should be fine.
Be aware! The Dell R730 most likely comes with a RAID controller that is not suited for ZFS. You need a true HBA instead. Some RAID controllers do let you set them up in JBOD mode, but that's still not ideal for ZFS: you want a proper HBA, or a RAID controller whose firmware you can flash to IT mode.
For ZFS storage plus many apps and more, TrueNAS SCALE might be interesting to you.
I've been reading that the updated firmware for the PERC H730 has no issues in HBA mode, and there's a thread from December in the Proxmox forums on using an H330 and H730 and they seem to work fine. I'm trying to get more clarification in that thread, but I'll also do some testing myself.
I run my 730xd with a H730 in HBA for months and Truenas has never had an issue.
It seems the issues may run quite a bit deeper than they appear: the cache on the H730 can cause subtle problems. Are you able to get SMART information for the disks through the H730?
There might have been some firmware version messing with cache, ok. But I run latest firmware and yes, my SMART is clean and my scrubs are clean.
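For anyone checking the same thing: with the controller in HBA mode the disks should appear as plain block devices, and `smartctl` queries them directly (the device names here are examples):

```shell
smartctl -a /dev/sda                 # full SMART report when the disk is passed through directly
smartctl -d megaraid,0 -a /dev/sda   # fallback syntax if the controller is still in RAID mode
```

If the first form works without the `-d megaraid,N` device-type flag, that's a decent sign the controller really is handing the raw disk to the OS.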
Ok, cool. I need to update everything anyway, so once I get around to that I'll test the H730 a bit, but it seems the newer firmware should be OK for ZFS.
Take a look at the following topic. It is relevant not just for TrueNAS but for ZFS in general: https://www.truenas.com/community/threads/whats-all-the-noise-about-hbas-and-why-cant-i-use-a-raid-controller.81931/
My plan was to install Proxmox and run TrueNAS on top of it
Proxmox runs ZFS natively already so there's not much reason to bother with TrueNAS IMO. If you need SMB shares and that sort of thing you can run a container and mount the ZFS volume into it.
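For example, bind-mounting a dataset into an LXC container on Proxmox looks like this (the pool name, container ID, and paths are all examples):

```shell
zfs create tank/shares                        # dataset to hold the shared files
pct set 101 -mp0 /tank/shares,mp=/srv/shares  # bind-mount it into container 101
# then install Samba (or NFS) inside the container and export /srv/shares
```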
I currently have 4x900 GB 10k SAS Dell Enterprise drives but I intend to bump that up to 10x900 GB over time. I’d like to be able to add these without much hassle
If you want to easily add drives later on then as far as I know the only good option is the controller in HBA mode with unRAID in a VM. Hardware RAID or ZFS don't make adding drives very easy.
I’m wondering if using ZFS with the RAID controller in HBA mode will be more worth it than a dedicated RAID setup
I think ZFS RAID with HBA mode on the controller is worth it vs traditional hardware RAID, it's more portable, less reliant on hardware.
And if I’m using a RAID setup, should I go RAID or unRAID? If I go RAID, is RAID 01, 10, or 60 a better option here?
With 10 drives I would probably do ZFS RAIDz2 if this was my setup. (RAIDz2 has 2 parity drives like RAID 6).
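For a sense of the trade-off, the raw usable space works out like this (ZFS metadata and padding overhead are ignored for simplicity):

```shell
drives=10
size_gb=900
parity=2                                  # RAIDz2 keeps 2 drives' worth of parity
usable=$(( (drives - parity) * size_gb ))
echo "RAIDz2 usable:  ${usable} GB"       # 7200 GB
mirrors=$(( drives / 2 * size_gb ))       # striped mirrors (RAID10-style layout)
echo "Mirrors usable: ${mirrors} GB"      # 4500 GB
```

So RAIDz2 gives notably more space for the same drives, at the cost of slower resilvers and worse random-write performance than mirrors.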
Thanks for the really helpful perspective!
I have this exact setup (R730 and ZFS), but I'll have to disagree on not using TrueNAS. There are features you may want to use, and the logical separation of the zfspool from the rest of the server has been handy. I boot and store my VMs off of SSDs outside of the main NAS pool.
If you want to use a NVMe boot drive on a PCIE card, the server isn't natively capable of it. You need to use a USB drive to bootstrap it with Clover. I forget the exact technical details. I have had no problems leaving it in the internal usb port over a couple years so far.
Thanks for the insight. It's something I'll definitely consider
I would strongly suggest not using 900GB 10kRPM drives (and especially not 10 of them) in [current year], when brand-new 8TB hard drives cost $120 and 14+TB recertified drives aren't much more than that. The power cost of running 7 more drives than you need for the capacity definitely adds up over several years.
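That power-cost point is easy to ballpark; the ~8 W per spinning drive and $0.15/kWh figures below are assumptions, not measurements:

```shell
# Yearly cost of the 7 extra drives that a larger-disk layout would avoid
awk 'BEGIN {
  extra = 7; watts = 8; usd_per_kwh = 0.15
  kwh = extra * watts * 24 * 365 / 1000        # kWh per year for the extra spindles
  printf "%.1f kWh/yr -> $%.2f/yr\n", kwh, kwh * usd_per_kwh
}'
```

Roughly $70+ a year at those assumptions, so over a few years of runtime the bigger drives can pay for themselves.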
I'm a college student and I already dropped a lot on the server. I haven't gone too deep into my planning for upgrades yet aside from the H730, more RAM when I can afford it, and more drives. I'll take the 8TB drives into consideration though, I'd just have to build that up a lot slower but it'd give me a lot of space. 10x8TB would be fun to have
You don't need 8 drives when they're 8 times larger than your current ones. I went from planning for 5+ drives to downsizing to just two drives in a mirror; later I can expand with another mirror.
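Growing a pool of mirrors really is that simple (device names are examples):

```shell
zpool create tank mirror /dev/sda /dev/sdb   # start with one mirrored pair
zpool add tank mirror /dev/sdc /dev/sdd      # later: stripe in a second mirrored pair
```

Each added mirror vdev increases both capacity and throughput, without rebuilding the existing data.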
Unless you need uptime and want to guarantee an SLA for your own services, you are much better off with a mirror or raidz1. Do regular backups (off-site, incremental) and don't fear the disk failure.
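A minimal sketch of the incremental off-site idea, with placeholder dataset, snapshot, and host names:

```shell
zfs snapshot tank/data@monday                  # cheap point-in-time snapshot
# send only the blocks changed since the previous snapshot to the backup box
zfs send -i tank/data@sunday tank/data@monday | \
  ssh backup-host zfs receive backup/data
```

Wrap that in a cron job (or use a tool like sanoid/syncoid to automate the snapshot rotation) and a failed disk becomes an inconvenience rather than a disaster.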
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

| Fewer Letters | More Letters |
|---|---|
| LVM | (Linux) Logical Volume Manager for filesystem mapping |
| NAS | Network-Attached Storage |
| NVMe | Non-Volatile Memory Express interface for mass storage |
| RAID | Redundant Array of Independent Disks for mass storage |
| SSD | Solid State Drive mass storage |
| ZFS | Solaris/Linux filesystem focusing on data integrity |
How do you want to access the files? Browser, SMB, NFS, iSCSI, app like syncthing?
If it were mine, I'd put all the drives in RAID 10, install Proxmox, and either use its containers or create a VM to run docker and give it a big virtual disk.
But the Dell controllers aren't very flexible about resizing a RAID array. If you want flexibility, consider flashing the controller to IT mode if possible and then using ZFS, software RAID, or LVM volume groups.
The files will probably be NFS, SMB, or something similar. I have a FreeIPA domain throughout my entire network and this will probably serve as where I put my backups along with whatever other stuff I want. As I intend to expand the cluster, would HBA mode on the H730 be good enough and let ZFS handle it from there?
Google "IBM M1015 HBA"; there are a ton on eBay for next to nothing. It used to be the TrueNAS go-to. There are newer, faster HBAs, but I don't think that will matter for you.
If you do TrueNAS, you MUST read the manual and look at their ZFS intro guide. Trust me.
An H330 came with the server and I bought an H730 with it. I'd prefer to use one of those if possible
Just make sure it's HBA mode and it'll be fine. Sometimes called IT mode.
that's generally what I'm hearing so I think I'll give that a shot. I'll keep the H730 on hand as I want to do some testing with it.
I'm not sure why, but it seems TrueNAS in a VM is not recommended (I saw a thread on their forum)… I also wanted the ability to add more drives later on, so I went with Unraid, and even though it's only been a few weeks, it seems pretty functional and I'm glad I paid the licensing fee. I'm also trying to do Proxmox and OPNsense on one of those fanless N305 boxes and am getting very confused!
Either TrueNAS or Unraid works as a VM in Proxmox, but there are some caveats: you have to pass the whole HBA/LSI PCIe device through to the VM, so you can't split the server's backplane.
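On Proxmox, passing the whole controller through looks roughly like this (the VM ID and PCI address are examples, and IOMMU must already be enabled in the BIOS and kernel):

```shell
lspci | grep -i lsi              # find the HBA's PCI address, e.g. 03:00.0
qm set 100 -hostpci0 0000:03:00.0   # hand the entire controller to VM 100
```

Once passed through, every disk on that controller belongs to the guest, which is exactly why the backplane can't be split between host and VM.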