Bookmeat

joined 1 year ago
[–] [email protected] 1 points 9 hours ago

I don't fully agree with this. There is a whole actual world (this planet), but one species is all it takes to destroy it.

[–] [email protected] 14 points 18 hours ago (4 children)

A couple of years back I read about a study which concluded that because we are all so unique, there is no one-size-fits-all probiotic. It's a crapshoot if you want to find one that works for you.

[–] [email protected] 3 points 18 hours ago

The definition of insanity comes to mind.

[–] [email protected] 2 points 1 day ago

I'm not sure why SHR would require any file system; it's an abstraction that sits below that layer.

[–] [email protected] 3 points 1 day ago* (last edited 1 day ago) (2 children)

I don't have a Synology, but I implemented SHR manually using mdadm and LVM. If you have a RAID 1 array, you can just add disks to it and "upgrade" it to the RAID level of your choice. So with three disks you could go to RAID 5, and that chunk would then be RAID 5.

It wouldn't be merged with any additional chunks directly. Rather, you'd use LVM to create a volume group with each chunk added (each chunk would be set up as a PV). From that you would carve out your logical volumes and then create your file systems on top of those.

This is a pretty simple layout and I imagine Synology is pretty close to this.
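Roughly, in commands, that step looks something like this (device names and the volume group name vg_storage are placeholders, not what Synology actually uses):

```bash
# Existing two-disk RAID 1 chunk
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Add a third disk and reshape the chunk from RAID 1 to RAID 5
mdadm /dev/md0 --add /dev/sdc1
mdadm --grow /dev/md0 --level=5 --raid-devices=3

# The chunk becomes an LVM physical volume inside one volume group
pvcreate /dev/md0
vgcreate vg_storage /dev/md0

# Carve out a logical volume and put a file system on top
# (file system is your choice; XFS shown here only as an example)
lvcreate -l 100%FREE -n data vg_storage
mkfs.xfs /dev/vg_storage/data
```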

The best way I found to do this is to first partition your drives into 2-4 TB chunks. Each of these partitions is then added to a RAID array, which minimizes waste if you have mixed-size drives. For example, with three disks the first partition of each disk is grouped into one RAID array (chunk), the second partition of each disk is grouped into the next array, and so on.
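With the same placeholder device names as above, the chunking looks something like this (the 2 TiB partition size is just illustrative; bigger drives simply end up with more partitions):

```bash
# Carve each drive into fixed-size partitions
for disk in /dev/sda /dev/sdb /dev/sdc; do
    parted -s "$disk" mklabel gpt \
        mkpart chunk1 1MiB 2TiB \
        mkpart chunk2 2TiB 4TiB
done

# The first partitions form the md0 chunk from the sketch above; the second
# partition of each disk becomes the next chunk, which joins the same VG
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
pvcreate /dev/md1
vgextend vg_storage /dev/md1
```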

So, in your case, I suspect the Synology created a 1 TB partition on each of your 1 TB drives. If you replace two of those, Synology would create 1 TB partitions on the new drives to match the existing raw disk size. It would then create a new RAID 1 array out of the 3 TB of additional space sitting in new 3 TB partitions. LVM would then add this chunk to your volume(s).
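In mdadm/LVM terms, my guess is that boils down to roughly this (placeholder names again, and only a guess at what the Synology does under the hood):

```bash
# Mirror the extra-space partitions on the two replacement drives
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3

# Add the new chunk to the volume group and grow the volume
pvcreate /dev/md2
vgextend vg_storage /dev/md2
lvextend -l +100%FREE /dev/vg_storage/data
# ...then grow the file system with whatever tool matches it
# (resize2fs, xfs_growfs, btrfs filesystem resize, ...)
```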

Of course, that's just my guess based on research I did a while ago to build my own array. Check out the Synology RAID calculator. It'll give you some ideas about this, too.

[–] [email protected] 1 points 1 day ago

That's pretty sweet.

[–] [email protected] 2 points 2 days ago

The UN is useless against members with veto powers and always will be.

[–] [email protected] 44 points 1 week ago

"But the breakthrough will come just as soon as the chips no one can make are delivered."

Probably.

[–] [email protected] 8 points 1 week ago

If anyone thinks this is not for Meta's benefit, they're delusional.

[–] [email protected] 6 points 1 week ago

People will misuse this for personal gain at the expense of others and the planet. Consequences will be dramatic.

[–] [email protected] 9 points 1 week ago (1 children)

Someone at the office was dodging transit fare. Probably. :)

16
submitted 8 months ago* (last edited 8 months ago) by [email protected] to c/[email protected]
 

I currently have a storage server with the following config.

Multiple RAID 6 arrays (mdadm) -> aggregated into an LVM volume group -> LVM logical volumes -> encrypted with LUKS1 -> XFS file systems (no partitioning) mounted and used by the OS
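Roughly, in commands (device names, sizes, and the volume group name below are placeholders, not my exact layout):

```bash
# RAID 6 arrays via mdadm
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[a-f]1
mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[g-l]1

# Aggregate the arrays into one volume group and carve out a logical volume
pvcreate /dev/md0 /dev/md1
vgcreate vg_data /dev/md0 /dev/md1
lvcreate -L 10T -n archive vg_data

# LUKS1 directly on the logical volume, XFS on the opened mapping
cryptsetup luksFormat --type luks1 /dev/vg_data/archive
cryptsetup open /dev/vg_data/archive archive_crypt
mkfs.xfs /dev/mapper/archive_crypt
mount /dev/mapper/archive_crypt /mnt/archive
```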

I have the following criteria: I want to keep software RAID (mdadm) with multiple RAID sets, XFS, and LVM. I don't mind using 2FA, but I don't want to just store my secret keys on a dongle attached to my PC, because that seems to defeat the point of encryption at rest.

My questions:

  1. Is there a better way to encrypt my data at rest?

  2. Is there a better layer at which to apply the encryption?

I'm mostly unhappy with LUKS1 over a whole LVM volume and am looking for alternatives.

--

Thank you everyone for these great responses! I'll be looking into these ideas :)
