this post was submitted on 05 Oct 2024
116 points (87.7% liked)

Technology

[–] [email protected] 11 points 15 hours ago (2 children)

That's honestly intense. I would be terrified of having that much data in one place.

[–] [email protected] 15 points 15 hours ago* (last edited 15 hours ago) (2 children)

While not hard drives: at $dayjob we bought a new server with 16 x 64TB NVMe drives. We don't even need the speed of NVMe for this machine's role; it was the density that was most appealing.

It feels crazy having a petabyte of storage (albeit with some lost to RAID redundancy). Is this what it was like working in tech up until the mid 00s, with significant jumps just turning up?

[–] [email protected] 6 points 15 hours ago

This is exactly what it was like, except you didn't need it as much.

Storage used to cover how much a person needed, plus maybe 2-8x headroom. Then datasets shot upwards: first with audio/MP3, then video, then again with AI.

[–] [email protected] 4 points 15 hours ago (1 children)

Well hell, it's not like it's your money.

[–] [email protected] 10 points 15 hours ago (1 children)

A petabyte of SSDs is probably cheaper than a petabyte of HDDs once you account for rack space, electricity, and maintenance.
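Back-of-envelope only: whether SSD actually wins depends entirely on the numbers you plug in. Every price, power figure, and rack cost below is an illustrative assumption, not a quote — the sketch just shows the cost components that line up behind the claim:

```python
# Rough 5-year TCO sketch for ~1 PB of SSD vs HDD.
# All prices, power draws, and rack figures are placeholder assumptions.

PB = 1000  # usable capacity target, in TB

def tco(drive_tb, price_per_tb, watts_per_drive, rack_u_per_drive,
        years=5, usd_per_kwh=0.12, usd_per_u_month=25):
    n = -(-PB // drive_tb)  # drives needed, rounded up
    capex = n * drive_tb * price_per_tb
    energy = n * watts_per_drive * 24 * 365 * years / 1000 * usd_per_kwh
    rack = n * rack_u_per_drive * usd_per_u_month * 12 * years
    return capex + energy + rack

# Hypothetical drives: 64TB NVMe at $80/TB vs 20TB HDD at $15/TB.
ssd = tco(drive_tb=64, price_per_tb=80, watts_per_drive=12, rack_u_per_drive=1/24)
hdd = tco(drive_tb=20, price_per_tb=15, watts_per_drive=8, rack_u_per_drive=1/12)
print(f"SSD ≈ ${ssd:,.0f}, HDD ≈ ${hdd:,.0f}")
```

With these made-up inputs the drive capex still dominates; the SSD case rests on how much you weight density, power, and the maintenance/failure-handling labor the sketch leaves out.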

[–] [email protected] 4 points 13 hours ago

Not a problem I've ever faced before, admittedly.

[–] [email protected] 7 points 15 hours ago (2 children)

I guess you're expected to run those in RAID 5 or 6 (or similar) to have redundancy in case of failure.

Rebuilding after a failure would be a few days of squeaky bum time though.

[–] [email protected] 4 points 14 hours ago* (last edited 14 hours ago) (1 children)

Absolutely not. At those densities, the write speed isn't high enough to trust to RAID 5 or 6, particularly on a new system with drives from the same manufacturing batch (which may fail around the same time). You'd be looking at a RAID 10 or even a variant with more than two drives per mirror. Regardless of RAID level, at least a couple should be reserved as hot spares as well.

EDIT: RAID 10 doesn't necessarily rebuild any faster than RAID 5/6, but the write speed is relevant because it determines the total time to rebuild, and that determines the likelihood that another drive in the array fails mid-rebuild (more likely due to the added drive stress). With RAID 10, it's less likely that second failure will land in the same mirror span. Regardless, it's always worth restating that RAID is no substitute for your 3-2-1 backups.
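The reasoning above boils down to: rebuild time ≈ drive capacity / sustained write speed, and the longer that window, the longer the array runs degraded. A tiny sketch (the capacities and speeds are illustrative assumptions, not benchmarks):

```python
# Sketch: how long one replacement drive takes to rebuild,
# i.e. the window during which a second failure is dangerous.

def rebuild_hours(capacity_tb, sustained_write_gbps):
    """Hours to rewrite a full replacement drive at a sustained rate (GB/s)."""
    return capacity_tb * 1000 / sustained_write_gbps / 3600

# A hypothetical 64 TB NVMe drive sustaining 2 GB/s:
print(f"{rebuild_hours(64, 2.0):.1f} h")  # ~8.9 hours

# The same capacity on spinning rust at ~0.2 GB/s sustained:
print(f"{rebuild_hours(64, 0.2):.1f} h")  # ~89 hours
```

In practice rebuilds run slower than the drive's raw sustained rate (parity math, concurrent production I/O), so treat these as lower bounds on the exposure window.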

[–] [email protected] 1 points 14 hours ago

Yeah, I have six 14TB drives in RAID 10; I'll get two more if I need it.

[–] [email protected] 2 points 13 hours ago

With RAID 6, rebuilds are 4.2 roentgens: not great, but not horrible. I keep old backups, but the data isn't irreplaceable.

RAID 5 is suicide if you care about your data.