Friday, April 26, 2013

On spinning rust and SSDs.

I got my Crucial M4 512GB SSD back for my laptop. It failed about three weeks ago: when I turned on my laptop, it simply wasn't there. A complete binary failure mode -- it worked, then it didn't. So I took it out of the laptop, verified in an external USB enclosure that it didn't "spin up" there either, installed a 750GB WD Black 7200 RPM rust-spinner that was in my junk box from some project or another, and re-installed Windows and restored my backups. Annoying, but by no means fatal. I've had to get used to the slow speed of spinning rust again after the blazingly fast SSD, but at least I'm up and running. So this weekend I get to make another full backup, then swap the rust back out for the SSD.

At work I've had to replace several of the WD 2TB Enterprise drives in the new Linux-based infrastructure after smartd started whining about uncorrectable read errors. When StorStac got notification of that sort of thing, it re-wrote the sector from the RAID checksums, and that usually resolved it. The Linux 3.8 kernel's md RAID6 layer apparently doesn't do that, requiring me to kick the drive out of the md array, slide in a replacement, fire off a rebuild, and then haul the suspect drive over to my desktop, slide it in there, and run a blank-out (write zeroes to the entire drive). Sometimes that resolves the issue; sometimes the drive really *is* toast. But at least it was an analog error (just one or two bad spots on the drive), not a complete binary error (the entire drive going blammo).
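For the record, the replacement dance is nothing exotic. Here's a minimal sketch of that mdadm workflow, assuming the array is /dev/md0 and the failing drive is /dev/sdb (your device names will differ, and /dev/sdX is whatever the suspect drive shows up as on the desktop):

    # kick the failing drive out of the array
    mdadm --manage /dev/md0 --fail /dev/sdb
    mdadm --manage /dev/md0 --remove /dev/sdb

    # after physically swapping in the replacement, start the rebuild
    mdadm --manage /dev/md0 --add /dev/sdb
    cat /proc/mdstat    # watch rebuild progress

    # over on the desktop: blank out the suspect drive, then re-check SMART
    dd if=/dev/zero of=/dev/sdX bs=1M
    smartctl -A /dev/sdX

If Reallocated_Sector_Ct ticked up a little and Current_Pending_Sector dropped back to zero after the blank-out, the drive just had a couple of bad spots. If the dd itself throws write errors, the drive really is toast.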

SSDs are the future. The new copy-on-write (COW) filesystems such as ZFS and BTRFS really don't do well on spinning rust, because by their very nature they fragment badly over time. That doesn't matter on SSDs; it matters a great deal on rust-spinners, for obvious reasons (every fragment costs a seek). With ZFS you can still get decent performance on rust if you add a second-level SSD cache; that's how my backup system here at home works (an external USB3 hard drive plus an internal SSD in my server). BTRFS has no such mechanism at present, but to a certain extent it compensates with a (manual) de-fragmentation pass that can be run from time to time during "off" hours. Still, both filesystems clearly prefer SSD to rotational storage. It's just the nature of the beast. And those filesystems have sufficient advantages in functionality and reliability (except in virtualized environments as virtual machine filesystems -- but more on that later) that if your application can afford SSDs, that alone may be the tipping point that moves you to SSD-based storage rather than rotational storage.
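To make that concrete, here's roughly what the ZFS cache setup and the BTRFS defrag pass look like; the pool name "backup", the cache device /dev/sdc, and the mount point /mnt/backup are made-up names for illustration:

    # ZFS: attach an SSD as a second-level (L2ARC) read cache for the pool
    zpool add backup cache /dev/sdc
    zpool status backup    # the SSD now shows up under a "cache" section

    # BTRFS: manual de-fragmentation pass, run during "off" hours
    btrfs filesystem defragment -r /mnt/backup

Note that the BTRFS defrag is something you schedule yourself (a cron job at 3am, say), not something the filesystem does for you.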

Still, it's clear to me that, at this time, SSD is an immature technology subject to catastrophic failure with no warning. Rotational storage usually gives you warning: you start getting SMART notifications about sectors that cannot be read, sectors being relocated, and so forth. So when designing an architecture for reliability, it is unwise to let an SSD be a single point of failure, as is often done on ESXi servers that lack a hardware RAID card supported by ESXi. It might *seem* that SSD is more reliable than rotational storage, and on paper that may even be true. But because the nature of the failures is different, in *reality* rotational storage gives you a much better chance of detecting and recovering from a failing drive than SSDs do. That may or may not matter for your application -- in a RAID it clearly isn't a big deal, since you'll be replacing the drive and rebuilding onto a new one anyhow -- but for things like an ESXi boot drive it's something you should consider.
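Of course, that early warning only helps if something is actually listening. A minimal sketch with smartmontools, assuming /dev/sda is the drive of interest:

    # one-shot checks
    smartctl -H /dev/sda    # overall health verdict
    smartctl -A /dev/sda    # attributes: watch Reallocated_Sector_Ct and
                            # Current_Pending_Sector creeping upward

    # /etc/smartd.conf: monitor all drives, mail root on trouble, and run
    # a long self-test every Sunday at 2am
    DEVICESCAN -a -m root -s (L/../../7/02)

With that in place, the whining lands in your mailbox before the drive falls over -- which is exactly the warning an SSD typically never gives you.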

-ELG
