Best Hard Drives for ZFS Server (Updated Apr 2019)

Today’s question comes from Jeff….

Q. What drives should I buy for my ZFS server? 

Answer: Here’s what I recommend, considering a balance of cost per TB, performance, and reliability.  I prefer NAS class drives since they are designed to run 24/7 and are also better at tolerating vibration from other drives.  I prefer SATA, but SAS drives would be better in some designs (especially when using expanders).

For a home or small business FreeNAS storage server I think these are the best options, and I’ve also included some enterprise class drives.

Updated: July 19, 2015 – Added quieter HGST, and updated prices.
Updated: July 30, 2016 – Updated prices, and added WL drives.
Updated: July 15, 2017 – Updated prices, added larger drives, removed drives no longer being sold.
Updated: September 17, 2018 – Added WD Gold drives.
Updated: April 27, 2019 – Removed WL and HGST drives, added Seagate, updated all product lines.

Western Digital 3TB, 4TB, 5TB, 6TB, 8TB, 10TB, 12TB, and 14TB Drives

The highest rated and consistently available NAS class drives on the market today are made by Western Digital.  The 3 product lines are:

WD Red are tried and true NAS class drives designed to run 24/7.  Very stable and popular in FreeNAS systems.

    • 5400RPM
    • Supported in up to 8 drive bays
    • Workload: 180TB/year
    • 3-year warranty

WD Red Pro designed for larger deployments suitable for small/medium businesses.

    • 7200RPM
    • Supported in up to 24 drive bays
    • Workload: 300TB/year
    • 5-year warranty

WD Ultrastar DC (formerly HGST) are datacenter class hard drives designed for heavy workloads (this lineup replaces WD Gold).

    • 7200RPM
    • Supported in unlimited drive bays
    • Workload: 550TB/year
    • 5-year warranty

Seagate IronWolf – up to 14TB drives

Seagate had a bad reputation because of high failure rates in the past, but the newer offerings are more reliable, and given the competitive prices they’re worth another look.  I would consider them again if building a new server.  Seagate has 3 product lines suitable for ZFS:

Seagate IronWolf (up to 14TB) are NAS class drives targeted at smaller deployments.

    • 5900-7200RPM
    • Supported in up to 8 drive bays
    • Workload: 180TB/year
    • 3-year warranty

Seagate IronWolf Pro are the next step up…

    • 7200RPM
    • Supported in up to 24 drive bays
    • Workload: 300TB/year
    • 5-year warranty

Seagate Exos is the datacenter offering, designed for heavy sustained workloads.

    • 7200RPM
    • Supported in unlimited drive bays
    • Workload: 550TB/year
    • 5-year warranty

Buying Tips:

  • When reading reviews about failures, I discount negative reviews mentioning DOAs or drives that fail within the first few days; you can return those rather quickly.  What you want to avoid is a drive that fails a year or two in, leaving you with the hassle of a warranty claim.
  • Higher RPMs reduce rotational latency, and larger disks typically have higher areal density, which improves throughput.
  • Gone are the days when you need a 24-bay server for large amounts of storage.  It’s far simpler to get a 4-bay chassis with 14TB drives.  If you don’t need more capacity or IOPS, keep it simple.

Or buy a TrueNAS Storage Server from iXsystems

I’m cheap and tend to go with a DIY approach most of the time, but when I’m recommending ZFS systems for environments where availability is important, I like the TrueNAS servers from iXsystems, which of course ship with drives in well-tested configurations.  TrueNAS prices are very reasonable compared to other storage systems, and it can be set up in an HA cluster.  Even a FreeNAS Certified Server is probably not going to cost much more than doing it yourself (more often than not it ends up being less expensive than DIY).  And of course, for a small server you can grab the 4-bay FreeNAS Mini (which ships with WD Reds).

Careful with “archival” drives

If you don’t get one of the drives above, be aware that some larger hard drives use SMR (Shingled Magnetic Recording), which should not be used with ZFS if you care about performance, at least until SMR-aware support is developed.  Be careful about any drive that says it’s for archiving purposes.

The ZIL / SLOG and L2ARC

The ZFS Intent Log (ZIL) should be on an SSD with power-loss protection (a supercapacitor or battery-backed cache) that can flush the cache out to flash if power fails.  I have done quite a bit of testing and like the Intel DC SSD series drives and also HGST’s S840Z.  These are rated to have their data overwritten many times and will not lose data on power loss.  They run on the expensive side, so for a home setup I typically try to find them used on eBay.  From a ZIL perspective there’s no reason to get a large drive, but keep in mind that larger drives generally perform better.  In my home I use 100GB DC S3700s and they do just fine.
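As a sketch, attaching a SLOG SSD to an existing pool looks like this (the pool name “tank” and the device names are hypothetical; substitute your own):

```shell
# Attach a mirrored SLOG to an existing pool.  Only synchronous
# writes go through the SLOG, so a small device is fine.
zpool add tank log mirror nvd0 nvd1

# Verify the log vdev shows up under the pool:
zpool status tank
```

Mirroring the SLOG is optional but avoids losing in-flight sync writes if the log device dies at the same moment as a power failure.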

I generally don’t use an L2ARC (SSD read cache) and instead opt to add more memory.  There are a few cases where an L2ARC makes sense, such as when you have very large working sets.

For SLOG and L2ARC see my comparison of SSDs.

Capacity Planning for Failure

Most drives running 24/7 start having a high failure rate after 5 years; you might be able to squeeze 6 or 7 years out of them if you’re lucky.  So a good rule of thumb is to estimate your growth and buy drives big enough that you won’t start to outgrow them for 5+ years.  The price of hard drives is always dropping, so you don’t want to buy much more than you’ll need before they start failing.  Consider that in ZFS you shouldn’t run more than 70% full (with 80% being the max) for typical NAS applications, including VMs on NFS.  But if you’re planning to use iSCSI, you shouldn’t run more than 50% full.
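As a quick sketch of the math (the drive count and size here are illustrative, not a recommendation):

```shell
#!/bin/sh
# Example: 6x 14TB drives in RAID-Z2 leaves 4 drives' worth of usable space.
usable=$((4 * 14))                   # usable TB before free-space rules
nas_ceiling=$((usable * 70 / 100))   # stay under ~70% full for NAS/NFS use
iscsi_ceiling=$((usable * 50 / 100)) # stay under ~50% full for iSCSI
echo "usable=${usable}TB nas=${nas_ceiling}TB iscsi=${iscsi_ceiling}TB"
# → usable=56TB nas=39TB iscsi=28TB
```

So that hypothetical 6x14TB box really gives you roughly 39TB of comfortable capacity for NAS use; plan your 5-year growth against that number, not the raw total.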

ZFS Drive Configurations

My preference at home is almost always RAID-Z2 (similar to RAID-6) with 6 to 8 drives, which provides a storage efficiency of .66 to .75.  This scales pretty well as far as capacity is concerned, and with double parity I’m not that concerned if a drive fails.  Six drives in RAID-Z2 nets 8TB of usable capacity with 2TB drives, up to 24TB with 6TB drives.  For larger setups use multiple vdevs, e.g. with 60 bays use 10 six-drive RAID-Z2 vdevs (each vdev increases IOPS).  For smaller setups I run 3 or 4 drives in RAID-Z (similar to RAID-5).  In all cases it’s essential to have backups… and I’d rather have two smaller RAID-Z servers replicating to each other than one server with RAID-Z2.  The nice thing about smaller setups is that upgrading 4 drives doesn’t cost as much as upgrading 6 or 8!  For enterprise setups I like ZFS mirrored pairs (similar to RAID-10) for fast rebuild times and performance, at a storage efficiency of .50.
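For illustration, creating those layouts looks roughly like this (the pool names and FreeBSD-style device names are hypothetical; adjust for your system):

```shell
# 6-drive RAID-Z2 pool (storage efficiency 4/6, about .66):
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5

# Larger setup: grow the pool with additional 6-drive RAID-Z2 vdevs;
# each added vdev increases IOPS as well as capacity.
zpool add tank raidz2 ada6 ada7 ada8 ada9 ada10 ada11

# Enterprise-style mirrored pairs (RAID-10 equivalent, efficiency .50):
zpool create fastpool mirror da0 da1 mirror da2 da3
```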

Enabling CCTL/TLER on Desktop Drives

Time-Limited Error Recovery (TLER) is Western Digital’s name for this feature; Command Completion Time Limit (CCTL) is the equivalent on Samsung and Hitachi drives, and Seagate calls it Error Recovery Control (ERC).

If you must run desktop drives…  Desktop class drives such as the HGST Deskstar typically aren’t run in RAID mode, so by default they are configured to take as long as needed (sometimes several minutes) trying to recover a bad sector of data.  That’s what you’d want on a desktop, but performance grinds to a halt during recovery, which can cause your ZFS server to hang for several minutes waiting on a single drive.  If you already have ZFS redundancy, it’s a pretty low risk to tell the drive to give up after a few seconds and let ZFS rebuild the data.

The basic rule of thumb: if you’re running RAID-Z you have only single parity, so I’d be a little cautious about enabling TLER.  If you’re running RAID-Z2 or RAID-Z3 you have double or triple parity, so there’s very little risk in enabling it.

Viewing the TLER setting:
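With smartmontools you can read the drive’s current setting (the device path is an example; substitute your own):

```shell
# Query SCT Error Recovery Control; times are reported in tenths of a second.
smartctl -l scterc /dev/ada0
```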

Enabling TLER:
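Something like the following sets a 7-second limit (the device path is an example):

```shell
# Set read and write recovery limits to 7 seconds (70 x 100ms units):
smartctl -l scterc,70,70 /dev/ada0
```

On many drives this setting does not survive a power cycle, so you may need to reapply it at boot (e.g. from an rc script).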

Disabling TLER:
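Setting the limits to zero turns the feature off (device path again an example):

```shell
# A value of 0 disables the time limit; the drive retries as long as it needs.
smartctl -l scterc,0,0 /dev/ada0
```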

(TLER should always be disabled if you have no redundancy).