I ran some performance tests comparing the Intel DC S3700, Intel DC S3500, Seagate 600 Pro, and Crucial MX100 when used as a ZFS ZIL / SLOG device. All of these drives have a capacitor-backed write cache, so they can lose power in the middle of a write without losing data.
Here are the results….
The Intel DC S3700 takes the lead, with the Intel DC S3500 a strong second. I am surprised at how much better the Intel SSDs performed than the Crucial and Seagate, considering Intel's spec-sheet claims are the more modest of the bunch: Intel claims lower IOPS and lower sequential throughput, yet its drives outperform the other two SSDs in this workload.
| SSD | Seagate 600 Pro | Crucial MX100 | Intel DC S3500 | Intel DC S3700 |
|---|---|---|---|---|
| Sequential (MB/s) | 500 | 550 | Read 340 / Write 100 | Read 500 / Write 200 |
| IOPS Random Read | 85,000 | 85,000 | 70,000 | 75,000 |
| IOPS Random Write | 11,000 | 70,000 | 7,000 | 19,000 |
| Endurance (Terabytes Written) | 134 | 72 | 45 | 1,874 |
| Warranty | 5 years | 3 years | 5 years | 5 years |
| **ZFS ZIL SLOG results below** | -- | -- | -- | -- |
| oltp 2 thread | 66 | 160 | 732 | 746 |
| oltp 4 thread | 89 | 234 | 989 | 1,008 |
| seq write MB/s | 5 | 14 | 93 | 99 |
- Crucial MX 100 – 256GB – $113
- Seagate 600 Pro – 240GB – $190
- Intel DC S3500 – 80GB – $110
- Intel DC S3700 – 100GB – $208
- For most workloads use the Intel DC S3500.
- For heavy workloads use the Intel DC S3700.
- The best performance for the dollar is the Intel DC S3500.
In my environment the best device for a ZFS ZIL is either the Intel DC S3500 or the Intel DC S3700. The S3700 is designed to hold up to very heavy usage: you could overwrite the entire 100GB drive 10 times every day for 5 years. With the DC S3500 you could write about 25GB/day, every day, for 5 years. For most environments the DC S3500 is probably enough.
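Those daily-write figures follow from the rated endurance in the table spread over the five-year warranty. A quick back-of-envelope check (my own arithmetic, using the TBW ratings from the spec table above):

```shell
# Daily write budget = rated endurance (TBW) spread over the 5-year warranty.
# TBW figures are from the vendors' spec sheets quoted in the table above.
days=$((5 * 365))   # 1825 days

awk -v tbw=1874 -v d="$days" \
    'BEGIN { printf "DC S3700 (100GB): %.0f GB/day (~10 drive writes/day)\n", tbw*1000/d }'
awk -v tbw=45 -v d="$days" \
    'BEGIN { printf "DC S3500 (80GB):  %.1f GB/day\n", tbw*1000/d }'
```

This works out to roughly 1 TB/day for the S3700 and about 25 GB/day for the S3500, matching the figures above.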
I should note that both the Crucial and Seagate improved performance over not having a SLOG at all.
I would like to know why the Seagate 600 Pro and Crucial MX100 performed so badly. My suspicion is that it comes down to how ESXi over NFS forces a cache sync on every write: the Seagate and Crucial may be obeying each sync command, while the Intel drives may be ignoring it because they know they can rely on their power-loss protection to safely acknowledge the write from cache. I'm not entirely sure this is the difference, but it's my best guess.
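One way to test that guess (a sketch of the idea, not something I ran; `tank/vmstore` is a placeholder dataset name) would be to temporarily disable sync on the dataset. If all four drives then perform about the same, the gap is in how each drive handles the cache-flush/sync requests:

```shell
# WARNING: sync=disabled can lose acknowledged writes on power failure.
# Use only on a test pool. "tank/vmstore" is a placeholder dataset name.
zfs get sync tank/vmstore           # confirm the current setting (standard)
zfs set sync=disabled tank/vmstore  # bypass sync semantics; the ZIL is not used
# ...re-run the sysbench tests from the guest VM...
zfs set sync=standard tank/vmstore  # restore safe behavior
```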
This is based on my Supermicro 2U ZFS Server Build (Xeon E3-1240v3). The ZFS server is a FreeNAS VM running under VMware ESXi 5.5. The HBA is the LSI 2308 built into the Supermicro X10SL7-F, flashed to IT mode and passed through to FreeNAS using VT-d. The FreeNAS VM is given 8GB of memory.
The zpool is 3x 7200RPM Seagate 2TB drives in RAID-Z; in all tests an Intel DC S3700 is used as the L2ARC. Compression = LZ4, deduplication = off, sync = standard, encryption = off. The ZFS dataset is shared back to ESXi via NFS, and on that NFS share is a guest VM running Ubuntu 14.04 with 1GB of memory and 2 cores. Only the ZIL device is changed between tests. I ran each test seven times and took the average, discarding the first three results to allow some data to get cached into ARC (I did not see any performance improvement after repeating a test three times, so I believe that was sufficient).
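The run protocol above can be sketched as a small wrapper script. This is an assumed harness, not the one I used; `run_one_test` is a hypothetical helper, and the output parsing inside it is a placeholder you would adapt to your sysbench version's output format:

```shell
#!/bin/sh
# Sketch of the protocol described above: 7 runs, first 3 discarded
# (ARC warm-up), remaining 4 averaged.
# run_one_test is a hypothetical helper: it should run one sysbench pass
# and print a single numeric metric (e.g. MB/s) on stdout.
run_one_test() {
    sysbench --test=fileio --file-total-size=6G --file-test-mode=seqwr \
        --max-time=300 run | awk '/transferred/ { print }'  # placeholder parsing
}

runs=7
discard=3
results=""
for i in $(seq 1 "$runs"); do
    results="$results$(run_one_test)
"
done
printf '%s' "$results" | tail -n "$((runs - discard))" \
    | awk '{ sum += $1; n++ } END { printf "average of last %d runs: %.2f\n", n, sum/n }'
```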
Thoughts for Future Tests
I’d like to repeat these tests using OmniOS and Solaris sometime but who knows if I’ll ever get to it. I imagine the results would be pretty close. Also, of particular interest would be testing on VMware ESXi 6 beta… I’d be curious to see if there are any changes in how NFS performs there… but if I tested it I wouldn’t be able to post the results because of the NDA.
# sysbench 0.4 syntax; fileio and oltp both need a prepare step before the first run
sysbench --test=fileio --file-total-size=6G prepare
sysbench --test=fileio --file-total-size=6G --file-test-mode=rndrw --max-time=300 run
sysbench --test=fileio --file-total-size=6G --file-test-mode=rndwr --max-time=300 run
sysbench --test=fileio --file-total-size=6G --file-test-mode=seqwr --max-time=300 run
sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=sbtest --mysql-user=root --mysql-password=test prepare
sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=sbtest --mysql-user=root --mysql-password=test --num-threads=2 --max-time=60 run
sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=sbtest --mysql-user=root --mysql-password=test --num-threads=4 --max-time=60 run