SSD ZFS ZIL SLOG Benchmarks – Intel DC S3700, Intel DC S3500, Seagate 600 Pro, Crucial MX100 Comparison

I ran some performance tests comparing the Intel DC S3700, Intel DC S3500, Seagate 600 Pro, and Crucial MX100 when used as a ZFS ZIL / SLOG device.  All of these drives have a capacitor-backed write cache, so they can lose power in the middle of a write without losing data.


Here are the results….


The Intel DC S3700 takes the lead, with the Intel DC S3500 a great second performer.  I am surprised at how much better the Intel SSDs performed than the Crucial and Seagate, considering Intel's claimed specs are not as impressive as the other two… Intel claims lower IOPS and slower sequential performance, yet its drives outperform the other two SSDs.

SSD | Seagate 600 Pro | Crucial MX100 | Intel DC S3500 | Intel DC S3700
Size (GB) | 240 | 256 | 80 | 100
Model | ST240FP0021 | CT256MX100SSD1 | SSDSC2BB080G4 | SSDSC2BA100G3T
Sequential (MB/s) | 500 | 550 | Read 340 / Write 100 | Read 500 / Write 200
IOPS Random Read | 85,000 | 85,000 | 70,000 | 75,000
IOPS Random Write | 11,000 | 70,000 | 7,000 | 19,000
Endurance (Terabytes Written) | 134 | 72 | 45 | 1874
Warranty | 5 years | 3 years | 5 years | 5 years
ZFS ZIL SLOG Results Below | | | |
oltp 2 thread | 66 | 160 | 732 | 746
oltp 4 thread | 89 | 234 | 989 | 1008
random r/w | 208 | 543 | 1454 | 3641
random write | 87 | 226 | 1844 | 1661
seq write MB/s | 5 | 14 | 93 | 99

Drive Costs

Conclusion

  • For most workloads use the Intel DC S3500.
  • For heavy workloads use the Intel DC S3700.
  • The best performance for the dollar is the Intel DC S3500.

In my environment the best device for a ZFS ZIL is either the Intel DC S3500 or the Intel DC S3700.  The S3700 is designed to hold up to very heavy use: you could overwrite the entire 100GB drive 10 times every day for 5 years!  With the DC S3500 you could write out 25GB every day for 5 years.  For most environments the DC S3500 is probably enough.
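The arithmetic behind those numbers is straightforward: take the rated endurance from the spec table above and divide it by the five-year warranty period.

# echo "scale=1; 1874*1000/(5*365)" | bc
1026.8
# echo "scale=1; 45*1000/(5*365)" | bc
24.6

That's roughly 1TB/day for the DC S3700 (about ten full writes of a 100GB drive) and about 25GB/day for the DC S3500.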

I should note that both the Crucial and Seagate improved performance over not having a SLOG at all.

Unanswered Questions

I would like to know why the Seagate 600 Pro and Crucial MX100 performed so poorly.  My suspicion is that it comes down to the way ESXi over NFS forces a cache flush on every write: the Seagate and Crucial may be honoring the flush command, while the Intel drives can ignore it because they know they can rely on their power-loss-protection mechanism.  I'm not entirely sure this is the difference, but it's my best guess.
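One rough way to test that theory outside of ZFS (a sketch, not part of the benchmark runs above) is to measure raw synchronous write throughput on each SSD from a Linux machine; oflag=dsync makes the drive commit every block before the next write is issued, which is similar to the flush-per-write behavior the ESXi NFS client triggers.  A drive that pushes every flush all the way to NAND will post far lower numbers than one that can acknowledge out of its protected cache.  Note that /dev/sdX is a placeholder and this will destroy whatever is on that disk:

# dd if=/dev/zero of=/dev/sdX bs=4k count=10000 oflag=dsync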

Testing Environment

This is based on my Supermicro 2U ZFS Server Build with a Xeon E3-1240v3.  The ZFS server is FreeNAS 9.2.1.7 running under VMware ESXi 5.5.  The HBA is the LSI 2308 built into the Supermicro X10SL7-F, flashed to IT mode and passed to FreeNAS using VT-d.  The FreeNAS VM is given 8GB of memory.

The zpool is 3 x 7200RPM Seagate 2TB drives in RAID-Z; in all tests an Intel DC S3700 is used as the L2ARC.  Compression = LZ4, deduplication = off, sync = standard, encryption = off.  A ZFS dataset is shared back to ESXi via NFS, and on that NFS datastore is a guest VM running Ubuntu 14.04 with 1GB memory and 2 cores.  The ZIL device is swapped out between tests.  I ran each test seven times, discarded the first three results (to allow some data to get cached into ARC), and averaged the remaining four; I did not see any further performance improvement after repeating a test three times, so I believe that was sufficient.
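For reference, swapping the SLOG between runs only takes a couple of commands from the FreeNAS shell; the pool name and GPT labels here are placeholders rather than my actual layout:

# zpool remove tank gpt/slog-old
# zpool add tank log gpt/slog-new
# zpool status tank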

Thoughts for Future Tests

I’d like to repeat these tests using OmniOS and Solaris sometime but who knows if I’ll ever get to it.  I imagine the results would be pretty close.  Also, of particular interest would be testing on  VMware ESXi 6 beta… I’d be curious to see if there are any changes in how NFS performs there… but if I tested it I wouldn’t be able to post the results because of the NDA.

Test Commands
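These are presumably the same sysbench fileio and OLTP tests listed under "Benchmark Commands" in the earlier S3500 vs S3700 write-up further down, with the sequential write number coming from sysbench's seqwr mode; a sketch:

# sysbench --test=fileio --file-total-size=6G prepare
# sysbench --test=fileio --file-total-size=6G --file-test-mode=rndrw --max-time=300 run
# sysbench --test=fileio --file-total-size=6G --file-test-mode=rndwr --max-time=300 run
# sysbench --test=fileio --file-total-size=6G --file-test-mode=seqwr --max-time=300 run
# sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=sbtest --mysql-user=root --mysql-password=test --num-threads=2 --max-time=60 run
# sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=sbtest --mysql-user=root --mysql-password=test --num-threads=4 --max-time=60 run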

Supermicro 2U ZFS Server Build

Here’s my latest VMware + ZFS VSAN all in one server build based on a Supermicro platform… this is borrowing from the napp-it all-in-one concept where you pass the SAS/SATA controller to the ZFS VM using VT-d and then share a ZFS dataset back to VMware using NFS or iSCSI.  For this particular server I’m using FreeNAS as the VSAN.

Chassis – 2U Supermicro 6 Bay Hotswap


For the case I went with the Supermicro CSE-822T-400LPB.  It comes with rails, 6 hotswap bays, a 400W 80 Plus PSU, and a 5.25″ bay which works great for installing a mobile rack.

The 3400 RPM Nidec UltraFlo fans are too loud for my house (although quiet for a rack-mount server), and there is no way to keep the speed low in Supermicro's BIOS, so I replaced them with low-noise Antec 80mm fans which are a couple of dollars each.


Unfortunately these fans are only 3-wire (no RPM sensor), so I used freeipmi (apt-get install freeipmi) to disable monitoring of the fan speeds:

# ipmi-sensors-config -h 10.2.0.228 -u ADMIN -p ADMIN --filename=ipmi.config --checkout
Under the FAN sections change this to "No":
Enable_Scanning_On_This_Sensor    No
# ipmi-sensors-config -h 10.2.0.228 -u ADMIN -p ADMIN --filename=ipmi.config --commit
Probably a better solution is to use 4-wire fans and lower the sensor thresholds but then you’re looking at around $10/fan.
The server makes about as much noise as my desktop computer now.
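If you do go the 4-wire route, the alternative to disabling the sensors is to lower the fan thresholds over IPMI so the slower fans don't trip the alarm.  A rough sketch with ipmitool; the sensor name FAN1 and the threshold values are examples, not numbers I've validated on this board:

# ipmitool -H 10.2.0.228 -U ADMIN -P ADMIN sensor thresh FAN1 lower 100 200 300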

Motherboard – Supermicro X10SL7-F with LSI-2308

Supermicro's X10SL7-F has a built-in LSI 2308 controller with 8 SAS/SATA ports, which works perfectly across the two sets of hotswap bays.  I used the instructions in the FreeNAS forum post to flash the controller into IT mode.  Under VMware the LSI 2308 is passed to the ZFS server, which is connected to all 6 x 3.5″ hotswap bays plus the first two bays of the mobile rack.  This allows for 6 drives in a RAID-Z2 configuration, plus one SSD for the ZIL and another for L2ARC.  The final two 2.5″ bays in the mobile rack are connected to the two 6Gbps SATA ports on the motherboard and used by VMware.  I usually enable VMware's host cache on the SSDs.

vmware_zfs_vtd_supermicro_lsi2308

Memory – Crucial 8GB DDR3L ECC

Crucial 8GB DDR3L (low-voltage) ECC server memory; I got the 16GB kit.

CPU – Xeon E3-1231v3 Quad Core

I think the Xeon E3-1231v3 is about the best bang for the dollar for an E3-class CPU right now.  You get 4 x 3.4GHz cores (with 3.8GHz turbo), VT-d, ECC support, and hyper-threading.  It's about $3 more than the Xeon E3-1230v3 but 100MHz faster.  I'm actually using the Xeon E3-1240v3 (which is essentially identical to the 1231v3) because I saw a good price on a used one a while ago.

Mobile Rack – ICY DOCK ToughArmor 4 x 2.5″ SSD Bay

ICY DOCK ToughArmor 4 x 2.5″ mobile rack.  The hotswap bays seem to be pretty high-quality metal.

Hard Drives: I'm running 2TB Seagate 7200 RPM drives.  They may not be the most reliable drives, but they're cheap and I'm confident with RAID-Z2 (and I also always have two off-site backups and one onsite backup).  For SSDs I use Intel's DC S3700 (here's some performance testing I did on a different system; hopefully I'll get a chance to retest on this build).

Racking with Rails on a Lackrack

A while ago I discovered the Lackrack (these are $7 at Ikea), and since I have rails I figured I may as well install them.  The legs were slightly too close together, so I used a handsaw to cut a small notch for the rails and mounted them with particle-board screws.  One thing to note: the legs are hollow just below where the top screw of the rail mount goes, so only that top screw is going to bear any load.


Here’s the final product:

supermicro_server_lackrack

What VMs run on the server?

  • ZFS – provides CIFS and NFS storage for VMware and for Windows, Linux, and FreeBSD clients.  Basically all our movies, pictures, videos, music, and anything of importance goes on here.
  • Backup server – backs up some of my offsite cloud servers.
  • Kolab – Mail / Groupware server (Exchange alternative)
  • Minecraft Server
  • Owncloud (Dropbox replacement)
  • VPN Server
  • Squid Proxy Cache (I have limited bandwidth where I’m at so this helps a lot).
  • DLNA server (so Android clients can watch movies).

Intel DC S3500 vs S3700 as a ZIL SSD Device Benchmarks

I ran some benchmarks comparing the Intel DC S3500 vs the Intel DC S3700 when being used as a SLOG/ZIL (ZFS Intent Log).  Both SSDs are what Intel considers datacenter class and they are both very reasonably priced compared to some of the other enterprise class offerings.

Update: 2014-09-14 — I’ve done newer benchmarks on faster hardware here: https://b3n.org/ssd-zfs-zil-slog-benchmarks-intel-dc-s3700-intel-dc-s3500-seagate-600-pro-crucial-mx100-comparison/


SLOG Requirements

Since flushing writes out to spindles is slow, ZFS uses the ZIL to safely commit random writes.  The ZIL is never read from except in the case where power is lost before "flushed" writes in memory have been written to the pool.  So to make a decent SLOG/ZIL, the SSD must be able to sustain a power loss in the middle of a write without losing or corrupting data.  The ZIL turns random writes into sequential writes, so it must be able to sustain fairly high throughput.  I don't think random write IOPS are as important, but I'm sure they help some.  Generally a larger SSD is better because larger models tend to offer more throughput.  I don't have an unlimited budget, so I got the 80GB S3500 and the 100GB S3700, but if you're planning for serious performance you may want a larger model, maybe around 200GB or even 400GB.
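For reference, attaching the SLOG to an existing pool is a single command; a mirrored SLOG is worth considering since the one case where you can lose recent sync writes is the SLOG dying at the same moment the system loses power.  Pool and device names below are placeholders:

# zpool add tank log gpt/slog0

Or mirrored:

# zpool add tank log mirror gpt/slog0 gpt/slog1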

Specifications of SSDs Tested

Intel DC S3500 80GB

  • Seq Read/Write: 340MB/s / 100MB/s
  • 70,000 / 7,000 4K Read/Write IOPS
  • Endurance Rating: 45TB written
  • Price: $113 at Amazon

Intel DC S3700 100GB

  • Seq Read/Write: 500MB/s / 200MB/s
  • 75,000 / 19,000 4K Read/Write IOPS
  • Endurance Rating: 1.83PB written (that is a lot of endurance).
  • Price: $203 at Amazon

Build Specifications

Virtual NAS Configuration

  • FreeNAS 9.2.1.7 VM with 6GB memory and 2 cores.  Using VMXNET3 network driver.
  • The RAID-Z pool is built from VMDKs on 3 x 7200RPM Seagates.
  • SLOG/ZIL device is a 16GB vmdk on the tested SSD.
  • An NFS dataset on the pool is shared back to VMware.  For more information on this concept see napp-in-one.
  • LZ4 Compression enabled on the pool.
  • Encryption On
  • Deduplication Off
  • Atime=off
  • Sync=Standard (VMware requests a cache flush after every write, so this is a very safe configuration; the dataset properties are sketched just below this list).
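For reference, most of those dataset properties map to simple zfs commands from the FreeNAS shell (a sketch; tank/vmware is a placeholder dataset name, and encryption in FreeNAS 9.2 is chosen at volume creation in the GUI rather than set as a property):

# zfs set compression=lz4 tank/vmware
# zfs set dedup=off tank/vmware
# zfs set atime=off tank/vmware
# zfs set sync=standard tank/vmware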

Don’t try this at home: I should note that FreeNAS is not supported running as a VM guest, and as a general rule running ZFS off of VMDKs is discouraged.  OmniOS would be better supported as a VM guest as long as the HBA is passed to the guest using VMDirectIO.  The Avoton processor doesn’t support VT-d which is why I didn’t try to benchmark in that configuration.

Benchmarked Guest VM Configuration

  • The benchmark vmdk is installed on the NFS datastore.  On that vmdk is a VM running Ubuntu 14.04 LTS with 2 cores, 1GB memory, and para-virtual storage.
  • OLTP tests run against MariaDB Server 5.5.37 (a fork of MySQL).

I wiped out the zpool after every configuration change, ran each test three times per configuration, and took the average.  (In almost all cases subsequent runs were faster because the ZFS ARC was caching reads into memory, so I was careful to run the tests in the same order on each configuration; if I made a mistake I rebooted to clear the ARC.)  I am mostly concerned with random write performance, so these benchmarks focus more on write IOPS than on throughput.

Benchmark Commands

Prepare the test files and database once (sysbench's prepare step), then:

Random Read/Write:
# sysbench --test=fileio --file-total-size=6G prepare
# sysbench --test=fileio --file-total-size=6G --file-test-mode=rndrw --max-time=300 run

Random Write:
# sysbench --test=fileio --file-total-size=6G --file-test-mode=rndwr --max-time=300 run

OLTP 2 threads:
# sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=sbtest --mysql-user=root --mysql-password=test prepare
# sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=sbtest --mysql-user=root --mysql-password=test --num-threads=2 --max-time=60 run

OLTP 4 threads:
# sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=sbtest --mysql-user=root --mysql-password=test --num-threads=4 --max-time=60 run

Test Results


SLOG | Test | TPS Avg
none | OLTP 2 Threads | 151
Intel DC 3500 80GB | OLTP 2 Threads | 188
Intel DC 3700 100GB | OLTP 2 Threads | 189
none | OLTP 4 Threads | 207
Intel DC 3500 80GB | OLTP 4 Threads | 271
Intel DC 3700 100GB | OLTP 4 Threads | 266
none | RNDRW | 613
Intel DC 3500 80GB | RNDRW | 1120
Intel DC 3700 100GB | RNDRW | 1166
none | RNDWR | 273
Intel DC 3500 80GB | RNDWR | 588
Intel DC 3700 100GB | RNDWR | 569

Surprisingly, the Intel DC S3700 didn't offer much of an advantage over the DC S3500.  Real-life workload results may vary, but the Intel DC S3500 is probably the best performance per dollar for a SLOG device, unless you're concerned about write endurance, in which case you'll want the DC S3700.

Other Observations

There are a few other SSDs with power-loss protection that would also work.  The Seagate 600 Pro, and for lighter workloads consumer SSDs like the Crucial M500 and Crucial MX100, would be decent candidates and still provide an advantage over running without a SLOG.

I ran a few tests comparing the VMXNET3 and E1000 network adapters, and there is a performance penalty for the E1000.  This test was against the DC S3500.


Network | Test | TPS Avg
E1000g | OLTP 2 Threads | 187
VMXNET3 | OLTP 2 Threads | 188
E1000g | OLTP 4 Threads | 262
VMXNET3 | OLTP 4 Threads | 271
E1000g | RNDRW | 1101
VMXNET3 | RNDRW | 1120
E1000g | RNDWR | 564
VMXNET3 | RNDWR | 588

I ran a few tests with Encryption on and off and found a small performance penalty for encryption.  This test was against the DC S3700.


Encryption | Test | TPS Avg
Off | OLTP 2 Threads | 195
On | OLTP 2 Threads | 189
Off | OLTP 4 Threads | 270
On | OLTP 4 Threads | 266
Off | RNDRW | 1209
On | RNDRW | 1166
Off | RNDWR | 609
On | RNDWR | 569

NAS Server Build – ASRock C2750D4I & Silverstone DS380

The other day I got a little frustrated with my Gen 8 Microserver.  I was trying to upgrade ESXi to 5.5, but the virtual media feature kept disconnecting in the middle of the install because I didn't have an iLO4 license (I actually bought an iLO4 Enterprise license, but I have no idea where I put it!).  What's the point of IPMI when you get stopped by licensing?  I hate having to physically plug in a USB key to upgrade VMware so much that I decided I'd just build a new server, which I honestly think is faster than messing around with getting an ISO image onto a USB stick.

Warning: I'm sorry to say that I cannot recommend this motherboard that I reviewed earlier.  I ended up having to RMA this board twice to get one that didn't crash.  The Marvell SATA controller was never stable long term under load even after multiple RMAs, so I ran it without using those ports, which sort of defeated the reason I got the board in the first place.  Then in 2017 the board died just shy of 3 years old, the shortest a motherboard has ever lasted me.  Generally I have been pretty happy with ASRock desktop boards, but this server board isn't stable enough for business or home use.  I have switched to Supermicro X10SDV motherboards for my home server builds.

Build List

ASRock C2750D4I Motherboard / CPU


Update: 2014-05-11.  Here’s a great video review on the motherboard…

12 SATA ports!  This motherboard is perfect for ZFS, which loves having direct access to JBOD disks.  The Marvell SATA controllers did not show up in VMware initially; however, Andreas Peetz provides a package for adding unsupported drivers to VMware, and this worked perfectly.  It took me a couple of minutes to realize all you need to do is run three commands (see the sketch after the update notes below):

Update November 16, 2014: It turned out the issue below was caused by a faulty Marvell controller on the motherboard.  I ran FreeBSD (a supported OS) and the fault occurred there as well, so I RMAed the motherboard… I got a bad motherboard again, but after a second RMA everything is stable in VMware, so you can disregard the warning below.

Update March 12, 2015.  My board continues to function okay, but some people are having issues with the drives working under VMware ESXi.  Read the comments for details.

Update August 23, 2014 ** WARNING: Read this before you run the commands below **  I had stability issues using the hack below to get the Marvell controllers to show up.  VMware started hanging as often as several times a day, requiring a system reboot.  This is the entry in the motherboard's event log: Critical Interrupt – I/O Channel Check NMI Asserted.  I swapped the Kingston memory out for Crucial memory on ASRock's HCL, but the issue still persisted, so I can't recommend this board for VMware.  After heavy I/O tests ZFS also detected data corruption on two drives connected to the Marvell controllers.  I am pretty sure this is because VMware does not officially support these controllers, so the issue likely doesn't exist on operating systems that do officially support the Marvell controller.
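For reference, installing a community-supported package from Andreas Peetz's V-Front depot generally looks like the following (a sketch assuming the sata-xahci package from the V-Front Online Depot; check his site for the current package name and depot URL, and reboot the host afterwards):

# esxcli software acceptance set --level=CommunitySupported
# esxcli network firewall ruleset set -e true -r httpClient
# esxcli software vib install -d http://vibsdepot.v-front.de -n sata-xahci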

IPMI (allows for KVM over IP).  After being spoiled by this on a Supermicro board, IPMI with KVM over IP is a must-have feature for me; I'll never plug a keyboard and monitor into a server again.

Avoton octa-core processor.  Normally I don't even look at Atom processors, but this is not your grandfather's Atom.  The Avoton supports VT-x, ECC memory, and AES-NI instructions, and is a lot more powerful, all at only 20W TDP.  This CPU Boss benchmark says it will probably perform similarly to the Xeon E3-1220L.  The Avoton can also go up to 64GB of memory where the E3 series is limited to 32GB, making it a good option for VMware or for a high-performance ZFS NAS.  The Avoton does not support VT-d, so there is no passing devices directly to VMs.

My only two disappointments are the lack of an internal USB header on the board (I always install VMware on a USB stick, so right now there's a stick hanging off the back) and the individual SATA ports; I wish they had used SFF-8087 mini-SAS connectors to cut down on the number of SATA cables.

Overall I am very impressed with this board and its server-grade features like IPMI.

Instead of going into more detail here, I'll just reference Patrick's review of the ASRock C2750D4I.

Alternative Avoton Boards

There are a few other options worth looking at.  The ASRock C2550D4I is the same board but Quad core instead of Octa Core.  I actually almost bought this one except I got the 2750 at a good price on SuperBiiz.

Also, the Supermicro A1SAi-2750F (octa-core) and A1SAi-2550F (quad-core) are good options if you don't need as many SATA ports or you're going to use a PCI-E SATA/SAS controller.  The Supermicro boards have the advantage of quad GbE ports and an internal USB header (not to mention USB 3.0), while sacrificing SATA ports: only 2 SATA3 ports and 4 SATA2 ports.  These Supermicro boards also use the smaller SO-DIMM memory.

Silverstone DS-380 – 8 Hot-Swap Bay Chassis

ds-380-and-asrock

 

The DS-380 has 8 hot-swap bays, plus room for four fixed 2.5″ drives for up to 12 drives.  As I started building this server I found the design was very well thought out.  Power button lockout (a necessity if you have kids), locking door, dust screens on fan intakes, etc.  The case is practical in that the designers cut costs where they could (like not painting the inside) but didn’t sacrifice anything of importance.

HP Gen 8 Microserver (left) next to Silverstone DS-380 (right)

A little larger than the HP Gen8 Microserver, but it can hold more than twice as many drives.  Also the Gen8 Microserver is a bit noisier.

ds-380-open

You'll notice above that from the top there is a set of two drive bays, then one bay by itself, and then a set of five.  This struck me as odd at first, but it's actually that way by design: if you have a tall PCI card plugged into your motherboard (such as a video card), you can forfeit the third bay from the top to make room for it.

The drive trays are plastic; obviously not as nice as metal trays, but not too bad either.  One nice feature is that screw holes on the bottom allow for mounting a 2.5″ drive such as an SSD, which is well thought out!  Also, a clear plastic piece runs along the left side of each tray and carries the hard drive activity LED light to the front of the case (see video below).

Here’s the official Silverstone DS-380 site, and here’s a very detailed review of the DS-380 with lots of pictures by Lawrence Lee.

Storage

Using 4TB drives, the 8 hot-swap bays would get you to 24TB usable with RAID-Z2 or RAID-6 (6 data drives x 4TB), plus you'd still have the 4 fixed 2.5″ bays left for SSDs.

Virtual NAS

I run a virtualized ZFS server on OmniOS following Gea's napp-in-one guide.  I deviate from his design slightly in that I run on top of VMDKs instead of passing the controllers to the guest VM (because I don't have VT-d on the Avoton).

ZIL – Seagate 600 Pro SSD

120GB Seagate 600 Pro SSD.  The ZIL (ZFS Intent Log) is the real trick to high-performance random writes: because the SSD can commit writes to its capacitor-backed cache, a write can be guaranteed to the requesting application before the data is transferred out of RAM and onto the spindles.

So far…

I'm pretty happy with the custom build.  I think the Gen 8 HP Microserver looks more professional than the DS-380, which looks more like a DIY server.  But what matters is on the inside, and having access to IPMI when I need it, without having to worry about licensing, is worth something in my book.

M1015 HBA In the HP Gen8 Microserver

Here’s a quick overview on installing the IBM ServerRaid M1015 HBA (aka LSI SAS9220-8i) in the HP Gen8 Microserver.

ibm_serverraid_m1015_bracket

These cards can be bought for around $100 on eBay.  The HBA has two 6Gbps SAS ports (each port has 4 lanes, and each lane is 6Gbps, giving a theoretical maximum of 24Gbps per port and 48Gbps if using both ports).  A typical configuration for maximum performance is one lane to each drive using an SFF-8087 breakout cable; with two of these cables the card is capable of running 8 drives.  You can run more drives with a SAS expander, but I haven't had a need to yet.  I typically flash it into IT (JBOD) mode.  This is a popular card for running ZFS, which is my use case.
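For reference, the usual M1015 crossflash to IT mode goes roughly like this (a sketch of the commonly documented procedure rather than exact steps; firmware file names vary by LSI release and a mistake here can brick the card, so follow a current guide).  Boot from a DOS USB stick containing the LSI tools and firmware, then:

# megarec -writesbr 0 sbrempty.bin
# megarec -cleanflash 0

Reboot, then flash the IT firmware and restore the SAS address printed on the card's sticker:

# sas2flsh -o -f 2118it.bin -b mptsas2.rom
# sas2flsh -o -sasadd 500605bXXXXXXXXX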

gen8_hp_microserver_sas1

The picture above shows the original location of the 4-drive bay SAS connector; you just need to move it to the HBA.  I didn't have to re-wire anything, and there is plenty of slack in the cable, so I just pulled it over to the M1015 and plugged it in (below).

gen8_hp_microserver_sas2

At first boot all my drives were recognized, and VMware and all the guests booted up as normal.

hp_gen8_microserver_m1015_hba

Also, a few people have asked about mounting an extra drive in the ODD bay; here's the power connection that I think could be tapped into with a Y-splitter (below).

hp_microserver_odd_bay_power

Does this have an advantage over the Gen8 Microserver's B120i Smart Array controller?  For a lot of setups it probably offers no advantage.  I probably wouldn't do it in my environment except that I already have a couple of M1015s lying around.  Here's what you get with the M1015:

  • In IT mode drives are hot-swappable.  No need to power down to swap out a bad drive.
  • The B120i only has two 6Gbps ports; the other two are 3Gbps.  The M1015 can run up to 8 lanes (10 if you also use the first two lanes from the B120i) at 6Gbps.  If you're using the server as a NAS you're more limited by the two 1Gbps NICs, so this shouldn't be an issue for most setups.
  • The M1015 is known to work with 4TB drives; the Microserver only supports up to 3TB.
  • VMware can be booted off a USB stick, but it needs at least one SATA drive to store the first VM's configuration, so whatever SATA controller that drive is on can't be used as a pass-through device.  If you want to pass an HBA directly to a VM (which is typical for napp-it all-in-one setups), you can pass the entire M1015 controller to the VM, which gives it direct hardware access to the drives (requires a CPU with VT-d).

 

Restore Zpool with CrashPlan

I messed up my zpool with a stuck log device, and rather than try to fix it I decided to wipe out the zpool and restore the zfs datasets using CrashPlan.

CrashPlan Restore

It took CrashPlan roughly a week to restore all my data (~1TB), which is making me consider local backups.  All permissions came through intact.  Dirty VM backups seemed to boot just fine (I also have daily clean backups using ghettoVCB just in case, but I didn't need them).  One nice thing is that I could prioritize certain files or folders on the restore by creating multiple restore jobs: I had a couple of high-priority VMs, then wanted the kids' movies to come in next, and then the rest of the data and VMs.

Installing CrashPlan on OmniOS

I'm in the process of switching my ZFS server from OpenIndiana to OmniOS, mainly because OmniOS is designed to be only a server system so it's a little cleaner, it has a stable production release that's commercially supported, and it has become Gea's OS of choice for Napp-It.  One of the last things I had to do was get CrashPlan up and running; here's a quick little howto…

Unfortunately, I got the error below:

So I went to look at the checkinstall script…

I'm not entirely sure how I fixed it.  I modified the checkinstall script to look for java in /usr/java/bin, but when I ran pkgadd the CrashPlan installer refused to run because it detected that the file had been modified, so I undid my change, re-ran pkgadd, and it worked…

Alternatively, if you have a secure network, you can change serviceHost in /opt/sfw/crashplan/conf from 127.0.0.1 to 0.0.0.0, and then on your client change serviceHost in C:\Program Files\CrashPlan\conf\ui.properties to your OmniOS IP address.
I also like to move my CrashPlan install and config onto the main pool where all my storage is… I had already created a dataset called /tank/crashplan:
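The relocation itself is just a dataset, a move, and a symlink, roughly like this (a sketch; stop the CrashPlan service first and adjust the paths to your install):

# zfs create tank/crashplan
# mv /opt/sfw/crashplan /tank/crashplan/
# ln -s /tank/crashplan/crashplan /opt/sfw/crashplan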

On a side note my ZFS server CrashPlan backup passed the 1TB mark today!
Here’s a video…

HP Microserver Gen8 Specs Released

Just saw on STH that the HP Microserver Gen8 specs have been released:

  • Intel Pentium G2020T (2 cores, 2.5GHz, 3MB cache, 35W) OR Intel Celeron G1610T
  • 2GB ECC memory (up to 16GB, 2 slots)
  • One PCIe expansion slot.
  • Dual gigabit ethernet ports (332i adapter)
  • Dynamic Smart Array B120i/ZM
  • 150w power supply
  • HP iLO4 remote management port
  • Tool-less maintenance
  • LED health-status light-bar
  • The RAID controller can handle hot-swap now?

My initial thoughts:

The Celeron G1610T or Pentium G2020T is a much-needed upgrade from the previous Microserver's AMD Turion II Neo N54L.  CPU Benchmark comparison with all three processors: http://www.cpubenchmark.net/mid_range_cpus.html  See the specs here: http://ark.intel.com/products/71070  Notice the lack of VT-d (unable to pass a PCI device directly to a VM), no AES-NI, and no Hyper-Threading.  If you really need those features, it is an LGA1155 socket, so if the CPU isn't soldered to the board there's the possibility of swapping in a Xeon E3 series processor… although you would need to be careful to run one that's cool enough.

Support for 16GB of memory is a nice upgrade from the previous Microserver's 8GB, and it is very difficult to find a server this small that supports ECC memory (yes, you need ECC memory).  Unfortunately 8GB modules are fairly expensive.

Dual gigabit ports are a nice upgrade for a NAS, and I believe this particular adapter supports teaming, so it may be possible to get a 2-gigabit aggregated link with a proper switch.

Since I use ZFS I was concerned that JBOD mode is not listed as a mode for the B120i, but this HP support article (Dynamic Smart Array Driver Support for Solaris) indicates RAID mode can be disabled, which puts the card into HBA mode.

HP iLO4 is a remote management feature; after being spoiled by Supermicro's IPMI I'll never plug a keyboard or monitor into a server again.  Basically this lets you remotely manage your server: power it on and off, use KVM, and load media remotely.  It seems to me that HP is trying to sell a subscription with this service, so I'm hoping the important features are free.  Update 2013-06-19: Remote console and media require the purchase of an iLO4 license, which will run ~$150 for a three-year term… a rather large disappointment.

The LED indicator could be a nice feature; it remains to be seen whether ZFS can interface with it or whether it can be controlled through some scripting.

I've been a fan of the HP Microservers since using my N40Ls (read my Amazon review of the HP Microserver); they're small, stackable servers you can stick just about anywhere, and they run virtually silent.  Considering all these features, if you were trying to build a server with similar specs you couldn't do it for less than buying one of these.  This will handle running a lightweight server for a home NAS, VMware ESXi with a few VMs, or a small business.


Start saving up, this makes for a great father-son project!  The G1610T model is $450 and the G2020T is $520… I believe the G2020T is only priced a few dollars more than the G1610T, so you would be better off purchasing the cheaper model and upgrading the processor, but maybe there is something included in the more expensive model that's not listed on the specs… perhaps a battery-backed write cache for the RAID card.  (Update 2013-06-19: It appears this is not the case…)

More G8 Microserver updates and leaked pictures from Monsta…