FreeNAS Mini XL, 8 bay Mini-ITX NAS

Catching up on email, I saw a newsletter from iX Systems announcing the FreeNAS Mini XL (the irony).  On the new FreeNAS Mini page it looks just like the FreeNAS Mini, only taller to accommodate 8 bays.

Available on Amazon starting at $1,500 with no drives.

Here’s the Quick Start Guide and Data Sheet.

The pictures show what appears to be the ASRock C2750D4I motherboard, which has an 8-core Atom (Avoton) processor.  With the upcoming FreeNAS 9.10 (based on FreeBSD 10) it should be able to run the bhyve hypervisor as well (at least from the CLI; a bhyve GUI might have to wait until FreeNAS 10), meaning a nice all-in-one hypervisor with ZFS without the need for VT-d.  This may end up being a great successor to the HP Microserver for those wanting to upgrade to a little more capacity.

The case is the Ablecom CS-T80, so I imagine we’ll start seeing it from Supermicro soon as well.  According to Ablecom it has 8 hotswap bays plus 2 x 2.5″ internal bays and still manages to have room for a slim DVD/Blu-ray drive.

ablecom_cs_t80

It’s really great to see an 8-bay Mini-ITX NAS case that’s nicer than the existing options out there.  I hope the FreeNAS Mini XL will have an option for a more powerful motherboard, even if it means using up the PCI-E slot with an HBA.  I’m not really a fan of the Marvell SATA controllers on that board, and of course a Xeon D would be nice.

 

 

VMware vs bhyve Performance Comparison

Playing with bhyve

Here’s a look at Gea’s popular all-in-one design, which allows VMware to run on top of ZFS on a single box using a virtual 10GbE storage network.  The design requires an HBA and a CPU that supports VT-d, so that the storage can be passed directly to a guest VM running a ZFS server (such as OmniOS or FreeNAS).  A virtual storage network is then used to share the storage back to VMware.

vmware_all_in_one_with_storage_network
VMware and ZFS: All-In-One Design
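
As a rough sketch of what that share-back step looks like from an OmniOS storage VM and the ESXi host (the pool name, dataset name, and storage-network addresses here are made-up examples, not taken from Gea’s guide):

# zfs create tank/vmstore
# zfs set compression=lz4 tank/vmstore
# zfs set sharenfs=rw=@10.10.10.0/24,root=@10.10.10.0/24 tank/vmstore

Then on the ESXi host, mount it as an NFS datastore over the VMXNET3 storage network:

# esxcli storage nfs add --host 10.10.10.2 --share /tank/vmstore --volume-name zfs-vmstore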

bhyve can simplify this design: since it runs under FreeBSD, it already has a ZFS server on the host.  This not only simplifies the design, but could potentially allow a hypervisor to run on simpler, less expensive hardware.  The same design under bhyve eliminates the need for a dedicated HBA and a CPU that supports VT-d.

freebsd_bhyve
Simpler bhyve design

I’ve never understood the advantage of Type-1 hypervisors (such as VMware and Xen) over Type-2 hypervisors (like KVM and bhyve).  Type-1 proponents say the hypervisor runs on bare metal instead of on an OS… I’m not sure how VMware isn’t considered an OS, except that it is a purpose-built OS and probably smaller.  It seems you could take a Linux distribution running KVM and take away features until at some point it becomes a Type-1 hypervisor.  Which is all fine, but it could actually be a disadvantage if you wanted some of those features (like ZFS).  A Type-2 hypervisor that supports ZFS appears to have a clear advantage (at least theoretically) over a Type-1 for this type of setup.

In fact, FreeBSD may be the best virtualization / storage platform.  You get ZFS, bhyve, and also jails.  You really only need to run bhyve when virtualizing a different OS.

bhyve is still pretty young, but I thought I’d run some tests to see where it’s at…

Environments

This is running on my X10SDV-F Datacenter in a Box Build.

In all environments the following parameters were used:

  • Supermicro X10SDV-F
  • Xeon D-1540
  • 32GB ECC DDR4 memory
  • IBM ServeRAID M1015 flashed to IT mode.
  • 4 x HGST Ultrastar 7K3000 2TB enterprise drives in RAID-Z.
  • One Intel DC S3700 100GB over-provisioned to 8GB used as the log device.
  • No L2ARC.
  • Compression = LZ4
  • Sync = standard (unless specified).
  • Guest (where tests are run): Ubuntu 14.04 LTS, 16GB virtual disk, 4 cores, 1GB memory.
  • OS defaults are left as-is; I didn’t try to tweak the number of NFS servers, sd.conf, etc.
  • My tests fit inside of ARC.  I ran each test 5 times on each platform to warm up the ARC.  The results are the average of the next 5 test runs.
  • I only tested an Ubuntu guest because it’s the only distribution I run (in quantity anyway) in addition to FreeBSD; a more thorough test should probably include other operating systems.

The environments were setup as follows:

1 – VM under ESXi 6 using NFS storage from FreeNAS 9.3 VM via VT-d

  • FreeNAS 9.3 installed under ESXi.
  • FreeNAS is given 24GB memory.
  • HBA is passed to it via VT-d.
  • Storage shared to VMware via NFSv3, virtual storage network on VMXNET3.
  • Ubuntu guest given VMware para-virtual drivers

2 – VM under ESXi 6 using NFS storage from OmniOS VM via VT-d

  • OmniOS r151014 LTS installed under ESXi.
  • OmniOS is given 24GB memory.
  • HBA is passed to it via VT-d.
  • Storage shared to VMware via NFSv3, virtual storage network on VMXNET3.
  • Ubuntu guest given VMware para-virtual drivers

3 – VM under FreeBSD bhyve

  • bhyve running on FreeBSD 10.1-Release
  • Guest storage is file image on ZFS dataset.

4 – VM under FreeBSD bhyve sync always

  • bhyve running on FreeBSD 10.1-Release
  • Guest storage is file image on ZFS dataset.
  • Sync=always
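
As a concrete sketch of environments 3 and 4, the guest disk was a file-backed image on a ZFS dataset.  On FreeBSD 10.1 the setup was roughly along these lines (dataset, image, and VM names are examples, the partition layout and tap0 networking are assumed, and booting a Linux guest needs the sysutils/grub2-bhyve port):

# kldload vmm
# zfs create -p tank/vm/ubuntu
# truncate -s 16G /tank/vm/ubuntu/disk.img

For environment 4 the dataset also gets:

# zfs set sync=always tank/vm/ubuntu

Then load the guest’s bootloader with grub-bhyve and start it with 4 vCPUs, 1GB memory, and virtio disk/network devices:

# printf '(hd0) /tank/vm/ubuntu/disk.img\n' > /tank/vm/ubuntu/device.map
# grub-bhyve -m /tank/vm/ubuntu/device.map -r hd0,msdos1 -M 1024M ubuntu
# bhyve -c 4 -m 1024M -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s 3:0,virtio-blk,/tank/vm/ubuntu/disk.img -l com1,stdio ubuntu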

Benchmark Results

MariaDB OLTP Load

This test is a mix of CPU and storage I/O.  bhyve (yellow) pulls ahead in the 2-thread test, probably because it doesn’t have to issue a sync after each write.  However, it falls behind on the 4-thread test even with that advantage, probably because it isn’t as efficient at CPU-bound work as VMware (see the next chart on finding primes).
sysbench_oltp
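
The OLTP numbers come from sysbench’s MySQL/MariaDB OLTP test run inside the guest; the invocations were roughly along these lines (table size, duration, and credentials are assumptions):

$ sysbench --test=oltp --mysql-user=root --mysql-db=test --oltp-table-size=1000000 prepare
$ sysbench --test=oltp --mysql-user=root --mysql-db=test --oltp-table-size=1000000 --num-threads=2 --max-time=60 --max-requests=0 run
$ sysbench --test=oltp --mysql-user=root --mysql-db=test --oltp-table-size=1000000 --num-threads=4 --max-time=60 --max-requests=0 run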

Finding Primes

Finding prime numbers with a VM under VMware is significantly faster than under bhyve.

sysbench_primes
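
The prime-number test is sysbench’s CPU benchmark, something like this (the prime limit and thread count are assumptions):

$ sysbench --test=cpu --cpu-max-prime=20000 --num-threads=4 run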

Random Read

bhyve has an advantage here, probably because it has direct access to ZFS.

sysbench_rndrd
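
The random and sequential results in this and the following sections are sysbench fileio tests; a sketch of the invocations (file size, block size, and duration are assumptions, with the file set kept small enough to fit in ARC as noted above):

$ sysbench --test=fileio --file-total-size=8G prepare
$ sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrd --file-block-size=16K --max-time=60 --max-requests=0 run

The same run command with --file-test-mode set to rndwr, rndrw, seqrd, seqwr, or seqrewr produces the rest of the charts, and a final pass with cleanup removes the test files.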

Random Write

With sync=standard bhyve has a clear advantage.  I’m not sure why VMware can outperform bhyve with sync=always.  I am merely speculating, but I wonder if VMware over NFS is coalescing smaller writes into larger blocks (maybe 64K or 128K) before sending them to the NFS server.

sysbench_rndwr

Random Read/Write

sysbench_rndrw

Sequential Read

Sequential reads are faster with bhyve’s direct storage access.

sysbench_seqrd

Sequential Write

This is what not having to sync every write gains you…

sysbench_seqwr

Sequential Rewrite

sysbench_seqrewr

 

Summary

VMware is a very fine virtualization platform that’s been well tuned.  All the overhead of VT-d, virtual 10GbE switches for the storage network, VM storage over NFS, etc. is not hurting its performance, except perhaps on sequential reads.

For as young as bhyve is, I’m happy with its performance compared to VMware, although it appears to be slower on the CPU-intensive tests.  I didn’t intend to compare CPU performance, so I haven’t done enough of a variety of tests to see exactly what the difference is there, but VMware appears to have an advantage.

One thing that is not clear to me is how safe running sync=standard is on bhyve.  The ideal scenario would be honoring fsync requests from the guest; however, I’m not sure bhyve has that kind of insight into the guest.  Probably the worst case under this scenario with sync=standard is losing the last 5 seconds of writes, and even that risk can be mitigated with a battery backup.  With standard sync there’s a lot of performance to be gained over VMware with NFS.  Even if you run bhyve with sync=always it does not perform badly, and it even outperforms the VMware all-in-one design on some tests.
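
If that risk matters for a particular workload, the sync setting is per dataset, so you can force it just on the dataset holding guest images and leave the rest of the pool at the default (the dataset name is an example):

# zfs set sync=always tank/vm
# zfs get sync tank/vm

sync=standard (the default) only pushes explicitly synchronous writes through the ZIL, while sync=always treats every write as synchronous.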

The upcoming FreeNAS 10 may be an interesting hypervisor + storage platform, especially if it provides a GUI to manage bhyve.

 

Supermicro X10SDV-F Build; Datacenter in a Box

I don’t have room for a couple of rackmount servers anymore, so I was thinking of ways to reduce the footprint and noise from my servers.  I’ve been very happy with Supermicro hardware, so here’s my Supermicro Mini-ITX datacenter-in-a-box build.

Supermicro Microtower

Supermicro X10SDV Motherboard

Unlike most processors, the Xeon D is an SoC (System on Chip), meaning it’s built into the motherboard.  Depending on your compute needs, you’ve got a lot of pricing / power flexibility with the Mini-ITX Supermicro X10SDV motherboards and their Xeon D SoC CPUs, ranging from a budget build with 2 cores to a ridiculous 16 cores rivaling high-end Xeon E5 class processors!

How many cores do you want?  CPU/Motherboard Options

x10sdv-4c-tln2f_spec
Supermicro board with fan

A few things to keep in mind when choosing a board: some come with a fan (normally indicated by a + after the core count), some don’t.  I suggest getting one with a fan unless you’re putting some serious airflow through the heatsink (such as in a 1U server).  I got one without a fan and had to do a Noctua mod (below).

Many versions of this board are rated for a 7-year lifespan, which means they have components designed to last longer than most boards!  Usually computers go obsolete before they die anyway, but it’s nice to have that option if you’re looking for a permanent solution.  A VMware / NAS server that’ll last you 7 years isn’t bad at all!

In the last five characters of the model you’ll see two options, “-TLN2F” and “-TLN4F”; this refers to the number of Ethernet ports (N2 comes with 2 x gigabit ports, and N4 usually comes with 2 gigabit plus 2 x 10 gigabit ports).  10GbE ports may come in handy for storage, and having 4 ports may also be useful if you’re going to run a router VM such as pfSense.

 

I bought the first model, known simply as the “X10SDV-F”, which comes with 8 cores and 2 gigabit network ports.  This board looks like it’s designed for high-density computing; it’s like cramming dual Xeon E5s into a Mini-ITX board.  The Xeon D-1540 will easily outperform the Xeon E3-1230v3 in most tests and can handle up to 128GB memory.  The board has two NICs (it also comes in a model with two additional 10GbE ports for four NICs total), IPMI, 6 SATA3 ports, a PCI-E slot, and an M.2 slot.

Supermicro X10SDV-F Motherboard
Supermicro X10SDV-F

IPMI / KVM Over-IP / Out of Band Management

One of the great features of these motherboards is you will never need to plug in a keyboard, mouse, or monitor.  In addition to the 2 or 4 normal Ethernet ports, there is one port off to the side, the management port.  Unlike HP iLO, this is a free feature on the Supermicro motherboards.  The IPMI interface will get a DHCP address. You can download the Free IPMIView software from Supermicro, or use the Android app to scan your network for the IP address.  Login as ADMIN / ADMIN (be sure to change the password).

Supermicro IPMI KVM over IP

You can reset or power off the server remotely, and even power it on when it’s switched off.
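
If you’d rather script this than use IPMIView, ipmitool from any Linux or FreeBSD box talks to the same BMC.  For example (the IP address is an example, and user ID 2 is usually the ADMIN account on Supermicro boards, which ipmitool user list 1 will confirm):

# ipmitool -I lanplus -H 10.2.0.50 -U ADMIN -P ADMIN lan print 1
# ipmitool -I lanplus -H 10.2.0.50 -U ADMIN -P ADMIN user set password 2 'a-better-password'
# ipmitool -I lanplus -H 10.2.0.50 -U ADMIN -P ADMIN chassis power status
# ipmitool -I lanplus -H 10.2.0.50 -U ADMIN -P ADMIN chassis power on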

 

Supermicro KVM

And of course you also get KVM over IP, which is so low-level you can get into the BIOS and even mount an ISO file from your workstation to boot from over the network!

When I first saw IPMI I made sure all my new servers would have it.  I hate messing around with keyboards, mice, and monitors, and I don’t have room for a hardware-based KVM solution.  This out-of-band management port is the perfect answer, and the best part is the ability to manage your server remotely.  I have used it from Idaho to power on servers and load ISO files on machines in California.

I should note that I would not expose the IPMI port to the internet; make sure it’s behind a firewall, accessible only through a VPN.

Cooling issue | heatsink not enough

The first boot was fine, but it crashed after about 5 minutes while I was in the BIOS setup… after a few resets I couldn’t even get it to POST.  I finally realized the CPU was getting too hot; Supermicro probably meant for this model to be in a 1U case with good airflow.  The X10SDV-TLN4F costs a little extra, but it comes with a CPU fan in addition to the 10GbE network adapters, so keep that in mind if you’re trying to decide between the two boards.

Noctua to the Rescue

I couldn’t find a CPU fan designed to fit this particular socket, so I bought a 60mm Noctua NF-A6x25.

60MM Noctua FAN on X10SDV-F
60MM Noctua Fan

This is my first Noctua fan and I think it’s the nicest fan I’ve ever owned.  It came packaged with screws, rubber leg things, an extension cord, a molex power adapter, and two noise reducer cables that slow the fan down a bit.  I actually can’t even hear the fan running at normal speed.

Noctua Fan on Xeon D-1540 X10SDV-F
Noctua Fan on Xeon D-1540

There’s not really a good way to screw the fan and the heatsink into the motherboard together, so I took the four rubber mounts and tucked them under the heatsink screws.  It’s a surprisingly secure fit; not ideal, but the fan is not going anywhere.

Supermicro CSE-721TQ-250B

This is what you would expect from Supermicro: a quality server-grade case.  It comes with a 250 watt 80 Plus power supply and four 3.5″ hotswap bays; the trays are the same as you would find on a 16-bay enterprise chassis.  It also comes with labels numbered 0 through 4, so you can choose to start labeling at 0 (the right way) or 1.  It is designed to fit two fixed 2.5″ drives, one on the side of the HDD cage, and the other can go on top in place of an optical drive.

The case is roomy enough to work in; I had no trouble adding an IBM ServeRAID M1015 / LSI 9220-8i.

CS721

 

I took this shot just to note that if you could figure out a way to secure an extra drive, there is room to fit three 2.5″ drives, or perhaps two drives plus an optical drive, though you’d have to use a Y-splitter to power them.  I should also note that you could use the M.2 slot to add another SSD.

supermicro_x10sdv-f_sc721_opened

The case is pretty quiet; I cannot hear it at all with my other computers running in the same room, so I’m not sure how much noise it makes on its own.

This case reminds me of the HP Microserver Gen8 and is probably about the same size and quality, but I think it’s a little roomier, and with Supermicro the IPMI is free.

Compared to the Silverstone DS380, the Supermicro CS721 is more compact.  The DS380 has the advantage of holding more drives: it can fit eight 3.5″ or 2.5″ drives in hotswap bays plus an additional four 2.5″ drives fixed in a cage.  Between the two cases I much prefer the Supermicro CS-721 even with less drive capacity.  The DS380 has vibration issues with all the drives populated, and it’s also not as easy to work in.  The CS-721 looks and feels much higher quality.

Storage Capacity

cs721_open_door

I loaded mine with two Intel DC S3700 SSDs and 4 x 6TB drives in RAID-Z (RAID-5); that gives the case up to 18TB of usable storage (three of the four drives’ capacity), which is a good amount for any data hoarder wanting to get started.

I think the Xeon D platform offers great value with a wide range of power and pricing options.  The prices on the Xeon D motherboards are reasonable considering the motherboard and CPU are combined; if you went with a Xeon E3 or E5 platform you’d pay about the same or more to purchase them separately.  You’ll be paying anywhere from $350 to $2,500 depending on how many cores you want.

Core Count Recommendations

For a NAS-only box such as FreeNAS, OmniOS + napp-it, NAS4Free, etc., or a VMware all-in-one with FreeNAS and one or two light guest VMs, I’d go with a simple 2C CPU.

For a bhyve or VMware + ZFS all-in-one I think the 4C is a great starter board; it will probably handle a lot more than most people need for a home server running a handful of VMs, including the ability to transcode with a Plex or Emby server.

From there you can get 6C, 8C, 12C, or 16C.  As you add cores the clock frequency starts to go down, so you don’t want to go overboard unless you really do need those cores.  Also, consider that you may prefer two or three smaller boards to allow failover instead of one powerful server.

What Do I Run On My Server Under My Desk?

Other Thoughts

cs721_front

I’m pretty happy with the build; I really like how much power you can get into a microserver these days.  My build has 8 cores (16 threads) and 32GB memory (it can go up to 128GB!), and with 6TB drives in RAID-Z (RAID-5) I have 18TB of usable space (more with ZFS compression).  With VMware and ZFS you could run a small datacenter from a box under your desk.

 

SSD ZFS ZIL SLOG Benchmarks – Intel DC S3700, Intel DC S3500, Seagate 600 Pro, Crucial MX100 Comparison

I ran some performance tests comparing the Intel DC S3700, Intel DC S3500, Seagate 600 Pro, and Crucial MX100 when used as a ZFS ZIL / SLOG device.  All of these drives have a capacitor-backed write cache, so they can lose power in the middle of a write without losing data.

IMG_9271

Here are the results….

zfs_zil_ssd_comparison

The Intel DC S3700 takes the lead, with the Intel DC S3500 a great second performer.  I am surprised at how much better the Intel SSDs performed than the Crucial and Seagate, considering Intel’s claims are not as impressive as the other two’s: Intel claims slower IOPS and slower sequential performance, yet its drives outperform the other two SSDs.

SSD | Seagate 600 Pro | Crucial MX100 | Intel DC S3500 | Intel DC S3700
Size (GB) | 240 | 256 | 80 | 100
Model | ST240FP0021 | CT256MX100SSD1 | SSDSC2BB080G4 | SSDSC2BA100G3T
Sequential (MB/s) | 500 | 550 | Read 340 / Write 100 | Read 500 / Write 200
IOPS Random Read | 85,000 | 85,000 | 70,000 | 75,000
IOPS Random Write | 11,000 | 70,000 | 7,000 | 19,000
Endurance (TB written) | 134 | 72 | 45 | 1874
Warranty | 5 years | 3 years | 5 years | 5 years

ZFS ZIL SLOG results:
OLTP 2 thread | 66 | 160 | 732 | 746
OLTP 4 thread | 89 | 234 | 989 | 1008
Random r/w | 208 | 543 | 1454 | 3641
Random write | 87 | 226 | 1844 | 1661
Sequential write (MB/s) | 5 | 14 | 93 | 99

Drive Costs

Conclusion

  • For most workloads use the Intel DC S3500.
  • For heavy workloads use the Intel DC S3700.
  • The best performance for the dollar is the Intel DC S3500.

In my environment the best device for a ZFS ZIL is either the Intel DC S3500 or the Intel DC S3700.  The S3700 is designed to hold up to very heavy usage: you could overwrite the entire 100GB drive 10 times every day for 5 years (which works out to the roughly 1,874TB endurance rating in the table above).  With the DC S3500 you could write out 25GB every day for 5 years (about 45TB).  Probably for most environments the DC S3500 is enough.

I should note that both the Crucial and Seagate still improved performance over having no SLOG at all.

Unanswered Questions

I would like to know why the Seagate 600 Pro and Crucial MX100 performed so badly… my suspicion is that it’s the way ESXi on NFS forces a cache sync on every write: the Seagate and Crucial may be obeying the sync command, while the Intel drives ignore it because they know they can rely on their power-loss protection mechanism.  I’m not entirely sure this is the difference, but it’s my best guess.

Testing Environment

This is based on my Supermicro 2U ZFS Server Build with a Xeon E3-1240v3.  The ZFS server is FreeNAS 9.2.1.7 running under VMware ESXi 5.5.  The HBA is the LSI 2308 built into the Supermicro X10SL7-F, flashed to IT mode and passed to FreeNAS using VT-d.  The FreeNAS VM is given 8GB memory.

The zpool is 3 x 7200RPM Seagate 2TB drives in RAID-Z; in all tests an Intel DC S3700 is used as the L2ARC.  Compression = LZ4, deduplication off, sync = standard, encryption = off.  The ZFS dataset is shared back to ESXi via NFS, and on that NFS share is a guest VM running Ubuntu 14.04 which is given 1GB memory and 2 cores.  The ZIL device is changed out between tests.  I ran each test seven times and averaged the results, discarding the first three runs to allow some data to get cached into ARC (I did not see any performance improvement after repeating a test three times, so I believe that was sufficient).
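
Swapping the SLOG between runs is just a matter of removing one log device and adding the next from the FreeNAS shell (the pool and device labels here are examples):

# zpool remove tank gpt/slog-s3500
# zpool add tank log gpt/slog-s3700
# zpool status tank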

Thoughts for Future Tests

I’d like to repeat these tests using OmniOS and Solaris sometime but who knows if I’ll ever get to it.  I imagine the results would be pretty close.  Also, of particular interest would be testing on  VMware ESXi 6 beta… I’d be curious to see if there are any changes in how NFS performs there… but if I tested it I wouldn’t be able to post the results because of the NDA.

Test Commands
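
The benchmarks were sysbench runs from the Ubuntu guest; a rough sketch of the kind of OLTP and fileio invocations behind the table above (table size, file size, and durations are assumptions):

$ sysbench --test=oltp --mysql-user=root --mysql-db=test --oltp-table-size=1000000 --num-threads=2 --max-time=60 --max-requests=0 run
$ sysbench --test=oltp --mysql-user=root --mysql-db=test --oltp-table-size=1000000 --num-threads=4 --max-time=60 --max-requests=0 run
$ sysbench --test=fileio --file-total-size=4G prepare
$ sysbench --test=fileio --file-total-size=4G --file-test-mode=rndrw --max-time=60 --max-requests=0 run
$ sysbench --test=fileio --file-total-size=4G --file-test-mode=rndwr --max-time=60 --max-requests=0 run
$ sysbench --test=fileio --file-total-size=4G --file-test-mode=seqwr run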

Supermicro 2U ZFS Server Build

Here’s my latest VMware + ZFS VSAN all-in-one server build based on a Supermicro platform… this borrows from the napp-it all-in-one concept, where you pass the SAS/SATA controller to the ZFS VM using VT-d and then share a ZFS dataset back to VMware using NFS or iSCSI.  For this particular server I’m using FreeNAS as the VSAN.

Chassis – 2U Supermicro 6 Bay Hotswap

15161878016_af4bd6c2fd_k

For the case I went with the Supermicro CSE-822T-400LPB.  It comes with rails, 6 hotswap bays, a 400W PSU (80 Plus energy efficiency), and a 5.25″ bay which works great for installing a mobile rack.

The 3,400 RPM Nidec UltraFlo fans are too loud for my house (although quiet for a rack-mount server), and there is no way to keep the speed low in Supermicro’s BIOS, so I replaced them with low-noise Antec 80mm fans which are a couple of dollars each.

14998128759_079d8fe1ab_k

14998338517_aa3f5ff041_k

Unfortunately these fans only have 3 wires (no RPM sensor), so I used freeipmi (apt-get install freeipmi) to disable monitoring of the fan speeds:

# ipmi-sensors-config -h 10.2.0.228 -u ADMIN -p ADMIN --filename=ipmi.config --checkout

Under the FAN sections change this to “No”:

Enable_Scanning_On_This_Sensor          No

# ipmi-sensors-config -h 10.2.0.228 -u ADMIN -p ADMIN --filename=ipmi.config --commit

Probably a better solution is to use 4-wire fans and lower the sensor thresholds, but then you’re looking at around $10 per fan.
The server makes about as much noise as my desktop computer now.

Motherboard – Supermicro X10SL7-F with LSI-2308

Supermicro’s X10SL7-F has a built-in LSI 2308 controller with 8 SAS/SATA ports, which works perfectly across the two sets of hotswap bays.  I used the instructions in the FreeNAS forum post to flash the controller into IT mode.  Under VMware the LSI 2308 is passed to the ZFS server, which is connected to all 6 x 3.5″ hotswap bays plus the first two bays of the mobile rack.  This allows for 6 drives in a RAID-Z2 configuration, plus one SSD for the ZIL and another for L2ARC.  The final two 2.5″ bays in the mobile rack are connected to the two 6Gbps ports on the motherboard and used by VMware.  I usually enable VMware’s host cache on SSDs.

vmware_zfs_vtd_supermicro_lsi2308
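
I won’t repeat the whole forum procedure here, but the IT-mode flash boils down to erasing the controller’s flash and writing the IT firmware with LSI’s sas2flash utility from an EFI or DOS boot disk, very roughly like this (the firmware and BIOS file names below are placeholders; use the exact files and steps from the forum post, and don’t power off between the erase and the flash or you can brick the controller):

# sas2flash -listall
# sas2flash -o -e 6
# sas2flash -o -f 2308it.bin -b mptsas2.rom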

Memory: Crucial

Crucial 8GB DDR3L (low-voltage) ECC server memory; I got the 16GB kit.

CPU Xeon E3-1231v3 Quad Core

I think the Xeon E3-1231v3 is about the best bang for the buck for an E3-class CPU right now.  You get four 3.4GHz cores (3.8GHz turbo), VT-d, ECC support, and hyper-threading.  It’s about $3 more than the Xeon E3-1230v3 but 100MHz faster.  I’m actually using the Xeon E3-1240v3 (which is essentially identical to the 1231v3) because I saw a good price on a used one a while ago.

Mobile Rack 2.5″ SSD bay


15184899715_d71b7ad69a_k
ICY DOCK ToughArmor
4 x 2.5″ Mobile Rack.  The hotswap bays seem to be pretty high-quality metal.

Hard drives: I’m running 2TB Seagate 7200 RPM drives.  Although they may not be the most reliable drives, they’re cheap, and I’m confident with RAID-Z2 (I also always keep two off-site backups and one onsite backup).  For SSDs I use Intel’s DC S3700 (here’s some performance testing I did on a different system; hopefully I’ll get a chance to retest on this build).

Racking with Rails on a Lackrack

15152114936_a863dcbece_o (1)
A while ago I discovered the Lackrack (these are $7 at Ikea)… since I have rails I figured I may as well install them.  The legs were slightly too close together, so I used a handsaw to cut a small notch for the rails and mounted the rails using particle-board screws.  One thing to note is that the legs are hollow just below where the top screw of the rail mount goes, so only that top screw is going to bear any load.

14998127309_3741d99834_k

Here’s the final product:

supermicro_server_lackrack

What VMs run on the server?

  • ZFS – provides CIFS and NFS storage for VMware and for Windows, Linux, and FreeBSD clients.  Basically all our movies, pictures, videos, music, and anything of importance go on here.
  • Backup server – backs up some of my offsite cloud servers.
  • Kolab – Mail / Groupware server (Exchange alternative)
  • Minecraft Server
  • Owncloud (Dropbox replacement)
  • VPN Server
  • Squid Proxy Cache (I have limited bandwidth where I’m at so this helps a lot).
  • DLNA server (so Android clients can watch movies).