Here’s my current battlestation… since the office and guest bedroom are shared, I recently migrated from a desktop to a Dell Latitude E5450 with a docking station so I can use the room as an office most of the time but undock when guests are staying the night.

Starting with the Lackracks

  • 2U server running VMware and FreeNAS
  • Samsung ML-2850ND laser printer
  • On the top table, networking stuff behind some videos
  • A Tivoli Model One table-top radio, which I use as my computer speaker (stereo gives me a headache, so it’s a mono speaker)
  • Dell Docking station
  • Dell Latitude E5450 with Nvidia GeForce 840M.  I looked at a few options and settled on this model because I wanted NBD support, a robust docking solution, and a GPU for the occasional game.  Upgrades from Dell are pricey, so I bought another 8GB memory module from Amazon to bring it up to 16GB, plus a Samsung 850 EVO 1TB SSD.
  • Portable Blu-ray drive so we can watch movies
  • A couple of ASUS monitors
  • Razer Orochi mouse.  This is the only mouse I’ve gotten to work well left-handed, since it can reverse the buttons in its firmware instead of relying on the OS, which doesn’t always produce consistent results–you left-handers know what I mean.
  • Microsoft Sidewinder X4 keyboard
  • On the far right, a ScanSnap S1300.  About 6 or 7 years ago I got an IRS audit with an accusation that I owed much more.  I am honest on my taxes, but they don’t take a Quicken report as evidence, so I had to prove it by digging through boxes of papers to find all the supporting documents for my return.  It was then that I decided to go paperless.  I now scan and OCR every document that comes across my desk per What Happens When You Send Me A Letter.  We are now entirely paperless except for one small desk drawer where I file a few important papers.
  • Speaking of important papers, in front of the laptop is a stamp I use to mark documents as scanned when I want to keep the physical copy–every once in a while I run across a document I don’t want to shred, such as my vehicle title.  The stamp keeps me from accidentally scanning the same papers multiple times when I’m not sure whether I’ve scanned them in yet.


VMware vs bhyve Performance Comparison

Playing with bhyve

Here’s a look at Gea’s popular All-in-one design which allows VMware to run on top of ZFS on a single box using a virtual 10Gbe storage network.  The design requires an HBA, and a CPU that supports VT-d so that the storage can be passed directly to a guest VM running a ZFS server (such as OmniOS or FreeNAS).  Then a virtual storage network is used to share the storage back to VMware.


VMware and ZFS: All-In-One Design

bhyve can simplify this design: since it runs under FreeBSD, it already has a ZFS server.  This not only simplifies the design, it could also allow the hypervisor to run on simpler, less expensive hardware–the same design under bhyve needs neither a dedicated HBA nor a CPU that supports VT-d.


Simpler bhyve design

I’ve never understood the advantage of Type-1 hypervisors (such as VMware and Xen) over Type-2 hypervisors (like KVM and bhyve).  Type-1 proponents say the hypervisor runs on bare metal instead of an OS… I’m not sure how VMware isn’t considered an OS, except that it’s a purpose-built OS and probably smaller.  It seems you could take a Linux distribution running KVM and strip away features until at some point it becomes a Type-1 hypervisor.  Which is all fine, but it could actually be a disadvantage if you wanted some of those features (like ZFS).  A Type-2 hypervisor that supports ZFS appears to have a clear advantage (at least theoretically) over a Type-1 for this kind of setup.

In fact, FreeBSD may be the best virtualization / storage platform.  You get ZFS, bhyve, and also jails.  You really only need to run bhyve when virtualizing a different OS.

bhyve is still pretty young, but I thought I’d run some tests to see where it’s at…


This is running on my X10SDV-F Datacenter in a Box Build.

In all environments the following parameters were used:

  • Supermicro X10SDV-F
  • Xeon D-1540
  • 32GB ECC DDR4 memory
  • IBM ServerRAID M1015 flashed to IT mode.
  • 4 x HGST Ultrastar 7K3000 2TB enterprise drives in RAID-Z
  • One DC S3700 100GB over-provisioned to 8GB used as the log device.
  • No L2ARC.
  • Compression = LZ4
  • Sync = standard (unless specified).
  • Guest (where tests are run): Ubuntu 14.04 LTS, 16GB disk, 4 cores, 1GB memory.
  • OS defaults are left as is; I didn’t try to tweak the number of NFS servers, sd.conf, etc.
  • My tests fit inside of ARC.  I ran each test 5 times on each platform to warm up the ARC.  The results are the average of the next 5 test runs.
  • I only tested an Ubuntu guest because it’s the only distribution I run (in quantity, anyway) in addition to FreeBSD; I suppose a more thorough test would include other operating systems.

The environments were setup as follows:

1 – VM under ESXi 6 using NFS storage from FreeNAS 9.3 VM via VT-d

  • FreeNAS 9.3 installed under ESXi.
  • FreeNAS is given 24GB memory.
  • HBA is passed to it via VT-d.
  • Storage shared to VMware via NFSv3, virtual storage network on VMXNET3.
  • Ubuntu guest given VMware para-virtual drivers

2 – VM under ESXi 6 using NFS storage from OmniOS VM via VT-d

  • OmniOS r151014 LTS installed under ESXi.
  • OmniOS is given 24GB memory.
  • HBA is passed to it via VT-d.
  • Storage shared to VMware via NFSv3, virtual storage network on VMXNET3.
  • Ubuntu guest given VMware para-virtual drivers

3 – VM under FreeBSD bhyve

  • bhyve running on FreeBSD 10.1-Release
  • Guest storage is file image on ZFS dataset.

4 – VM under FreeBSD bhyve sync always

  • bhyve running on FreeBSD 10.1-Release
  • Guest storage is file image on ZFS dataset.
  • Sync=always
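
For reference, environments 3 and 4 can be reproduced with something like the following on FreeBSD 10.1 (the dataset, image, and VM names here are my own placeholders, not from the original post):

```shell
# Load bhyve and networking kernel modules
kldload vmm
kldload if_tap

# File-backed guest disk on a ZFS dataset
zfs create tank/vms
truncate -s 16G /tank/vms/ubuntu.img
# zfs set sync=always tank/vms    # environment 4 only

# Boot the Ubuntu guest with the helper script shipped with FreeBSD 10.x
sh /usr/share/examples/bhyve/vmrun.sh -c 4 -m 1024M -t tap0 \
   -d /tank/vms/ubuntu.img -i -I ubuntu-14.04-server-amd64.iso ubuntuvm
```

The `-i -I` flags boot from the install ISO on the first run; afterwards the guest boots straight from the disk image.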

Benchmark Results

MariaDB OLTP Load

This test is a mix of CPU and storage I/O.  bhyve (yellow) pulls ahead in the 2-thread test, probably because it doesn’t have to issue a sync after each write.  However, it falls behind on the 4-thread test even with that advantage, probably because it isn’t as efficient at CPU processing as VMware (see the next chart on finding primes).

Finding Primes

Finding prime numbers with a VM under VMware is significantly faster than under bhyve.
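
The post doesn’t name the benchmark tool, but sysbench’s OLTP and CPU tests match these workloads; a hypothetical invocation (legacy sysbench 0.4 syntax, database name and credentials assumed) might look like:

```shell
# MariaDB OLTP mix, 4 threads
sysbench --test=oltp --mysql-user=root --mysql-db=test \
         --oltp-table-size=1000000 --num-threads=4 prepare
sysbench --test=oltp --mysql-user=root --mysql-db=test \
         --oltp-table-size=1000000 --num-threads=4 run

# CPU test: find all primes up to 20000
sysbench --test=cpu --cpu-max-prime=20000 --num-threads=4 run
```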


Random Read

bhyve has an advantage, probably because it has direct access to ZFS.


Random Write

With sync=standard bhyve has a clear advantage.  I’m not sure why VMware outperforms bhyve with sync=always.  I am merely speculating, but I wonder if VMware over NFS is coalescing smaller writes into larger blocks (maybe 64K or 128K) before sending them to the NFS server.


Random Read/Write


Sequential Read

Sequential reads are faster with bhyve’s direct storage access.


Sequential Write

This is what not having to sync every write will gain you…


Sequential Rewrite




VMware is a very fine virtualization platform that’s been well tuned.  All the overhead of VT-d, virtual 10Gbe switches for the storage network, VM storage over NFS, etc. isn’t hurting its performance, except perhaps on sequential reads.

For as young as bhyve is, I’m happy with its performance compared to VMware, though it appears to be slower on the CPU-intensive tests.  I didn’t intend to compare CPU performance, so I haven’t run enough of a variety of tests to see where the difference lies, but VMware appears to have an advantage.

One thing that is not clear to me is how safe running sync=standard is on bhyve.  The ideal scenario would be honoring fsync requests from the guest; I’m not sure bhyve has that kind of insight into the guest.  Probably the worst case under sync=standard is losing the last 5 seconds of writes–and even that risk can be mitigated with a battery backup.  With standard sync there’s a lot of performance to be gained over VMware with NFS.  Even if you run bhyve with sync=always it does not perform badly, and it even outperforms the VMware All-in-one design on some tests.
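
For reference, the sync behavior discussed above is a per-dataset ZFS property (the dataset name here is an assumption):

```shell
zfs get sync tank/vms             # default is "standard"
zfs set sync=always tank/vms      # every write committed to stable storage first
zfs set sync=standard tank/vms    # only writes the guest explicitly syncs hit the ZIL
```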

The upcoming FreeNAS 10 may be an interesting hypervisor + storage platform, especially if it provides a GUI to manage bhyve.


Supermicro X10SDV-F Build; Datacenter in a Box

I don’t have room for a couple of rackmount servers anymore, so I was thinking of ways to reduce the footprint and noise from my servers.  I’ve been very happy with Supermicro hardware, so here’s my Supermicro Mini-ITX Datacenter-in-a-box build:

Supermicro X10SDV-F Motherboard

This board looks like it’s designed for high-density computing–it’s the most computing power I have ever seen crammed into a Mini-ITX board.  It has a Xeon D-1540, which will outperform the Xeon E3-1230v3 in most tests, can handle up to 128GB memory, and offers two NICs (a sibling model adds two 10Gbe ports), IPMI, 6 SATA-3 ports, a PCI-E slot, and an M.2 slot.


Cooling issue | heatsink not enough

The first boot was fine, but it crashed after about 5 minutes while I was in the BIOS setup… after a few resets I couldn’t even get it to POST.  I finally realized the CPU was getting too hot.  Supermicro probably meant for this model to go in a 1U case with good airflow.  The X10SDV-TLN4F costs a little extra, but it comes with a CPU fan in addition to the 10Gbe network adapters, so keep that in mind if you’re deciding between the two boards.

Noctua to the Rescue

I couldn’t find a CPU fan designed to fit this particular socket, so I bought a 60mm Noctua.

60MM Noctua FAN on X10SDV-F


This is my first Noctua fan and I think it’s the nicest fan I’ve ever owned.  It came packaged with screws, rubber leg things, an extension cord, a molex power adapter, and two noise reducer cables that slow the fan down a bit.  I actually can’t even hear the fan running at normal speed.

Noctua Fan on Xeon D-1540 X10SDV-F


There’s not really a good way to screw the fan and the heatsink into the motherboard together, so I took the four rubber mounts and tucked them under the heatsink screws.  This is surprisingly secure; it’s not ideal, but the fan isn’t going anywhere.

Supermicro CSE-721TQ-250B

This is what you would expect from Supermicro: a quality server-grade case.  It comes with a 250-watt 80 Plus power supply and four 3.5″ hotswap bays–the trays are the same as you would find on a 16-bay enterprise chassis.  It also comes with bay labels numbered 0 through 4, so you can choose to start labeling at 0 (the right way) or 1.  It is designed to fit two fixed 2.5″ drives: one on the side of the HDD cage, and one on top in place of an optical drive.

The case is roomy enough to work in; I had no trouble adding an IBM ServerRAID M1015 / LSI 9220-8i.



I took this shot just to note that if you could figure out a way to secure an extra drive, there is room for three drives–or perhaps two drives plus an optical drive–though you’d have to use a Y-splitter to power it.  I should also note that you could use the M.2 slot to add another SSD.


The case is pretty quiet; I cannot hear it at all with my other computers running in the same room, so I’m not sure how much noise it makes on its own.

This case reminds me of the HP Microserver Gen8 and is probably about the same size and quality, but I think it’s a little roomier, and with Supermicro, IPMI is free.

Compared to the DS380, the Supermicro CS-721 is more compact.  The DS380 has the advantage of holding more drives: it can fit eight 3.5″ or 2.5″ drives in hotswap bays plus four more 2.5″ fixed in a cage.  Between the two cases I much prefer the Supermicro CS-721, even with less drive capacity.  The DS380 has vibration issues with all the drives populated, and it’s also not as easy to work with.  The CS-721 looks and feels much higher quality.


Storage Capacity

6TB drives are about the largest you’d want to use; with four of those in RAID-Z (RAID-5) the case can provide up to 18TB of storage, which is a good amount for any data hoarder wanting to get started.

Other Thoughts

What I would love to see from case manufacturers is a NAS case between the CS-721 (Mini-ITX) and CS-743 (E-ATX).  The market could use a good mATX-sized hotswap NAS case.

I’m pretty happy with the build; despite its small size it can house a lot of computational power: 8 cores (16 threads), up to 128GB memory, and up to 18TB of data.  With VMware and ZFS you could run a small datacenter from a box under your desk.

I should be posting some ZFS benchmarks on this build shortly.

FreeNAS 9.3 on VMware ESXi 6.0 Guide

This is a guide to installing FreeNAS 9.3 under VMware ESXi and then using ZFS to share the storage back to VMware.  It is roughly based on Napp-It’s All-In-One design, except that it uses FreeNAS instead of OmniOS.


Disclaimer:  I should note that FreeNAS does not officially support running virtualized in production environments.  If you run into any problems and ask for help on the FreeNAS forums, I have no doubt that Cyberjock will respond with “So, you want to lose all your data?”  So, with that disclaimer aside let’s get going:

Update: Josh Paetzel wrote a post on Virtualizing FreeNAS so this is somewhat “official” now.  I would still exercise caution.

1. Get proper hardware

SuperMicro X10SL7-F (which has a built in LSI2308).
Xeon E3-1240v3
ECC Memory

Hard drives.  The LSI2308 has 8 ports; I like to do two DC S3700s for a striped SLOG device and a RAID-Z2 of spinners on the other 6 ports.  Also get one drive (preferably two, for a mirror) to plug into the SATA ports (not on the LSI controller) for the local ESXi datastore.  I’m using DC S3700s because that’s what I have, but this doesn’t need to be fast storage–it’s just to put FreeNAS on.

2.  Flash the LSI 2308 to IT firmware.

Here are instructions to flash the firmware:  FreeNAS 9.3 wants P16, which you can get from Supermicro:

3. Optional: Over-provision ZIL / SLOG SSDs.

If you’re going to use SSDs for SLOG you can over-provision them.  You can boot into an Ubuntu LiveCD and use hdparm; instructions are here:  You can also do this after VMware is installed by passing the LSI controller to an Ubuntu VM (FreeNAS doesn’t have hdparm).  I usually over-provision down to 8GB.
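
As an illustration, over-provisioning to 8GB with hdparm from an Ubuntu LiveCD looks roughly like this (the device name is an example–check yours first; `-N p…` sets a new persistent max sector count):

```shell
# 8GiB = 8 * 1024^3 / 512 = 16,777,216 512-byte sectors
sudo hdparm -N /dev/sdb    # view current max sectors / native max
sudo hdparm -Np16777216 --yes-i-know-what-i-am-doing /dev/sdb
# Power-cycle the drive for the new capacity to take effect
```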

4. Install VMware ESXi 6

The free version of the hypervisor is here. I usually install it to a USB drive plugged into the motherboard’s internal header.

Under Configuration, Storage, click Add Storage.  Choose one (or two) of the local storage disks plugged into your SATA ports (do not add a disk on your LSI controller).

5. Create a Virtual Storage Network.

For this example my VMware management IP is, the VMware Storage Network IP is, and the FreeNAS Storage Network IP is

Create a virtual storage network with jumbo frames enabled.

VMware, Configuration, Add Networking. Virtual Machine…

Create a standard switch (uncheck any physical adapters).



Add Networking again, VMKernel, VMKernel…  Select vSwitch1 (which you just created in the previous step) and give it a network different from your main network.  I use for my storage so you’d put for the IP and for the netmask.


Some people are having trouble with an MTU of 9000.  I suggest leaving the MTU at 1500 and make sure everything works there before testing an MTU of 9000.  Also, if you run into networking issues look at disabling TSO offloading (see comments).

Under vSwitch1 go to Properties, select vSwitch, Edit, change the MTU to 9000.  Answer yes to the no active NICs warning.


Then select the Storage Kernel port, edit, and set the MTU to 9000.


6. Configure the LSI 2308 for Passthrough (VT-d).

Configuration, Advanced Settings, Configure Passthrough.


Mark the LSI2308 controller for passthrough.


You must have VT-d enabled in the BIOS for this to work, so if it won’t let you for some reason, check your BIOS settings.

Reboot VMware.

7. Create the FreeNAS VM.

Download the FreeNAS ISO from

Create a new VM, choose Custom, put it on one of the drives on the SATA ports, Virtual Machine version 11, Guest OS type FreeBSD 64-bit, 1 socket and 2 cores.  Try to give it at least 8GB of memory.  For networking give it three adapters: the 1st NIC assigned to the VM Network and the 2nd NIC to the storage network–set both to VMXNET3.  Then add a 3rd NIC, set it to E1000 on the VM Network, which we’ll use temporarily until the VMXNET3 drivers are up and running.


SCSI controller should be the default, LSI Logic Parallel.

Choose Edit the Virtual Machine before completion.

Here you can add a second boot drive for a mirror if you have two local storage drives.

Before finishing the creation of the VM click Add, select PCI Devices, and choose the LSI 2308.


And be sure to go into the CD/DVD drive settings and set it to boot off the FreeNAS iso.  Then finish creation of the VM.

8. Install FreeNAS.

Boot up the VM and install FreeNAS to your SATA drive (or two of them to mirror the boot).


After it’s finished installing, reboot.

9. Install VMware Tools.

The FreeBSD compat6x and perl packages are no longer available on their FTP site.  I’ve updated the instructions to install the binary version from the VMware Tools installer.


In VMware, right-click the FreeNAS VM, choose Guest, then Install/Upgrade VMware Tools.  Choose interactive mode.

Mount the CD-ROM and copy the VMware install files to FreeNAS:

Once installed, navigate to the WebGUI.  It starts out presenting a wizard; I usually set my language and timezone, then exit the rest of the wizard.

Under System, Tunables, add three Tunables.  The Variables should be vmxnet3_load, vmxnet_load, and vmmemctl_load.  The Type should be Loader and the Value YES for all three.

(I think all that’s really needed is vmxnet3_load.)
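
For reference, these Loader tunables end up as the equivalent of the following /boot/loader.conf lines:

```shell
# /boot/loader.conf — load the VMware Tools kernel modules at boot
vmxnet3_load="YES"
vmxnet_load="YES"
vmmemctl_load="YES"
```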

Reboot FreeNAS.  On reboot you should notice that the VMXNET3 NICs now work (except the NIC on the storage network can’t find a DHCP server–we’ll set it to static later), and VMware should now report that VMware Tools are installed.


If all looks well, shut down FreeNAS (you can now choose Shutdown Guest from VMware to safely power it off), remove the E1000 NIC, and boot it back up (note that the IP address shown for the WebGUI will be different).

10.  Update FreeNAS

Before doing anything else, let’s upgrade FreeNAS to the latest stable release under System, Update.

This is a great time to make some tea.

Once that’s done it should reboot.  Then I always go back and check for updates again to make sure there’s nothing left.

11. SSL Certificate on the Management Interface (optional)

On my DHCP server I’ll give FreeNAS a static IP, and set up an entry for it on my local DNS server.  So for this example I’ll have a DNS entry on my internal network for

If you don’t have your own internal Certificate Authority you can create one right in FreeNAS:

System, CAs, Create Internal CA.  Increase the key length to 4096 and make sure the Digest Algorithm is set to SHA256.


Click on the CA you just created, hit the Export Certificate button, and open the file to install the root certificate you just created on your computer.  You can install it for just your profile or for the local machine (I usually do local machine), and make sure to store it in the Trusted Root Certificate Authorities store.



Just a warning: keep this root CA guarded.  If a hacker were to access it, he could generate certificates to impersonate anyone (including your bank) and initiate a MITM attack.

Also Export the Private Key of the CA and store it some place safe.

Now create the certificate…

System, Certificates, Create Internal Certificate.  Once again, bump the key length to 4096.  The important part here is that the Common Name must match your DNS entry.  If you are going to access FreeNAS by IP, put the IP address in the Common Name field.


System, Information.  Set the hostname to your DNS name.

System, General.  Change the protocol to HTTPS and select the certificate you created.  Now you should be able to use HTTPS to access the FreeNAS WebGUI.

12. Setup Email Notifications

Account, Users, Root, Change Email: set the email address where you want to receive alerts (like when a drive fails or an update is available).

System, Advanced

Show console messages in the footer: Enable (I find it useful).

System Email…

Fill in your SMTP server info… and send a test email to make sure it works.

13.  Setup a Proper Swap

FreeNAS by default creates a swap partition on each drive and stripes the swap across them, so if any one drive fails there’s a chance your system will crash.  We don’t want this.

System, Advanced…

“Swap size on each drive in GiB, affects new disks only. Setting this to 0 disables swap creation completely (STRONGLY DISCOURAGED).”  Set this to 0–we’re replacing it with a proper swap file.

Open the shell.  This will create a 4GB swap file (based on
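
The commands for this step didn’t survive in this copy of the post; creating the swap file was presumably something along these lines (the path matches the swapfile tunable in the next step):

```shell
truncate -s 4G /usr/swap   # allocate a 4GB swap file
chmod 600 /usr/swap        # root-only access
```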


System, Tunables, Add Tunable.

Variable=swapfile, Value=/usr/swap, Type=rc.conf


The next time you reboot, click Display System Processes in the left navigation pane and make sure the swap shows up.  If so, it’s working.


14. Configure FreeNAS Networking

Setup the Management Network (which you are currently using to connect to the WebGUI).

Network, Interfaces, Add Interface: choose the management NIC, vmx3f0, and set it to DHCP.


Setup the Storage Network

Add Interface: choose the storage NIC, vmx3f1, and set it to (I set up my VMware hosts on 10.55.0.x and ZFS servers on 10.55.1.x); be sure to select /16 for the netmask, and set the MTU to 9000.


Open a shell and make sure you can ping the ESXi host at

Reboot, and make sure the networking and swap settings stick.

15. Hard Drive Identification Setup

Label drives.  FreeNAS is great at detecting bad drives, but it’s not so great at telling you which physical drive is having an issue–it will tell you the serial number and that’s about it.  How confident are you that you’d know which drive failed?  If FreeNAS tells you that disk da3 is having an issue (by the way, all these da numbers can change randomly), how do you know which drive to pull?  Under Storage, View Disks, you can see the serial number, but that still isn’t entirely helpful because chances are you can’t read the serial number without pulling a drive.  So we need to map drives to slot numbers or labels of some sort.


There are two ways to deal with this.  The first, and my preference, is sas2ircu.  Assuming you connected the cables between the LSI 2308 and the backplane in the proper sequence, sas2ircu will tell you the slot number each drive is plugged into on the LSI controller.  And if you’re using a backplane with an expander that supports SES2, it should also tell you which slots the drives are in.  Try running this command:
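
The command itself is missing from this copy of the post; for an LSI controller in IT mode it would be along the lines of:

```shell
sas2ircu list         # enumerate controllers (index 0 in a single-HBA box)
sas2ircu 0 display    # lists each drive with its enclosure, slot, and serial number
```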


You can see that it tells you the slot number and maps it to the serial number.  If you are confident you know which physical drive each slot number corresponds to, you should be okay.

If not, the second method is to remove all the drives from the LSI controller, insert just the first drive, and label it Slot 0 in the GUI by clicking on the drive, Edit, and entering a Description.



Put the next drive in Slot 1 and label it, then insert the next drive and label it Slot 2, and so on…

The Description will show up in FreeNAS and survive reboots.  It will also follow the drive even if you move it to a different slot, so it may be more appropriate to make your description match a label on the removable tray rather than the bay number.

It doesn’t matter whether you label the drives or use sas2ircu; just make sure you’re confident you can map a serial number to a physical drive before going forward.

16.  Create the Pool.

Storage, Volumes, Volume Manager.

Click the + next to your HDDs and add them to the pool as RAID-Z2.

Click the + next to the SSDs and add them to the pool.  By default the SSDs will be in one row and two columns, which creates a mirror.  If you want a stripe, just add one log device now and add the second one later.  Make certain that you change the dropdown on the SSDs to “Log (ZIL)”–it seems to lose this setting any time you make other changes, so change that setting last.  If you do not do this you will stripe the SSDs with the HDDs and possibly create a situation where any one drive failure can result in data loss.


Go back to Volume Manager and add the second log device…


I have on numerous occasions had the log get changed to Stripe after I set it to Log, so double-check by clicking on the top-level tank, then the Volume Status icon, and make sure it looks like this:
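
You can also verify from the shell that the SSDs really landed under the log section rather than in the data stripe (pool name assumed):

```shell
# The SSDs should appear under a separate "logs" heading,
# not alongside the raidz2 data vdev
zpool status tank
```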


17.  Create an NFS Share for VMware

You can create either an NFS share, or iSCSI share (or both) for VMware.  First here’s how to setup an NFS share:

Storage, Volumes, select the nested tank, Create Dataset.

Be sure to disable atime.


Sharing, NFS, Add Unix (NFS) Share.  Add the vmware_nfs dataset, grant access to the storage network, and map the root user to root.


Answer yes to enable the NFS service.

In VMware, Configuration, Add Storage, Network File System and add the storage:


And there’s your storage!


18.  Create an iSCSI share for VMware

Note that at this time, based on some of the comments below about connection-drop issues on iSCSI, I suggest testing with heavy concurrent loads to make sure it’s stable.  Watch dmesg and /var/log/messages on FreeNAS for iSCSI timeouts.  Personally I use NFS.  But here’s how to enable iSCSI:

Storage, select the nested tank, Create zvol.  Be sure compression is set to lz4, and check Sparse Volume.  Choose advanced mode and optionally change the default block size.  I use a 64K block size based on some benchmarks I’ve done comparing 16K (the default), 64K, and 128K.  64K blocks didn’t really hurt random I/O but helped some on sequential performance, and they also give a better compression ratio.  128K blocks had the best compression ratio, but random I/O started to suffer, so I think 64K is a good middle ground.  Various workloads will probably benefit from different block sizes.
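
From the shell, the equivalent zvol creation would look something like this (the name and size are examples, not from the original post):

```shell
# Sparse (-s) 64K-block zvol with lz4 compression for the iSCSI extent
zfs create -s -V 500G -o volblocksize=64K -o compression=lz4 tank/vmware_iscsi
```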


Sharing, Block (iSCSI), Target Global Configuration.

Set the base name to something sensible like:  Set the Pool Available Space Threshold to 60%.


Portals tab… add a portal on the storage network.


Initiator.  Add Initiator.


Targets.  Add Target.


Extents.  Add Extent.


Associated Targets.  Add Target / Extent.


Under Services enable iSCSI.

In VMware: Configuration, Storage Adapters, Add Adapter, iSCSI.

Select the iSCSI Software Adapter in the adapters list and choose Properties, then the Dynamic Discovery tab, Add…


Close and re-scan the HBA / Adapter.

You should see your iSCSI block device appear…


Configuration, Storage, Add Storage, Disk/LUN, and select the FreeBSD iSCSI Disk.


19.  Setup ZFS VMware-Snapshot coordination.

Storage, VMware-Snapshot, Add VMware-Snapshot.  Map your ZFS dataset to the VMware datastore.



20. Periodic Snapshots

Add periodic snapshot jobs for your VMware storage under Storage, Periodic Snapshot Tasks.  You can set up different snapshot jobs with different retention policies.


21. ZFS Replication

If you have a second FreeNAS server (say you can replicate the snapshots over to it.  On, Replication Tasks, View Public Key, and copy the key to the clipboard.

On the server you’re replicating to,, go to Account, View Users, root, Modify User, and paste the public key into the SSH Public Key field.  Also create a dataset called “replicated”.

Back on

Add Replication.  Do an SSH keyscan.


And repeat for any other datasets.  Optionally, you could just replicate the entire pool with the recursive option.

22.  Automatic Shutdown on UPS Battery Failure (Work in Progress).

The goal is that on power loss, before the battery dies, all the VMware guests including FreeNAS are shut down.  So far all I have gotten working is the APC talking to VMware.  Edit the VM settings and add a USB controller, then add a USB device and select the UPS–in my case an APC Back-UPS ES 550G.  Power FreeNAS back on.

On the shell type:

dmesg|grep APC

This will tell you where the APC device is.  In my case it’s showing up on ugen0.4.  I ended up having to grant world access to the UPS…

For some reason I could not get the GUI to connect to the UPS–I can select ugen0.4, but the drivers dropdown just shows hyphens (——)… so I set it manually in /usr/local/etc/nut/ups.conf.
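
A minimal ups.conf for a USB APC unit might look like this (the section name and driver here are assumptions based on NUT’s usbhid-ups driver, not copied from the original post):

```shell
# /usr/local/etc/nut/ups.conf
[apcups]
        driver = usbhid-ups
        port = auto
        desc = "APC Back-UPS ES 550G"
```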

However, this file gets overwritten on reboot, and the rc.conf setting doesn’t seem to stick either.  I added this tunable to get the rc.conf setting…


And I created my ups.conf file in /mnt/tank/ups.conf.  Then I created a script to stop the nut service, copy my config file into place, and restart the nut service, in /mnt/tank/
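
The script referenced above (its name is truncated in this copy of the post) would be a few lines like:

```shell
#!/bin/sh
# Restore the custom ups.conf that FreeNAS overwrites on boot, then restart NUT
service nut stop
cp /mnt/tank/ups.conf /usr/local/etc/nut/ups.conf
service nut start
```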

Then under Tasks, Init/Shutdown Scripts, I added a task to run the script post-init.


The next step is to configure automatic shutdown of the VMware server and all guests on it…  I have not done this yet.

There are a couple of approaches to take here.  One is to install a NUT client on ESXi, and the other is to have FreeNAS SSH into VMware and tell it to shut down.  I may update this section later if I ever get around to implementing it.

23. Backups.

Before going live make sure you have adequate backups!  You can use ZFS replication with a fast link.  For slow network connections rsync will work better (look under Tasks -> Rsync Tasks), or use a cloud service like CrashPlan.

Setup Complete… mostly.

Well, that’s all for now.

Best Hard Drives for ZFS Server 2015

Today’s question comes from Jeff… What drives should I buy for my ZFS server?

Here’s what I recommend, considering a balance of cost per TB, performance, and reliability.  I prefer enterprise-grade or NAS-class drives since they are designed to run 24/7 and are better at tolerating vibration from other drives.  These are all SATA; SAS drives would be better in some designs (especially when using expanders), but for a storage server with around 8 drives I think these are the best options.

Prices are as of the last update of this post.

Updated: July 19, 2015 – Added quieter HGST, and updated prices.

2TB Drives – $35/TB

HGST 2TB OEM drives are very inexpensive right now, surprisingly beating the cost per TB of larger drives.  They won’t carry the HGST 5-year warranty, but you can usually get a 1-year warranty from the seller.  HGST drives are reliable, so the lower cost probably justifies the shorter warranty.  2TB HGST drives also boast an MTBF of 2 million hours!

HGST Ultrastar 7K3000 2TB 64MB Cache 7200RPM SATA III (HUA723020ALA641).  Enterprise grade.  1-year warranty.  $70, $35/TB.  Note that this drive is a little noisy on seeks–with six of them going I found them a little loud in my office.

HGST Deskstar 7K4000 2TB 64MB Cache 7200RPM SATA III (DS724020ALE640).  Desktop grade.  1-year warranty.  $70, $35/TB.  If noise is not a concern I’d opt for the Ultrastar, but if it is, this Deskstar is nearly silent.  It isn’t an enterprise-class drive, though the internals are nearly identical (and might be identical).  TLER is disabled by default but, unlike on most desktop drives, it can be enabled manually.

Enabling CCTL/TLER

Time-Limited Error Recovery (TLER) or Command Completion Time Limit (CCTL).

Desktop-class drives such as the Deskstar typically aren’t run in RAID, so by default they are configured to take as long as needed (sometimes several minutes) to try to recover a bad sector of data.  That’s what you’d want on a desktop, but performance grinds to a halt during recovery, which can cause your ZFS server to hang for several minutes waiting on a single drive.  If you already have ZFS redundancy, it’s a pretty low risk to tell the drive to give up after a few seconds and let ZFS rebuild the data.

The basic rule of thumb: if you’re running RAID-Z you can only lose one drive, so I’d be a little cautious about enabling TLER.  If you’re running RAID-Z2 or RAID-Z3 you can lose two or three drives, so in that case there’s very little risk in enabling it.

Viewing the TLER setting:
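A sketch using smartmontools, which ships with FreeNAS and most Linux distros (substitute your actual device for /dev/sdX):

```sh
# Show the current SCT Error Recovery Control (TLER/CCTL) read/write timeouts
smartctl -l scterc /dev/sdX
```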

Enabling TLER
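Assuming the drive supports SCT ERC, something like the following sets the read and write recovery timeouts (the values are in tenths of a second, so 70 means 7 seconds):

```sh
# Give up on a bad sector after 7 seconds and let ZFS rebuild the data
smartctl -l scterc,70,70 /dev/sdX
```

Note that on most drives this setting does not survive a power cycle, so you'd want to run it from a startup script.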

Disabling TLER
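And to turn it back off (again, /dev/sdX is a placeholder):

```sh
# 0,0 disables the time limit so the drive retries as long as it needs
smartctl -l scterc,0,0 /dev/sdX
```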

(TLER should always be disabled if you have no redundancy).

3TB, 4TB, 5TB, and 6TB Drives $40/TB to $50/TB

On the larger drives I'd purchase either the HGST Deskstar NAS or the WD Red series.  Both are designed for 24/7 operation and for use in systems with up to 8 bays.


HGST Deskstar NAS 64MB Cache 7200RPM SATA III.  3-year warranty.  The main advantage of this drive is that it's faster at 7200 RPM, and as a result it significantly outperforms the WD Red; see StorageReview's benchmarks on the 4TB Deskstar.  Also, at 5TB and 6TB the cache doubles to 128MB.  In general, if the price is the same or pretty close I'd prefer the HGST drive.


WD Red NAS 64MB Cache ~5400RPM SATA III.  3-year warranty.  The WD drive runs a little cheaper; if it's priced below the HGST by more than $5/TB I would consider this drive to save a little money.

8TB+ drives

The current 8TB and larger drives all use SMR (Shingled Magnetic Recording), which shouldn't be used with ZFS if you care about performance, at least until drivers are developed.  The drives may be tolerable for backups, but I'd still play it safe and stick with 6TB and under.


For SLOG and L2ARC see my comparison of SSDs.

ZFS Drive Configurations

My preference is almost always RAID-Z2 (RAID-6) with 6 to 8 drives, which provides a storage efficiency of 0.66 to 0.75.  This scales pretty well as far as capacity is concerned: 6 drives in RAID-Z2 nets 8TB of usable capacity with 2TB drives, all the way up to 24TB with 6TB drives.  For larger setups use multiple vdevs, e.g. with 60 bays use 10 six-drive RAID-Z2 vdevs (each vdev increases IOPS).  For smaller setups you can run 3 or 4 drives in RAID-Z (RAID-5), but I prefer to have double parity when possible.  In both cases it's essential to have backups.
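As a sketch, a six-drive RAID-Z2 pool on FreeBSD/FreeNAS might look like this (the pool name "tank" and the da0–da11 device names are just examples):

```sh
# 6-drive RAID-Z2: any two drives can fail without data loss
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# For a bigger chassis, keep adding 6-drive RAID-Z2 vdevs to the same pool;
# each additional vdev increases the pool's IOPS
zpool add tank raidz2 da6 da7 da8 da9 da10 da11
```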

Port Forwarding with Verizon Wireless NAT

I thought I'd do a followup to my last post, because this is another issue with Verizon Wireless: sometimes you need to forward ports to devices on your LAN, and that's impossible when you're behind a Verizon Wireless NAT.

But it is possible to create a port forward by using ssh to build a reverse tunnel from a remote server back to your house.  You can do this easily with a $5/month VPS.


Sign up for a cheap cloud server / VPS (Virtual Private Server).  What you want is a VPS near the location where your Verizon connection routes out to the internet.  You can figure this out using mtr.  E.g.
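Something like this works (8.8.8.8 is just a convenient public target to trace toward):

```sh
# Print a 10-cycle traceroute report; the hops after Verizon's network
# show roughly where your connection exits to the internet
mtr --report --report-cycles 10 8.8.8.8
```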


As you can see from the traceroute, my Verizon Wireless connection usually routes out through Seattle.  Vultr has quite a few locations, including Seattle, so I set up a VPS there.  You should look at the best VPS provider for your location, but if you decide to use Vultr, use this link to sign up and I'll get $10 (two months of free port forwarding).

The OS/distro doesn't matter too much; I've done it with both FreeBSD and Ubuntu.

Login to your VPS server, edit /etc/ssh/sshd_config and enable GatewayPorts…
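In /etc/ssh/sshd_config:

```
# Allow remotely forwarded ports to bind to the VPS's public interface
GatewayPorts yes
```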

Restart ssh
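The service name varies by OS; one of these should do it:

```sh
sudo service ssh restart     # Ubuntu
sudo service sshd restart    # FreeBSD / CentOS
```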

Now, you need a Linux/FreeBSD server on your LAN.  I’ve got an Ubuntu VM under VMware named “wormhole” for this purpose.  On wormhole generate some ssh keys.
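For example (press Enter through the prompts; use an empty passphrase so the tunnel can start unattended from cron):

```sh
ssh-keygen -t rsa -b 4096
```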

Then append the public key (/root/.ssh/id_rsa.pub) on wormhole to /root/.ssh/authorized_keys on your VPS.  At this point you should be able to ssh into your VPS from your wormhole VM without using a password.  You'll need to do it once manually to accept the key fingerprint.
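ssh-copy-id automates appending the key (the hostname here is a placeholder for your VPS):

```sh
# Copies your public key into the VPS's authorized_keys and lets you
# accept the host key fingerprint on the first connection
ssh-copy-id root@your-vps.example.com
```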

On “wormhole”, make sure autossh is installed (apt-get install autossh) and create a file called /etc/cron.d/autossh

Here's a quick example to forward two ports.  The first line forwards the Minecraft port, and the second forwards port 8443 on the VPS to port 443 on a server on your network.
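A sketch of /etc/cron.d/autossh; the LAN IPs and VPS hostname are placeholders for your own (Minecraft's default port is 25565):

```
@reboot root autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -R 25565:192.168.1.50:25565 root@your-vps.example.com
@reboot root autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -R 8443:192.168.1.51:443 root@your-vps.example.com
```

-M 0 disables autossh's separate monitoring port and relies on the ServerAliveInterval keepalives to detect and restart dead connections.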

After saving the file give it executable permissions…
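For example:

```sh
chmod 755 /etc/cron.d/autossh
```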

Then reboot to make sure the connections establish.  Now you should be able to connect a Minecraft client to your VPS and have the connection forwarded to your LAN.  If you can't, check the cron logs, check root's mail for any errors, and run ps aux to make sure autossh is running.

Autossh is pretty resilient; it will automatically reconnect after connection drops and the like.  I don't think I've ever had to restart autossh manually.

As a bonus, you could install SoftEther VPN on your VPS and use it to compress your connection to save bandwidth and increase speed.

VPN into LAN behind Verizon Wireless NAT using SoftEther

Today's question is from Tom:

Hey Ben,

Could you share a post or details on how you configured your SoftEther VPN in order to reach the internal network from the outside, on Verizon? I'm in the same predicament, with an unlimited 4G connection, but am unable to reach files due to Verizon's 4G NAT firewall. If you have some time, after the holidays, would you be so kind as to publish a write-up? Right now I am connecting to a Private Internet Access VPN on my local machine in order to increase download speeds.


Hi, Tom.  I set up a SoftEther VPN server on my LAN in a VMware VM, but you can also run SoftEther on your desktop or on pretty much any server.  Here's how mine is set up in ESXi 5.5.

Enable Promiscuous Mode on the VMware vSwitch that’s connected to the network that you will VPN into (most likely you only have one vSwitch in VMware) by going to Configuration, Networking, vSwitch Properties, choose vSwitch, Edit, Security tab, and change Promiscuous Mode to Accept.


Create a VM for SoftEther.  You can use just about any OS; however, SoftEther says it works best with Linux and recommends a RHEL-compatible OS.  I've built it on Ubuntu 14.04 without any issues, but for this post I'll show how to do it with CentOS 7.  Here are my VM settings…


Pretty standard CentOS 7 install; choose Infrastructure Server and Development Tools.  And of course don't forget to configure and enable networking before hitting Begin Installation… I always seem to miss that.


Install updates…

# yum upgrade

Disable SELinux by setting SELINUX=disabled in /etc/selinux/config and then reboot, or disable it for the running session without a reboot.
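To switch SELinux off immediately (permissive mode) without rebooting:

```sh
# Takes effect right away but does not persist across reboots
setenforce 0
```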

Disable the firewall…
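On CentOS 7 that's firewalld:

```sh
systemctl stop firewalld       # stop it now
systemctl disable firewalld    # keep it from starting at boot
```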

Follow the SoftEther Install on Linux and Initial Configurations document.   By the end of that document you should have a running SoftEther service but it still won’t be configured.

Download the SoftEther Server Manager for Windows and connect to your VM… the first time you connect you’ll be prompted to set an Administrator password.


And you’ll be presented with a Setup Wizard…



SoftEther can do dynamic DNS if you like, so you can pick a sub-domain…


You can optionally enable IPsec / L2TP, but it's not needed if you'll be using the SoftEther client.

You can also enable SoftEther’s free VPN Azure service.  This is a nice backup if you can’t connect directly using NAT traversal.


Then create a user and set the local bridge to the network adapter on the network that you want to be able to access.


Now on the client…

You can connect to the DDNS hostname you picked, which resolves to your Verizon Wireless external IPv4 address… By default SoftEther continually sends out UDP packets to traverse the NAT, so when a client attempts to connect, it follows the packets back through.

Sometimes this UDP hole-punching technique doesn't work for NAT traversal; I've noticed issues when the VPN client is also behind a NAT or on a restricted network like at a hotel.  That's what the VPN Azure address is for: SoftEther maintains a reverse tunnel by connecting to the VPN Azure service, so connecting through your VPN Azure hostname will relay your connection back to your VPN server.  I don't think it matters what port you connect on; I usually use 5555, but some networks block that port, in which case I'll use 443.


A couple of other settings you may be interested in: under Advanced Settings I usually check "Use Data Compression" to speed things up a bit.  And if all you're using your VPN for is accessing resources on your network, and not tunneling all your internet traffic, you can check "No Adjustments of Routing Table", which prevents your internet traffic from being routed through the VPN.

Hope that helps.