FreeNAS 9.10 on VMware ESXi 6.0 Guide

This guide walks through installing FreeNAS 9.10 under VMware ESXi and then using ZFS to share the storage back to VMware.  It is roughly based on Napp-It’s All-In-One design, except that it uses FreeNAS instead of OmniOS.


This post has had over 160,000 visitors, and thousands of people have used this setup in their homelabs and small businesses.  I should note that I myself would not run FreeNAS virtualized in a production environment, but many have done so successfully.  If you run into any problems and ask for help on the FreeNAS forums, I have no doubt that Cyberjock will respond with “So, you want to lose all your data?”  So, with that disclaimer aside, let’s get going:

This guide was originally written for FreeNAS 9.3; I’ve updated it for FreeNAS 9.10.  I believe the Avago LSI P20 firmware bugs have been fixed and the firmware has been around long enough to be considered stable, so I’ve removed my warning about using P20.  I’ve also added sections 7.1 (resource reservations) and 16.1 (zpool layouts) and made some other minor updates.

1. Get proper hardware

Example 1: Supermicro 2U Build
SuperMicro X10SL7-F (which has a built in LSI2308 HBA).
Xeon E3-1240v3
ECC Memory
6 hotswap bays with 2TB HGST HDDs (I use RAID-Z2)
4 2.5″ hotswap bays.  2 Intel DC S3700s for SLOG / ZIL, and 2 drives for installing FreeNAS (mirrored)

Example 2: Mini-ITX Datacenter in a Box Build
X10SDV-F (built-in Xeon D-1540 8-core Broadwell)
ECC Memory
IBM 1015 / LSI 9220-8i HBA
4 hotswap bays with 2TB HGST HDDs (I use RAID-Z)
2 Intel DC S3700s: one for SLOG / ZIL, and one to boot ESXi and install FreeNAS to.

Hard drives.  See info on my Hard Drives for ZFS post.

The LSI2308/M1015 has 8 ports.  I like to use two DC S3700s for a striped SLOG device and then do a RAID-Z2 of spinners on the other 6 slots.  Also get one drive (preferably two for a mirror) that you will plug into the SATA ports (not on the LSI controller) for the local ESXi datastore.  I’m using DC S3700s because that’s what I have, but this doesn’t need to be fast storage; it’s just to put FreeNAS on.

2. Flash HBA to IT Firmware

As of FreeNAS 9.3.1 you should flash IT-mode P20 firmware (it looks like P21 is out now, but it’s not yet available from every vendor).

I strongly suggest pulling all drives before flashing.

 LSI 2308 IT firmware for Supermicro

Here are instructions to flash the firmware:

Supermicro firmware:

For IBM M1015 / LSI Avago 9220-8i

Instructions for flashing firmware:

LSI / Avago Firmware:

If you already have the card passed through to FreeNAS via VT-d (steps 6-8) you can actually flash the card from FreeNAS using the sas2flash utility with the steps below (in this example my card is already in IT mode so I’m just upgrading it):

[root@freenas] # cd /root/
[root@freenas] # mkdir m1015
[root@freenas] # cd m1015/
[root@freenas] # wget
[root@freenas] # chmod +x /usr/local/sbin/sas2flash
[root@freenas] # sas2flash -o -f Firmware/HBA_9211_8i_IT/2118it.bin -b sasbios_rel/mptsas2.rom
LSI Corporation SAS2 Flash Utility
Version (2013.03.01)
Copyright (c) 2008-2013 LSI Corporation. All rights reserved

Advanced Mode Set

Adapter Selected is a LSI SAS: SAS2008(B2)

Executing Operation: Flash Firmware Image

Firmware Image has a Valid Checksum.
Firmware Version
Firmware Image compatible with Controller.

Valid NVDATA Image found.
NVDATA Version
Checking for a compatible NVData image...

NVDATA Device ID and Chip Revision match verified.
NVDATA Versions Compatible.
Valid Initialization Image verified.
Valid BootLoader Image verified.

Beginning Firmware Download...
Firmware Download Successful.

Verifying Download...

Firmware Flash Successful.

Resetting Adapter...

(Wait a few minutes.  At this point FreeNAS crashed on me; power off FreeNAS, and then reboot VMware.)

Warning on P20 buggy firmware:

Some earlier versions of the P20 firmware were buggy, so make sure it’s version P20.00.04.00 or later.  If you can’t get a P20 version of at least P20.00.04.00, then use P19 or P16.

3. Optional: Over-provision ZIL / SLOG SSDs.

If you’re going to use SSDs for SLOG you can over-provision them.  You can boot into an Ubuntu LiveCD and use hdparm, instructions are here:  You can also do this after VMware is installed by passing the LSI controller to an Ubuntu VM (FreeNAS doesn’t have hdparm).  I usually over-provision down to 8GB.
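If it helps, the hdparm invocation looks roughly like this.  This is a sketch assuming an Ubuntu LiveCD where the SSD shows up as /dev/sdb; both the device name and the target size are assumptions, so verify which disk is your SLOG SSD before running anything destructive:

```shell
DEV=/dev/sdb        # hypothetical: the SLOG SSD as seen from the LiveCD
TARGET_GIB=8        # visible capacity after over-provisioning

# hdparm -Np takes the new visible size in 512-byte sectors.
SECTORS=$((TARGET_GIB * 1024 * 1024 * 1024 / 512))
echo "hdparm -Np${SECTORS} --yes-i-know-what-i-am-doing ${DEV}"
```

The `--yes-i-know-what-i-am-doing` flag is required because shrinking the visible size (a Host Protected Area change) is destructive to anything past the new boundary.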

Update 2016-08-10: You may want to only go down to 20GB depending on your setup!  One of my colleagues discovered that with 8GB over-provisioning the SLOG wasn’t even maxing out a 10Gb network (remember, every write to VMware is a sync write so it hits the ZIL no matter what) with 2 x 10Gb fiber lagged connections between VMware and FreeNAS.  This was on an HGST 840z so I’m not sure if the same holds true for the Intel DC S3700… and it wasn’t a virtualized setup.  But I thought I’d mention it here.

4. Install VMware ESXi 6

The free version of the hypervisor is here.  I usually install it to a USB drive plugged into the motherboard’s internal header.

Under Configuration, Storage, click Add Storage.  Choose one (or two) of the local storage disks plugged into your SATA ports (do not add a disk on your LSI controller).

5. Create a Virtual Storage Network.

For this example there are three addresses to keep track of: the VMware management IP, the VMware Storage Network IP, and the FreeNAS Storage Network IP.

Create a virtual storage network with jumbo frames enabled.

VMware, Configuration, Add Networking. Virtual Machine…

Create a standard switch (uncheck any physical adapters).


Add Networking again, VMKernel, VMKernel…  Select vSwitch1 (which you just created in the previous step) and give it a network different from your main network.  I use a dedicated subnet for my storage, so enter the appropriate IP and netmask for yours.


Some people are having trouble with an MTU of 9000.  I suggest leaving the MTU at 1500 and make sure everything works there before testing an MTU of 9000.  Also, if you run into networking issues look at disabling TSO offloading (see comments).
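One way to verify jumbo frames end to end is a do-not-fragment ping with the largest payload that fits in a 9000-byte frame.  The commands below are the standard ones on each side (vmkping on ESXi, ping on FreeNAS); the IP placeholders are yours to fill in:

```shell
MTU=9000
OVERHEAD=$((20 + 8))            # IPv4 header (20 bytes) + ICMP header (8 bytes)
PAYLOAD=$((MTU - OVERHEAD))     # largest ICMP payload that fits in one jumbo frame

# On the ESXi shell (-d sets don't-fragment):
echo "vmkping -d -s ${PAYLOAD} <freenas-storage-ip>"

# On FreeNAS/FreeBSD (-D sets don't-fragment):
echo "ping -D -s ${PAYLOAD} <esxi-storage-ip>"
```

If an 8972-byte do-not-fragment ping fails while a plain ping works, some hop in the path (vSwitch, port, or guest NIC) is still at MTU 1500.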

Under vSwitch1 go to Properties, select vSwitch, Edit, change the MTU to 9000.  Answer yes to the no active NICs warning.


Then select the Storage Kernel port, edit, and set the MTU to 9000.


6. Configure the LSI 2308 for Passthrough (VT-d).

Configuration, Advanced Settings, Configure Passthrough.


Mark the LSI2308 controller for passthrough.


You must have VT-d enabled in the BIOS for this to work, so if it won’t let you for some reason, check your BIOS settings.

Reboot VMware.

7. Create the FreeNAS VM.

Download the FreeNAS ISO.

Create a new VM, choose custom, put it on one of the drives on the SATA ports, Virtual Machine version 11, Guest OS type is FreeBSD 64-bit, 1 socket and 2 cores.  Try to give it at least 8GB of memory.  On Networking give it two adapters, the 1st NIC should be assigned to the VM Network, 2nd NIC to the Storage network.  Set both to VMXNET3.


SCSI controller should be the default, LSI Logic Parallel.

Choose Edit the Virtual Machine before completion.

If you have a second local drive (not one that you’ll use for your zpool) here you can add a second boot drive for a mirror.

Before finishing the creation of the VM click Add, select PCI Devices, and choose the LSI 2308.


And be sure to go into the CD/DVD drive settings and set it to boot off the FreeNAS iso.  Then finish creation of the VM.

7.1 FreeNAS VM Resource allocation

Also, since FreeNAS will be driving the storage for the rest of VMware, it’s a good idea to make sure it has a higher priority for CPU and Memory than other guests.  Edit the virtual machine, under Resources set the CPU Shares to “High” to give FreeNAS a higher priority, then under Memory allocation lock the guest memory so that VMware doesn’t ever borrow from it for memory ballooning.  You don’t want VMware to swap out ZFS’s ARC (memory read cache).



8. Install FreeNAS.

Boot up the VM and install FreeNAS to your SATA drive (or two of them for a mirrored boot).


After it’s finished installing reboot.

9. Install VMware Tools.

SKIP THIS STEP.  As of FreeNAS 9.10.1, installing VMware Tools should no longer be necessary; you can skip step 9 and go to 10.  I’m just leaving this here for historical purposes.

In VMware right-click the FreeNAS VM,  Choose Guest, then Install/Upgrade VMware Tools.  You’ll then choose interactive mode.

Mount the CD-ROM and copy the VMware install files to FreeNAS:

# mkdir /mnt/cdrom
# mount -t cd9660 /dev/iso9660/VMware\ Tools /mnt/cdrom/
# cp /mnt/cdrom/vmware-freebsd-tools.tar.gz /root/
# tar -zxmf vmware-freebsd-tools.tar.gz
# cd vmware-tools-distrib/lib/modules/binary/FreeBSD9.0-amd64
# cp vmxnet3.ko /boot/modules

Once installed, navigate to the WebGUI.  It starts out presenting a wizard; I usually set my language and timezone then exit the rest of the wizard.

Under System, Tunables
Add a Tunable.  The Variable should be vmxnet3_load, the Type should be Loader, and the Value YES.

Reboot FreeNAS.  On reboot you should notice that the VMXNET3 NICs now work (except the NIC on the storage network can’t find a DHCP server, but we’ll set it to static later), and you should notice that VMware now reports VMware Tools as installed.


If all looks well shutdown FreeNAS (you can now choose Shutdown Guest from VMware to safely power it off), remove the E1000 NIC and boot it back up (note that the IP address on the web gui will be different).

10.  Update FreeNAS

Before doing anything, let’s upgrade FreeNAS to the latest stable release under System, Update.

This is a great time to make some tea.

Once that’s done it should reboot.  Then I always go back and check for updates once more to make sure there’s nothing left.

11. SSL Certificate on the Management Interface (optional)

On my DHCP server I’ll give FreeNAS a static/reserved IP and set up an entry for it on my local DNS server, so for this example I’ll have a DNS entry for the FreeNAS host on my internal network.

If you don’t have your own internal Certificate Authority you can create one right in FreeNAS:

System, CAs, Create internal CA.  Increase the key length to 4096 and make sure the Digest Algorithm is set to SHA256.
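For reference, what the GUI does here is roughly equivalent to generating a 4096-bit, SHA-256-signed self-signed root certificate, which you can sketch with openssl (the file names and subject below are arbitrary; FreeNAS stores its CAs in its own config database, not in files like these):

```shell
# Generate a self-signed root CA: 4096-bit RSA key, SHA-256 signature,
# valid for 10 years.  -nodes leaves the key unencrypted on disk.
openssl req -x509 -newkey rsa:4096 -sha256 -nodes -days 3650 \
    -keyout ca.key -out ca.crt -subj "/CN=Homelab Root CA"

# Confirm the key length and digest match what we asked for.
openssl x509 -in ca.crt -noout -text | grep -E "Public-Key|Signature Algorithm"
```

The same check (open the certificate and look at its Public-Key size and Signature Algorithm) works on the certificate FreeNAS exports, which is a quick way to confirm the GUI honored the 4096/SHA256 settings.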


Click on the CA you just created and hit the Export Certificate button, then click the downloaded file to install the root certificate on your computer.  You can install it either just for your profile or for the local machine (I usually do local machine), and you’ll want to make sure to store it in the Trusted Root Certificate Authorities store.



Just a warning: you must keep this root CA guarded.  If an attacker were to gain access to it, he could generate certificates to impersonate anyone (including your bank) and mount a MITM attack.

Also Export the Private Key of the CA and store it some place safe.

Now create the certificate…

System, Certificates, Create Internal Certificate.  Once again bump the key length to 4096.  The important part here is the Common Name must match your DNS entry.  If you are going to access FreeNAS via IP then you should put the IP address in the Common Name field.


System, Information.  Set the hostname to your dns name.

System, General.  Change the protocol to HTTPS and select the certificate you created.  Now you should be able to use HTTPS to access the FreeNAS WebGUI.

12. Setup Email Notifications

Account, Users, Root, Change Email, set to the email address you want to receive alerts (like if a drive fails or there’s an update available).

System, Advanced

Show console messages in the footer: enable it (I find it useful).

System Email…

Fill in your SMTP server info… and send a test email to make sure it works.

13.  Setup a Proper Swap

FreeNAS by default creates a swap partition on each drive, and then stripes the swap across them so that if any one drive fails there’s a chance your system will crash.  We don’t want this.

System, Advanced…

“Swap size on each drive in GiB, affects new disks only.  Setting this to 0 disables swap creation completely (STRONGLY DISCOURAGED).”  Set this to 0 anyway; we’ll create a proper swap file instead.

Open the shell.  The following will create a 4GB swap file:

dd if=/dev/zero of=/usr/swap0 bs=1m count=4096
chmod 0600 /usr/swap0

If you are on FreeNAS 9.10

System, Tasks, Add Init/Shutdown Script, Type=Command.  Command:

echo "md99 none swap sw,file=/usr/swap0,late 0 0" >> /etc/fstab && swapon -aL

When = Post Init


If you are on FreeNAS 9.3

System, Tunables, Add Tunable.

Variable=swapfile, Value=/usr/swap0, Type=rc.conf


Back to Both:

Next time you reboot on the left Navigation pane click Display System Processes and make sure the swap shows up.  If so it’s working.


14. Configure FreeNAS Networking

Setup the Management Network (which you are currently using to connect to the WebGUI).

Network, Interfaces, Add Interface, choose the Management NIC, vmx3f0, and set to DHCP.


Setup the Storage Network

Add Interface, choose the Storage NIC, vmx3f1, and set a static IP (I set up my VMware hosts on 10.55.0.x and ZFS servers on 10.55.1.x); be sure to select /16 for the netmask.  And set the MTU to 9000.


Open a shell and make sure you can ping the ESXi host’s storage network IP.

Reboot.  Let’s make sure the networking and swap stick.

15. Hard Drive Identification Setup

Label Drives.   FreeNAS is great at detecting bad drives, but it’s not so great at telling you which physical drive is having an issue.  It will tell you the serial number and that’s about it.  But how confident are you in knowing which drive fails?  If FreeNAS tells you that disk da3 (by the way, all these da numbers can change randomly) is having an issue how do you know which drive to pull?  Under Storage, View Disks, you can see the serial number, this still isn’t entirely helpful because chances are you can’t see the serial number without pulling a drive.  So we need to map them to slot numbers or labels of some sort.


There are two ways you can deal with this.  The first, and my preference, is sas2ircu.  Assuming you connected the cables between the LSI 2308 and the backplane in proper sequence sas2ircu will tell you the slot number the drives are plugged into on the LSI controller.  Also if you’re using a backplane with an expander that supports SES2 it should also tell you which slots the drives are in.  Try running this command:

# sas2ircu 0 display|less


You can see that it tells you the slot number and maps it to the serial number.  If you are comfortable that you know which physical drive each slot number is in then you should be okay.
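Since the full `sas2ircu 0 display` output is verbose, a little awk can condense it to slot → serial pairs.  The sample text below is a stand-in for real output (the relevant lines look like `Slot # : N` and `Serial No : XXXX`); on FreeNAS you would pipe sas2ircu itself into the awk command:

```shell
# Stand-in for `sas2ircu 0 display` output; field spacing is illustrative.
sample='  Slot #                    : 0
  Serial No                 : SN0001
  Slot #                    : 1
  Serial No                 : SN0002'

# Pair each Slot # line with the Serial No line that follows it.
printf '%s\n' "$sample" | awk -F: '
    /Slot #/    { slot = $2; gsub(/ /, "", slot) }
    /Serial No/ { sn = $2; gsub(/ /, "", sn); print "slot " slot ": " sn }'
```

On the real system the invocation would be `sas2ircu 0 display | awk -F: '...'` with the same awk body, giving you a compact slot-to-serial map to print out and tape inside the chassis lid.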

If not, the second method is to remove all the drives from the LSI controller, then insert just the first drive and label it Slot 0 in the GUI by clicking on the drive, Edit, and entering a Description.



Put in the next drive in Slot 1 and label it, then insert the next drive and label it Slot 2 and so on…

The Description will show up in FreeNAS and will survive reboots.  It will also follow the drive even if you move it to a different slot, so it may be more appropriate to make your description match a label on the removable trays rather than the bay number.

It doesn’t matter if you label the drives or use sas2ircu, just make sure you’re confident that you can map a serial number to a physical drive before going forward.

16.1 Choose Pool Layout

For high performance the best configuration is to maximize the number of vdevs by creating mirrors (essentially RAID-10).  That said, with my 6-drive RAID-Z2 array and 2 DC S3700 SSDs for SLOG/ZIL, my setup performs very well with VMware in my environment.  If you’re running heavy random I/O, mirrors are more important; but if you’re just running a handful of VMs, RAID-Z / RAID-Z2 will probably offer great performance as long as you have a good SSD for the SLOG device.  I like to start double parity at 5- or 6-disk vdevs, and triple parity at 9 disks.  Here are some sample configurations:

Example zpool / vdev configurations

2 disks = 1 mirror
3 disks = RAID-Z
4 disks = RAID-Z, or 2 mirrors
5 disks = RAID-Z, RAID-Z2, or 2 mirrors with a hot spare
(Don’t configure 5 disks with 4 drives in RAID-Z plus 1 hot spare–that’s just ridiculous.  Make it a RAID-Z2 to begin with.)
6 disks = RAID-Z2, or 3 mirrors
7 disks = RAID-Z2, or 3 mirrors plus a hot spare
8 disks = RAID-Z2, or 4 mirrors
9 disks = RAID-Z3, or 4 mirrors plus a hot spare
10 disks = RAID-Z3, 2 vdevs of 5-disk RAID-Z2, or 5 mirrors
11 disks = RAID-Z3, 2 vdevs of 5-disk RAID-Z2 plus a hot spare, or 5 mirrors with a hot spare
12 disks = 2 vdevs of 6-disk RAID-Z2, or 5 mirrors with 2 hot spares
13 disks = 2 vdevs of 6-disk RAID-Z2 plus a hot spare, or 6 mirrors with a hot spare
14 disks = 2 vdevs of 7-disk RAID-Z2, or 6 mirrors plus 2 hot spares
15 disks = 3 vdevs of 5-disk RAID-Z2, or 7 mirrors with a hot spare
16 disks = 3 vdevs of 5-disk RAID-Z2 plus a hot spare, or 7 mirrors with 2 hot spares
17 disks = 3 vdevs of 5-disk RAID-Z2 plus 2 hot spares, or 7 mirrors with 3 hot spares
18 disks = 2 vdevs of 9-disk RAID-Z3, 3 vdevs of 6-disk RAID-Z2, or 8 mirrors with 2 hot spares
19 disks = 2 vdevs of 9-disk RAID-Z3 plus a hot spare, 3 vdevs of 6-disk RAID-Z2 plus a hot spare, or 8 mirrors with 3 hot spares
20 disks = 2 vdevs of 10-disk RAID-Z3, 4 vdevs of 5-disk RAID-Z2, or 9 mirrors with 2 hot spares

Anyway, that gives you a rough idea.  The more vdevs, the better the random performance.  It’s always a balance between capacity, performance, and safety.
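For readers who think in CLI terms, the 6-disk RAID-Z2 plus mirrored SLOG layout from my build corresponds roughly to the command below.  The da* device names are hypothetical, and FreeNAS expects you to build pools from the GUI (so it can add swap partitions and track the disks), so treat this strictly as illustration:

```shell
# 6 spinners in one RAID-Z2 data vdev, 2 SSDs mirrored as the SLOG.
POOL_CMD="zpool create tank raidz2 da0 da1 da2 da3 da4 da5 log mirror da6 da7"
echo "$POOL_CMD"
```

The key point the syntax makes obvious: everything before `log` is the data vdev, and everything after it is the separate log device, which is exactly the distinction the GUI’s “Log (ZIL)” dropdown controls in section 16.2.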

16.2  Create the Pool.

Storage, Volumes, Volume Manager.

Click the + next to your HDDs and add them to the pool as RAID-Z2.

Click the + next to the SSDs and add them to the pool.  By default the SSDs will be on one row and two columns, which will create a mirror.  If you want a stripe, just add one Log device now and add the second one later.  Make certain that you change the dropdown on the SSD row to “Log (ZIL)”; it seems to lose this setting any time you make other changes, so change that setting last.  If you do not do this you will stripe the SSDs with the HDDs and possibly create a situation where any one drive failure can result in data loss.


Back to Volume manager and add the second Log device…


I have on numerous occasions had the Log get changed to Stripe after I set it to Log, so double-check by clicking on the top-level tank, then the Volume Status icon, and make sure it looks like this:


17.  Create an NFS Share for VMware

You can create either an NFS share, or iSCSI share (or both) for VMware.  First here’s how to setup an NFS share:

Storage, Volumes, Select the nested Tank, Create Data Set

Be sure to disable atime.


Sharing, NFS, Add Unix (NFS) Share.   Add the vmware_nfs dataset, and grant access to the storage network, and map the root user to root.


Answer yes to enable the NFS service.

In VMware, Configuration, Add Storage, Network File System and add the storage:


And there’s your storage!
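The same mount can also be done from the ESXi shell instead of the vSphere client.  The host IP, share path, and datastore name below are assumptions based on the example network and dataset names used in this guide; substitute your own:

```shell
# Mount the FreeNAS NFS export as a datastore from the ESXi command line.
# 10.55.1.2, the share path, and the volume name are hypothetical.
NFS_CMD="esxcli storage nfs add --host 10.55.1.2 --share /mnt/tank/vmware_nfs --volume-name freenas-nfs"
echo "$NFS_CMD"

# Verify afterwards with: esxcli storage nfs list
```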


18.  Create an iSCSI share for VMware

WARNING: At this time, based on some of the comments below from people having connection-drop issues on iSCSI, I suggest testing with heavy concurrent loads to make sure it’s stable.  Watch dmesg and /var/log/messages on FreeNAS for iSCSI timeouts.  Personally I use NFS.  But here’s how to enable iSCSI:

Storage, select the nested tank, Create zvol.  Be sure compression is set to lz4.  Check Sparse Volume.  Choose advanced mode and optionally change the default block size.  I use a 64K block size based on some benchmarks I’ve done comparing 16K (the default), 64K, and 128K.  64K blocks didn’t really hurt random I/O, helped somewhat on sequential performance, and also gave a better compression ratio.  128K blocks had the best compression ratio but random I/O started to suffer, so I think 64K is a good middle ground.  Various workloads will probably benefit from different block sizes.


Sharing, Block (iSCSI), Target Global Configuration.

Set the base name to something sensible like:  Set Pool Available Space Threshold to 60%


Portals tab… add a portal on the storage network.


Initiator.  Add Initiator.


Targets.  Add Target.


Extents.  Add Extent.


Associated Targets.  Add Target / Extent.


Under Services enable iSCSI.

In VMware Configuration, Storage Adapters, Add Adapter, iSCSI.

Select the iSCSI Software Adapter in the adapters list and choose properties.  Dynamic discovery tab.  Add…


Close and re-scan the HBA / Adapter.
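If you prefer the ESXi shell, the rescan is a one-liner (the same command appears later in the comments as a way to automate rescans after FreeNAS boots):

```shell
# Rescan all storage adapters so the new iSCSI LUN shows up.
RESCAN_CMD="esxcli storage core adapter rescan --all"
echo "$RESCAN_CMD"
```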

You should see your iSCSI block device appear…


Configuration, Storage, Add Storage, Disk/LUN, and select the FreeBSD iSCSI Disk.


19.  Setup ZFS VMware-Snapshot coordination.

This will coordinate with VMware to take clean snapshots of the VMs whenever ZFS takes a snapshot of that dataset.

Storage.  Vmware-Snapshot.  Add VMware-Snapshot.  Map your ZFS dataset to the VMware data store.

ZFS / VMware snapshots of NFS example.


ZFS / VMware snapshots of iSCSI example.


20. Periodic Snapshots

Add periodic snapshot jobs for your VMware storage under Storage, Periodic Snapshot Tasks.  You can setup different snapshot jobs with different retention policies.


21. ZFS Replication

If you have a second FreeNAS server, you can replicate the snapshots over to it.  On the first server, under Replication Tasks, click View Public Key and copy the key to the clipboard.

On the server you’re replicating to, go to Account, View Users, root, Modify User, and paste the public key into the SSH Public Key field.  Also create a dataset called “replicated”.

Back on the first server:

Add Replication.  Do an SSH keyscan.


And repeat for any other datasets.  Optionally you could also just replicate the entire pool with the recursive option.

22.  Automatic Shutdown on UPS Battery Failure (Work in Progress)

The goal is, on power loss, to shut down all the VMware guests (including FreeNAS) before the battery fails.  So far all I have gotten working is the APC with VMware.  Edit the VM settings and add a USB controller, then add a USB device and select the UPS, in my case an APC Back-UPS ES 550G.  Power FreeNAS back on.

On the shell type:

dmesg|grep APC

ugen0.4: <APC> at usbus0 

This will tell you where the APC device is.  In my case it’s showing up as ugen0.4.  I ended up having to grant world access to the UPS…

chmod 777 /dev/ugen0.4

For some reason I could not get the GUI to connect to the UPS.  I can select ugen0.4, but the drivers dropdown just shows hyphens… so I set it manually in /usr/local/etc/nut/ups.conf:

# ups.conf requires a bracketed section name; "apc" here is arbitrary
[apc]
driver = usbhid-ups
port = /dev/ugen0.4
desc = "APC 1"

However, this file gets overwritten on reboot, and also the rc.conf setting doesn’t seem to stick.  I added this tunable to get the rc.conf setting…


And I created my ups.conf file at /mnt/tank/ups.conf.  Then I created a script in /mnt/tank/ that stops the nut service, copies my config file into place, and restarts the nut service:

#!/bin/sh
# Stop NUT, restore the ups.conf that FreeNAS overwrites on boot, restart NUT.
service nut stop
cp /mnt/tank/ups.conf /usr/local/etc/nut/ups.conf
service nut start

Then under Tasks, Init/Shutdown Scripts, I added a task to run the script post init.


Next step is to configure automatic shutdown of the VMware server and all guests on it…  I have not done this yet.

There are a couple of approaches to take here.  One is to install a NUT client on the ESXi host, and the other is to have FreeNAS SSH into VMware and tell it to shut down.  I may update this section later if I ever get around to implementing it.

23. Backups

Before going live make sure you have adequate backups!  You can use ZFS replication if you have a fast link.  For slow network connections rsync will work better (look under Tasks -> Rsync Tasks), or use a cloud service like CrashPlan.   Here’s a nice CrashPlan on FreeNAS howto.

BACKUPS BEFORE PRODUCTION.  I can’t stress this enough, don’t rely on ZFS’s redundancy alone, always have backups (one offsite, one onsite) in place before putting anything important on it.

 Setup Complete… mostly.

Well, that’s all for now.

335 thoughts on “FreeNAS 9.10 on VMware ESXi 6.0 Guide”

  1. This is a really great post that gave me some good tips. There is great overlap between the hardware we use.
    In an all-in-one like this, do you see any benefits to using vSphere 6 over 5.5? I use a combination of the desktop client and the command line to manage the VMs and thus run vmx-09 instead of 11.

  2. Thanks, Soeren. There are a few minor advantages:

    ESXi 6 allows you to create and modify the newer HW versions, however it doesn’t allow you to use the newer features of those versions. I can edit a VM I created with HW11. That’s a small step up from 5.5 at least which wouldn’t let you edit newer hardware versions at all. I pretty much do all my management from the desktop C# client, I haven’t needed any of the newer features that aren’t available there.

    VMware Tools for FreeBSD work fine with FreeNAS 9.3, in ESXi 5.5 one had to compile or download the vmxnet3 driver that someone on the FreeNAS forums made.

    NFS now supports 4.1; the main advantage here is if you want multi-pathing to your storage. I believe FreeBSD 10.1 has NFS 4.1, so FreeNAS 10 will likely support ESXi with NFS 4.1.

    Other than that I don’t see any major changes. My critical production server I’ll keep on 5.5 for awhile longer, but I’m running 6.0 on two servers without issues so far.

  3. As I’m building a server based on the ASRock Avoton board, I figured I’d do some interesting stuff during my testing phase so I used the ESXi RDM hack that’s floating around the internet so I could present the raw disks on the machines SATA ports to a NAS VM. Nothing important on the machine currently, nothing lost and only my spare time wasted. I also wanted to test for issues with the disk controller.

    The TL;DR of this lash up was:

    Debian ‘Wheezy’ wrote 2TB of /dev/zero to an ext4 formatted ‘RDM’ disk with no problem and did it 6 times, once to each of the disks I was using.

    OmniOS + napp-it wrote about the same to a RAIDZ2 array made of 6x2TB ‘RDM’ disks (I decided its ACLs were too much of a pain for a 3-user home server).

    FreeNAS… exploded quite spectacularly and managed to lock the host machine solid with disk errors after only 200GB of data had been passed to a recreated RAIDZ2 array.

    Post FreeNAS, the Debian experiment was performed again and all 6 disks passed with flying colours (as well as being given the thumbs up by both ESXi and the BIOS’s SMART check). It was also noted from inside the vSphere client that FreeNAS had consumed all but 2.5GB of the 14GB RAM allocated to it whilst OmniOS consumed a grand total of 8GB; both VMs were given the same amount of RAM.

    Conclusion: FreeNAS has some very serious and major issues, be these inherent to FreeBSD or bugs in its ZFS implementation I can’t say but of the three OS’s tested… it was the only one to experience issues.

    • Good testing, Sarah. That’s very interesting that FreeNAS crashed with RDMs while the other OSes did not. I concur with your results on FreeNAS eating up memory compared to OmniOS. OmniOS has also outperformed FreeNAS in most tests on the same environment in all my testing, which includes an HP Microserver N40L, HP Gen 8 Microserver, ASRock Avoton C2750, and X10SL7-F with Xeon E3-1245v3.

      I still prefer OmniOS over FreeNAS for most deployments. FreeNAS is pretty good if you have the memory, and I would consider it reliable (except on RDMs apparently), but it isn’t robust like OmniOS is. Especially when you start to have problems like a ZIL failure, it’s much easier and faster to get things up and running again with OmniOS. I do think FreeNAS has improved a lot with the 9.3 series, and hopefully that trend will continue with 10. FreeNAS probably has a slight edge on iSCSI for VMware now with the VAAI integration, but I much prefer NFS for its simplicity.

      I know exactly what you mean by the OmniOS ACLs being overkill–but the nice thing for businesses is that it integrates perfectly with Windows ACLs. What I do for my home network is set up one user account for the family to share.

      Be very careful with RDM; it may work for a while but you could run into data corruption even on OmniOS or Debian. It may be okay, but be sure to stress test it thoroughly first. I think running on top of VMDKs is a little safer and doesn’t cost more than 1-2% performance (which the RDMs are probably costing you anyway). I’ve also run FreeNAS on the Avoton on top of VMDKs with no data corruption issues. YMMV.

  4. Hello,
    first of all, thank you for your great article. When I install the two dependencies in FreeNAS 9.3 I get these errors:

    pkg_add -r compat6x-amd64

    Error: Unable to get File unavailable

    I found out that under the path only the directory packages-8.4-release exists.

    Are there additional commands to execute with pkg_add prior to installation?

    Many thanks in advance

  5. Ben, this is a great guide and I’ve referred to it endlessly over the last couple of weeks. Kudos, sir, and many thanks!

    I have a nice all-in-one running and FreeNAS 9.3 fires up just as expected. The only problem is that ESXi doesn’t see the FreeNAS VM’s iSCSI datastore until I manually rescan for datastores with the vSphere client.

    Is there a way to set up ESXi to rescan datastores after the FreeNAS VM has started up?

    • You’re welcome, Keith. I’m just running NFS (it’s a lot easier to manage if you don’t need the performance) so I don’t run into that problem… but you can have FreeNAS tell VMware to rescan the iSCSI adapter once it boots…

      First, under Configure -> Virtual Machine Startup/Shutdown configure your FreeNAS VM to boot first.

      Then give FreeNAS a post init Task (like the nutscript above) to ssh into VMware and run a command like this:

      ssh root@vmwarehost esxcli storage core adapter rescan --all (or you can specify just the iSCSI adapter).

      You’ll need to set up a public/private SSH key (see: so that FreeNAS can SSH into VMware without being prompted for a password.

  6. This guide was extremely helpful, thank you. I have nearly doubled the performance of my server.

    I don’t understand why we added 2 NICs though; I don’t ever see myself using the second. I’m not complaining, mind you: I went from 45 Mps to 80+ Mps on WD Greens versus what I was getting on bare metal.

    • Hi, Randy. You’re welcome. The reason for the 2nd virtual nic is to separate Storage traffic from LAN traffic. One nice thing about VMware is virtual switches are free.

  7. Ben – I’ve been delving pretty deeply into FreeNAS 9.3 on VMware 6.0 for the last few weeks and thought I would share my impressions.

    I am running the lastest version (FreeNAS 9.3-STABLE-201506042008) virtualized on VMware 6.0 with 4 vCPUs and 16GB RAM on:
    Supermicro X10SL7, Intel Xeon ES-1241v3 @3.5GHz, 32GB ECC RAM
    6 x 2TB HGST 7K4000 drives set up as 3 mirrored vdevs for a 6TB zpool (‘tank’)
    Motherboard’s LSI 2308 controller passed through to FreeNAS VM via VT-d per best practices

    I used your guide here as a roadmap to configure FreeNAS networking and shares.

    It seems to work fine as a CIFS/SMB file server. It doesn’t work so well as a VMware datastore. In all of my testing below, I used two Windows VMs – one running Windows 7, the other Windows Server 2012R2 – homed on the appropriate datastore.

    At first I thought iSCSI was going to be the ticket and it certainly was blazingly fast running the ATTO benchmark on a single Windows VM. But it turns out that it breaks down badly under any kind of load. When running the ATTO benchmark simultaneously on both Windows VMs I get this sequence of error messages:

    WARNING: ( no ping reply (NOP-Out) after 5 seconds; dropping connection
    WARNING: ( connection error; dropping connection

    …after which the VMs either take a very long time to finish the benchmark or lock up/become unresponsive. There is an unresolved Bug report about this very problem here:

    Next I tried NFS, which seemed to work well as long as you go against best practices and disable ‘sync’ on the NFS share. But, lo and behold, NFS too fails under load; just as with iSCSI, both Windows VMs lock up when running ATTO simultaneously, only in this instance there is no enlightening error message. In addition, NFS datastores sometimes fail to register with VMware after a reboot, despite my scripting a datastore rescan as we discussed earlier here.

    So, after high expectations, much testing, and a great deal of disappointment, I’m now going to abandon FreeNAS for a while and give something else a try, perhaps OmniOS/NappIt…

    • Hi, Keith. Thanks for the update. I did not run into the iSCSI issue but I mainly run NFS in my environment. From that bug it looks like there is an issue with iSCSI, but here are a few ideas to check for NFS performance:

      1. Are you running SSDs for the ZIL? VMware’s NFS performs horribly without one; alternatively, you can leave sync disabled for testing.
      2. Try increasing the number of NFS servers in FreeNAS. See Josh’s May 22nd comment on this page:
      3. You only have 4 physical cores, so if you give FreeNAS 4 vCPUs you’re forcing either your test VMs or FreeNAS to run on hyperthreaded cores. For your test I would give FreeNAS 2 vCPUs and each of your VMs one vCPU. On my setup I found it best to give FreeNAS 2 vCPUs and all other VMs 1 vCPU.
      4. Any kind of vibration can kill performance: washing machines in the next room, more than one non-enterprise-class drive in the same chassis, or even loud noise near the server.
      5. Make sure you gave FreeNAS at least 8GB. Also, if you over-commit memory, make sure FreeNAS has “reserve all memory (locked to VM)” checked under Resources in the VM options.
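
      To experiment with idea 1 without buying hardware first, ZFS lets you toggle sync per dataset. A quick sketch (the dataset name tank/vmstore is hypothetical; substitute your own):

      ```shell
      # If write performance jumps with sync disabled, a fast SLOG will help.
      zfs get sync tank/vmstore           # show the current setting (default: standard)
      zfs set sync=disabled tank/vmstore  # testing only: last few seconds of writes
                                          # can be lost on power failure
      # ...re-run the benchmark, then restore the safe default:
      zfs set sync=standard tank/vmstore
      ```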

      Also, I would be very curious about your OmniOS / Napp-It performance if you go that route and how it compares to FreeNAS. I have gotten much better performance out of OmniOS (especially with NFS), but I’d like to see those results validated.

  8. Hi, Ben — thanks for your suggestions.

    RE: 1> I tried using a 120GB Samsung 850 EVO for L2ARC and a 60GB Edge Boost Pro Plus (provisioned down to 4GB) for SLOG and honestly couldn’t see any improvement on my system. At the same time, I understand that these aren’t ideal SSDs for these purposes. Also, I’m only using 16GB for FreeNAS and certain knowledgeable posters (ahem) at the FreeNAS forums claim that L2ARC and/or SLOG are useless unless you have a gazillion Terabytes of RAM. :-)

    At any rate, I run the NFS tests with sync disabled when there is no SLOG device.

    RE: 2> I tried increasing the NFS servers from 4 to 6, 16, and 32, again with no real difference in outcome. Adding servers only seems to delay the point at which NFS breaks down and the ATTO benchmarks stall.

    RE: 3> I was running FreeNAS with 2 sockets and 2 cores per socket for a total of 4 CPUs. I thought by doing this I was allocating half the system’s CPU to FreeNAS. Perhaps a naive assumption on my part? I have since tried other settings: 1 socket w/ 4 cores, 4 sockets with 1 core, etc., including combinations giving the lower vCPU count of 2 which you suggested. Again, this doesn’t seem to make any discernible difference in the outcome.

    RE: 4> Hmmmm… Interesting that ambient noise can have such a profound effect… and there IS a window AC near the FreeNAS system. I will try running the benchmarks again with the AC turned off!

    RE: 5> For all of these tests I’ve run FreeNAS with 16GB and the Windows VMs with 1 vCPU and 1GB (WS2012R2) or 2GB (Windows 7) of RAM.

    Interestingly, the FreeNAS NFS service seems to work fine across the LAN. I’ve configured another ESXi server on a SuperMicro X8SIE-LN4F system and clones of the Windows test VMs successfully complete the ATTO tests simultaneously, albeit only at gigabit speeds.

    I’m a developer, and my intended use for a VMware+FreeNAS all-in-one is 1> as a reliable file server and 2> as a platform to run a very few VMs: an Oracle database, various versions of Windows and other OSes as the need arises, etc.

    I’m tempted to ignore these oddball ATTO benchmark stalls and just go ahead with my plans to use this all-in-one I’ve built. In all other respects it works great; I love FreeBSD, FreeNAS, and ZFS; I like the fact that there is an active support community; and, like you, I have developed a certain respect for iXsystems and the people who work there.

    • Hi, Keith. If I get a chance (no promises) I’m going to try to deploy a couple of Windows VMs and run ATTO on my setup to see what happens.

  9. I am running the latest version (FreeNAS 9.3-STABLE-201506042008) virtualized on VMware 6.0 but can’t add vmxnet3 adapters in FreeNAS. Followed your guide, and even tried compiling drivers. Any thoughts?

  10. Nevermind, I started from scratch again using version 11 VM, I now get vmx3f0 & vmx3f1 interfaces as expected… Thanks for the great blog…

        • You’re welcome. I meant to mention that in the article so I’m glad you brought up the reminder about promiscuous mode. How do you like Plex? I was looking at purchasing it but $150 is a little steep so I’m using Emby at the moment.

      • Hi Ben. First off, great blog and great article, very informative and helpful. So I’m in Peter’s boat: I’m running the latest FreeNAS (FreeNAS-9.3-STABLE-201512121950) on VMware 6.0, and I also can’t add vmxnet3 adapters in FreeNAS. I’ve tried numerous times, using both the FTP site and the binaries, and both ways it states that VMware Tools are running. I’ve set up the Tunables correctly as well. This is also my 5th time starting from scratch. The only thing that is different is that I’m running PH19-IT for the LSI, but that should not affect anything. Any suggestions on what I should look for would be greatly appreciated.

        • One thing you might try is going to an older version of VMware 6 or an older version of FreeNAS (not to run in production obviously, just to troubleshoot). Also, see if the vmxnet3 drivers work with a normal FreeBSD 9.3 install.

          • Apologies, I forgot to reply back on here to let you know I got it working. Oddly enough, I ended up starting from scratch a 6th time, not changing how I was installing anything from the previous attempts, and for whatever reason it worked this time. Not sure why, but I’ll take it. Also, I want to mention that I ended up flashing version 20 for the LSI and so far it’s working wonderfully with excellent read/write throughput.

            So, everything is working with VMware 6.0 using version 11, FreeNAS-9.3-STABLE-201512121950, PH20-IT for the LSI, and vmxnet3.ko with both interfaces showing up as vmx3f0 & vmx3f1.

            On a side note, have you tried adding an additional physical NIC to your system and running a lagg for storage in FreeNAS? In my old FreeNAS box I had 1 for my management & 2 in a lagg for my storage, and was hoping to do the same with this setup.

          • Glad to hear you got it working. I have not setup lagg with FreeNAS under VMware–I haven’t had the need to in my home environment. But I see no reason why it wouldn’t work. Thanks for the report on version 20. I might remove my warning note and flash that myself soon.

        • No issues with VMXNET3 on OmniOS.

          Been running OmniOS for months, works like a champ. My primary SSD which has ESXi + OmniOS failed and the system kept going because of the OmniOS mirror.

          The system runs like a champ. I don’t think I would switch (hard to find a reason). Happy Holidays Ben!

    • Hi, Keith. Excellent troubleshooting! Thanks for posting that, hopefully someone from FreeNAS can respond there to give some insight into the issue. I’m actually not surprised it was the MTU as I’ve seen that hurt performance before, but it slipped my mind. The good thing is on modern hardware running the default MTU is probably not going to hurt your performance that much–I think I measured around 2% improvement with 9000 on even lesser hardware. I am very curious now–once I get some Windows VMs setup I’ll try to run that ATTO test on both MTU settings to see if I can reproduce your issue.

      • Ahem… well, I was only half right. This afternoon I confirmed that reverting to the standard MTU fixes the problem w/ NFS datastores, but not iSCSI. So, for now, I have to conclude that iSCSI is basically unreliable, and therefore unusable, on VMware 6 + FreeNAS 9.3.

        I wonder if reverting to VMware 5.x would make any difference?

        • Hi, Keith. I was able to duplicate your problem. I deployed two Win 10 VMs with ATTO. On NFS I saw very slow I/O with multiple VMs hitting the storage, and on iSCSI multiple instances of this:

          WARNING: ( no ping reply (NOP-Out) after 5 seconds; dropping connection
          WARNING: ( connection error; dropping connection
          WARNING: ( connection error; dropping connection

          And pretty awful performance.

          Disabling segmentation offloading appears to have fixed the issue for me. You can turn it off with:

          # ifconfig vmx3f0 -tso
          # ifconfig vmx3f1 -tso

          Can you test in your environment?

          Note that this setting does not survive a reboot, so if it does help add it to the post init scripts or network init options under FreeNAS.
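
          A sketch of what that post-init script could look like (the interface names vmx3f0/vmx3f1 are from this particular setup; adjust to match yours):

          ```shell
          #!/bin/sh
          # Re-apply the TSO fix at boot. Add this in the FreeNAS GUI under
          # Tasks -> Init/Shutdown Scripts as a Post Init script.
          for nic in vmx3f0 vmx3f1; do
              ifconfig "$nic" -tso
          done
          ```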

  11. Just for the record, I am also having the same “no ping reply (NOP-Out) after 5 seconds; dropping connection” as Keith. I’ve taken the MTU back to 1500 but this has made no difference. I am on a current FreeNAS patch level and also running VMware 6.0 2715440. I am also running good quality hardware with Intel / Broadcom NICs.

    Two of my 3 hosts had Delayed ACK enabled and they went down with the above issue. The one that did not stayed up, though it was under the least amount of load. I’m going to disable Delayed ACKs again on my VMware iSCSI initiators, though doing so does seem to seriously hurt performance. From what I can see the iSCSI in FreeNAS does fall over when under heavy load, like when I have backups running. My environment is used 24 hours a day.

    Basically for me everything was stable on FreeNAS 9.2. I suspect the new performance features in the 9.3 iSCSI target have issues when put under load.

    • Thanks for the confirmation of the issue Richard. I remembered at work we had to disable segmentation offloading because of a bug in the 10GB Intel drivers (which had not been fixed by Intel as of Feb at least), and may be the same issue on the VMXNET3 driver. See my comment above responding to Keith and let me know if that helps your situation at all.

  12. Ben (and Richard) — Sorry I’m just now seeing your questions and updates.

    To answer your question Ben: the problem doesn’t happen with the E1000 (see below).

    Over the last few days I’ve learned that the VMware tools are automatically installed by FreeNAS 9.3, but NOT the VMXNET3 NIC driver! And according to Dru Lavigne (FreeNAS Core Team member), the VMXNET3 driver will not be supported by FreeNAS until version 10 (see this thread):

    So, since reliability is more important to me than performance, I have reverted to the somewhat slower E1000 NIC drivers. For now, E1000-based NFS datastores w/ sync disabled are suitable for my needs. I’m a developer and pretty much the only user of all of the systems here at my home, with the exception of backing up the family’s iPads’n’gizmos. I will probably round up an Intel DC S3700 and install it as a ZIL SLOG at some point. This would let me restore synchronous writes on the VMware datastores without sacrificing too much performance. Both NFS and iSCSI are unusably slow w/ synchronous writes enabled, at least on my system.

    Regarding the VMware tools installation… Not knowing they were already being installed by FreeNAS, I originally used Ben’s instructions above, fighting through the missing archive issues and so forth. But I eventually ran the ‘System->Update->Verify Install’ tool and FreeNAS told me that quite a few of my modules were out-of-whack. I suspect this was because I’d installed Perl and such from the slightly out-of-date archive during the course of installing the tools. I re-installed FreeNAS just a few days ago and this problem ‘went away’.

    FreeNAS 9.3 seems to be going through a rough patch recently; I think they may have ‘picked it a little green’, as they say here in the South. There have been a slew of updates and bug fixes since I started testing it in early May. I like it and want to use it, but I’m hoping to see the team get it a little more stabilized before I put it into ‘production’ use. For now, I’m still relying on my rock-steady and utterly reliable Synology DS411+II NAS.

    • Hi Ben,

      Just for everyone’s benefit and clarity, I am running FreeNAS on an HP xw4600 workstation with a Quad Port Intel card using the igb driver. I also have link aggregation configured; however, I am only using fault tolerant teams, so this should be safe.

      I’ve not made any additional changes since my post the other day, but here is a sample of today’s issues from the syslog output. Our system has not actually gone down again; however, this may be down to disabling the Delayed ACK option in the iSCSI initiator in VMware.

      Jun 20 17:53:41 xw4600 WARNING: ( no ping reply (NOP-Out) after 5 seconds; dropping connection
      Jun 20 17:53:41 xw4600 WARNING: ( no ping reply (NOP-Out) after 5 seconds; dropping connection
      Jun 20 18:26:54 xw4600 WARNING: ( no ping reply (NOP-Out) after 5 seconds; dropping connection
      Jun 20 18:26:55 xw4600 WARNING: ( no ping reply (NOP-Out) after 5 seconds; dropping connection
      Jun 21 00:00:01 xw4600 syslog-ng[4232]: Configuration reload request received, reloading configuration;
      Jun 21 17:36:12 xw4600 WARNING: ( no ping reply (NOP-Out) after 5 seconds; dropping connection
      Jun 22 00:00:01 xw4600 syslog-ng[4232]: Configuration reload request received, reloading configuration;
      Jun 23 00:00:01 xw4600 syslog-ng[4232]: Configuration reload request received, reloading configuration;
      Jun 23 15:01:41 xw4600 WARNING: ( no ping reply (NOP-Out) after 5 seconds; dropping connection
      Jun 24 00:00:01 xw4600 syslog-ng[4232]: Configuration reload request received, reloading configuration;
      Jun 24 11:31:28 xw4600 ctl_datamove: tag 0x1799cf6 on (3:4:0:1) aborted
      Jun 24 11:31:33 xw4600 ctl_datamove: tag 0x1798fe4 on (3:4:0:1) aborted
      Jun 24 11:31:33 xw4600 ctl_datamove: tag 0x1798cd7 on (3:4:0:1) aborted
      Jun 24 11:31:34 xw4600 ctl_datamove: tag 0x17a50bf on (2:4:0:1) aborted
      Jun 24 11:31:34 xw4600 ctl_datamove: tag 0x1799b84 on (3:4:0:1) aborted
      Jun 24 11:32:34 xw4600 ctl_datamove: tag 0x179bbf3 on (3:4:0:1) aborted
      Jun 24 11:32:52 xw4600 ctl_datamove: tag 0x17a5b70 on (2:4:0:1) aborted
      Jun 24 11:32:52 xw4600 (3:4:1/0): READ(10). CDB: 28 00 1f 31 34 a0 00 01 00 00
      Jun 24 11:32:52 xw4600 (3:4:1/0): Tag: 0x1798872, type 1
      Jun 24 11:32:52 xw4600 (3:4:1/0): ctl_datamove: 150 seconds
      Jun 24 11:32:52 xw4600 ctl_datamove: tag 0x1798872 on (3:4:0:1) aborted
      Jun 24 11:32:52 xw4600 (3:4:1/0): READ(10). CDB: 28 00 1f 31 34 a0 00 01 00 00
      Jun 24 11:32:52 xw4600 (3:4:1/0): Tag: 0x1798872, type 1
      Jun 24 11:32:52 xw4600 (3:4:1/0): ctl_process_done: 150 seconds
      Jun 24 11:32:52 xw4600 (2:4:1/0): READ(10). CDB: 28 00 1f 31 35 a0 00 00 80 00
      Jun 24 11:32:52 xw4600 (2:4:1/0): Tag: 0x17a528b, type 1
      Jun 24 11:32:52 xw4600 (2:4:1/0): ctl_process_done: 149 seconds

      Just to clarify, you are suggesting I disable TSO/LRO on the FreeNAS box, not on the VMware hosts? I’ve issued “ifconfig lagg0 -tso -vlanhwtso” which appears to have disabled the TSO options on the lagg and all member igb adapters in the system.

      Will see how things progress and if they work out then I will add them to the nic options for all nics.



        • Hi, Rich. Correct, I have not had to disable TSO/LRO on VMware, just on FreeNAS. In our case we had two Intel 10GB NICs configured in a LAGG and were getting great write performance but reads were pretty bad. We traced it to a bug in the Intel FreeBSD drivers. I don’t think we saw the connection drop, but definitely saw poor read performance from FreeNAS. I do recall we had to set the -tso switch on the individual interfaces for some reason; it didn’t seem to take effect when setting it on the lagg interface. I think in the FreeNAS post init script we have “ifconfig ixgbe0 -tso; ifconfig ixgbe1 -tso” which does the trick. Another thing we had to do was lower the MTU to 1500 on the storage network, but I’m thinking that may have been because of a limitation on the switch, or another device that we needed on the storage network. Either way 1500 MTU will hardly hurt your performance, so I’d start there and make sure you get good performance before trying to go higher. This may not be the issue in your environment but worth a shot.

        • Hi Ben and Keith,

          Ben’s assumption is right: bare metal in my case, and I would only use a virtualised SAN for testing, as I’m sure you are, Keith. The systems I am having issues with are our non-critical production and testing systems.

          We are a Cisco (Catalyst) house for the network but I went back to 1500 MTU fairly early on as there was a suggestion that frames of 9000 bytes might not be equal in all systems and drivers.

          I’ve checked the syslogs today and so far so good but I think I need a good week first before I feel safe.

          I do wish FreeNAS would chill out on the regular updates a bit and work on the stability aspect. 9.2 was a good release for me. As Ben already said you don’t want to bother messing with updating a SAN that often and when you do you want the stable release you are installing to be rock solid.

          It almost feels like I’m on the nightly dev build release tree but I double checked and I’m not. :-)

          An impressive little system anyway and I do appreciate all the hard work that goes into it, plus the community that comes with it.

          Thanks for your time chaps with the suggestions and thanks Devs!

    • Hi, Keith. Thanks for the confirmation on the E1000. FreeBSD 9 is officially supported with the VMXNET3 driver from ESXi 6 so I wonder if there’s something specific to FreeNAS that’s giving them trouble? I just tried a verify install on mine and it found no inconsistencies so the older FTP source probably did it. But I think the E1000 driver is fine, on OmniOS last year I did a test between E1000 and VMXNET3 and the VMXNET3 was about 5% faster and this was on an Avoton so probably even narrower on an E3.

      I’ve got four DC S3700s and they work very well in both FreeNAS and OmniOS so I think you’ll do well there. There’s also the new NVMe drives that are coming out which should be faster.

      Regarding stability I agree, it doesn’t seem to be as robust as the OmniOS LTS releases and the update cycle is a lot faster–and storage is really something you want to turn on and rarely, if ever, update. On the other hand FreeNAS is pretty quick at fixing issues being found by a very wide and diverse user base. I would guess that a lot of updates are fixes for edge cases that aren’t even considered on most storage systems. I would like to see a FreeNAS LTS train on a slower, more tested update cycle, but that is probably what TrueNAS is for. I stay a little behind on the FreeNAS updates to be on the safe side.

  13. Morning Chaps,

    Well this morning at around 3am FreeNAS started causing issues with VMware which once again caused our VI to slow to a crawl. No errors on the syslog console, but as soon as we powered off the FreeNAS everything else sprang into life. I powered it back on and started the iSCSI services, but I’ve had a few of the errors below.

    Jun 29 09:40:37 xw4600 ctl_datamove: tag 0x26f99fe on (3:4:0:1) aborted
    Jun 29 09:40:37 xw4600 ctl_datamove: tag 0x26eaf0c on (5:4:0:1) aborted
    Jun 29 09:40:37 xw4600 ctl_datamove: tag 0x26f99ff on (3:4:0:1) aborted
    Jun 29 09:40:37 xw4600 ctl_datamove: tag 0x26f9a00 on (3:4:0:1) aborted
    Jun 29 09:40:37 xw4600 ctl_datamove: tag 0x26f9a01 on (3:4:0:1) aborted
    Jun 29 09:40:37 xw4600 ctl_datamove: tag 0x26eaf33 on (5:4:0:1) aborted
    Jun 29 09:40:37 xw4600 ctl_datamove: tag 0x26f9a02 on (3:4:0:1) aborted
    Jun 29 09:40:37 xw4600 ctl_datamove: tag 0x26f9a03 on (3:4:0:1) aborted
    Jun 29 09:40:37 xw4600 ctl_datamove: tag 0x26eaf36 on (5:4:0:1) aborted
    Jun 29 09:40:37 xw4600 ctl_datamove: tag 0x4f701 on (4:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x26eaec1 on (5:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x26eaf37 on (5:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x26eaf39 on (5:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x26eaf3a on (5:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x26eaf3b on (5:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x4f702 on (4:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x4f6ed on (6:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x26eaf3c on (5:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x26eaf3e on (5:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x26eaf38 on (5:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x4f6f8 on (6:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x4f708 on (4:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x4f70d on (4:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x4f70b on (4:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x548e9 on (2:4:0:1) aborted
    Jun 29 09:40:51 xw4600 ctl_datamove: tag 0x548ea on (2:4:0:1) aborted
    Jun 29 09:42:09 xw4600 ctl_datamove: tag 0x4f73d on (4:4:0:1) aborted

    I’ve been doing some diags on my hosts with esxtop, then u to show my storage by LUN. I’ve observed the DAVG (average device latency) is often between 17 (best) and 550 (worst). I had the opportunity to speak to a senior VMware engineer about this last week who advised me that anything over 30 indicates an issue. Our main iSCSI Starwind SAN, which is heavily loaded and processes around 400 CMDS/s, has a DAVG of about 4 – 6.

    I’m ditching FreeNAS 9.3. I may go back to 9.2 or try OmniOS LTS, but whatever I do I can’t leave things like this and hope it gets fixed quickly. I still think it’s a great little product but the release updates do seem to come a little thick and fast for me. As Ben said before, storage is something you don’t really want to be messing with once you have it commissioned, and I need something stable.

    Thanks for your help everyone, it’s been a fun and informative ride!


    • Thanks for the update, Rich. I’ve been running several FreeNAS 9.3 servers and haven’t run into these problems, but it seems your issue isn’t entirely uncommon either. If you had unlimited time (which I’m sure you don’t) it would be interesting to see if you ran into the same issue on FreeBSD 10 (which uses the kernel based iSCSI target that was implemented in FreeNAS 9.3).

  14. Outstanding Guide!

    First, excuse my English. It’s not my first language.

    I read it with great enthusiasm as I am building a similar setup.

    HP ml310e
    IBM 1015 in IT mode with LSI firmware. (fw matches the FreeNAS driver)
    ESXi 6.0 with passthrough of the 1015 to the FreeNAS VM
    FreeNAS VM located on an SSD datastore.
    Around 18 TB of data disks with ZFS attached to the 1015. No ZIL SLOG.
    Using the VMware tools that came with the FreeNAS install, so E1000 NICs.

    Ran out of SSD datastore space and wanted to use FreeNAS-exported NFS or iSCSI handed back to ESXi as a datastore for less important/demanding VMs.

    Horrible performance :) So I started reading you guys’ comments, and all the discussion on the FreeNAS forums you linked to. I got a bit confused as to what your actual conclusions were. Am I right in assuming that the following is what you came up with?

    Use E1000 driver (slower but stable)
    Use an SSD ZIL SLOG if you want sync enabled; otherwise, disable sync.
    Keep MTU on 1500
    Use NFS, don’t use iSCSI
    Disable TSO offload (or was that only seen on bare metal?)

    Or are you still having issues in spite of these corrections?

    Best regards

    • Hi, Kenneth.

      The problem with ESXi on NFS is it forces an o_sync (cache flush) after every write. ESXi does not do this with iSCSI by default (but that means you will lose the last few seconds of writes if power is lost, unless you set sync=always, which then gives it the same disadvantage as NFS).

      For performance with ESXi and ZFS here’s what you’ll want to consider:

      – Get a fast SSD for SLOG such as the HGST ZeusRAM or S840z, or if on a budget the Intel DC S3700 or DC S3500 for the log. There are other SSDs but I’ve found they lack power loss protection or performance. Before buying a log device you can also try running with sync=disabled (which will result in data loss of the last few seconds if you lose power), if you see a large amount of improvement in write performance then this will help. My guess is this will make the largest difference.
      – Consider your ZFS layout. For ZFS each vdev has the IOPS performance of the slowest single disk in that vdev. So maximize the number of vdevs. Mirrors will get you the most performance. If you have 18 disks you could also consider 3 vdevs of 6 disks in RAID-Z2, but mirrors would be far better.
      – For read performance the more RAM the better; try to get your working set to fit into ARC. You can look at your ARC cache hit ratio in the graphs section in FreeNAS, on the ZFS tab. If your ARC cache hit ratio is consistently less than 90 or 95% you will benefit from more RAM.
      – If you have a lot of RAM, say 64GB and are still low on your ARC cache hit ratio you may consider getting an SSD for an L2ARC.
      – Make sure you disabled atime on the dataset with the VMs.
      – iSCSI should have better performance than NFS, but it seems to be causing problems for some people and resulting in worse performance. I’ve always run NFS as it’s easier to manage and not as bad on fragmenting.
      – I wouldn’t worry about MTU 9000, it makes very little difference with modern hardware and has the risk of degradation if everything isn’t set just right.
      – The VMXNET3 driver works fine for me, but others seem to be having issues with it. The main difference is VMXNET3 has less CPU overhead so if you have a fast CPU this won’t make a large difference.
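
      On the ARC point above, the hit ratio can also be computed by hand from the FreeBSD kstat counters; a sketch with made-up sample numbers (on a live box the two values come from sysctl, as shown in the comments):

      ```shell
      #!/bin/sh
      # On FreeNAS/FreeBSD the real counters come from:
      #   hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
      #   misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
      # The values below are illustrative only.
      hits=950000
      misses=42000
      ratio=$(( hits * 100 / (hits + misses) ))
      echo "ARC hit ratio: ${ratio}%"   # consistently under ~90-95% suggests more RAM
      ```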

      Also, make sure you’re not over-committing memory or CPU. If FreeNAS is contending for resources with the VMs it can cause serious performance issues. One common mistake: if you’re running a CPU with hyper-threading it appears as though you have twice as many cores as you do. It’s probably wise to lock the memory to the FreeNAS guest and also give it the highest priority on CPU.

      Hope that helps.

  15. Hi, I found your blog from a Google search and am reading a few posts now. It happens that you have quite a similar All-in-One system to mine, except I use an all-SSD environment (I also set up VMXNET3 using the binary driver, so we are pretty much on the same road of finding an optimized way of setting up the system). I don’t want to be locked into any kind of HW RAID, so ZFS is my choice.

    The connection drop out is a problem of the new FreeBSD iSCSI kernel target (it will not happen if you use FreeNAS 9.2). Anyone facing this problem with FreeNAS 9.3 can check my thread on the FreeNAS forum from 2014 here:

    Using the sysctl tweak kern.icl.coalesce=0 could help reduce the connection dropping (but latency sometimes rises very high).

    Basically I think FreeNAS 9.3 is not good for production as it has a lot of trouble with the kernel target. We might need to wait for FreeNAS 10. In its first few updates, FreeNAS 9.3 was really buggy. I still don’t know why my FreeNAS has quite low write speed (if I fire up another SAN VM and hand the SAS controller + my pool over to it, it gets better writes), all with the same settings: 2 vCPUs, 8GB RAM, VT-d LSI SAS controller, ZFS mirror with 4 SSDs.

    • Hi, abcslayer, sounds like you like ZFS for the same reason I do. Thanks for the info on the coalesce. I can also confirm the issue doesn’t occur on 9.2, and doesn’t occur in OmniOS, so I think you’re right that it’s an issue with the FreeNAS 9.3 iSCSI kernel target. Hopefully iX can trace it down.

      So far it seems the best platform for robustness and performance on ZFS is OmniOS with NFS. FreeNAS 9.3 has fixed some NFS write performance issues recently, not quite as fast as OmniOS yet but close enough it doesn’t matter unless you’re doing some pretty intensive write I/O.

      • Ben do you have any standard test that one can run to validate performance with OmniOS? Hoping to do an apples to apples like comparison.

        Also, which driver are you using with OmniOS, VMXNET3 or E1000? Thanks again for the great blog!

        • Hi, Dave. I posted the sysbench script that I use in the comments here. Another test I use is CrystalDiskMark. Keep in mind with Crystal that I’m measuring the IOPS and it outputs in MBps; to get the IOPS after a test go to File, Copy, then paste into Notepad. Also, my test is entirely within VMware, so I have OmniOS and FreeNAS installed inside VMware, use VT-d to pass through the HBA to the ZFS server, and from the ZFS server I make an NFS share and mount it to VMware. The guest OS where I’m performing the test is a Windows or Linux system running on a vmdk file on that NFS share. Hope that helps, let me know if I can give more details on the test. I’d love to see how your results compare.

          For OmniOS I used VMXNET3 using Napp-It’s instructions. I tested the E1000 and noticed about a 1-2% performance degradation. Not enough to make a big difference.

          • I did not find a guide where Napp-It details how to set up their server for VMXNET3, but between your guide (which is great) and the OmniOS documentation I was able to figure it out.

            I have no L2ARC, but I do have an S3700; it is provisioned down to 8GB as you suggested. Also, I have passed through a Supermicro 2308 that is running firmware V20.

            Here are some of the tests, with a similar setup as you mention: OmniOS installed on a SATA DOM, serving Windows 2012 R2 through a datastore that is mounted via NFS.

            Using 5 tests 1GiB

            Random Read 4k Q1T1 10722.7
            Random Write 4k Q1T1 1667.0

            Random Read 4k Q1T32 79347.9
            Random Write 4k Q1T32 19895

            My Q32 IOPS are in sync, but the Q1 are not. They are closer to the FreeNAS numbers. Any thoughts?

          • Hi, Dave.

            Thanks for posting the results, it’s always good to see other people validate or invalidate my results on similar setups.

            One difference is you are on firmware P20 where I was on P19. You may want to consider downgrading.

            I’m not sure that’s the difference, but that’s the first place I’d look. There are a million other things that could explain the difference, but the fact that Q1 is slower while Q32 is not points to a latency or clock frequency difference: perhaps a slightly faster CPU, maybe I had lower latency RAM. It could also be any number of other things: slightly faster-seeking HDDs, perhaps you missed a 9000 MTU setting somewhere and you’re fragmenting packets on the VMware network, a different firmware version on the motherboard, a different motherboard with a different bus speed, you were using Windows 2012 where I was using a beta Windows 10 build, etc. Also, I was using 2 x DC S3700s striped; if you’re only using one, that could be the difference (although setting sync=disabled should get you as fast or faster than my results if that’s the bottleneck).

            I forgot to mention it, but did you disable atime on the ZFS dataset with the NFS share in Napp-It?


  16. Thanks for pointing out the firmware! I was reading the FreeNAS forums and it seems they recommend P20, so I installed that. I removed it and put on P19. I’ve compared my sysbench results to the ones you published and the numbers all look very similar! So perhaps it was something with the OS or version, who knows?

    May I ask why you chose to stripe your SLOG? Have you found a big difference in doing so? Also, have you tried NVMe drives or do you think this is overkill?

    Thanks again!

    • Glad to hear P19 solved it for you! Also thanks for confirming my results.

      I get a little more performance out of a striped log so I run that way on my home storage. Striping does help with throughput a little since NFS is essentially sync-always with VMware, but to be honest 99% of the time I don’t notice the difference between a mirrored and a striped ZIL on my home setup. For mission-critical storage I always mirror (or do a stripe of mirrors).

      I’ve been thinking about NVMe but haven’t gotten to it yet–it’s hard to justify for my home lab since I already get more than enough performance out of a S3700.

    • I was a bit confused when you said FreeNAS was recommending P20, since they have been on P16 for as long as I can remember. It looks like FreeNAS recommends P20 with the latest 9.3.1 update–P20 is very new and I’m not quite sure I’d consider it stable yet. I’ve updated the post to address this.

  17. I had the opportunity to try out an S3710 – seems like it has a bit of an advantage over the S3700. Could be due to the size as well. Numbers aren’t that bad at all. Scores are now much more in line and sometimes over. So definitely your SLOG setup had an impact on your benchmarks.

      • Ended up getting the 200GB S3710. The S3700 200GB will do 365 MB/s sequential write vs 300 MB/s on the S3710. But the S3710 will do higher random write.

        Both are faster than the 100GB S3700, which does 200 MB/s sequential write and a lot less with random write.

        I don’t know why but the S3710 200GB is almost $100 less than the S3700 200GB so it seemed like a reasonable deal.

        • Hi, Dave. Yeah, for that $100 price difference the S3710 is the way to go. Sometimes the supply/demand gets kind of odd after merchants have lowered their inventories of old hardware which is what appears to be happening here.

  18. Nice write up !

    I have been running an all in one freenas / vmware box for 2 years now but recently experienced an issue when I upgraded to latest freenas and latest vmware 6.0.

    After rebooting, 75% of the time the NFS storage never shows up, so I can’t boot VMs automatically; the issue is resolved by turning the NFS service off and back on in FreeNAS, after which the datastore shows up right away. So based on some FreeNAS forum posts I switched to iSCSI and now I hit the NOP issue as well … and found this post … do you have the latest FreeNAS and VMware 6? And is your all-in-one booting up okay?

    • Hi, Reqlez,

      Are you talking about rebooting just the FreeNAS vm or rebooting the ESXi 6 server? I have restarted FreeNAS quite a few times after installing an update and it has always shown back up in ESXi 6. It has been a long time since I’ve rebooted ESXi so my memory is a little vague but I had to power it on from cold boot after moving to a new house back in May and I don’t recall having any trouble with it–I do have my boot priority set to boot FreeNAS up first.

      I updated FreeNAS 6 days ago so I’m on FreeNAS-9.3-STABLE-201509022158, and ESXi 6.0.0 2494585 (haven’t updated it since the initial install).

      If an NFS restart fixes it, one thing you could do is add a “service nfsd restart” post init script in FreeNAS.

      • yea it’s restarting the ESXi host. I’m just doing a test to make sure that when my APC Smart UPS turns power on the server actually comes on, because I will be running some VoIP on it as well.

        service nfsd restart on init in freenas is not a bad idea, I will try it. Those init scripts are run after all the services boot up, I’m assuming? I’m running ESXi 6 build 2809209 but it happened also with 2494585, I’m pretty sure.

        I don’t know if it’s freenas related or if the new ESXi NFS service has issues when it boots up with the NFS server down. Maybe it “locks up” until a connection is restarted, but that doesn’t really make sense. I’ll try the init script in freenas and report back.

        • A suggestion. I have found that if I make some modifications while ESXi is active, the NFS mount will sometimes move into an inaccessible state.

          To get the system out of this I either need to reboot or connect to ESXI host terminal and issue the following commands.

          # List NFS shares
          esxcli storage nfs list
          # Unmount the share which shows as mounted but whose state is not accessible
          esxcli storage filesystem unmount -l datastore2
          # Remount the datastore
          esxcli storage filesystem mount -l datastore2

          This brings things back to life for me. I never experience what you are describing on boot. But perhaps something is going on in the background where your mount is coming online disconnecting and then coming back online again causing ESXi to put the share into an inaccessible state.

      • Ben, you are a true geek! The “service nfsd restart” post init script in FreeNAS worked like a charm. I’m going to make it my default config from now on. By the way … I’m upgrading my ESXi lab to a D-1540 soon, this 16GB RAM limit is killing me; I’m running a D3700 as “host cache” but it’s not optimal.

        • Glad that worked, Reqlez. If you get the D-1540 be sure to let me know how it works for you–being able to go to 128GB, even 64GB really frees up a lot of memory constraints. I only got 32GB but there’s plenty of room for growth. Also having those 4 extra cores really helps with CPU contention between VMs if your server is loaded up–especially for these all-in-one setups.

          • Gives me all the more reason to want to use OmniOS, as I’ve read not a single good thing about P20. Also, it seems like there are a few revisions of the firmware and they give no indication as to which version or revision is considered safe. My OmniOS + ESXi 6.0 combo using a Xeon D-1540 has been rock solid. Couldn’t be happier and probably would have never figured out how to set it up if it weren’t for your great guide!

  19. apparently there are several versions of the P20 firmware, but the one that has 04 in it is apparently safe. The one from last year apparently had some complications. I’m running the 04 version and nothing has gone bad so far… and I put the drives through stress.

      • i downloaded the latest from the LSI site, I’m pretty sure. I cross-flashed to LSI firmware, but I don’t have a 2308; I have an older model, a 9211 that has the 2008 chip, I’m pretty sure?

        But your model, you can download here… but you might have to cross flash somehow ?

        I don’t know how to cross flash from Supermicro to LSI firmware, i did IBM to LSI … Also every time i cross flash or flash in general, i don’t flash the stupid BIOS file, that way there is no raid screen that comes up every time you boot ( quicker boot )

        • Thanks for the info and also for confirming that 04 is working for you. I will stick with firmware 16 until Supermicro releases the 04 version for P20. I’ve cross flashed a few IBM 1015s to LSI 9211s but I’m not so sure you’d want to do that with the Supermicro LSI 2308, especially since it’s built into the motherboard.

  20. Does anybody know if there is a way to store swap file on the Pool ? I just got hit by the stupid swap error because a drive failed, but i also don’t want to run out of swap due to memory leak or something. I see you posted that swap can be disabled and created under /usr ? but isn’t that also non-redundant ?

    • /usr is on the boot drive where FreeNAS is installed, so if you lose /usr you’re going down anyway. That said I mirror my boot drives. I mentioned it briefly in step 7 but didn’t go into detail, I have two local drives so I put one VMDK on each and during the FreeNAS install you can set them up as a mirror so it’s redundant. Even with my method, if the drive housing the configuration files (e.g. .vmx) dies you will go down and have to rebuild the vm configuration on the other drive (which isn’t hard to do). Only way around that is to hardware RAID-1. I personally don’t do RAID-1. I just have two DC S3700s which are extremely reliable, the VM config (.vmx file) is on one, the vmdks are mirrored.

  21. oh yea i forgot … i mirror also inside freenas but i put both VMDKs on the same drive lol. Now with my new setup (finally, D-1540!), i only have an SLC 32GB DOM, so i don’t think i can make swap on that … i have 21GB left after installing ESXi 6.0U1. I mean, i could do 8GB x 2 and then reserve RAM (since i do that anyway) so that there is no swap file on that drive from freenas, but that leaves no space for my pfsense router that i wanted to run on that SLC storage… maybe ill just run vMA and freenas on that and run the rest on my 10K SAS NFS pool

    • Another option is you can create the file on your ZFS pool instead of under /usr and run swap on that; then it will be on top of whatever redundancy your pool has. I’m not sure if ZFS comes up before the tunables run, so you may need to do a postinit script to enable the swap using the swapon command.

      Obviously you wouldn’t want to do that if you’re going to utilize the swap heavily. I have enough RAM that the swap should never get used, but I’ve found that even on systems with lots of RAM (256GB), for some reason the swap still gets used, even if it’s just small amounts in the KB range. I’m not sure why.
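      Since FreeBSD can’t swap directly onto a plain file, a postinit script along these lines would do it (pool name, file path, size, and md unit number are all hypothetical):

```shell
#!/bin/sh
# Hypothetical postinit script: back swap with a file on the "tank" pool,
# so the swap sits on top of whatever redundancy the pool has.
SWAPFILE=/mnt/tank/swapfile

# Create an 8GB swap file on first run only
[ -f "$SWAPFILE" ] || dd if=/dev/zero of="$SWAPFILE" bs=1m count=8192
chmod 600 "$SWAPFILE"

# FreeBSD swaps on a vnode-backed memory disk, not on the file directly
mdconfig -a -t vnode -f "$SWAPFILE" -u 99
swapon /dev/md99
```

      As noted above, this is only sensible if the swap is rarely touched; heavy swapping onto a ZFS-backed file is asking for trouble.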

      • okay i will experiment when i get my 2.5in to 3.5in adapters and my second S3700 ( tomorrow ) so i can finish building this thing.

        I would prefer doing it on my cheap 6TB array since there is lots of space and i can make a big swap file just in case. i might have to experiment with the postinit script.

        As for swap usage with lots of ram … maybe Unix is learning from Microsoft Exchange ? Lots of ram???… but swap available ? lets swap out hard and destroy IOPS !

    • Also, I’d suggest putting a pfsense router outside the ZFS pool if you can, so it can be up and running providing network services like DNS and DHCP before FreeNAS boots. You can always have a backup pfsense server running on the pool.

      • actually … that pfsense router is not needed for ESXi but only for VMs inside of it … I kind of have a router on that same LAN already (a Ubiquiti Pro-8 model) but i use pfsense to do 1-to-1 NAT with a /29 subnet i was given by my ISP, while i use a dynamic IP on my main house LAN … its weird … but it works, primarily because i don’t really know how to program that Ubiquiti router yet lol

  22. Hi folks,
    I have a Microserver Gen8 and I would like to experiment with sharing storage back to the hypervisor. As this is a home lab, I do not have 2 x good SSDs, nor a rack machine I could put a lot of other hard drives in.
    Microserver has four SATA connected to B120i and a single SATA for ODD.

    I think I can use 1 x cheaper SSD attached to the ODD port, plus 2 x cheaper SSDs and 2 x Seagate Constellation (ES.2 and CS.2, two different drives but that’s all I got) in the four SATA bays.

    I want to experiment but later on my configuration should work as a my home NAS server and a place I can put my tests VMs.

    What is the best I can do with that Microserver?

    Create a VM for sharing back files to the host on the 1st SSD connected to ODD port
    Then, in that first VM, create a RAID 0 from the two SSDs for other VMs’ OS partitions and a RAID 1 from the Constellation drives for data storage for my home NAS?

    I need the safest solution I can get that is also as flexible as possible.


    • Hi, Steve.

      When I had a Gen8 Microserver I gave it one SSD for ZIL and 3 HDDs for RAID-Z and didn’t use the ODD, but your configuration could work. I would not go cheap on the SSD that you use for ZIL; get either a DC S3500 or DC S3700. Since you mentioned you want a safe config, I should mention that running on VMDKs is generally not recommended. I never tried passing the B120i to the ZFS guest, I’m not sure it’s possible. It may be possible if you can use the ODD as VMware storage. That said, I set up my ZFS server on VMDKs and it performed well under light load with a handful of VMs.

      Hope that helps,

  23. Hi, thanks for this great tutorial!

    I was just wondering: instead of installing FreeNAS on two SSDs, I would just have the ESXi VM storage on two RAID-mirrored SSDs; thus all VMs would be mirrored, not only FreeNAS.

    I assume that you have passed your SSDs through to the FreeNAS VM as well.

    Do you see anything wrong with my idea?


    • As far as freenas is concerned, passing raid volumes to use for data storage in freenas is asking for issues… Can you clarify more what you are trying to do? Are you only storing the freenas VM “boot” volume on that raided SSD as a VMDK, or other “critical” VMs as well?

      • All ESXi VMs are stored on a RAID 1 SSD volume on a separate controller, no passthrough here.

        The HDDs are attached to an IBM 1015 in IT mode which is passed through (VT-d) to the FreeNAS VM (this VM is located on the RAID-1 SSDs).

        I am aware of the possible issues with virtualising, but it has been pointed out by Ben and various other sources that these concerns may be outdated.

        • Oh. So you are storing all VMs on the hardware raid 1 volume attached to esxi as a local data store and just use the Freenas attached/VT-D storage for data ? Like SMB ?

          I just put as many VMs as I can on the Freenas volume because it’s faster than any raid controller with cache; you can do NFS or iSCSI. For NFS you need an SLOG.

          • Exactly, the SSDs run as Hardware RAID 1 and serve as a data storage for the Virtual Machines of ESXI. ESXi itself runs from a USB-Stick.

            Freenas runs in a VM stored on the SSDs.

            The thing is, until now, I did not fully understand what you were aiming for with the NFS set up in Freenas and making it available in ESXI. Now I do.

            So, you’re basically saying you do not see any substantial advantage of my setup over having the VMs on NFS like you are suggesting?

            Oh, and thanks for your answers so far!

          • Hi Gerrit.

            So … basically what i’m saying is … with the proper configuration (meaning that you at least give the freenas VM 8GB RAM and lock it down with the “reserve all memory” option in ESXi, which reserves the whole 8GB for that VM) AND a proper SLOG if you are using NFS (the Intel S3700 is a good SLOG; you can mirror if you want to be safe, and I would also reduce the capacity of the SLOG SSDs to about 5 or 10GB using HPA, you can look up how to set HPA), you will have better performance and SECURITY than using your hardware raid 1. I would use the hardware raid 1 to boot ESXi and store the freenas VM, and maybe the vmware management assistant VM, and that’s it. Then you store the rest of the VMs on the NFS or iSCSI volume that freenas will provide. There are some issues with iSCSI that people reported, so maybe it’s best to use NFS, but make sure you have a good SLOG attached to your NFS zpool (again, Intel S3700).
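            For the HPA step, on a FreeBSD-based system this can be done with camcontrol; a rough sketch with a hypothetical device name, assuming 512-byte sectors (verify the numbers against your own drive before trying anything like this):

```shell
# Show the drive's identify data, including current and native max sectors
camcontrol identify ada1
# Limit the visible capacity to ~8 GiB: 16777216 sectors x 512 bytes
camcontrol hpa ada1 -s 16777216 -y
```

            Restricting the visible capacity this way leaves the rest of the flash free for wear leveling, which is the point of the HPA trick for SLOG devices.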

          • Okay here is a bit more:

            The whole point of running this hybrid ESXi / freenas setup is to run the most VMs you can on ZFS storage. Why ZFS? You have checksums on every data block, you can set up weekly scrubs to make sure your data is not corrupt, and you can enable the VMware-Snapshot feature in freenas and then run snapshots on the ZFS dataset that you provide to ESXi, so you have a filesystem-consistent, and sometimes even application-consistent, snapshot that you can restore to. You have LZ4 compression; LZ4 will save you on storage capacity costs and it even makes performance higher in some scenarios. There are MANY benefits to running on ZFS over hardware raid 1. You just have to make sure that you set a DELAY in ESXi after freenas boots so that the NFS storage is available by the time your other VMs boot from that NFS storage.

  24. Hey Ben,

    Recently I lost one of my ESXi SataDoms which got me thinking.

    Currently I have 1 SATA DOM used for ESXi, vSphere and my OmniOS. I also have my rpool mirrored to another SATA DOM which is just another ESXi datastore.

    When I lost the DOM I was able to restore ESXi and boot OmniOS off the mirrored rpool. Worked like a charm. But I had some downtime until I was able to restore ESXI. I noticed during this that my motherboard supports software raid. In your opinion would it be a bad idea to just Software Raid the drive that holds OmniOS and everything else or is that a bad idea?



    • I might as well add my 2 cents while Ben replies, but … ESXi does not support software raid (your mirror volume won’t show up), only hardware raid, unless I missed a very important announcement. In my experience, you can get a hardware raid card for booting ESXi for 200$, but since the cheap ones aren’t LSI you have to “inject” drivers into the ESXi install disk; basically you have to use ESXi Customizer.

    • sorry … i didn’t finish the last sentence … if you don’t want to buy a hardware raid card, you can just get a very reliable SSD like the Intel DC S3500 series.

    • What I mean by software raid is the Intel motherboard raid. Can ESXi detect this or not? I can’t go hardware because I only have 1 slot and it’s already in use for my LSI 2308.

      • Well … some server motherboards actually have a hardware raid chip integrated; if you got one of those you are in luck … But I’m not sure about the Intel … what model do you have? The Intel raids are “usually” software and are not supported by ESXi, or Linux or Unix for that matter.

        • I’m using a SuperMicro X10SDV-TLN4F. It looks like you are correct, I don’t believe it is supported outside of AHCI mode. So looks like I am running optimum setup. Just need to backup ESXi config.

    • Yes, you can use it to experiment. You can create a VMDK file on each drive and give that to ZFS to create RAID on top of it. I don’t know how safe it would be under production use…particularly heavy load.

    • The issue with using freenas on SATA ports is that you CANNOT pass the SATA controller ports through directly to the freenas VM … you would have to use freenas with attached VMDKs stored on a SATA VMFS datastore … and … that is … NOT safe, not for production use. You can use something called Raw Device Mapping, a bit “better”, but still… not good for production if you care about your data.

  25. Ben,

    Any thoughts on Solaris 11.3? Not sure if it supports VAAI but it seems Oracle has caught up quite a bit with their ZFS implementation.

    • Hi, Dave. I haven’t looked at Solaris 11.3. The only reason I don’t consider it is licensing… a few things I use my ZFS storage for at home are borderline business related so I’d probably have to buy a license to be legit–I believe it runs $1,000/year. For a more Solaris-type environment I’ve been using OmniOS.

    • Hi, Hans. The Megaraid series are RAID cards and aren’t technically HBAs so they are a bit risky to use with ZFS. You can set the drives up in JBOD mode but this isn’t as good. You might be able to solve that error by disabling the read-ahead cache–but no guarantees that it will work–and if it does you may run into other stability issues. Probably the better thing to look into is see if it’s possible to crossflash it to a 9211 in IT mode… if you can it may work better in that configuration.

      Hope that helps.

  26. I should crossflash an LSI MegaRAID SAS 9240-8i to a 9211 (HBA, in IT mode)? And I could download the 9211 firmware version 20 from the Avago website? Is this correct?

  27. Great write up! Helps tremendously. I am still working on getting mine setup, but I noticed a possible typo “Value=/usr/swap” and in the screen shot you show “/usr/swap0” I believe the “0” is kind of important? :)

  28. Hello Ben.

    Thanks for sharing, it is helpful, thanks also to those who ask.

    Separating LAN traffic and storage traffic is a good decision.

    Sorry for the question I ask.

    Why is the FreeNAS Storage Network IP ( not on the same network segment as the VMware Storage Network, i.e. 10.55.0.X? Ping from FreeNAS ( to ( is unresponsive.

    My HP ProLiant has 4 ethernet cards

    There are two networks:
       1) LAN VMware management OK (physical NIC)
       2) STORAGE: VMware IP ( and FreeNas (

    Create a virtual storage network with jumbo frames enabled OK (switch 1)
    VMware Storage Network ip is OK (Kernel, switch 1)
    FreeNAS Storage Network IP is ???

    • Hi, Pedro. I set up my storage network as a /16. If you set the netmask to on both VMware and FreeNAS then your subnet’s address range is from to On my storage network I like to give all my storage servers a 10.55.1.x address and all my hypervisors a 10.55.0.x address. This is just a personal preference; there’s no reason you couldn’t have it on a /24 ( with one on and the other on Hope that helps.

  29. Hi Ben,
    First of all thanks for all wonderful guides. I found your blog after searching about Hp microserver gen8. I need some help because the more i read the harder it gets to decide :)

    Briefly, i need a small, low-noise home server with two VMs on it: one for NAS and one for LAMP (Debian/Ubuntu).

    So my first idea was to start with minimum configuration of Hp microserver with 8Gb ram, Xeon E3-1220L v2, 2x 2TB WD reds and 1x160GB SSD.

    The setup: boot ESXi from USB, install FreeNas and Debian onto SSD drive attached to the ODD port. Make ZFS pools with FreeNas and share them back to the Lamp server using NFS.

    So far so good, but then i read that FreeNAS needs 8GB RAM and i need another SSD for SLOG. Can i build my server with this configuration? I’m not expecting heavy loads. HP Microserver or Supermicro?


    • Hi, Meneldor. FreeNAS officially needs 8GB of memory, so you would probably want to up your memory to at least 12GB (to leave 4GB for your other VM) or use OmniOS instead of FreeNAS. (I’ve run OmniOS+Napp-It with 4GB with no trouble.)

      On HP vs Supermicro it depends on what’s important to you… The HP Microserver is great, lately I’ve been preferring Supermicro for the following reasons:

      – It’s more widespread, especially in the ZFS and storage community: almost every storage vendor uses Supermicro hardware, and there’s a large number of FreeNAS and OmniOS users running on it.
      – IPMI features like KVM-over-IP don’t require a license on SM.
      – Supermicro provides their firmware updates for free–with HP you have to be under a support contract.
      – You can use a DIY build to customize it–Supermicro keeps pretty standard sizes on most of their hardware, so chances are 5 years down the line if you wanted to upgrade to a new motherboard you could use your existing case. The HP chassis has custom cutouts for the ports so it will probably only ever fit the motherboard that comes with it.
      – On the HP Gen8 Microserver there’s an issue where the IBM M1015 can’t be passed as a storage adapter in VMware using VT-d.
      – The Supermicro Mini Tower chassis is a little quieter than the Gen8 Microserver chassis.
      – The motherboard can go to 64GB or 128GB on the mITX SM boards vs only 16GB on the Gen8 Microserver.
      – More options on SM–you can get four networking ports if you want.
      – The Gen8 Microserver is a few CPU generations old…

      The main reason I’d pick HP is if you already have HP and just want to keep everything under one vendor, or if you need something like a same-day on-site support contract (although there are plenty of SM vendors that will provide such). I generally prefer to buy double the hardware than pay for onsite support, but that’s just me.

      If you decide to go Supermicro you have quite a few options. You can buy a pre-built server like or you can build it yourself like I did. See:

      If you’re not planning on using VT-d passthrough and don’t need a lot of processing power you can get away with a CPU like the Atom C2758: to save a few dollars. If you’re just running FreeNAS and a light VM with a LAMP server this would probably be the route I’d choose. If you’re doing heavy I/O you’d want to be using VT-d passthrough.

      Let me know if I can be of further assistance.

      • Thank you very much Ben,
        Im starting from scratch so i dont have any HP hardware at all.
        – If i use non vt-d setup how would I share the ZFS pools back to the Lamp server?
        – how pricy is the Supermicro equivalent compared to HP gen8?
        – if i use HP do i really need an additional card like M1015 or can use its smartArray in AHCI mode?

        Best Regards

      • Forgot to mention the price. I can buy HP gen8 with xeon 1220L v2 and 8gb ram for 490$. I cant see anything in this class which can beat that price in my country.

        • Price and availability in your country is certainly a consideration. If something were to break it would be easier to source a part from the vendor in your country instead of having it shipped internationally.

  30. Hi! thnx for an excellent guide. I’m having problems with the vmxnet3 driver tho.

    I mounted the VM tool cd and copied the drivers

    [root@freenas /boot/modules]# ls
    arcsas.ko geom_raid5.ko vboxdrv.ko vboxnetflt.ko vmxnet.ko
    fuse.ko linker.hints vboxnetadp.ko vmmemctl.ko vmxnet3.ko

    Added tunables

    but they don’t load.

    the hardware is there
    pciconf -lv

    none2@pci0:11:0:0: class=0x020000 card=0x07b015ad chip=0x07b015ad rev=0x01
    vendor = 'VMware'
    device = 'VMXNET3 Ethernet Controller'
    class = network
    subclass = ethernet
    none3@pci0:19:0:0: class=0x020000 card=0x07b015ad chip=0x07b015ad rev=0x01
    vendor = 'VMware'
    device = 'VMXNET3 Ethernet Controller'
    class = network
    subclass = ethernet

    Any ideas?
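    For reference, loading modules from /boot/modules at boot is normally done with loader tunables equivalent to these /boot/loader.conf lines, following the usual FreeBSD `<module>_load` convention (a sketch matching the .ko files listed above; not a confirmed fix for the problem described here):

```
vmxnet3_load="YES"
vmxnet_load="YES"
vmmemctl_load="YES"
```

    In the FreeNAS GUI these would be entered as Tunables of type “Loader”, one per variable, with the value set to YES.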

  31. First of all Ben, Thanks for an excellent tutorial for getting started with Freenas and ESXi.

    I finally got my freenas up and running, ran into an issue and wondering if I could get some further guidance.

    HP microserver Gen8
    E3-1265L 2.40GHz processor (VT-d capable)
    4x 3TB WD Red (RDM/.vmdk mounted on Freenas VM),
    128GB ssd (esxi datastore), 32GB usb flash drive (hypervisor)
    12GB ECC RAM

    Problem: CIFS write speed degrades for large files. When copying files greater than 2 GB (from a Win7 host over a gigabit cable directly connected to FreeNAS) the transfer speed gradually drops from 120 MBps to 13 MBps and eventually times out with an error message: “There is a problem accessing \\freenas\share. Make sure you are connected to the network and try again”.

    CPU Utilization is below 40% while copying file.
    Memory usage is about 6GB out of 8GB dedicated to freenas.
    TSO offloading is disabled.
    Swap space is 14GB (unused)
    MTU 1500

    ARC Size: 21.81% 1.46 GiB
    Target Size: (Adaptive) 100.00% 6.71 GiB
    Min Size (Hard Limit): 12.50% 858.87 MiB
    Max Size (High Water): 8:1 6.71 GiB

    While this is happening with CIFS, the NFS share (mounted as a datastore on ESXi), on the other hand, is able to cope with large files without any issue. Copying 2.7GB takes about 10 mins.

    I suspect there is something amiss in the CIFS configuration. I’ve played with server minimum and maximum protocol settings, currently both set to SMB2, without much success. I’ve also tried adding Log(1GB) & Cache(8GB) vmdks mounted on ssd but still no joy.

    It seems like the ARC buffer is filling up and is not being flushed in a timely manner. Once the ARC table gets flushed after 10 mins, I’m again able to copy the large file once more (still to no joy of course). Why would CIFS cause an issue with ARC but NFS won’t? I fail to understand that. I’m happy to dive deeper and look into logs and traces but need guidance on what to look at and how to go about it. I’d appreciate any help in this regard.

    Thanks in advance.

    • Interesting … I wonder if this has something to do with RDM or VMDK mounting. I always passed disks directly to freenas via VT-d on a SAS HBA controller. CIFS, until FreeNAS 10 gets released, is a single-threaded process … however, you have good single-thread performance on that CPU. Mind you, I have used CIFS on an Atom processor and it was not that bad. I’m suspecting that for some reason the data won’t write to disk from ARC because of some weird bottleneck at the hard drives? It would be really nice to test if you had a SAS HBA … maybe Ben has more experience with freenas than me and can provide some input to troubleshoot.

      maybe try something like here … but with a bigger test file size, like 30G for example ( increase the count of the DD command ) ? that would eliminate any issues with CIFS, etc.

      • Sorry, just noticed you said NFS works fine. Mind you, because you do not have an SSD ZIL for NFS, it’s going to be slow as hell. I would try a test directly on the ZFS filesystem anyway … just to see how many MB/s you are getting directly on the dataset versus going through CIFS or NFS. I’m assuming the 8GB RAM is LOCKED to the freenas VM? Or not? Because 12GB … is kind of low if you want to run any VMs on there. I ran with 16GB and it was too low because I had to lock down 8GB to freenas out of the 16GB. If RAM is low, ESXi will start using the swap (the internal freenas swap won’t be used because ESXi handles the swap as well, outside of freenas).

        • Thanks for the pointers Reqlez & Ben!

          Yes, the Freenas VM has a dedicated 8GB. I do intend to add another firewall VM later on once I’m happy with freenas performance, but as of now it’s the sole VM on ESXi.
          As for getting a SAS HBA, I’m returning the faulty M1015 as it wasn’t detected by the system BIOS. I’ve ordered an LSI 9210-8i instead and hope the seller dispatches it quickly. :)
          It’s a home lab server and not intended for any serious business. I’ll add another 8GB RAM if that’s what’s needed.

          And the dd results are here.

          [root@freenas] /mnt/nas-storage/nfs_vmware# dd if=/dev/zero of=test.img bs=2048k count=50k
          51200+0 records in
          51200+0 records out
          107374182400 bytes transferred in 167.012341 secs (642911666 bytes/sec)

          [root@freenas] /mnt/nas-storage/home/Movies# dd if=/dev/zero of=test.img bs=2048k count=50k
          51200+0 records in
          51200+0 records out
          107374182400 bytes transferred in 161.783835 secs (663689190 bytes/sec)

          I take it this eliminates the hard disks being the bottleneck, as the throughput is well over 600MBps.
          I didn’t use /dev/random for the test as I gather that itself is a bottleneck.
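          As a unit sanity check (my arithmetic, not part of the original comment): dd reports bytes per second, which converts to MiB/s like this:

```shell
# Convert dd's bytes/sec figures from the two runs above to MiB/s
echo "nfs_vmware: $((642911666 / 1048576)) MiB/s"   # ~613 MiB/s
echo "Movies:     $((663689190 / 1048576)) MiB/s"   # ~632 MiB/s
```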

          Disabling sync on CIFS did the trick! Thanks for that Ben!

          It was set to standard, but changing it to disabled resulted in files getting transferred.
          The write speed drops down to 26MBps gradually, with no sudden drops thereafter.
          For CIFS datasets, I’m now able to write/transfer 5GB+ files without any interruption.

          The output of “zfs get all tank/cifsdataset” can still be viewed here

          So what does that prove? Do i have an issue with SSD or is it L2ARC/LOG contention?
          I’m using Kingston 120GB SSD 2.5-inch V300 SATA which i now realize has pretty rubbish speed for being cheap.
          I have not configured L2ARC or LOG at the moment. How do i go about improving things from here?

          Maybe add more RAM for ARC and buy another SSD for ZIL?

          On a separate note, how does ARC optimization work in Freenas?
          By the looks of the FreeNAS report, it seems as if the used ARC size is flushed occasionally rather than frequently.

          • Okay. If you have compression enabled, the ZERO test you did, instead of RANDOM, just tests your RAM/CPU speed and not the hard drives. I would do random. Maybe do the tests with both sync standard and sync disabled? Just for reference. I’ll review the rest this evening.

          • Here we go again.

            With sync disabled on cifs

            [root@freenas] /mnt/nas-storage/home# dd if=/dev/random of=test.img bs=2048k count=20k
            20480+0 records in
            20480+0 records out
            42949672960 bytes transferred in 1929.220762 secs (22262705 bytes/sec) ~ 178Mbps

            with sync set to standard on cifs

            [root@freenas] /mnt/nas-storage/home# dd if=/dev/random of=test.img bs=2048k count=20k
            20480+0 records in
            20480+0 records out
            42949672960 bytes transferred in 4008.510371 secs (10714622 bytes/sec) ~ 85Mbps
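For reference, the approximate Mbps figures noted next to the dd results work out like this:

```shell
# bytes/sec * 8 bits, divided by 1,000,000, gives megabits/sec.
echo $((22262705 * 8 / 1000000))   # sync=disabled run: prints 178
echo $((10714622 * 8 / 1000000))   # sync=standard run: prints 85
```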

          • So disabling sync tells me it’s probably your ZIL/SLOG that’s slowing you down. The Kingston 120GB SSD 2.5-inch V300 SATA might be your bottleneck, since it isn’t designed to be a ZIL. You might try something like an Intel DC S3500 or DC S3700 (it was a while ago that I did that comparison, so there are probably newer equivalents to look at as well). Also, HGST sells drives specifically designed for ZIL, like the HGST Ultrastar SSD800MH.B, but they’re a bit pricey.

            I should mention that I don’t know of anyone (including myself) who was ever able to get the IBM M1015 working as a passthrough (VT-d) device with VMware on the HP Gen8 Microserver. That said, I never ran into issues running on top of VMDKs on the HP Microserver like you’re doing…but now I run exclusively Supermicro hardware for servers.

            For you I think getting a better SSD for ZIL should be your first priority to get your write speeds performing better, and if you have issues with read performance maybe more memory for ARC.

            For the small amount of RAM you are using I suggest not using L2ARC; just one SSD for LOG/ZIL is fine. On my main home servers I don’t use L2ARC anymore. For servers at work we also find it fairly inexpensive to equip them with 256GB or 512GB of RAM–the thing with using an SSD for L2ARC is that it uses up a slot in your chassis that could hold more spindles, and those slots are not cheap.

          • Thanks Ben!

            I’ve just ordered a Samsung EVO 250GB. It currently has impressive random read/write IOPS stats for its price tag.
            It is nowhere close to the HGST, but still a 5x improvement over my cheap Kingston SSD.

            Just to be sure: I will use this as the ESXi datastore hosting the FreeNAS VM, so it isn’t going to be a “dedicated” ZIL drive.
            Since the FreeNAS VM itself is “living” on the SSD, there should be very little overhead, if any, compared to using a dedicated ZIL drive.

            I don’t have any problems with read speeds at all. The RAM is still underutilized as per the pretty graphs.

            Are there any significant performance gains with a SAS HBA using VT-d? I’m tempted to think no. The only advantage it provides is SMART statistics for the drives right?


          • I’ve done the same thing (shared an SSD for an ESXi datastore to host FreeNAS / OmniOS and also use as a ZIL) and didn’t run into any issues with it. One thing I would be concerned about with the Samsung EVO is that I don’t know if it has power loss protection to prevent data corruption on power loss. You may be able to mitigate that risk with a battery backup–but that’s the reason I use the Intel DC series.

            On not passing an HBA using VT-d: you have the overhead of running ZFS on top of VMDKs, which isn’t going to make a big difference on a modern Xeon processor–the performance would probably be nearly identical, not really noticeable as long as the CPU can keep up. Other than SMART data you also have uptime issues–for example, if a drive fails, VMware will refuse to boot the FreeNAS VM until you remove the dead drive from the VM. I have heard of hypothetical issues with heavy I/O but haven’t actually run into that myself.

          • By the way… the reason I use the Intel DC series is because I got SSDs for ZIL before (Crucial) and they made writes so slow on my NFS datastore that I just went nuts! Got the DC series–fast as hell.

            It’s not just IOPS and MB/s… it’s how the SSD handles lots of small writes, etc.

          • Cancelled my order of Samsung EVO. Here is the today’s performance/price(UK) table for SSD with Power Loss Prevention.

            Intel s3500
            model Random 4kB Write Sequential Reads Sequential Writes Price
            120GB 4600 IOPS 445 MB/s 135 MB/s 94£
            160GB 7500 IOPS 475 MB/s 175 MB/s 122£
            240GB 7500 IOPS 500 MB/s 260 MB/s 153£

            Intel s3700
            model Random 4kB Write Sequential Reads Sequential Writes Price
            100GB 19000 IOPS 500 MB/s 200 MB/s 183£
            200GB 32000 IOPS 500 MB/s 365 MB/s 306£

            Intel 730
            model Random 4kB Write Sequential Reads Sequential Writes Price
            240GB 56000 IOPS 550 MB/s 270 MB/s 127£

            Samsung SM863
            model Random 4kB Write Sequential Reads Sequential Writes Price
            120GB 12000 IOPS 500 MB/s 460 MB/s 109£
            240GB 20000 IOPS 520 MB/s 485 MB/s 141£

            I do need a 160GB+ datastore. By the looks of the above table it makes sense to go for either
            the Intel 730 series 240GB or the Samsung SM863 240GB. The Intel 730 has the same SSD controller as the S3x00 series.
            Still a bit apprehensive about the Samsung SM863, but its statistics are making me drool over it.
            The random write performance approaches that of the S3700, and sequential write is twice as good for half the price.
            I think I’m gonna order this.

            P.S. I’ve not listed all models in each series.

          • Glad to hear OmniOS is working well for you. OmniOS is a very good platform. I do manage a few OmniOS boxes and you can really squeeze a lot out of the hardware and I’ve had no issues with them. Right now my main home ZFS server is running on FreeNAS, but if I was to start from scratch I might redo it on OmniOS again, it’s hard to say–I really like both platforms–for anything critical like a business I prefer the stability of OmniOS with Napp-It’s control panel. For home use I really miss some of the very simple services that Napp-It provides like the MediaTomb DLNA service–you can get DLNA services running in FreeNAS but you have to run it in a jail and that’s just another thing to maintain.

          • Like Reqlez mentioned, you’re not necessarily looking for IOPS or MB/s (although they matter somewhat); what you want in a ZIL device is the lowest write latency possible. Also, in my testing I found the DC S3500 and S3700 well outperformed other brands that claimed better IOPS and throughput than the Intel drives.

          • Oh bummer! By the looks of the following comparison, where the Samsung SM863 beat the Intel DC S3500 in almost all statistics:


            It hasn’t done so well as a ZIL. Pretty much the same 26MB/s write speed. Doh!

            There is something more to a ZIL SSD than just lower random write latency and higher random write IOPS.
            Probably Intel has a better ‘random concurrent read/write’ algorithm in their SSD controller than anyone else.

            Intel 730 added to cart!

          • On further prolonged testing, it appears that everything is working fine with Samsung SM863.

            On cifs, with sync=disabled

            Using Fastcopy, I was able to copy 55GB of data

            TotalWrite = 54,522 MB
            TotalFiles = 241 (33)
            TotalTime = 42:19
            TransRate = 21.5 MB/s

            21.5MB/s equates to about 172Mbps, which seems pretty decent, I think.

            The iostats show throughput on
            2xwd red 3TB drives to be roughly 7500KBps i.e. 62 Mbps
            2xwd red 3TB drives to be roughly 9300KBps i.e. 76 Mbps

            The difference is because two bays are 3Gbps SATA while the other two are 6Gbps.

            The above iostats are pretty much in line with the dd write test (dd if=/dev/random of=test.img bs=2048k count=20k)

            I think Samsung does give you the best speed for the buck. =)

          • I might be misunderstanding… but, if sync=disabled you aren’t using the ZIL Log device, writes are just going direct to the spinners. That is probably the throughput of all your spinners combined. If you want to test the ZIL/SLOG on your Samsung you want sync=standard or sync=always.
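One way to confirm whether the SLOG is actually being hit during a test is to watch per-vdev activity while the copy runs. This is just a sketch–the pool and dataset names are placeholders:

```shell
zfs set sync=always tank/yourdataset    # force every write through the ZIL
zpool iostat -v tank 1                  # the "logs" section shows SLOG writes
zfs set sync=standard tank/yourdataset  # restore the default afterwards
```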

          • Ben! Just to get an idea, what kind of write throughput have you been getting on the Intel SSDs? Your SSD benchmark article doesn’t mention any throughput figures, only IOPS.

          • Obviously the DC S3700 100GB version is slower on throughput than the larger versions–if I had a choice I’d get the 200GB or 400GB, but I got these cheap on eBay and I don’t really need anything faster.

          • So pretty much the same statistics with Intel 730.

            On an HP Microserver Gen8 with 8GB RAM: raidz of 4x WD 3TB drives mounted as VMDKs in ESXi 6.0, FreeNAS 9.3 running in a VM (2 vCPU) hosted on either the 240GB Intel 730 SSD or the 240GB Samsung SM863 datastore:

            21.2MB/s is the best you can get on CIFS with sync=disabled,
            14.7MB/s is the best you can get on CIFS with sync=standard

            NFS write speeds are also the same as above.

            Test was carried out by copying 50GB of data using Fastcopy client.

            Now just waiting for LSI HBA to arrive. My gut feeling is with HBA passthrough, I’ll get further improvement in writing speeds.

          • It’s decided: I’m keeping the Intel 730 and returning the Samsung SM863.

            Performance aside, it turns out Intel and LSI go hand in hand. Someone on the web mentioned the Intel SSD controller was designed by LSI, and that LSI HBA/RAID controllers work better with Intel drives than with Samsung or others.

          • Why would you get a 730 when Ben recommends a 3500 or 3700? Personally I am using a 3710, which gave me better results than the 3700.

          • Because the 3710 is an enterprise drive and also the successor to the 3700. Not sure about power loss protection, latency or endurance on the 730. Let us know how your tests go.

          • The Intel 730, being a consumer-grade SSD, lacks the endurance of an enterprise SSD. With that said, it has the same SSD controller as the Intel S3x00 series, higher write throughput (even more than the S3710), and built-in power loss protection, which surely is the icing on top. Less endurance at the cost of more speed–I’ll buy it, because five years down the road I’m likely to be building another home storage device that uses faster interfaces and disk arrays (SAS and SATA are not designed for microsecond or lower latencies).

    • Hi, Wasif. You’re welcome. A couple of things could cause this, but the first thing I’d do is temporarily disable sync on the CIFS dataset to see if that makes a difference, by running "zfs set sync=disabled tank/yourcifsdataset". This prevents ZFS from using the on-disk ZIL when writing data; writes are instead buffered in RAM and flushed asynchronously. If you get better performance after changing that setting, you may not have a fast enough SSD, or perhaps the L2ARC and LOG are having contention issues being on the same device. If on the other hand your performance is unchanged, at least we know it’s not the ZIL and we can start looking at other things, like the Samba service.

      Have you tried other clients to make sure it’s not a client-side issue?

      Can you provide the output of “zfs get all tank/yourcifsdataset”?


      • Hi Ben. I don’t think he is using a SLOG ZIL, just the on-disk ZIL. Also, CIFS writes are usually not sync unless he specifically set the sync=always property on the ZFS dataset–contrary to NFS, where sync=standard makes it use the ZIL on every write unless you set sync=disabled. Definitely looks like some kind of contention issue though. Correct me if I’m wrong. I mean, there are some sync writes in play while using CIFS, like filesystem changes, but that’s pretty minuscule unless he is writing 100,000 4KB files into the dataset; in this case he is writing one big file, so ZIL usage will be pretty low in that scenario with CIFS, assuming sync is set to standard. I’m leaning more towards memory not being locked in ESXi for the FreeNAS VM, or maybe something with RDM or VMDK passthrough? By the way, ESXi writes everything as sync as far as I know, so maybe that’s the issue: a VMDK is being used for the dataset and it’s placed on a non-cache RAID controller or directly on drives–that would do it?

        • I believe VMware honors sync requests on VMDKs that are on local storage, so that shouldn’t be an issue.

          Even though it’s not best practice, if you create a large VMDK on each physical drive and give those to the FreeNAS VM, it should have more or less the same performance as raw spindles. The VMDK overhead is very small these days; I imagine any Xeon-class processor could drive hundreds of VMDKs with heavy I/O while losing only a few percentage points of performance versus an HBA.

      • Wait a sec… I’m contradicting myself here, lol, but this makes sense! ESXi does not know whether a write is supposed to be sync or not when it’s coming from a VMDK file, because it has no knowledge of sync vs. non-sync writes. So if a VMDK resides directly on a single disk, ESXi would make every write sync? And if there is nothing to cache the sync write in memory, like a RAID controller with BBU or a SLOG SSD directly attached to FreeNAS via VT-d, then there is no way to make writes async? Now if the disk were directly attached to FreeNAS via VT-d on a SAS controller, then FreeNAS knows what is sync and what is async, and would only request sync from the hard drive when needed? Does this make sense?

        • Right, the issue has nothing to do with FreeNAS or ZFS–it’s the way VMware handles writes. VMware doesn’t know which guest writes are sync, so it treats every NFS write as sync. Even if FreeNAS is using VT-d, because it’s getting the sync command from VMware on every write, ZFS is going to honor that and treat it as such. So the general rule is you have to have a fast ZIL/SLOG device if you’re going to run VMware on ZFS. The only way around it is to set sync=disabled on the dataset. This is one area where bhyve is an improvement over VMware: since bhyve is aware of the guest VM’s sync requests, it will wait to request a sync until the guest does.

          • I should mention you can also use iSCSI instead of NFS, but then I believe VMware does not honor sync requests at all–even sync writes from the guest are treated as async (unless you set sync=always on the iSCSI dataset, but then you’re back in the situation you were in with NFS). So with iSCSI you have a rollback risk where you can lose some data during a power failure or crash, but I don’t think you can end up with an inconsistent disk (other than what you’d get from a normal unexpected power failure) because ZFS guarantees write order. I have found that iSCSI isn’t as good a solution as NFS on FreeNAS. Especially on FreeNAS 9.3 iSCSI doesn’t seem robust yet, but also iSCSI is just dumb block storage and it will fragment the drives in a hurry–I think just about everyone that uses iSCSI learns they need to do mirrors instead of RAID-Z[x] to maintain performance.

          • Hey Ben,

            I tested my setup with Xpenology DSM instead, giving the Xpenology DSM VM four VMDKs in a software RAID 5 configuration. I’m hitting 95MB/s constant write speeds on it. This shows the write-speed bottleneck I’ve been witnessing in FreeNAS has nothing to do with disk speed limitations. On the other hand, iperf tests show the FreeNAS NIC hitting 645Mbps on a 1Gbps link. I’m tempted to think FreeNAS has not been tuned and refined enough to utilise VMDK disks at full throttle. I’m yet to receive my LSI HBA, so keeping my fingers crossed.


          • I’m not surprised, since the whole FreeNAS forum says don’t use VMDKs. Then again, maybe Xpenology handles the write requests differently–which would explain the performance increase you’re seeing with it.

          • Yes. It seems as if the FreeNAS community is not interested in supporting VMDKs (or physical RDM) on ESXi at acceptable performance levels, and continues to uphold strict requirements for greater control and visibility via raw physical disks on an HBA.

          • I agree… I should have mentioned that when I ran on VMDKs I was using OmniOS, which works much better than FreeNAS in that configuration. On my C2750, OmniOS performed very well on VMDKs, while FreeNAS couldn’t even max out a gigabit connection in the same configuration.

            What may work even better with VMDKs is a Linux distro that supports ZFS, like Debian or the upcoming Ubuntu 16.04 LTS, because they can use the paravirtual disk driver, which is even more efficient than the SCSI driver.

          • Finally installed the LSI SAS 9211-8i (IT-mode firmware), passed the card through to the FreeNAS VM, and CIFS write speeds with sync=standard are up to 95MB/s. The ZIL is not being utilised at all, although I’ve provisioned a ZIL mirror.

            Moral of the story, HBA is Viagra and VMDKs is constipation for Freenas.

  32. Hi Ben,

    Thanks for your guide. I intend to set up my Freenas VM very soon (I have ESXi ready to go!). Forgive my ignorance, but what is the benefit of the ESXi internal network and NFS or iSCSI share (i.e. the second NIC)? Does it let one mount the share as storage directly in another VM? Is it faster?


    • Hi, Amro. You’re welcome. The separate internal network is just a best practice to separate storage traffic from the rest of your network. It can have some performance benefits and is also better for security.

  33. Hi Ben! Thank you very much for this informative tutorial! This might be the solution we are looking for with regards to the issues we are encountering with FreeNAS.

    To give you an overview: our school recently bought an HP ProLiant ML150 G9 server with 16GB ECC RAM and 3 x 3TB HDDs. We installed FreeNAS on a USB thumbdrive and were able to boot it up. Unfortunately, the network interface card (Broadcom BCM5717) was not detected, and based on the forums I have read so far, most Broadcom NICs don’t play well with FreeNAS. I figured that running FreeNAS on VMware might solve this problem.

    Also, using the procedures in this tutorial, would it still be possible to have the FreeNAS OS run on a thumbdrive?

    Apologies, I’m still a newbie with regards to NAS and virtualization, but I have been able to dabble with both using regular desktops.

    Thank you very much!

    • Hi, Darwin. You’re welcome. If your goal is to setup a storage server and you don’t need virtualization I’d steer you away from VMware–especially if this is a production system. You have a few options:

      It looks like the BCM5717 is compatible with OmniOS, so if you want a robust ZFS server on bare metal, that might be preferable to running FreeNAS under VMware. Napp-It is a great control panel for OmniOS, and there are some decent guides to get you started on OmniOS as well.

      Another option is you could purchase a FreeNAS compatible NIC. The best NIC brand for FreeNAS is Chelsio in my opinion.

      If you decide to run it under VMware, you might be able to pass a USB drive to a FreeNAS VM and boot off of it, but I’m not sure that would work reliably, so I probably wouldn’t go that route. I think it would be better to buy an extra HDD, install VMware on that drive, and then also use it as a local datastore for VMware so you can boot FreeNAS off of it. It’s possible to create VMDKs on each of your data drives like Wasif Basharat is doing, but for a virtual FreeNAS best practice is to get a dedicated HBA like the IBM M1015 for the data drives and pass the controller to FreeNAS directly via VT-d. By the time you do all that, though, it would have been easier and cheaper to get a new NIC or use OmniOS.

      I’m not sure what your requirements are, but you may want to consider getting an SSD for a SLOG/ZIL.

      Hope that’s helpful.

      • Hi Ben,

        Good day to you! Hope everything is alright and well.

        We went ahead and set up VMware on our machine, since we will be virtualizing other systems as well. As you suggested, we got another hard drive to install VMware on. After numerous BIOS changes as well as countless reboots I was able to ‘barely’ run FreeNAS virtually. I followed most of your procedures except the part where the RAID card is configured for passthrough. What we have is an HP Dynamic Smart Array B140i controller, and in the ‘Mark Devices for Passthrough’ step other devices such as the HP iLO are linked with it, so selecting the RAID controller practically selects the others too. Rebooting after that results in a system halt at the ‘dma_mapper_iommu loaded successfully’ part of ESXi boot.

        Anyway, I disabled the RAID controller and enabled it as an AHCI SATA controller instead. VMware vSphere identified it as an ‘Intel Wellsburg sSATA RAID Controller’, after which FreeNAS was able to see all the drives. I then added the drives as RDMs, but I was in a dilemma over whether to use Physical RDM or Virtual RDM, since I am not really sure which one is better or preferred. So I tried out both. With Physical RDMs, FreeNAS did not list the drives in ‘Volume Manager’; in ‘View Disks’ the drives are listed, but with 0 (zero) disk space. Maybe someone can explain to me why this is so, thanks in advance. With Virtual RDMs, FreeNAS listed the 3 drives and I was able to create a ZFS volume.

        Since I am having fun with all this, I wanted to try out OmniOS with Napp-It as well. Followed the setup instructions in their site and the installation went through without any errors. I was also able to configure the disks in ZFS whether they were Physical RDM or Virtual.

        Having gone through all this, I wanted to ask you and everyone for your opinion. Although one of the options would still be getting an HBA card (one that works well with HP servers), I want to find out which setup is preferred.

        Thank you!

        • So, what I know about RDM is that it can work well on some servers (I had pretty good luck with it on the HP Microserver N40L) but it /can/ cause data corruption under load on others. Instead of RDMs, I generally think it’s safer to add each of your drives as VMware volumes, then edit your FreeNAS VM and add a virtual hard drive located on each volume to make up your pool. But far, far better is just to get an HBA and give it to FreeNAS.

          • Hi Ben, unfortunately I might not be able to go with having the drives as VMDKs, since the drives are more than 2TB in capacity. So the only options I have at the moment are either Virtual RDM or Physical RDM. Not really sure which one to use, though.


  34. Hello Ben, the tutorial is put together very well and I have a working FreeNAS install inside VMware. The only problem is that the second NIC, “Storage”, isn’t seeing any traffic and isn’t pingable. I have vmx3f0 set up as the management interface with DHCP enabled, and I can access the web GUI no issue. vmx3f1, however, isn’t responding to anything. It’s set to a manual IP in VMware, and is visible and set the same within FreeNAS.

    Where would I start?

    • Hi, Mark. You need to set up the management and storage networks on different subnets–the way you have it now, both FreeNAS and VMware are attempting to route the storage network through the default route (probably the management interface), but neither one is listening on its storage-network IP on the management interface, so they won’t be able to communicate.

      In your case I would do this: leave your management interface as it is, and change your storage network to something like 10.0.0.1 (netmask for VMware and 10.0.0.2 (netmask for FreeNAS. Then they should be able to ping each other on their 10.0.0.x addresses. Note that the storage network is completely segregated from your management interface, with no routing between management and storage, so FreeNAS and VMware will be able to communicate with each other on those IPs, but you won’t be able to access them from computers on your management network. This is on purpose.

  35. Great post Ben.

    I am trying this on a smaller scale and was wondering if there is a cheaper SATA controller I could use. I am planning on having 3 HDDs in RAID-Z1, so I would need a controller with at most 4 ports. Are SYBE controllers compatible with passthrough?

    Best regards,


  36. “I usually over-provision down to 8GB.” Do you mean you just have 8GB of space remaining on your drive and everything else is “Reserved”?

    • Hi, Pat. Yes, for the ZIL I over-provision my 100GB SSD so that it’s only 8GB. This improves performance slightly, and 8GB is overkill for a ZIL anyway–you don’t need much space on a SLOG device. Oracle recommends up to 1/2 the amount of RAM you have, which is very conservative; the maximum amount of data you could write in 5 seconds is probably the maximum size you need.
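That 5-second rule of thumb is easy to turn into a back-of-the-envelope number. A sketch, assuming the default 5-second ZFS transaction-group interval and a fully saturated 10GbE link as the worst case:

```shell
link_bytes_per_sec=$((10 * 1000 * 1000 * 1000 / 8))   # 10 GbE ~= 1.25 GB/s
txg_seconds=5                                          # default txg interval
need=$((link_bytes_per_sec * txg_seconds / 1024 / 1024 / 1024))
echo "${need} GiB"   # prints "5 GiB" (about 5.8 before integer truncation)
```

So even on 10GbE, roughly 6GiB is the most that could ever be pending in the SLOG at once, which is why an 8GB partition is comfortably overkill.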

      • Thanks Ben! I’m glad I didn’t shell out for a large SSD! I ended up buying an 80GB drive. One other question: from your instructions for creating a swapfile, you create it under /usr, which from my understanding means it resides on the VMware virtual disk of the FreeNAS installation? Would this affect performance?

        • That’s correct; obviously it will depend on what kind of disks you have in VMware. But you shouldn’t be swapping heavily anyway–it’s best to have enough memory to not need the swap, but it’s there just in case.

  37. Great how-to! Thanks for sharing.
    However, on FreeNAS 9.10 the swapfile via rc.conf does not work anymore. This is due to FreeBSD 10, which has a different way of creating swap space.
    Should you have any tips for doing this through the web UI, I’d be interested to know.

      • Hi, Ben – I see you’re keeping this outstanding tutorial up-to-date. Thanks, and Keep Up The Good Work!

        I have upgraded from 9.3 to 9.10 on my testbed system, which I had configured from your tutorial last summer, with a swap file at /usr/swap0 and related tunables.

        Regarding the swap file: I can’t get your suggested swap file settings for 9.10 to work… I tried the init/shutdown command according to your instructions above without success.

        I also tried the following shell script as a ‘Post-Init’ init/shutdown task of type ‘Script’. It doesn’t work, either, though it DOES work when I run it from the command line.

        echo "md99 none swap sw,file=/usr/swap0,late 0 0" >> /etc/fstab
        swapon -aL

        I’m stumped… do you have any ideas?

        • FWIW, I was finally able to get this swap script to execute; I had to delete all defined startup/shutdown tasks and recreate them from scratch. Until I did that, FreeNAS would only execute the first defined script. Odd…

          I also found that 9.10 seems to stop the NFS service before it executes shutdown task scripts–at least, that’s my guess. A VM shutdown script that ran on 9.3 stopped working on 9.10, finding no VMs on my NFS datastore. I created a bug ticket for this.

  38. Hi Ben,
    thank you for this great tutorial. I am trying to set up a configuration very similar to yours and have some questions about the FreeNAS network config.

    You wrote under
    “7. Create the FreeNAS VM.
    On Networking give it three adapters, the 1st NIC should be assigned to the VM Network, 2nd NIC to the Storage network.”

    How does the third NIC get configured? I did not find that.

    As I understand it, vmx3f0 is a network port under “VM Network” in ESXi and is the first network interface in FreeNAS, named “Management”. The IP address it gets from DHCP should be on the network, right?

    • Hi, Tamas. That should have been two adapters (I just fixed it). FreeNAS 9.3 didn’t come with VMXNET3 drivers, so I set up a 3rd E1000 adapter to use until I could get the network drivers installed.

  39. Hello Ben,

    Another contribution if I may…
    I just solved (at last) an issue I was having with my plugins not showing on FreeNAS 9.10.

    It seems the wrong vmxnet driver is loaded at boot (vmxnet.ko instead of vmxnet3.ko). Hence it still seems necessary to install and load the proper VMXNET3 driver manually and disable the embedded vmxnet driver. More details are in the bug ticket I opened.

  40. Hi guys,

    So far I have had to reinstall ESXi a few times: after setting a private IP here that is outside my DHCP server’s range, I could no longer connect to ESXi after a reboot, although it keeps running without any issue and I can ping it.

    The only solution I found was to check “Obtain IP settings automatically” instead of entering a private IP; that way I can reboot without getting locked out. I did notice that after reboot it still does not get an IP from DHCP and shows something like 192.169.x.x. What is the purpose of this screen, and why do I get locked out?

  41. I noticed that in the image you have after the comment “I have on numerous occasions had the Log get changed to Stripe after I set it to Log, so just double-check by clicking on the top level tank, then the volume status icon and make sure it looks like this”; it shows them “striped”…

    Is this correct or shouldn’t they be “mirrored”?

  42. I’ve never really worked with any of the VMWare products so naturally after reading your _great_ article I have a couple of questions.

    – Is there any reason not to use a USB thumb drive for the FreeNAS installation?
    – Is using vSphere replication even better than mirroring the FreeNAS boot? I am guessing this would let FreeNAS boot without any user intervention after a USB thumb drive failure. My plan is to use 2 or 3 thumb drives for the FreeNAS VM.

    My current setup is:
    – Dell Poweredge T110 II (Xeon E3-1220 v2, 24GB ECC RAM)
    – LSI SAS9211-i8 flashed to IT mode
    – 3 x WD RED NAS 3TB drives for ZFS pool
    – 16GB Sandisk Cruzer for ESXi installation
    – 3 x 16GB Sandisk Cruzer for Freenas VM (and vSphere server appliance..?)
    – 1 x Intel DC S3700 100GB SSD for SLOG (ordered, not received yet)
    – 1 x TBS6205 quad tuner DVB card (ordered, not received yet)
    – 1 x Seagate 4TB drive for non redundant temp storage
    – 2-3 x older drives for

    PCI passthrough of the LSI card seems to work, I was able to see it in FreeNAS and create a pool.
    In addition to FreeNAS am planning to run the following VMs (these will reside in the FreeNAS pool which is shared back to ESXi via iSCSI or NFS):
    – yaVDR (Ubuntu PVR distro). I am hoping to passthrough the TBS tuner card to this VM
    – Ubuntu server for VPN server and other things
    – Debian for home automation with OpenHAB
    – Windows for Blue Iris video surveillance

    Will this work? :)

    • Hi, Marz. If you can get FreeNAS inside a VM to boot off a USB drive, I don’t see why you couldn’t do it–although I’m not sure a VMware guest can boot off USB. To me it’s too complicated, and I never really tried it since I have extra HDDs. Let me know if you end up trying it. I assume you’re on the free version of ESXi, so vSphere Replication isn’t free.

  43. Hi All!
    I’m from Ukraine, and I work as a system administrator at a mid-sized IT company. My main task is integrating virtualization, and for this we use VMware. I like Ben’s guide, and I have a question.
    Does anyone use this design for non-critical production? I want to install it and try it on our new server in the data center. Would that be a good idea for me?

    Thanks, and sorry for my English. :)

  44. Hi Ben and thank you for this great article! It has been most informative.

    I am part of a small web development team (6 devs) in Greece, where we have been running VMware ESXi on an HP Microserver N40L with the following VMs:

    1. OpenLDAP as authentication backend.
    2. MySQL Server with multiple DBs (mostly WordPress)
    3. Apache as Staging Server with multiple VHOSTs.
    4. GitLab Server with some CI runners for testing.
    5. OwnCloud Server for shared storage and document management.

    We recently acquired an HP Microserver Gen8 where we want to move everything, and we have been looking into dockerizing these services while having a common storage backend–that’s how I got to your article.

    I was thinking to continue using VMWare ESXi but with the following VMs:
    1 x NFS server (FreeNAS or equivalent)
    2 x Docker Hosts

    The main usage of our server is for the GitLab service that we use for Project Versioning and CI and the OwnCloud instance for shared storage. The staging environment (apache/mysql) is only used for reference by our customers while developing the sites.

    Here is a list of hardware that we have available:
    – 1 x HP Microserver Gen8 (G1610T/2.3GHz – no VT/d)
    – 1 x HP Microserver N40L
    – 2 x 8GB ECC Memory
    – 2 x 4GB ECC Memory
    – 1 x HP Smart Array p212 Controller (no cache/battery)
    – 3 x 2TB Seagate HDD
    – 2 x 500GB Samsung 850 EVO

    After reading your article and comments, I seem to have a problem since VT-d is not supported on either server, and I can’t easily/readily find a CPU replacement for the Gen8 that will enable this (but I am still looking).

    I don’t have any experience with ZFS so I would greatly appreciate your feedback on how to setup our server storage and any suggestions/tips on how to approach this.

    • Hi, Ioannis. Thanks for the comment, I’ve also been running GitLab in a VM for a while (although I have been using the hosted Microsoft Visual Studio Team Services lately so I don’t have to worry about keeping GitLab up to date). I had the HP Microserver Gen8, and even though I put in a CPU that supported VT-d I couldn’t enable VT-d on an IBM M1015–it seems that’s an issue with the Gen8 HP Microservers. What did work well for me on the Gen8 is using Napp-It with OmniOS; I followed Gea’s all-in-one guide (linked to at the very top of this post). OmniOS seems to outperform FreeNAS and be a little more stable when you’re using vmdks. You can create one vmdk for each drive and get near the same performance as VT-d. Some say vmdks are not as stable as direct access to the drives, so you’ll want to do some heavy load testing to make sure it’s stable–I haven’t run into any stability issues with vmdks on OmniOS.
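      For what it’s worth, one way to give the storage VM a vmdk per physical disk is a physical-mode raw device mapping (RDM) created from the ESXi shell with vmkfstools. This is only a sketch — the device name and datastore path below are placeholders, and RDM under ZFS is unsupported territory, so test heavily before trusting data to it:

```shell
# List the physical disks to find the device identifier
ls /vmfs/devices/disks/
# Create a physical-mode RDM pointer vmdk for one drive
# (device name and datastore path here are placeholders)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_SERIAL \
  /vmfs/volumes/datastore1/freenas/disk1-rdm.vmdk
```

      The pointer vmdk is then attached to the storage VM as an existing disk.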

      One thing I would change on hardware is getting a good SSD for SLOG/ZIL (to cache random writes). The SSDs you have I don’t believe have power loss protection and probably don’t have low enough latency. I like the Intel DC S3700 (there are also some newer versions available that should be just as good, S3710 or something). If you can’t get a good SSD you can disable the ZIL on a ZFS data set but you’ll risk losing a few seconds of writes if you lose power or the system crashes.

    • Don’t invest in Samsung EVO at all; check my test results, previously reported in the comments.
      I went with the Intel 730 SSD and it gives you the best results for the buck. The Intel 730 is a consumer-grade SSD, while the DC S37XX are enterprise SSDs with more endurance (longer lifespan). The consumer ones should also last a lifetime.

      As for VT-d support with the LSI SAS 9211-8i card in the HP Gen8 Microserver: it works a treat.

      Here is a list of VT-d CPUs for the Microserver.
      I was lucky enough to find a cheap E3-1260L on eBay. The E3-1220L v2 is your ideal choice for low power consumption.

      The only thing I miss on the server is one additional PCI slot for graphics, but heck it is still awesome.


      Winter is coming!

    • Thank you guys for your reply. Much appreciated! Let me know if I got it right:

      1. I can use the existing hardware without VT-d, run the built-in B120i Dynamic Smart Array in AHCI mode, connect the 3 x 2TB HDDs to it, and put an Intel SSD on the ODD port for SLOG/ZIL + VM datastore. This will require putting a vmdk on each HDD and connecting it to the FreeNAS or OmniOS VM for NFS.

      2. Alternatively, I must get a VT-d enabled CPU and an additional HBA to pass through to the NFS VM.

      Do you know whether the HP Smart Array p212 Controller can be used instead of the IBM M1015 or LSI SAS 9211-8i?

      Also I’ve read somewhere that the Samsung 850 EVO and PRO have basic/partial power loss protection. Is this correct?

      • The write speed is horrible with vmdks. I was getting around 15 MB/s. With a VT-d CPU and HBA, I was getting 95 MB/s on a gigabit link. Your option 1 could be viable if you install FreeNAS on bare metal, but then again FreeNAS doesn’t have drivers to support the P212 controller in HBA mode.

        The HP Smart Array couldn’t be used when I researched it a few months ago. The M1015 or 9211-8i is the only viable option in the Microserver.

        If you are going to buy Samsung, then buy the Samsung SM863 SSD instead. It’s designed for random write speeds (which is what you need for ZIL), unlike the EVO. I guess this might convince you of the Intel 730 as well.

        • hmm… I got much better performance than you on vmdks… I was able to push 98 MB/s using CIFS over the LAN with OmniOS on vmdks (much, much slower results with FreeNAS, but not down to 15 MB/s; I think I was between 60-70 MB/s on FreeNAS). This was using a Xeon E3-1230v2 with a vmdk on an Intel DC S3700 for ZIL on the HP Microserver. I believe it had 3 x 7200RPM Seagates in RAID-Z. The specs on that SM863 are pretty nice.

  45. I’m on step 9 and have a question that may seem stupid. How do I “Mount the CD-ROM and copy the VMware install files to FreeNAS”? I tried running the commands in the console but it says no files exist. To make sure I am doing this properly, could someone give me a little more detail? This is my first time using FreeNAS. Thank you.

    • Never mind, I didn’t realize I wasn’t in the root folder. I just had to type cd / and that took me to where I needed to go.

        • Ben, I was hoping you could give me advice on a different Dell server hardware setup. It came with the PERC H700 RAID card, so I was thinking of using RAID 10 for VMs and letting FreeNAS manage my NetApp Fibre Channel disk array. The only concern I have is the best way to configure read and write cache with the 2 SSDs I purchased.

          Dell 2U Build
          PowerEdge R815
          ECC Memory
          2 SD Cards (mirrored for ESXi)
          PERC H700 / LSI 9260-16i raid card (came with server)
          6 hotswap bays
          *4 2.5″ SAS (currently raid 10 using PERC H700 for VMs)
          *2 Intel DC S3700 (connected to PERC H700 for .. don’t know was hoping ZIL and SLOG )
          NetApp Fibre Channel DiskShelf DS14MK4 (14 X 300GB at 10K rpm)

          Your insight would be truly appreciated

          • Hi, Trenton. ZFS isn’t really designed to work with RAID cards… but that’s a nice RAID card. Probably the main risk is losing/corrupting ZIL data if the RAID card doesn’t guarantee write order for some reason (not sure if that can happen on that card) but probably the risk is mitigated if you’re only running the ZIL and none of your actual data on the H700. If you want to take the risk I’d start with write cache enabled on ZIL and L2ARC, and read cache enabled only on the L2ARC (disable read cache on ZIL). The NVRAM from the H700 should provide a nice performance boost on top of the DC S3700s. The one thing I would test is physically pulling the ZIL during heavy writes (do this around 10 times) to make sure the setup doesn’t fail under that scenario. Also you might try hard powering off the server a few times to see how it handles–try a few variations to simulate various power failure scenarios… power off and immediately back on, power off for 5 minutes, power off for 24 hours, etc. and make sure your pool comes back up okay in all those scenarios.

  46. Ben, thanks for the advice. Based on it, I believe my best option is to keep the 4 rotating SAS drives plugged into the PERC H700 and plug my SSDs into another HBA card that supports JBOD (maybe an LSI SAS 9211-8i), using passthrough to give FreeNAS direct control of the SSDs…. unless there is a way to allow FreeNAS to use SSDs controlled by ESXi for L2ARC (please send info on this if it’s possible).

    I have two Intel DC S3710 200GB SSDs… If my only solution is to pass the SSDs through to FreeNAS, I am considering returning them and getting 2 x 80GB Intel DC S3510. I hate to see storage space go to waste if I need to overprovision down to 8GB.

    With these configuration choices, my goal is to use this server for both storage and virtualization.

    Which of these configs you think may work best for me? Your input much appreciated.

  47. I have a question about step 17. I noticed we are using the IP address to connect to the NFS share. Is it possible to use the hostname instead? I tried, but I was unable to get ESXi to connect via the hostname.

    • Hi, Joe. It is possible to use a hostname… but the hostname must resolve–I don’t have DNS on my storage network so that’s why I used the IP.

      • Thank you, after a few hours I was able to get it set up using DNSMasq. I have a DD-WRT router so it just took some playing in the router configs.
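        For reference, a DNS server isn’t strictly required for this — a static hosts entry on the ESXi host itself also makes the name resolve. A sketch only; the hostname and IP below are placeholders, and note that edits to /etc/hosts on ESXi may not survive every upgrade:

```shell
# On the ESXi host (SSH), add a static entry so the FreeNAS hostname
# resolves without a DNS server on the storage network.
echo " freenas-storage" >> /etc/hosts
# Confirm the name resolves before mounting the NFS datastore by name
ping -c 1 freenas-storage
```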

  48. Ben,

    My ESXi has static IP and I would like to have
    1. VMware Storage Network IP
    2. FreeNAS Storage Network IP
    but when I reboot I can no longer access my ESXi, and I have had to reinstall it multiple times.
    Why is this? Here’s a screenshot from before the reboot.

    • Hi, JohnnyBeGood, yep, that won’t work. Your storage network can’t be on the same subnet as your main network or you’ll have routing issues. If you have 192.168.x.x/16 for your main network, set up a 10.0.x.x/16 for your storage network. Or if having a separate storage network is overkill for your setup, just forgo that part.

      • Ben,

        Thanks for the reply!

        I installed an old Intel PRO/1000 PT Dual Port Server Adapter in my ESXi server and I need assistance configuring it. What do I have to do differently now to get it to work?

  49. I did some network testing last night. With a private NAS network, the VMXNET3 drivers aren’t really much faster than the driver that’s installed by default with 9.10. Setting my MTU to 9000 made a much larger difference. I just found that interesting, as I always thought MTU shouldn’t make a difference on virtual switch interfaces.

    • Hi, Zach. Thanks for posting your results, interesting find on the MTU. Would you mind sharing what hardware you’re using? My understanding is the VMXNET3 driver isn’t that much faster (and probably not at all if your CPU is fast enough), it just takes less CPU for the same amount of work.
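      If anyone wants to try the jumbo-frame change Zach describes, this is roughly the ESXi side of it — vSwitch1, vmk1, and the target IP are example names only, and the FreeNAS interface needs mtu 9000 set as well (in the interface options) or frames get fragmented:

```shell
# Raise the MTU on the storage vSwitch and its VMkernel port
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
# Verify end-to-end with a don't-fragment ping sized for a 9000 MTU
# (the address is a placeholder for the FreeNAS storage IP)
vmkping -d -s 8972
```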

  50. Hey Ben!

    This is Ben! I have found this guide very helpful as I prepare to implement a similar system. Currently I have a Dell PowerEdge T20 with Intel Xeon E3 1246V3 CPU, 16GB of ECC RAM, 4x WD Red 3TB hard drives, and a Dell Gigabit ET Quad Port server. I am also using an LSI 9260-4i which I recently discovered can’t do IT mode, so I have a 9211-8i arriving by this weekend. I also ordered an Intel DC S3700 to serve as the SLOG after reading your great blog and I have other various SSDs that I can use as the ESXi boot drive and VM storage.

    Today I am running ESXi 6 with a pfSense VM and a FreeNAS VM that I have Plex, Murmur, and the NAS with a Samba share on hardware raid with the four red drives. I’ll completely reconfigure the machine when the new parts arrive. I plan on reinstalling ESXi 6 and pfSense (1 socket, 2 cores, 1GB RAM), plus FreeNAS (1 socket, 2 cores, 12GB RAM) and then a Linux distro for Plex (2 sockets, 2 cores, 2GB RAM). I will probably use NFS and not iSCSI.

    My questions are around the networking as I am not used to using /16 networking. Currently I use a netmask of so all of my addressing is with /24. My admin interface for ESXi at, my pfSense is at All of my wired devices are at 192.168.29.x (x=static from 2-99, DHCP from 100-249) and my wireless network is on 192.168.30.x (x=static again from 2-99, DHCP from 100-249). I have a firewall rule to allow traffic from the wireless net to the WAN and some select services on the wired LAN.

    I have read many newbie guides on networking but it is still not clicking in my old, tired brain. Knowing my configuration above, can you explain how and why using /16 instead of /24 would be beneficial to me and what your recommended setup would be specifically for the storage network getting along with and not interfering with my wired and wireless traffic? I saw you said you did not end up using the storage network, so how do you configure sharing without it? Could I just put it on 192.168.31.x to keep them separated? I will need to access the NAS from the 192.168.29.x network (and possibly wirelessly from the 192.168.30.x network) primarily on Windows devices using a CIFS share.

    If this is too lengthy to explain, if you could point me to toward helpful networking sites, that would be great.

    Ben H.

    • Hi, Ben! Your setup sounds very similar to mine, I also run Mumble and Emby (similar to Plex). |:-) You might also be interested in something like for your setup.

      As far as networking I would just run it on your existing /24 network. The only reason to use /16 is if you need more than 254 devices on the same subnet. I figured I might want over 254 on the same subnet in the future (even though I haven’t gotten close to 254 yet) so I always just build my subnets with a /16, but /24 will work fine…especially the way you have it setup routing between /24 subnets.

      If you want to segregate storage traffic from the rest of your main network like I do, you could setup that network on 192.168.31.x like you suggested. You wouldn’t run this network through your router, it only exists between VMware and FreeNAS and it only exists inside VMware’s virtual networking infrastructure.

      >> I will need to access the NAS from the 192.168.29.x network…

      This is done by having two virtual interfaces on FreeNAS, one connected to the storage network and one to your main network. You would set up NFS storage for VMware to only be available from your storage network, and you set up your main network shares (NFS, CIFS, whatever) for your LAN to be available to and So for example VMware would have two adapters. One on connected to your main network (you would do this by connecting the virtual adapter to a vSwitch in VMware that’s connected to one or more physical NICs), and another on a VMware virtual storage network at Then your FreeNAS server would have two adapters, and, connected to the appropriate networks.

      Hope that makes sense… you could also just keep it simple and only give FreeNAS one IP address and run everything through that. I’ve also run that way to no ill effect… it’s just best practice to separate it but may be overkill for home networks.
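      If it helps, the storage-network plumbing described above can also be built from the ESXi shell instead of the GUI — a sketch with example names and addresses only (a vSwitch with no physical uplinks, a port group for the FreeNAS adapter, and a VMkernel port for ESXi):

```shell
# Internal-only storage network; nothing here touches a physical NIC
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add \
  --portgroup-name="Storage Network" --vswitch-name=vSwitch1
# VMkernel port so the ESXi host itself can reach the NFS share
esxcli network ip interface add \
  --interface-name=vmk1 --portgroup-name="Storage Network"
esxcli network ip interface ipv4 set --interface-name=vmk1 \
  --type=static --ipv4= --netmask=
```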


      • Ben, I got it all setup, and for fun I tried napp-it in another VM. Guess what? With the same hardware, configuration, and everything, FreeNAS peaks at about 95 MB/sec. on my network, but napp-it is able to do 105+. Looks like I may be sticking with napp-it!

        • Great to hear, Ben! Also, thanks for mentioning the performance difference between Napp-It and FreeNAS, those must be CIFS/SMB numbers, I got similar results to yours.

          • Yep, with CIFS/SMB I was actually getting a sustained 110-111 MB/sec. when I was copying large amounts of data. There were some peaks and valleys depending on the filetypes but it was definitely faster than FreeNAS. Also the clients were faster to update if a file was deleted from the server.

            Any disadvantages to OmniOS/napp-it vs. FreeNAS on ESXi? OmniOS/napp-it was tougher for me to get running and understand the GUI because the instructions I found weren’t quite as good as yours and I am not nearly as adept with Solaris as I am with FreeBSD and Linux. Are there any Linux-based NAS-oriented products? I liked Proxmox VE but it doesn’t seem to have ZFS.

    • I posted the main differences here: both are great, I use both. Right now I’m running FreeNAS as my main system, mainly because I still use CrashPlan for backups and CrashPlan dropped Solaris support a while back. If you’re planning on using iSCSI you’ll definitely want to use OmniOS.

            I don’t know of any decent Linux NAS OSes, Napp-It has basic support for ZFS on Ubuntu Linux but I don’t think it will setup shares like it does on Solaris.

  51. Napp-it’s sharing on Solaris is great. I also tried NexentaStor last night and it was just as fast as napp-it but configuring the NFS and CIFS shares to the same file system proved problematic. I liked Nexenta’s GUI compared to napp-it, though, but I won’t be in there very often so it isn’t a deal-breaker. I was also able to get Plex running on an Ubuntu Server 16.04 VM and it seemed to perform great with napp-it using an NFS share to the media files. I read about your issues with the backup software and that’s something I haven’t even pondered yet. As of right now, if the ZFS pool was full, I don’t even have somewhere to backup the files to on-site, so unless it was cloud-based, that’s not happening. I’ll have to think about that and ponder some backup solutions!

    Thanks again for all the help!

  52. Last night I put up a Debian 8.4 VM and configured a raidz2 pool. It was just as fast, if not faster, than anything I’ve tried. I am starting to wonder why any of these FreeNAS-like OSes are needed at all when things can be controlled precisely at the CLI level.

        • The main reason I prefer something like FreeNAS or Napp-It over a do-it-yourself setup is the built-in reporting and monitoring, and email alerts if something goes wrong. I don’t log in to my NAS very often and I’d hate to have a drive failure and not notice for a month. I know you can set up your own scripts for monitoring, but that’s work I don’t have time for when something already exists. Another consideration is that if something happens to me, it’s going to be pretty easy for my wife to get help on a widely used setup. That said, I think ZFS on Linux is a great thing and I hope someone (OpenMediaVault perhaps?) builds on top of it.
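          A home-grown health check of the sort mentioned above could be as small as the sketch below (FreeNAS/Napp-It do this better out of the box) — it assumes a working mail(1) setup, and the address is a placeholder; run it from cron:

```shell
#!/bin/sh
# Alert by email when any ZFS pool is not healthy.
# "zpool status -x" prints "all pools are healthy" when everything is fine.
STATUS=$(zpool status -x)
if [ "$STATUS" != "all pools are healthy" ]; then
  echo "$STATUS" | mail -s "ZFS pool problem on $(hostname)" admin@example.com
fi
```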

          • Ben, I actually agree, as I was thinking about this later. I don’t really want to have to build something home-grown for the reporting and monitoring. And I wish somebody would write something awesome on Linux.

  53. Hi Ben,

    You may like to see my setup.
    ESXi & NAS in One Box

    Lenovo TS140, Xeon E3-1226V3 (3.3 GHz) 4 core.
    20 GB memory
    1 x SSD 240 GB Kingston as the ESXi boot/datastore1.
    3 x HDD 2 TB WD NAS drives.
    2 x UPSs with USB interfaces.

    Intel 1Gbe Server Adapter I210-T1
    SYBA SATA III 4 Internal 6Gbps Ports PCI-e Controller Card.
    (Marvell Technology Group Ltd.)

    On the SSD I have installed ESXi VMware 6.0 and two VMs.
    1, ns0 – CentOS 6.8, 1GB memory, 10GB disk, 1 CPU.
    2, nas0 – NAS4Free, 8 GB memory, 16 GB Virtual Disk, 1 CPU.

    The SATA card is ‘DirectPath I/O’ (VT-d) mapped to the NAS4Free VM.
    The 3 x HDDs are connected to the card and configured as ZFS RAIDZ1 in NAS4Free.
    The NFS share is mapped back to ESXi as a 3.51 TB datastore.

    On the NFS I have my test VMs

    I have VMware ESXi configured to start up ns0, wait 30 seconds, start nas0, wait 200 seconds, then start the VMs on the NAS4Free NFS datastore.

    My DNS name server ‘ns0’ is also running as a NUT master.
    Each UPS was added as a ‘USB Device’. There is no need for ‘DirectPath I/O’ mapping of the USB controllers.

    Basically I followed one of the answers in this post.
    See the native NUT client for ESXi –
    I needed to reboot the ESXi server before the NUT client would show up in the VMware client.

    The HBA just worked and I got AHCI & SMART with no need to configure or change the BIOS!

    • Hi, Michael. Thanks for sharing your setup! Do you have an SSD for ZIL? If not, that would really improve your performance on VMs. Nice work on the UPS auto-shutdown setup, I didn’t know about the NUT client for ESXi, but it makes perfect sense to initiate shutdown from VMware… just make sure you have the shutdown order (with the ZFS server being last to go down) defined in ESXi.

  54. So! I know this is an old post, BUT, I had to post here.

    I followed this setup and created my own here. It allowed me to consolidate from 5 servers in my home setup to 1. This cut my electric bill at home dramatically, and your guide is fantastic. My first reason for posting is to say THANK YOU.

    My second reason to post was to let everyone else know that there is a critical part of this guide missing and that I figured it out.

    Basically, once you move your environment into this setup, any update to FreeNAS triggers a reboot… which causes the persistent storage for the VMs to suddenly drop out, which leads to crashing and data corruption.

    If anyone is interested, I have a script that I wrote to stop and start the designated VMs automatically with the reboots of the FreeNAS box.

    • Or you could have ESXi start only the FreeNAS VM right after reboot, and all VMs that depend on FreeNAS after 5 minutes or so. That’s what I would do, if I didn’t have two-factor encryption enabled in FreeNAS, which requires manual interaction and renders any automatic startup useless. ;)

      The setting is available in vSphere, e.g.
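      The same ordering can also be set from the ESXi shell with vim-cmd — a sketch with example VM IDs and delays; the argument order (id, start action, start delay, start order, stop action, stop delay, wait-for-heartbeat) should be double-checked against vim-cmd’s own usage output on your build:

```shell
# IDs come from: vim-cmd vmsvc/getallvms
vim-cmd hostsvc/autostartmanager/enable_autostart true
# FreeNAS VM (id 1 here) first...
vim-cmd hostsvc/autostartmanager/update_autostartentry 1 "powerOn" 60 1 "guestShutdown" 120 "no"
# ...then a dependent VM 5 minutes later
vim-cmd hostsvc/autostartmanager/update_autostartentry 2 "powerOn" 300 2 "guestShutdown" 60 "no"
```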


    • Ah, now I see what you did there. That would indeed spare me some headaches, as I occasionally forget to shut down the dependent VMs, with the described results.

      Do you mind posting it to pastebin?


      • It’s not complicated, so I’ll just include the logic below…

        The shutdown scripts are SUPER easy. I have passwordless ssh set up between all my nodes, so I just issue:

        ssh root@server_name.local “shutdown -h now”
        sleep 60

        Note the sleep is necessary, as FreeNAS doesn’t wait for the shutdown to happen, so you could still end up with a crashed machine. Through multiple tests, I know it takes 30 seconds for my server to shut down, so I give it an extra 30 seconds. I’m working on a ping script in combination with a power-status script to ensure it’s actually down before continuing… I’ll update once that’s done.

        The Startup script is here:

        Again, via passwordless ssh, I issue this:

        ssh [email protected] “/bin/vim-cmd vmsvc/power.on 5”
        sleep 15

        To get the ID, on the hypervisor ssh cli issue:

        vim-cmd vmsvc/getallvms

        The ID is the left-most column.

        The other reason I like this method is that it doesn’t depend on timed start-up. So if FreeNAS hangs for whatever reason, ESXi won’t try to start the machines anyway.
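        The ping-based check mentioned above might look something like this sketch — instead of a fixed sleep, poll until the guest stops answering, then continue; server_name.local is a placeholder, and the flags assume FreeBSD’s ping (this runs from the FreeNAS box):

```shell
#!/bin/sh
# Shut a guest down over ssh, then wait until it stops answering pings
# before carrying on with the rest of the shutdown sequence.
ssh root@server_name.local "shutdown -h now"
tries=0
while ping -c 1 -t 2 server_name.local > /dev/null 2>&1; do
  tries=$((tries + 1))
  # give up after ~60 seconds and carry on regardless
  [ "$tries" -ge 30 ] && break
  sleep 2
done
```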

  55. Hi Ben,

    I am using ESXi 6.0 Update 1 with FreeNAS 9.10.
    During configuration of a new datastore as NFS share in ESXi I am asked
    to choose the NFS version, 3 or 4. Which version would you recommend?


  56. Hi Ben,

    First, I’d like to thank you for the cool and detailed guide.

    I’m having an issue that I couldn’t find a solution for anywhere on the net:

    When I’m configuring I/O passthrough from the vSphere client, the LSI card keeps showing the orange mark even after multiple reboots, so I can’t add the LSI as a PCI device to a VM.

    Here are my specs:

    I’m using ESXi 6.0 U2; before, I used 6.0 with the same result.

    Model: Precision WorkStation T5400
    Processors: 2 CPUs x 2.992 GHz, 8 cores
    Processor Type: Intel(R) Xeon(R) CPU E5450 @ 3.00GHz
    Hyperthreading: Inactive
    Total Memory: 10,00 GB
    Number of NICs: 1
    State: Connected
    Virtual Machines: 0
    vMotion Enabled: No

    [root@localhost:~] lspci -vvv
    0000:00:00.0 Host bridge Bridge: Intel Corporation 5400 Chipset Memory Controller Hub [PCIe RP[0000:00:00.0]]
    0000:00:01.0 PCI bridge Bridge: Intel Corporation 5400 Chipset PCI Express Port 1 [PCIe RP[0000:00:01.0]]
    0000:00:05.0 PCI bridge Bridge: Intel Corporation 5400 Chipset PCI Express Port 5 [PCIe RP[0000:00:05.0]]
    0000:00:09.0 PCI bridge Bridge: Intel Corporation 5400 Chipset PCI Express Port 9 [PCIe RP[0000:00:09.0]]
    0000:00:10.0 Host bridge Bridge: Intel Corporation 5400 Chipset FSB Registers
    0000:00:10.1 Host bridge Bridge: Intel Corporation 5400 Chipset FSB Registers
    0000:00:10.2 Host bridge Bridge: Intel Corporation 5400 Chipset FSB Registers
    0000:00:10.3 Host bridge Bridge: Intel Corporation 5400 Chipset FSB Registers
    0000:00:10.4 Host bridge Bridge: Intel Corporation 5400 Chipset FSB Registers
    0000:00:11.0 Host bridge Bridge: Intel Corporation 5400 Chipset CE/SF Registers
    0000:00:15.0 Host bridge Bridge: Intel Corporation 5400 Chipset FBD Registers
    0000:00:15.1 Host bridge Bridge: Intel Corporation 5400 Chipset FBD Registers
    0000:00:16.0 Host bridge Bridge: Intel Corporation 5400 Chipset FBD Registers
    0000:00:16.1 Host bridge Bridge: Intel Corporation 5400 Chipset FBD Registers
    0000:00:1b.0 Audio device Multimedia controller: Intel Corporation 631xESB/632xESB High Definition Audio Controller
    0000:00:1c.0 PCI bridge Bridge: Intel Corporation 631xESB/632xESB/3100 Chipset PCI Express Root Port 1 [PCIe RP[0000:00:1c.0]]
    0000:00:1d.0 USB controller Serial bus controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #1
    0000:00:1d.1 USB controller Serial bus controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #2
    0000:00:1d.2 USB controller Serial bus controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #3
    0000:00:1d.3 USB controller Serial bus controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #4
    0000:00:1d.7 USB controller Serial bus controller: Intel Corporation 631xESB/632xESB/3100 Chipset EHCI USB2 Controller
    0000:00:1e.0 PCI bridge Bridge: Intel Corporation 82801 PCI Bridge
    0000:00:1f.0 ISA bridge Bridge: Intel Corporation 631xESB/632xESB/3100 Chipset LPC Interface Controller
    0000:00:1f.1 IDE interface Mass storage controller: Intel Corporation 631xESB/632xESB IDE Controller [vmhba0]
    0000:00:1f.2 SATA controller Mass storage controller: Intel Corporation 631xESB/632xESB SATA Storage Controller AHCI [vmhba1]
    0000:00:1f.3 SMBus Serial bus controller: Intel Corporation 631xESB/632xESB/3100 Chipset SMBus Controller
    0000:02:00.0 VGA compatible controller Display controller: NVIDIA Corporation G80GL [Quadro FX 4600]
    0000:03:00.0 PCI bridge Bridge: Intel Corporation 6311ESB/6321ESB PCI Express Upstream Port
    0000:03:00.3 PCI bridge Bridge: Intel Corporation 6311ESB/6321ESB PCI Express to PCI-X Bridge
    0000:04:00.0 PCI bridge Bridge: Intel Corporation 6311ESB/6321ESB PCI Express Downstream Port E1
    0000:04:01.0 PCI bridge Bridge: Intel Corporation 6311ESB/6321ESB PCI Express Downstream Port E2
    0000:06:00.0 Serial Attached SCSI controller Mass storage controller: LSI Logic / Symbios Logic LSI2008 [vmhba2] *Flashed to IT-MODE*
    0000:08:00.0 Ethernet controller Network controller: Broadcom Corporation NetXtreme BCM5754 Gigabit Ethernet [vmnic0]


  57. What is hilarious?

    PS, I made it.
    Crazy, but it’s working now.
    I’m going to post the details tomorrow, but to make a long story short, what I did was update the M1015 firmware, and it’s working.

  58. Thanks guys,

    What I did to make it work was to untick the parameter VMkernel.Boot.disableACSCheck under
    Configuration > Advanced Settings (Software) > VMkernel > Boot.

    Also, I have flashed my M1015 LSI IT-mode card to new firmware:
    NVDATA Vendor : LSI
    NVDATA Product ID : SAS9211-8i
    Firmware Version : to Firmware Version :


  59. I stumbled across this guide via a post on the FreeNAS forums. Fantastic guide!

    I currently have a working FreeNAS 9.10 setup for my home server needs (Plex and file storage). What route, if any exists, would I take to move to ESXi with FreeNAS in a VM? Is there a simple path to move my current setup into a VM or would it mean starting from scratch? I have a ton of media already and would not want to move that if possible. Likely my questions reveal my lack of experience and knowledge here.

    • Good question, it should be possible to migrate your pool to FreeNAS under VMware. Make sure you have backups first. Export the pool from your existing FreeNAS install. Do a fresh install of FreeNAS under VMware. Make sure the drives on the pool are connected to an HBA you can pass to the FreeNAS VM, like the IBM M1015. You’ll pass that to the FreeNAS VM using VT-d and it will then have raw access to the disks… and you can re-import the pool. You will need to re-setup any network shares, etc. Make sure you have good backups before doing this.
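      Roughly, the CLI side of that export/import dance looks like the sketch below (the FreeNAS GUI’s volume detach and auto-import is the supported route; “tank” is a placeholder pool name):

```shell
# On the old FreeNAS install, after verifying backups:
zpool export tank
# On the new FreeNAS VM with the HBA passed through via VT-d:
zpool import        # lists pools visible on the passed-through disks
zpool import tank   # re-import the pool by name
```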

      • I’ve done some further reading in the forums and links that Google was able to match my searches with. I’m on the fence as far as virtualization right now. I use the home server for media management primarily, serving out media to a couple set-top players (shield and cubox-i) as well as various Android and iOS devices. I’ve been using Plex in a Jail quite happily for quite some time, having moved from a linux setup to FreeNAS almost two years ago. I see people using Plex from within a VM, so I know it will work, I am just not clear as to the performance hit I will take.

        I guess I need to add that my whole interest in virtualizing the current setup lies in my desire to add an Untangle or pfSense VM to my current network. I am prohibited from simply ordering a new machine suited for this task, and I believe I have plenty of resources to share on the current server: Xeon 1231v3, 32GB RAM, 2 intel nics on the Supermicro MB plus an Intel quad port 1gb nic.

        • I run Emby (similar to Plex) inside a VM and have no performance issues with transcoding, etc. on a Xeon D-1540. I’ve given the emby VM 4 cores but that’s probably overkill.

          Like Wasif said, FreeNAS 10 should be coming out fairly soon (maybe not this year, but certainly next year), so when you upgrade to that you will be able to run VMs under FreeNAS using bhyve; one option is to wait for that. I’m not sure it’s going to be as robust as VMware out of the gate, though.

          One thing you might consider is getting a second server to play with. Then you can test out things like virtualization and pfSense without any risk to your production box and then implement it on your main box once you’re comfortable with it. When you get to the point you’re not testing much you can set them up to replicate to each other for backup.

  60. sweet! didn’t know that. Now looking forward to it more than ever before. Could really use that extra pci slot for a good graphics card. Been drooling for HTC vive but got no space for a graphics card.

      • Well, one of the requirements of using a VR headset is a decent graphics card. I thought I could probably stick the card in my Microserver and get rid of the HBA. The basic minimum requirements for the HTC Vive are

        Graphics processor: Nvidia GeForce GTX970, or AMD Radeon R9 290 equivalent or greater
        CPU: Intel i5-4590 or AMD FX 8350 equivalent or greater
        RAM: At least 4GB
        Video output: HDMI 1.4 or DisplayPort 1.2 or newer
        USB port: One USB 2.0 or greater
        Operating system: Windows 7 SP1 or newer

        I guess I got excited for no reason. Direct access to storage on bhyve doesn’t mean it will also support GPU passthrough. =/

        Back to square one, and I guess I’ll have to wait a bit longer to find budget for the above specs.

  61. Thank you for the guide, hoping you can point me in the right direction.

    I have 2 Mellanox dual-port X3 cards in my server for direct peer-to-peer connection to my VMware hosts for iSCSI communication. How would I go about properly configuring the storage network in this case?

  62. Hi, in the HBA IT-mode flashing portion of your tutorial, I believe you’re missing the unzip step after the wget and before calling the flashing utility.

  63. Hey Benjamin,

    Lots of thanks for this guide.
    But I have a few questions regarding the “configure FreeNAS networking” section.
    I am kind of struggling with this.

    My IPs given by VMware are:
    Management network:
    Storage kernel: (netmask: )
    To connect to FreeNAS (WebUI):

    A few times you talk about your FreeNAS storage network (in your case: ), but where can I find this?

    What I also don’t get:
    I have the following two vSwitches: vSwitch0 and vSwitch1.

    On vSwitch0, the VM networks and VM management are connected on the left side, and on the right are vmnic0 and vmnic1 (both NICs of my X11SSL-CF motherboard).

    On vSwitch1 I have the storage network and storage kernel on the left and no NICs on the right.

    That’s my setup so far.

    When I try to do the first step of the “configure FreeNAS networking” section (add management interface),
    I can only choose between the vmx0 and vmx1 NICs… not the vmx3f0 NICs you mentioned.
    Why is this? (I have VMXNET3 selected for both NICs in the VMware edit section, like you mentioned.)

    I tried to get it to work anyway with the vmx0 NIC, but when I do that I get the “an error occurred” message, the WebGUI locks up, and I am not able to reach FreeNAS at the .2.15 IP.
    On reboot FreeNAS does not show any IP, and I have to restart with factory defaults to get back into FreeNAS.

    Can you please help me out? I’m kind of stuck…

    Thanks in advance!

  64. Hey, Ben.

    I finally got it working (I guess).

    Maybe I should have mentioned that I use VMware 6.5.

    The problem I had was that within vSwitch1, the storage network and storage kernel were not linked to each other.
    At first I had two physical NICs connected to vSwitch0, but when I removed one from vSwitch0 and added it to vSwitch1,
    the storage network and storage kernel were connected to each other,
    and I was finally able to create an NFS share without an error.

    I couldn’t get them connected to each other without adding a physical NIC.
    Do you know if there is a fix for this, or if there are any risks in running it like this?

    Thanks in advance

  65. Hi, Benjamin

    Thanks for your response.
    But I have it working right now.
    I had made some mistakes myself and didn’t completely understand how IP addresses work.
    With some help on the FreeNAS forum I found out what was wrong, and then I was able to fix it!
    Here is the link to the forum thread where it is all explained: (at the end of it)

    Thanks for being willing to help anyway!

  66. Hi Ben – Thanks for this guide! It has been very helpful.

    I want to be able to SSH into my VMs from my Windows PC on my main home network – – so vSwitch0 has a port group on and vSwitch1 has a port group on

    And every VM gets two NICs – One on the network so the VM has internet access and tx/rx with the rest of my network, and one on exclusively for communication with other VMs within the NFS share.

    How do I know that when VM1 sends a packet to VM2, it uses the much faster > path instead of going over >

    Also, do you have a gateway on your network? Is it a pfSense VM? My main network is running off a FortiGate 90D-POE that acts as my gateway/router, and I have two Intel 10/100/1000 ports on my ESXi server, so I suppose I could hook up that second port and tell the FortiGate’s interface to be, broadcast DHCP on the /16 over that interface, and be the gateway for the storage network…but if every storage VM has internet access over, then is this really necessary?

    I just can’t figure out how the VMs use the NIC on the storage network for internal communication and the NIC on the main network for internet access.

    • Hi, Will. I actually don’t put my VMs on the storage network; I use the storage network for NFS communication between FreeNAS and VMware, and I also share NFS on my VM network for the VMs, so my VMs only have one interface. What you are doing is probably better and should work. VMs will automatically route out of the correct interface, so if you ping from VM1 it will come out on the interface. If you ping it will come out on the interface. Everything else (e.g. internet traffic) will go out the default route, which should be your VM network. The storage network shouldn’t be routable; I don’t have my storage network connected to my pfSense routers.

  67. Johnny – I first installed FreeNAS on two 16 GB USB drives (mirrored), but then it broke, so I don’t recommend it. Install it on your 256 GB SSD, and use your WD Reds for a mirrored vdev/pool. Alternatively, buy two more WD Reds and create a single vdev with the four drives as a RAID-Z2. This way, if a second drive fails while you are replacing a failed one, you are still okay.

  68. Make a 16–20 GB virtual drive on the 256 GB SSD; that will leave you with lots of room on the SSD for storage or more virtual machines.

    • Good idea, but… if the goal is speed between VMs and storage by having both live in the same zpool, the VMs or storage he keeps on the leftover 240 GB of the SSD won’t achieve the “storage network” speeds. It depends on what his goals are.

    • Cool! When you create a pool with four drives, FreeNAS will default to creating two vdevs for the pool. That is technically two mirrors striped into a single pool, instead of a single vdev of four drives making up the pool.

      Just drag the corner of the box (I think that’s how I did it) to select a single row of four drives instead of two rows of two drives.

      You’ll want to do RAID-Z2 to be extra cautious with your data. Of course this depends on how risky you want to be: you’ll only be able to use half of the 12 TB. You can decide to gamble a bit and make it a four-drive RAID-Z1, which will let you use around 9 TB of your storage, but if a second drive fails while you are replacing a failed one, all the data is gone.
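      To make the capacity math above concrete, here’s a quick sketch (assuming four 3 TB drives, and ignoring base-2 vs. base-10 sizes and ZFS overhead):

```shell
# Hypothetical pool of four 3TB drives (12TB raw)
drives=4
size_tb=3
raidz1=$(( (drives - 1) * size_tb ))  # one drive's worth of parity
raidz2=$(( (drives - 2) * size_tb ))  # two drives' worth of parity
echo "RAID-Z1 usable: ~${raidz1}TB, RAID-Z2 usable: ~${raidz2}TB"
```

      So with these drives, RAID-Z1 leaves roughly 9 TB usable and RAID-Z2 roughly 6 TB, before filesystem overhead.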

    • I’m glad I’m not the only one! I can’t tell you how many times I had to destroy and recreate the pool the first time I tried FreeNAS because of that interface… I’d find things like my SLOG drive was striped with my pool instead of added as a ZIL. Which is pretty bad considering that I knew how to set up pools and vdevs from the CLI on Solaris before trying FreeNAS… Fortunately the UI for zpool creation is much improved in FreeNAS 10.

  69. Hi, thanks for this guide! Can you help me with a question regarding the SLOG? If I am not planning on using (sharing back to ESXi) the FreeNAS volume as a datastore for the VMs, is a SLOG necessary? All my VMs are on a single 240 GB SSD. Or do the same sync-write problems exist?

    • You’re welcome, Brent. I generally recommend using a SLOG, but you can always see how your write performance is without one and then add it later if necessary. The main reason I like a SLOG is that every write occurs twice: once to the ZFS Intent Log, aka ZIL (which lives on the same disks as your pool if you don’t have a SLOG; otherwise the ZIL is on the SLOG), and then again to your pool. Because ZFS is copy on write, this can lead to fragmentation.

      Now, this is generally not recommended, but if your storage is in a situation where you would be fine rolling back to 30 seconds earlier in the event of a power loss or kernel panic, you can just disable the ZIL. Then if you lose power you only lose the writes/changes cached in memory, which get flushed out to the pool periodically. Since ZFS is a copy-on-write filesystem this is probably safe–although it’s not a common configuration, so you may run into edge cases that can cause data loss.

      Another thing you could try that would be safer than disabling the ZIL is setting the ZFS logbias property to throughput instead of the default latency. This avoids writing to the ZIL and writes directly to the pool, so you don’t pay the double-write penalty.
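      For reference, both knobs mentioned above are per-dataset ZFS properties. A minimal sketch (the dataset name tank/vms is just a placeholder for your own pool/dataset):

```shell
# Bypass the ZIL for large writes; trades latency for throughput
zfs set logbias=throughput tank/vms

# Riskier option: disable sync writes entirely (you can lose the last
# few seconds of writes on power loss, so only if that's acceptable)
zfs set sync=disabled tank/vms

# Verify the current settings
zfs get logbias,sync tank/vms
```

      Both take effect immediately and can be reverted with logbias=latency and sync=standard.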

  70. Thanks for the article! It is very useful.

    When I reboot, the FreeNAS server is of course not online yet, so ESXi can’t connect to the iSCSI target,
    and the datastore will not show up.

    It looks like the connection is only restored if I re-save the iSCSI config in ESXi.
    Maybe the problem is my patience.

    Did I miss a trick to solve this issue?

    • Hi, Sjaak. I use NFS so I don’t have a solution to that problem, but I believe someone in the comments above mentioned they have a script that runs on FreeNAS at boot to SSH into the VMware server and rescan the drives.
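      A minimal sketch of what such a boot script might look like (the hostname is a placeholder, and key-based SSH login from FreeNAS to ESXi is assumed; the esxcli rescan subcommand is standard, but test it on your own setup):

```shell
#!/bin/sh
# Run as a FreeNAS post-init task: once the NFS/iSCSI share is up,
# tell ESXi to rescan its storage adapters so the datastore reappears.
ESXI_HOST="esxi.local"   # placeholder -- change to your ESXi host

# Give FreeNAS services a moment to finish starting
sleep 60

ssh root@"${ESXI_HOST}" "esxcli storage core adapter rescan --all"
```

      You would add this as a post-init script in the FreeNAS GUI so it runs on every boot.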

    • Hi, Damian. I read through the document you linked to. It does have some good points, but most of them are, I think, a little dated and not an issue today: NFS is well supported by ESXi, the benefits of NFS are only greater when backed by ZFS, and many of the issues with NFS are moot points when backed by ZFS with the ZIL on an SSD. I have used FreeNAS and TrueNAS to share out NFS to ESXi in some pretty heavy IO environments and it’s been a very robust solution. I think one issue still valid in that document is scalability… you can only go so far expanding ZFS with JBOD units, but you can get pretty far–well into the petabyte range–before this is a concern. At that point you’re probably going to be looking at something like Ceph, GlusterFS, or S3.

  71. Re: Setting up proper swap.

    I noticed that you recommend enabling swap as a post-init task. A post-init task runs after the zpools are mounted, as far as I know (on FN11 at least), but before services. One of the most critical reasons to have swap is that recovering a pool can use a lot of memory. I was concerned that with swap being activated after a pool is (potentially) recovered, this could cause a problem. Of course, you could increase the VM’s memory allocation in this case.

    But the good news is that changing the swap to Pre-init seems to work just fine :)
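    For reference, the init task in question boils down to a single swapon command run at boot (the device name below is a placeholder for whatever swap partition your setup uses):

```shell
# Pre-init (or post-init) command to enable swap on a dedicated partition
swapon /dev/da1p2   # placeholder device -- substitute your swap partition
```

    Whether it runs pre-init or post-init is just the task type you pick in the FreeNAS GUI.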

    Love your guide.

  72. Re: Passing in L2ARC from ESXi

    I have ESXi AIO booting off an M.2 (Samsung 960 Evo), and the only thing on that M.2 datastore is the FreeNAS boot disk + install ISO. I thought this was a terrible waste, so I created another disk, passed it into FreeNAS, and use it as an L2ARC. This performs quite well, over 1 GB/s, and as far as I can tell, because the L2ARC is not critical to the system, FreeNAS should not *need* direct access to the underlying hardware for the L2ARC for data reliability, unlike your pool disks or SLOG.

    A neat trick, I thought, and it allows me to play with L2ARC in a virtual way with an actual workload. Exactly what a home lab should be.
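    For anyone replicating this, attaching the virtual disk as L2ARC is a one-liner once FreeNAS sees the disk (“tank” and da3 are placeholders for your pool name and the new disk’s device node):

```shell
# Add the VMware-backed virtual disk as an L2ARC (cache) device
zpool add tank cache /dev/da3

# Verify: the device shows up under a "cache" section
zpool status tank
```

    If the cache device ever dies, the pool keeps working; ZFS just falls back to reading from the pool disks.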

  73. Hi Benjamin. Many thanks for this article; I used it to set up my system and it was very helpful. I’m having a bit of trouble and I don’t know what else to do… I’ve had many discussions with people on the FreeNAS IRC (most unwilling to help as I’m virtualized) and I can’t seem to pin down my problem.

    Problem: Suddenly, out of nowhere, friends were reporting bandwidth issues on my Plex server, and I’ve narrowed it down to disk performance on my VMs. My VMs are on mirrored 550 MB/s SSDs, but writes are about 50 MB/s.

    If I do something like this on my VMs: dd if=~/some2GBfile1.ext of=~/some2GBfile2.ext … performance is terrible. I’m getting about 1/10th of the performance I expect out of my VMs’ virtual storage. It’s happening on ALL of my VMs.

    I’ve ruled out network performance… all tests show I’m saturating the 1 Gbps network on each VM. But disk writes are awful.

    This is my setup:
    Entire on-board storage controller passed through to FreeNAS VM (Intel Wellsburg AHCI)
    2x SSD mirrored in FreeNAS, shared back to ESXi over storage network/NFS, and used as a datastore to house my VMs.
    6x HDD RAIDz2, shared over NFS
    ESXi installed on USB
    FreeNAS installed (mirrored) on 2x USB
    No SLOG but disabling sync writes has almost no impact on this issue.

    I’m horribly frustrated by this, as Plex is 90% of the function of my server… but my Plex VM (Ubuntu Server 16.04) can’t write data to disk fast enough to keep up with a video stream from the RAIDz2.

    All my physical systems on my physical network get good speeds.

    Any thoughts as to why this could be happening?

  74. I’m trying to connect FreeNAS 11 to ESXi 6.0, but it didn’t connect; there is no add-adapter setting in ESXi 6.0. Please help if you have encountered this.

  75. Thank you for the detailed procedure. I wonder if I could migrate my currently running FreeNAS to ESXi? I don’t want to build a new machine. I plan to install ESXi on a USB drive on my FreeNAS setup, but I don’t know how I can migrate FreeNAS into a VM on ESXi. FreeNAS is currently installed on an SSD, with 3x4TB drives in RAID-Z. My goal is to keep the zpool intact. Is this even possible?

    • Hi, Kai. Yes, I’ve done it before. You can import the existing zpool. If you want to try to save all your FreeNAS settings: under System → General you can Save Config, then do a fresh install of FreeNAS under VMware and Upload Config to restore it.
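      From the FreeNAS shell, the import side can be sketched like this (the pool name “tank” is a placeholder; in practice you’d use the GUI’s volume import so the pool is recorded in the FreeNAS configuration database):

```shell
# List pools visible on the passed-through controller
zpool import

# Import the existing pool into the new FreeNAS VM; -f is only
# needed if it complains the pool was last used by another system
zpool import -f tank
```

      The pool data is untouched by an import; only the host that owns it changes.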

    • I completed my move from bare metal to ESXi last week. Most of it was successful thanks to the great guide. I did encounter two issues though.
      1. I had to use ESXi 6.0, not 6.5 or 6.7, as the web client somehow cannot add a vSwitch for virtual machines; only the vSphere client can do that, and that only works with 6.0.

      2. Since I use vnet in my iocage jails, I had to enable promiscuous mode in ESXi for the vSwitch that provides internet access; otherwise my jails cannot access the internet.

      Other than these two, I am a happy owner of an ESXi server with FreeNAS. The next project will be building a VM for pfSense on the same box.

