So long, CrashPlan! After I had used it for 5 years, CrashPlan decided, with less than a day's notice, to delete many of the files I had backed up. Once again, the deal got altered. Deleting files with no advance notice is something I might expect from a totalitarian leader, but it isn't acceptable for a backup service.
CrashPlan used to be the best offering for backups by far, but those days are gone. I needed to find something else. To start with I noted my requirements for a backup solution:
Fully Automated. I am not going to remember to do something like take a backup on a regular basis. Between the demands from all aspects of life I already have trouble doing the thousands of things I should already be doing and I don’t need another thing to remember.
Should alert me on failure. If my backups start failing, I want to know. I don't want to have to check on the status periodically.
Efficient with bandwidth, time, and price.
Protect against my backup threat model (below).
Not Unlimited. I’m tired of “unlimited” backup providers like CrashPlan not being able to handle unlimited and going out of business or altering the deal. I either want to provide my own hardware or pay by the GB.
This also gave me a good opportunity to review my backup strategy. I had been using a strategy where all local and cloud devices backed up to a NAS on my network, and those backups were then relayed to a remote (formerly CrashPlan) backup service. The other model is a direct backup from each device to the remote service. I like this a little better because, living in North Idaho, I don't have a good upload speed; I've been in several situations where the remote backups from the NAS would never complete because I didn't have enough bandwidth to keep up.
Now if Ting could get permission to run fiber under the railroad tracks and to my house I’d have gigabit upload speed, but until then the less I have to upload from home the better.
Backup Threat Model
It’s best practice to think through all the threats you are protecting against. If you don’t do this exercise you may not think about something important… like keeping your only backup in the same location as your computer. My backup threat model (these are the threats which my backups should protect against):
Disasters. If a fire sweeps through North Idaho burning every building but I somehow survive, I want my data, so at least one backup must be offsite in a geographically separate area from me. We can assume that all keys and hardware tokens will be lost in a disaster, so those must not be required to restore.
Malware or ransomware. Must have an unavailable or offline backup.
Physical theft or data leaks. Backups must be encrypted.
Silent Data Corruption. Data integrity must be verified regularly and protected against bitrot.
Time. I never want to lose more than a day's worth of work, so backups must run on a daily basis, and they must not consume too much of my time to maintain.
Fast and easy targeted restores. I may need to recover an individual file I have accidentally deleted.
Accidental Corruption. I may have a file corrupted or accidentally overwrite it and not realize it until a week later, or even a year later. Therefore I need versioned backups so I can restore a file from points in time going back several years.
Complexity. If something were to happen to me, the workstation backups must be simple enough that Kris would be able to get to them. It’s okay if she has to call one of my tech friends for help, but it should be simple enough that they could figure it out.
Non-payment of backup services. Backups must persist on their own in the event that I am unaware of failed payments or unable to pay. If I'm traveling and my credit card gets compromised, I don't want to be left without backups.
Bad backup software. The last thing you need is your backup software corrupting all your data because of some bug (I have seen this happen with rsync) so it should be stable. Looking at the git history I should be seeing minor fixes and infrequent releases instead of major rewrites and data corruption bug fixes.
My friend Meredith had contacted me about swapping backup storage. We're geographically separated, so that covers local disasters. So that's what we did: each of us set up an SSH/SFTP server for the other to back up to. I had plenty of space in my Proxmox environment, so I created a VM for him and put it in an isolated DMZ. He had a Raspberry Pi and bought a new 4TB Western Digital external USB drive that he set up at his house for me.
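Locking the exchanged account down to SFTP-only is worth the few extra lines of config. A minimal sketch of what that might look like in sshd_config (the username and paths here are my placeholders, not the actual setup):

```shell
# Hypothetical /etc/ssh/sshd_config fragment for a backup-only account.
# The chroot directory must be owned by root and not group-writable.
Match User meredith-backup
    ChrootDirectory /srv/backups/meredith
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

This way even if the account's key leaks, it can only shuffle files in its own jail, not run commands or pivot around the DMZ.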
Duplicati Backup Solution for Workstations
For Windows desktops I chose Duplicati 2. It also works on macOS and Linux, but for my purposes I only evaluated it on Windows.
Duplicati has a nice local web interface. It’s simple and easy to use. Adding a new backup job is simple and gives plenty of options for my backup sets and destinations (this allows me to backup not only to a remote SFTP server, but also to any cloud service such as Backblaze B2 or Amazon S3).
Duplicati 2 has status icons in the system tray that quickly indicate any issues. The first few runs I was seeing a red icon indicating the backup had an error. Looking at the log it was because I had left programs open locking files it was trying to back up. I like that it warns about this instead of silently not backing up files.
Green=In Progress, Grey=Paused, Black=Idle, Red=Error on the last backup.
Duplicati 2 seems to work well. I have tested restores and they come back pretty quickly. I can backup to my NAS as well as a remote server and a cloud server.
Two things I don’t care for about Duplicati 2:
It is still labeled Beta. That said it is a lot more stable than some GA software I’ve used.
There are too many projects with similar names. Duplicati, Duplicity, Duplicacy. It’s hard to keep them straight.
Other considerations for workstation backups:
rsync – no GUI
restic – no GUI
Borg Backup – Windows not officially supported
Duplicacy – license only allows personal use
Restic Backup for Linux Servers
I settled on Restic for Linux servers. I have used Restic on several small projects over the years and it is a solid backup program. Once the environment variables are set it’s one command to backup or restore which can be run from cron.
It’s also easy to mount any point in time snapshot as a read-only filesystem.
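For reference, a typical Restic workflow looks something like this. It's only a sketch: the repository host, paths, and retention numbers are placeholders, not my exact job.

```shell
# Sketch of a nightly restic job against an SFTP repository.
# Host, repo path, and backup paths below are placeholders.
export RESTIC_REPOSITORY="sftp:ben@backup.example.com:/backups/restic-repo"
export RESTIC_PASSWORD_FILE="/root/.restic-password"

restic init                                   # one-time repository setup
restic backup /etc /home /srv                 # run daily from cron
restic forget --keep-daily 7 --keep-weekly 4 \
       --keep-monthly 24 --prune              # versioned retention
restic restore latest --target /tmp/restore   # targeted restore
restic mount /mnt/restic                      # browse snapshots read-only (FUSE)
```

Once the two environment variables are set, the cron entry is just the single `restic backup` line.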
Borg Backup came in pretty close to Restic; the main reason I chose Restic is its support for backends other than SFTP. The cheapest storage these days is object storage such as Backblaze B2 and Wasabi. If Meredith’s server goes down, with Borg I’d have to redo my backup strategy entirely. With Restic I have the option to quickly add a new cloud backup target.
Looking at my threat model there are two potential issues with Restic:
A compromised server would have access to delete its own backups. This can be mitigated by storing the backup on a VM that is backed by storage configured with periodic immutable ZFS snapshots.
Because Restic uses a push instead of a pull model, a compromised server would also have access to other servers’ backups, increasing the risk of data exfiltration. At the cost of some deduplication benefits, this can be mitigated by setting up one backup repository per host, or at the very least by creating separate repos for groups of hosts (e.g. one Restic repo for the Minecraft servers and a separate repo for the web servers).
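Both mitigations are cheap to sketch. The repo paths, hostnames, and pool names below are illustrative examples, not my actual layout:

```shell
# Per-host repositories: each server only knows its own repo path and
# password, limiting what a compromised host can read or delete.
export RESTIC_REPOSITORY="sftp:backup@backuphost:/backups/web1"
restic backup /var/www

# On the backup server, periodic ZFS snapshots of the repository
# dataset make history effectively immutable from the clients' side:
zfs snapshot tank1/backups@$(date +%Y-%m-%d)
zfs list -t snapshot -r tank1/backups
```

A client can delete the files it can reach over SFTP, but it has no way to touch the server-side snapshots.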
I’ve minimized manual steps but some still must be performed:
Backup to cold storage. This is archiving everything to an external hard drive and then leaving it offline. I do this manually once a year on World Backup Day and also after major events (e.g. doing taxes, taking awesome photos, etc.). This is my safety net in case the online backups get destroyed.
Test restores. I do this once a year on World Backup Day.
Verify backups are running. I have a reminder set to do this once a quarter. With Duplicati I can check in the web UI, and a single Restic command gets a list of hosts with the most recent backup date for each.
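The Restic check I use is (in recent Restic versions, which support grouping snapshot output):

```shell
# Latest snapshot per host, so a machine whose backups have silently
# stopped stands out immediately.
restic snapshots --latest 1 --group-by host
```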
Cast your bread upon the waters, for you will find it after many days. Give a portion to seven, or even to eight, for you know not what disaster may happen on earth.
4 x 2TB HGST RAID-Z, 100GB Intel DC S3700s for ZIL (over-provisioned at 8GB) on an M1015. In Environments 1 and 2 this was passed to FreeNAS via VT-d.
2 x Samsung FIT USBs for booting OS (either ESXi or FreeNAS)
1 x extra DC S3700 used as ESXi storage for the FreeNAS VM to be installed on in environments 1 and 2 (not used in environment 3).
E1. ESXi + FreeNAS 11 All-in-one.
Setup per my FreeNAS on VMware Guide. Ubuntu VM with Paravirtual is installed as an ESXi guest, on NFS storage backed by ZFS on FreeNAS which has raw access to disks running under the same ESXi hypervisor using virtual networking. FreeNAS given 2 cores and 10GB memory. Guest gets 1GB memory. Guest tested with 1C and 2C.
E2. Nested bhyve + ESXi + FreeNAS 11 All-in-one.
Nested virtualization test. Ubuntu VM with VirtIO is installed as a bhyve guest on FreeNAS which has raw access to disks running under the ESXi Hypervisor. FreeNAS given 4 cores and 12GB memory. Guest gets 1GB memory. Guest tested with 1C and 2C. What is neat about this environment is it could be used as a stepping stone if migrating from environment 1 to environment 3 or vice-versa (I actually tested migrating with success).
E3. bhyve + FreeNAS 11
Ubuntu VM with VirtIO is installed as a bhyve guest on FreeNAS on bare metal. Guest gets 1GB memory. Guest was backed with a ZVOL since that was the only option. Tested with 1C and 2C.
All environments used FreeNAS 11; E1 and E2 used VMware ESXi 6.5.
A reboot of the guest and FreeNAS was performed between each test so as to clear ZFS’s ARC (in memory read cache). The sysbench test files were recreated at the start of each test. The script I used for testing is https://github.com/ahnooie/meta-vps-bench with networking tests removed.
No tuning was attempted in any environment; I just used the sensible defaults.
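For context, the sysbench invocations behind these charts look roughly like this (sysbench 1.0 syntax shown; the linked script may use different flags, so treat these as representative, not exact):

```shell
# Approximate sysbench tests (flags are representative).
sysbench cpu --cpu-max-prime=20000 --threads=2 run   # CPU / primes
sysbench memory --threads=2 run                      # memory ops/sec
sysbench fileio --file-test-mode=rndrw prepare       # create test files
sysbench fileio --file-test-mode=rndrw run           # random disk I/O
sysbench fileio --file-test-mode=rndrw cleanup
```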
Disclaimer on comparing Apples to Oranges
This is not a business- or enterprise-level comparison. This test is meant to show how an Ubuntu guest performs in various configurations on the same hardware, with the constraints of a typical budget home server running a free “hyperconverged” solution–a hypervisor and FreeNAS storage on the same physical box. Not all environments are meant to perform identically; my goal is just to see if the environments perform “good enough” for home use. An obvious example of this is that environments using NFS-backed storage are going to perform slower than environments with local storage–but they should still, at the very least, max out 1Gbps ethernet. This set of tests is designed to benchmark how I would set up each environment given the constraint of one physical box running both the hypervisor and FreeNAS + ZFS as the storage backend. The test is limited to a single guest VM. In the real world dozens, if not hundreds or even thousands, of VMs run simultaneously, so advanced hypervisor features like memory deduplication are going to make a big difference; this test made no attempt to benchmark them. This is not an apples-to-apples test, so be careful what conclusions you derive from it.
CPU 1 and 2 threaded test
I’d say these are equivalent, which probably shows how little overhead there is from the hypervisor these days, though nested virtualization is a bit slower.
CPU 4 threaded test
Good to see that 2 cores actually performs faster than 1 core on a 4 threaded test. Nothing to see here…
Memory Operations Per Second
Horrible performance with nested virtualization, but with the hypervisor on bare metal, ESXi and bhyve performed identically.
Once again nested virtualization was slow; other than that, performance was neck and neck.
OLTP Transactions Per Second
The ESXi environment clearly takes the lead over bhyve, especially as the number of cores / threads started increasing. This is interesting because ESXi outperforms despite an I/O penalty from using NFS so ESXi is more than making up for that somewhere else.
Disk I/O Requests per Second
Clearly there’s an advantage to using local ZFS storage vs NFS. I’m a bit disappointed in the nested virtualization performance since, from a storage standpoint, it should be equivalent to bare metal FreeNAS, but it may be due to the slow memory performance in that environment.
Disk Sequential Read/Write MBps
No surprises: ZFS local storage is going to outperform NFS.
Well there you have it. I think it’s safe to say that bhyve is a viable solution for home (although I would like to see more people using it in the wild before considering it robust–I imagine we’ll see more of that now that FreeNAS has a UI for it). For low resource VMs E2 (nested virtualization) is a way to migrate between E1 and E3–but it’s not going to work for high performance VMs because of the memory performance hit.
This guide will install FreeNAS 10 (Corral) under VMware 6.5 ESXi, then via NFS share ZFS backed storage back to VMware. This is an update of my FreeNAS 9.10 on VMware 6.0 Guide.
“Hyperconverged” Design Overview
FreeNAS is installed as a virtual machine on the VMware hypervisor. An LSI HBA in IT mode is passed to FreeNAS via VT-d passthrough. A ZFS pool is created on the disks attached to the HBA. ZFS provides RAID-Z redundancy, and an NFS dataset is then shared from FreeNAS and mounted from VMware, which is used to provide storage for the remaining guests. Optionally, containers and VM guests can run directly on FreeNAS itself using bhyve.
FreeNAS 10 (now called FreeNAS Corral) is a major rewrite over FreeNAS 9.10: the GUI has been overhauled, it has a CLI interface, and an API. I think the best feature is the bhyve hypervisor and Docker support. To some degree, for a single all-in-one hypervisor+NAS server you may not even need VMware; you may be able to get away with bhyve and Docker.
Like anything new I advise caution against running it in a production environment. I do see quite a few rough edges and a few missing features that are available in FreeNAS 9.10. I imagine we’ll see frequent updates with polishing and features added. A good rule of thumb is to wait until TrueNAS hardware is shipping with the “Corral” version. I think this is the best release of FreeNAS yet, and it is going to be a great platform moving forward!
1. Get Hardware
This is based on my Supermicro X10SDV Build. For drives I used 4 x White Label NAS class HDDs (see ZFS Hard Drive Guide) and two Intel DC S3700s (similar models between S3500 and S3720 should be fine), which often show up for a decent price on Ebay. One SSD will be used to boot VMware and provide the initial data storage and the other used as a ZIL.
Go ahead and plug in the network cables to the IPMI management port, as well as at least one of the normal ethernet ports.
2. Connect to IPMI
This should work with just about any server-class Supermicro board. First download the Supermicro IPMIView tool (I just enter “Private” for the company). Once installed, run “IPMIView20” from the Start Menu (you may need to run it as Administrator).
Scan for IPMI Devices… once it finds your Supermicro server select it and Save.
Login to IPMI using ADMIN / ADMIN (you’ll want to change that obviously).
KVM Console Tab…
Load the VMware ISO file to the Virtual DVD-ROM drive…
Select ISO file, Open Image, select the VMware ISO file which you can download here, and then hit “Plug In”
Hit Delete repeatedly…
Change the boot order: I made the ATEN Virtual CD/DVD the primary boot device, made my Intel SSD DC S3700 that I’ll install VMware to secondary, and disabled everything else.
Save and Exit, and it should boot the VMware installer ISO.
3. Install VMware ESXi 6.5.0
Install to the Intel SSD Drive.
Once installation is complete “Plug Out” the Virtual ISO file before rebooting.
Once it comes up get the IP address (or set it if you want it to have a static IP which I highly recommend).
4. PCI Passthrough HBA
Go to that address in your browser (I suggest Chrome). Manage, Hardware, PCI Devices, select the LSI HBA card and Enable Passthrough.
5. Setup VMware Storage Network
In the examples below my LAN / VM Network is on 10.2.0.0/16 (255.255.0.0) and my Storage network is on 10.55.0.0/16. You may need to adjust for your network. My storage network is on VLAN 55.
I like to keep my Storage Network separate from my LAN / VM Network. So we’ll create a VM Storage Network portgroup with a VLAN ID of 55.
Networking, Port groups, Add Port Group
Add VM Storage Network with VLAN ID of 55.
(you can choose a different VLAN ID, my storage network is 10.55.0.0/16 so I use “55” to match the network so that I don’t have to remember what VLAN goes to what network, but it doesn’t have to match).
Add a second port group just like it called Storage Network with the same VLAN ID (55).
Add VMKernel NIC
Attach it to the Storage Network and give it an address of 10.55.0.4 with a netmask of 255.255.0.0
You should end up with this…
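The same port groups and VMkernel NIC can also be created from the ESXi shell instead of the web UI. The vSwitch and vmk names below are assumptions (check yours with `esxcli network vswitch standard list`):

```shell
# Create the two port groups on VLAN 55 and the storage VMkernel NIC.
esxcli network vswitch standard portgroup add -v vSwitch0 -p "VM Storage Network"
esxcli network vswitch standard portgroup set -p "VM Storage Network" --vlan-id 55
esxcli network vswitch standard portgroup add -v vSwitch0 -p "Storage Network"
esxcli network vswitch standard portgroup set -p "Storage Network" --vlan-id 55
esxcli network ip interface add -i vmk1 -p "Storage Network"
esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.55.0.4 -N 255.255.0.0
```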
6. Create a FreeNAS Corral VM
Install it to the DC S3700 Datastore that VMware is installed on.
Add PCI Device and Select your LSI Card.
Add a second NIC for the VM Storage Network. You should have two NICs for FreeNAS, one on the VM Network and one on the VM Storage Network, and you should set the Adapter Type to VMXNET 3 on both.
I usually give my FreeNAS VM 2 cores; if doing anything heavy (especially if you’ll be running Docker images or bhyve under it) you may want to increase that count. One rule with VMware is do not give VMs more cores than they need. I usually give each VM one core and only consider more if that particular VM needs more resources. This will reduce the risk of CPU co-stops occurring. Gabrie van Zanten’s How too many vCPUs can negatively affect performance is a good read.
FreeNAS may not boot from the default virtual SCSI controller. To prevent this, change the Virtual Device Node on the hard drive to SATA Controller 0, and SCSI Controller 0 should be LSI Logic SAS.
Add CD/DVD Drive, under CD/DVD Media hit Browse to upload and select the FreeNAS Corral ISO file which you can download from FreeNAS.
7. Install FreeNAS VM
Power on the VM…
Select the VMware disk to install to. I should note that if you create two VMDKs you can select them both at this screen and it will create a ZFS boot mirror, if you have an extra hard drive you can create another VMware data store there and put the 2nd vmdk there. This would provide some extra redundancy for the FreeNAS boot pool. In my case I know the DC S3700s are extremely reliable, and if I lost the FreeNAS OS I could just re-import the pool or failover to my secondary FreeNAS server.
Boot via BIOS.
Once FreeNAS is installed reboot and you should get the IP from DHCP on the console (once again I suggest setting this to a static IP).
If you hit that IP with a browser you should have a login screen!
8. Update and Reboot
Before doing anything…. System, Updates, Update and Reboot.
(Note: to get better insight into a task’s progress, head over to the Console and type: task show).
9. Setup SSL Certificate
First, set your hostname, and also create a DNS entry pointing at the FreeNAS IP.
Create Internal CA
Untar the file and click the HobbitonCA.crt to install it; install it to the Trusted Root Certificate Authorities. I should note that if someone were to compromise your CA or gain the key, they could perform a MITM attack on you by forging SSL certificates for other sites.
Create a Certificate for FreeNAS
Listen on HTTP+HTTPS and select the Certificate. I also increase the token Lifetime since I religiously lock my workstation when I’m away.
And now SSL is Secured
10. Create Pool
Do you want Performance, Capacity, or Redundancy? Drag the white circle thing where you want on the triangle and FreeNAS will suggest a zpool layout. With 4 disks I chose “Optimal” and it suggested RAID-Z which is what I wanted. Be sure to add the other SSD as a SLOG / ZIL / LOG.
11. Create Users
It’s probably best not to be logging in as root all the time. Create some named users with Administrator access.
12. Create Top Level Dataset
I like to create a top level dataset with a unique name for each FreeNAS server, that way it’s easier to replicate datasets to my other FreeNAS servers and perform recursive tasks (such as snapshots, or replication) on that top level dataset without having to micromanage them. I know you can sometimes do recursive tasks on the entire pool, but oftentimes I want to exclude certain datasets from those tasks (such as if those datasets are being replicated from another server).
13. Setup SMB Share
Services, Sharing, SMB: set the NetBIOS name and Workgroup and Enable.
Storage, SMB3 Share, to create a new dataset with a Samba share. Be sure to set the ownership to a user.
14. Setup NFS Share for VMware
I believe at this time VMware and FreeNAS don’t work together on NFSv4, so best to stick to NFSv3 for now.
Mount NFS Store in VMware by going to Storage, Datastores, new datastore, Mount NFS datastore.
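From the ESXi shell the equivalent is the following; the FreeNAS storage-network IP and export path here are examples, substitute your own:

```shell
# Mount the FreeNAS NFSv3 export as a VMware datastore.
esxcli storage nfs add --host=10.55.0.2 --share=/mnt/tank1/vmware --volume-name=freenas-nfs
esxcli storage nfs list
```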
15. Setup Snapshots
I set up automatic recursive snapshots on the top level dataset. I like to do pruning snapshots like this:
every 5 minutes -> keep for 2 hours
every hour -> keep for 2 days
every day -> keep for 1 week
every week -> keep for 4 weeks
every 4 weeks -> keep for 12 weeks
And Samba has Previous Versions integration with ZFS snapshots; this is great for letting users restore their own files.
16. ZFS Replication to Backup Server
Before putting anything into production, set up automatic backups. Preferably one onsite and one offsite.
Peering, New FreeNAS, and enter the details for your secondary FreeNAS server.
Now you’ll see why I created a top level dataset under the pool….
Storage, Tank3, Replications, New, select the stor2.b3n.org Peer, source dataset is your top level dataset, tank3/ds4, and target dataset is tank4/ds4 on the backup FreeNAS server.
Compression should be FAST over a LAN or BEST over a slow WAN.
Go to another menu option and then back to Storage, tank3, Replications, replication_ds4, and Start the replication and check back in a couple hours to make sure it’s working. My first replication attempt hung, so I canceled the task and started it again. I also found that adjusting the peer interval from 1 minute to 5 seconds under Peering may have helped.
16.1 Offsite Backups
It’s also a good idea to have Offsite backups, you could use S3, or a CrashPlan Docker Container, etc.
17. Setup Notifications
You want to be notified when something fails. FreeNAS can be configured to send an email or send out Pushbullet notifications. Here’s how to set up Pushbullet.
Create or Login to your Pushbullet account. Settings, Account, Create an Access Token
Services, Alerts & Reporting, Add the access key (bottom right) and configure the alerts to send out via Pushbullet.
You can use the Pushbullet Chrome extension or Android/iOS apps to receive alerts.
18. bhyve VMs and Docker Containers under FreeNAS under VMware
Add another Port Group on your VM Network which allows Promiscuous mode, MAC address changes, and Forged transmits. You can connect FreeNAS and any VMs you really trust to this port group.
Power down and edit the FreeNAS VM. Change the VM Network to VM Network Promiscuous
Enable Nested Virtualization, under CPU, Hardware virtualization, [x] Expose hardware assisted virtualization to the guest OS.
After booting back up you should be able to create VMs and Docker Containers in FreeNAS under VMware.
Use at your own risk.
More topics may come later if I ever get around to it.
ZFS is flexible and will let you name and organize datasets however you choose–but before you start building datasets, there are some ways to make management easier in the long term. I’ve found the following conventions work well for me. They’re not “the” way by any means, but I hope you will find them helpful; I wish tips like these had been written when I built my first storage system 4 years ago.
Here are my personal ZFS best practices and naming conventions to structure and manage ZFS data sets.
ZFS Pool Naming
I never give two zpools the same name, even if they’re in different servers, on the off chance that sometime down the road I’ll need to import two pools into the same system. I generally like to name my zpools tank[n] where n is an incremental number that’s unique across all my servers.
So if I have two servers, say stor1 and stor2 I might have two zpools :
stor1.b3n.org: tank1
stor2.b3n.org: tank2
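Creating them looks like this (the device names are examples; the log device matches the ZIL SSD setup described earlier):

```shell
# On stor1 -- pool name unique across all servers:
zpool create tank1 raidz da0 da1 da2 da3 log da4
# On stor2:
zpool create tank2 raidz da0 da1 da2 da3
zpool list
```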
Top Level ZFS Datasets for Simple Recursive Management
Create a top level dataset called ds[n] where n is a unique number across all your pools, just in case you ever have to bring two separate datasets onto the same zpool. The reason I like to create one main top-level dataset is that it makes it easy to manage high-level tasks recursively on all sub-datasets (such as snapshots, replication, backups, etc.). If you have more than a handful of datasets you really don’t want to be configuring replication on every single one individually. So on my first server I have tank1/ds1.
I usually mount tank1/ds1 as read-only from my CrashPlan VM for backups. You can configure snapshot tasks, replication tasks, and backups all at this top level and be done with it.
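The payoff is that one recursive command covers every sub-dataset. A sketch (snapshot names and hostnames are examples):

```shell
# Recursive snapshot of everything under the top-level dataset:
zfs snapshot -r tank1/ds1@$(date +%Y-%m-%d)
# Recursive replication-stream send of that snapshot to the backup
# server:
zfs send -R tank1/ds1@2017-09-01 | ssh stor2 zfs receive -dF tank2
```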
Name ZFS Datasets for Replication
One of the reasons to have a top level dataset is if you’ll ever have two servers…
stor1.b3n.org
 | - tank1/ds1

stor2.b3n.org
 | - tank2/ds2
I replicate them to each other for backup. Having that top level ds[n] dataset lets me manage ds1 (the primary dataset on the server) completely separately from the replicated dataset (ds2) on stor1.
Advice for Data Hoarders. Overkill for the Rest of Us
The ideal is we back up everything. But in reality storage costs money, and WAN bandwidth isn’t always available to back up everything remotely. I like to structure my datasets such that I can manage them by importance. So under the ds[n] dataset I create sub-datasets.
stor1.b3n.org
 | - tank1/ds1/kirk – very important – family pictures, personal files
 | - tank1/ds1/spock – important – ripped media, ISO files, etc.
 | - tank1/ds1/redshirt – scratch data, tmp data, testing area
 | - tank1/ds1/archive – archived data
 | - tank1/ds1/backups – backups
Kirk – Very Important. Family photos, home videos, journal, code, projects, scans, crypto-currency wallets, etc. I like to keep four to five copies of this data using multiple backup methods and multiple locations. It’s backed up to CrashPlan offsite, rsynced to a friend’s remote server, snapshots are replicated to a local ZFS server, plus an annual backup to a local hard drive for cold storage. That’s 3 copies onsite, 2 copies offsite, 2 different file-system types (ZFS, XFS), and 3 different backup technologies (CrashPlan, rsync, and ZFS replication). I do not want to lose this data.
Spock – Important. Important data that would be a pain to lose, might cost money to reproduce, but it isn’t catastrophic. If I had to go a few weeks without it I’d be fine. For example, rips of all my movies, downloaded Linux ISO files, Logos library and index, etc. If I lost this data and the house burned down I might have to repurchase my movies and spend a few weeks ripping them again, but I can reproduce the data. For this dataset I want at least 2 copies, everything is backed up offsite to CrashPlan and if I have the space local ZFS snapshots are replicated to a 2nd server giving me 3 copies.
Redshirt – This is my expendable dataset. This might be a staging area to store MakeMKV rips until they’re transcoded; I might do video editing here or test out VMs. This data doesn’t get backed up, though I may run snapshots with a short retention policy. Losing this data would mean losing no more than a day’s worth of work. I might also set zfs sync=disabled to get maximum performance here. And typically I don’t do ZFS snapshot replication to a 2nd server. In many cases it will make sense to pull this out from under the top level ds[n] dataset and have it be by itself.
Backups – This dataset contains backups of workstations, servers, and cloud services. I may back up the backups to CrashPlan or some online service, and usually that is sufficient as I already have multiple copies elsewhere.
Archive – This is data I no longer use regularly but don’t want to lose. Old school papers that I’ll probably never need again, backup images of old computers, etc. I set this dataset to compression=gzip-9, and back it up to CrashPlan plus a local backup and try to have at least 3 copies.
Now, you don’t have to name the datasets Kirk, Spock, and Redshirt… but the idea is to identify importance so that you’re only managing a few datasets when configuring ZFS snapshots, replication, etc. If you have unlimited cheap storage and bandwidth it may not be worth it to do this–but it’s nice to have the option to prioritize.
Now… once I’ve established that hierarchy I start defining my datasets that actually store data which may look something like this:
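(The child dataset names below are illustrative examples, not a prescription.)

```shell
zfs create tank1/ds1/kirk
zfs create tank1/ds1/kirk/photos
zfs create tank1/ds1/spock
zfs create tank1/ds1/spock/media
zfs create tank1/ds1/redshirt
zfs create tank1/ds1/backups
zfs create tank1/ds1/archive
zfs set compression=gzip-9 tank1/ds1/archive   # archive compresses well
```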
With this ZFS hierarchy I can manage everything at the top level of ds1 and just setup the same automatic snapshot, replication, and backups for everything. Or if I need to be more precise I have the ability to handle Kirk, Spock, and Redshirt differently.
The pictures show what appears to be the ASRock C2750D4I motherboard, which has an 8-core Atom / Avoton processor. With the upcoming FreeNAS 9.10 (based on FreeBSD 10) it should be able to run the bhyve hypervisor as well (at least from the CLI; a bhyve GUI might have to wait until FreeNAS 10), meaning a nice all-in-one hypervisor with ZFS without the need for VT-d. This may end up being a great successor to the HP Microserver for those wanting to upgrade with a little more capacity.
The case is the Ablecom CS-T80 so I imagine we’ll start seeing it from Supermicro soon as well. According to Ablecom it has 8 hotswap bays plus 2 x 2.5″ internal bays and still managed to have room for a slim DVD/Blu-Ray drive.
It’s really great to see an 8-bay Mini-ITX NAS case that’s nicer than the existing options out there. I hope the FreeNAS Mini XL will have an option for a more powerful motherboard even if it means having to use up the PCI-E slot with an HBA–I’m not really a fan of the Marvell SATA controllers on that board, and of course a Xeon-D would be nice.
Here’s a look at Gea’s popular All-in-one design which allows VMware to run on top of ZFS on a single box using a virtual 10Gbe storage network. The design requires an HBA, and a CPU that supports VT-d so that the storage can be passed directly to a guest VM running a ZFS server (such as OmniOS or FreeNAS). Then a virtual storage network is used to share the storage back to VMware.
bhyve can simplify this design: since it runs under FreeBSD, it already has a ZFS server. This not only simplifies the design, but it could potentially allow a hypervisor to run on simpler, less expensive hardware. The same design in bhyve eliminates the need for a dedicated HBA and a CPU that supports VT-d.
I’ve never understood the advantage of type-1 hypervisors (such as VMware and Xen) over Type-2 hypervisors (like KVM and bhyve). Type-1 proponents say the hypervisor runs on bare metal instead of an OS… I’m not sure how VMware isn’t considered an OS except that it is a purpose-built OS and probably smaller. It seems you could take a Linux distribution running KVM and take away features until at some point it becomes a Type-1 hypervisor. Which is all fine but it could actually be a disadvantage if you wanted some of those features (like ZFS). A type-2 hypervisor that supports ZFS appears to have a clear advantage (at least theoretically) over a type-1 for this type of setup.
In fact, FreeBSD may be the best virtualization / storage platform. You get ZFS and bhyve, and also jails. You really only need to run bhyve when virtualizing a different OS.
bhyve is still pretty young, but I thought I’d run some tests to see where it’s at…
OS defaults are left as is, I didn’t try to tweak number of NFS servers, sd.conf, etc.
My tests fit inside of ARC. I ran each test 5 times on each platform to warm up the ARC. The results are the average of the next 5 test runs.
I only tested an Ubuntu guest because it’s the only distribution I run (in quantity, anyway) in addition to FreeBSD; I suppose a more thorough test should include other operating systems.
The environments were setup as follows:
1 – VM under ESXi 6 using NFS storage from FreeNAS 9.3 VM via VT-d
FreeNAS 9.3 installed under ESXi.
FreeNAS is given 24GB memory.
HBA is passed to it via VT-d.
Storage shared to VMware via NFSv3, virtual storage network on VMXNET3.
Ubuntu guest given VMware para-virtual drivers
2 – VM under ESXi 6 using NFS storage from OmniOS VM via VT-d
OmniOS r151014 LTS installed under ESXi.
OmniOS is given 24GB memory.
HBA is passed to it via VT-d.
Storage shared to VMware via NFSv3, virtual storage network on VMXNET3.
Ubuntu guest given VMware para-virtual drivers
3 – VM under FreeBSD bhyve
bhyve running on FreeBSD 10.1-Release
Guest storage is file image on ZFS dataset.
4 – VM under FreeBSD bhyve sync always
bhyve running on FreeBSD 10.1-Release
Guest storage is file image on ZFS dataset with sync=always.
MariaDB OLTP Load
This test is a mix of CPU and storage I/O. bhyve (yellow) pulls ahead in the 2 threaded test, probably because it doesn’t have to issue a sync after each write. However, it falls behind on the 4 threaded test even with that advantage, probably because it isn’t as efficient at handling CPU processing as VMware (see next chart on finding primes).
Finding prime numbers with a VM under VMware is significantly faster than under bhyve.
bhyve has an advantage, probably because it has direct access to ZFS.
With sync=standard bhyve has a clear advantage. I’m not sure why VMware can outperform bhyve with sync=always. I am merely speculating, but I wonder if VMware over NFS is translating smaller writes into larger blocks (maybe 64k or 128k) before sending them to the NFS server.
Sequential reads are faster with bhyve’s direct storage access.
This is what not having to sync every write will gain you…
VMware is a very fine virtualization platform that’s been well tuned. All that overhead of VT-d, virtual 10GbE switches for the storage network, VM storage over NFS, etc. is not hurting its performance, except perhaps on sequential reads.
For as young as bhyve is, I’m happy with its performance compared to VMware, though it appears to be slower on the CPU-intensive tests. I didn’t intend to compare CPU performance so I haven’t done enough of a variety of tests to see what the difference is there, but it appears VMware has an advantage.
One thing that is not clear to me is how safe running sync=standard is on bhyve. The ideal scenario would be honoring fsync requests from the guest, however I’m not sure if bhyve has that kind of insight from the guest. Probably the worst case under this scenario with sync=standard is losing the last 5 seconds of writes–but even that risk can be mitigated with battery backup. With standard sync there’s a lot of performance to be gained over VMware with NFS. Even if you run bhyve with sync=always it does not perform badly, and even outperforms VMware All-in-one design on some tests.
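The sync behavior in question is just a per-dataset ZFS property, so it’s easy to test both modes yourself (the dataset name below is hypothetical):

```shell
# Force every write to be committed synchronously (what the sync=always environment tested)
zfs set sync=always tank/vm

# Honor sync requests from the guest only (the ZFS default)
zfs set sync=standard tank/vm

# Check the current setting
zfs get sync tank/vm
```

The property takes effect immediately, so you can flip it between benchmark runs without rebooting anything.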
The upcoming FreeNAS 10 may be an interesting hypervisor + storage platform, especially if it provides a GUI to manage bhyve.
I don’t have room for a couple of rackmount servers anymore so I was thinking of ways to reduce the footprint and noise from my servers. I’ve been very happy with Supermicro hardware so here’s my Supermicro Mini-ITX Datacenter in a box build.
Unlike most processors, the Xeon D is SOC (System on Chip) meaning that it’s built into the motherboard. Depending on your compute needs, you’ve got a lot of pricing / power flexibility with the Mini-ITX Supermicro X10SDV motherboards with the Xeon D SOC CPU ranging from a budget build of 2 cores to a ridiculous 16 cores rivaling high end Xeon E5 class processors!
How many cores do you want? CPU/Motherboard Options
A few things to keep in mind when choosing a board. Some come with a fan (normally indicated by a + after the core count), some don’t. I suggest getting one with a fan unless you’re putting some serious air flow through the heatsink (such as with a 1U server). I got one without a fan and had to do a Noctua mod (below).
Many versions of this board are rated for a 7-year lifespan, which means they have components designed to last longer than most boards! Usually computers go obsolete before they die anyway, but it’s nice to have that option if you’re looking for a permanent solution. A VMware / NAS server that’ll last you 7 years isn’t bad at all!
On the last 5 digits you’ll see two options: “-TLN2F” and “-TLN4F”. This refers to the number of network Ethernet ports (N2 comes with 2 x gigabit ports, and N4 usually comes with 2 gigabit plus 2 x 10 gigabit ports). 10GbE ports may come in handy for storage, and having 4 ports may be useful if you’re going to run a router VM such as pfSense.
I bought the first model, known simply as the “X10SDV-F”, which comes with 8 cores and 2 gigabit network ports. This board looks like it’s designed for high density computing. It’s like cramming dual Xeon E5’s into a Mini-ITX board. The Xeon D-1540 will well outperform the Xeon E3-1230v3 in most tests and can handle up to 128GB memory. The board has two NICs (a model with two additional 10GbE ports provides four), IPMI, 6 SATA-3 ports, a PCI-E slot, and an M.2 slot.
IPMI / KVM Over-IP / Out of Band Management
One of the great features of these motherboards is you will never need to plug in a keyboard, mouse, or monitor. In addition to the 2 or 4 normal Ethernet ports, there is one port off to the side: the management port. Unlike HP iLO, this is a free feature on the Supermicro motherboards. The IPMI interface will get a DHCP address. You can download the free IPMIView software from Supermicro, or use the Android app to scan your network for the IP address. Login as ADMIN / ADMIN (be sure to change the password).
You can remotely reset or power off the server, and even if the power is off you can power it back on.
And of course you also get KVM over IP, which is so low level you can get into the BIOS and even load an ISO file from your workstation to boot off of over the network!
When I first saw IPMI I made sure all my new servers have it. I hate messing around with keyboards, mice, and monitors, and I don’t have room for a hardware-based KVM solution. This out-of-band management port is the perfect answer. And the best part is the ability to manage your server remotely–I have used this to power on servers and load ISO files in California from Idaho.
I should note that I would not expose the IPMI port to the internet; make sure it’s behind a firewall, accessible only through VPN.
Cooling issue | heatsink not enough
The first boot was fine but it crashed after about 5 minutes while I was in the BIOS setup… after a few resets I couldn’t even get it to post. I finally realized the CPU was getting too hot. Supermicro probably meant for this model to be in a 1U case with good air flow. The X10SDV-TLN4F costs a little extra but it comes with a CPU fan in addition to the 10GbE network adapters, so keep that in mind if you’re trying to decide between the two boards.
Noctua to the Rescue
I couldn’t find a CPU fan designed to fit this particular socket, so I bought a 60mm Noctua NF-A6x25.
UPDATE: Mikaelo commented that the fan is backwards in the pictures! Label should be down.
This is my first Noctua fan and I think it’s the nicest fan I’ve ever owned. It came packaged with screws, rubber leg things, an extension cord, a molex power adapter, and two noise reducer cables that slow the fan down a bit. I actually can’t even hear the fan running at normal speed.
There’s not really a good way to screw the fan and the heatsink into the motherboard together, but I took the four rubber things and sort of tucked them under the heatsink screws. This is surprisingly a secure fit, it’s not ideal but the fan is not going to go anywhere.
This is what you would expect from Supermicro, a quality server-grade case. It comes with a 250 watt 80 plus power supply. Four 3.5″ hotswap bays, trays are the same as you would find on a 16 bay enterprise chassis. Also it comes with labels numbered from 0 to 4 so you could choose to label starting at 0 (the right way) or 1. It is designed to fit two fixed 2.5″ drives, one on the side of the HDD cage, and the other can be used on top instead of an optical drive.
The case is roomy enough to work with, I had no trouble adding an IBM ServerRAID M1015 / LSI 9220-8i
I took this shot just to note that if you could figure out a way to secure an extra drive, there is room to fit three drives, or perhaps two drives even with an optical drive; you’d have to use a Y-splitter to power it. I should also note that you could use the M.2 slot to add another SSD.
The case is pretty quiet, I cannot hear it at all with my other computers running in the same room so I’m not sure how much noise it makes.
This case reminds me of the HP Microserver Gen8 and is probably about the same size and quality, but I think it’s a little roomier, and with Supermicro the IPMI is free.
Compared to the Silverstone DS380, the Supermicro CS-721 is more compact. The DS380 has the advantage of being able to hold more drives: it can fit 8 3.5″ or 2.5″ drives in hotswap bays plus an additional four 2.5″ fixed in a cage. Between the two cases I much prefer the Supermicro CS-721 even with less drive capacity. The DS380 has vibration issues with all the drives populated, and it’s also not as easy to work with. The CS-721 looks and feels much higher quality.
I loaded mine with two Intel DC S3700 SSDs and 4 x 6TB drives in RAID-Z (RAID-5); configured this way the case can provide up to 18TB of storage, which is a good amount for any data hoarder wanting to get started.
I think the Xeon D platform offers great value with a great range of power and pricing options. The prices on the Xeon D motherboards are reasonable considering the Motherboard and CPU are combined, if you went with a Xeon E3 or E5 platform you’d be paying about the same or more to purchase them separately. You’ll be paying anywhere from $350 to $2500 depending on how many cores you want.
Core Count Recommendations
For a NAS only box such as FreeNAS, OmniOS+NappIt, NAS4Free, etc. or a VMware All in one with FreeNAS and one or two light guest VMs I’d go with a simple 2C CPU.
For a bhyve or VMware + ZFS all-in-one I think the 4C is a great starter board; it will handle probably a lot more than most people need for a home server running a handful of VMs, including the ability to transcode with a Plex or Emby server.
From there you can get 6C, 8C, 12C, or 16C, as you start getting more cores the clock frequency starts to go down so you don’t want to go overboard unless you really do need to use those cores. Also, consider that you may prefer to get two or three smaller boards to allow failover instead of one powerful server.
I’m pretty happy with the build, I really like how much power you can get into a microserver these days. My build has 8 cores (16 threads) and 32GB memory (can go up to 128GB!), and with 6TB drives in RAID-Z (RAID-5) I have 18TB of usable data (more with ZFS compression). With VMware and ZFS you could run a small datacenter from a box under your desk.
This is a guide which will install FreeNAS 9.10 under VMware ESXi and then using ZFS share the storage back to VMware. This is roughly based on Napp-It’s All-In-One design, except that it uses FreeNAS instead of OmniOS.
This post has had over 160,000 visitors, and thousands of people have used this setup in their homelabs and small businesses. I should note that I myself would not run FreeNAS virtualized in a production environment. But many have done so successfully. If you run into any problems and ask for help on the FreeNAS forums, I have no doubt that Cyberjock will respond with “So, you want to lose all your data?” So, with that disclaimer aside let’s get going:
This guide was originally written for FreeNAS 9.3, I’ve updated it for FreeNAS 9.10. Also, I believe Avago LSI P20 firmware bugs have been fixed and have been around long enough to be considered stable so I’ve removed my warning on using P20. Added sections 7.1 (Resource reservations) and 16.1 (zpool layouts) and some other minor updates.
1. Get proper hardware
Example 1: Supermicro 2U Build
SuperMicro X10SL7-F (which has a built-in LSI 2308 HBA)
Xeon E3-1240v3
ECC Memory
6 hotswap bays with 2TB HGST HDDs (I use RAID-Z2)
4 2.5″ hotswap bays: 2 Intel DC S3700s for SLOG / ZIL, and 2 drives for installing FreeNAS (mirrored)
Example 2: Mini-ITX Datacenter in a Box Build
X10SDV-F (built-in Xeon D-1540 8-core Broadwell)
ECC Memory
IBM M1015 / LSI 9220-8i HBA
4 hotswap bays with 2TB HGST HDDs (I use RAID-Z)
2 Intel DC S3700s: 1 for SLOG / ZIL, and one to boot ESXi and install FreeNAS to
The LSI2308/M1015 has 8 ports. I like to do two DC S3700s for a striped SLOG device and then do a RAID-Z2 of spinners on the other 6 slots. Also get one (preferably two for a mirror) drives that you will plug into the SATA ports (not on the LSI controller) for the local ESXi data store. I’m using DC S3700s because that’s what I have, but this doesn’t need to be fast storage; it’s just to put FreeNAS on.
2. Flash HBA to IT Firmware
As of FreeNAS 9.3.1 or greater you should be flashing to IT mode P20 (it looks like it’s P21 now, but that’s not available from every vendor yet).
I strongly suggest pulling all drives before flashing.
If you already have the card passed through to FreeNAS via VT-d (steps 6-8) you can actually flash the card from FreeNAS using the sas2flash utility with the steps below (in this example my card is already in IT mode so I’m just upgrading it):
Copyright (c) 2008-2013 LSI Corporation. All rights reserved
Advanced Mode Set
Adapter Selected is a LSI SAS: SAS2008(B2)
Executing Operation: Flash Firmware Image
Firmware Image has a Valid Checksum.
Firmware Image compatible with Controller.
Valid NVDATA Image found.
Checking for a compatible NVData image...
NVDATA Device ID and Chip Revision match verified.
NVDATA Versions Compatible.
Valid Initialization Image verified.
Valid BootLoader Image verified.
Beginning Firmware Download...
Firmware Download Successful.
Firmware Flash Successful.
(Wait a few minutes; at this point FreeNAS finally crashed. Power off FreeNAS, and then reboot VMware.)
Warning on P20 buggy firmware:
Some earlier versions of the P20 firmware were buggy, so make sure it’s version P20.00.04.00 or later. If you can’t get P20 in a version later than P20.00.04.00 then use P19 or P16.
3. Optional: Over-provision ZIL / SLOG SSDs.
If you’re going to use an SSD for SLOG you can over-provision them. You can boot into an Ubuntu LiveCD and use hdparm, instructions are here: https://www.thomas-krenn.com/en/wiki/SSD_Over-provisioning_using_hdparm You can also do this after VMware is installed by passing the LSI controller to an Ubuntu VM (FreeNAS doesn’t have hdparm). I usually over-provision down to 8GB.
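The exact commands aren’t shown above, but the linked Thomas-Krenn method boils down to hdparm’s Host Protected Area feature. Roughly, from the Ubuntu LiveCD (the device name is an example; double-check which disk you’re touching before running this):

```shell
# Show the drive's current visible and native max sector counts
hdparm -N /dev/sdb

# Limit visible capacity to 8GB: 8 * 1024^3 bytes / 512-byte sectors = 16777216 sectors.
# The "p" prefix makes the new limit persist across power cycles.
hdparm -Np16777216 --yes-i-know-what-i-am-doing /dev/sdb
```

Power-cycle the drive afterwards so the controller and OS see the new capacity.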
Update 2016-08-10: But you may want to only go down to 20GB depending on your setup! One of my colleagues discovered 8GB over-provisioning wasn’t even maxing out a 10Gb network (remember, every write to VMware is a sync so it hits the ZIL no matter what) with 2 x 10Gb fiber lagged connections between VMware and FreeNAS. This was on an HGST 840z, so I’m not sure if the same holds true for the Intel DC S3700… and it wasn’t a virtualized setup. But I thought I’d mention it here.
Create a standard switch (uncheck any physical adapters).
Add Networking again, VMKernel, VMKernel… Select vSwitch1 (which you just created in the previous step), give it a network different than your main network. I use 10.55.0.0/16 for my storage so you’d put 10.55.0.2 for the IP and 255.255.0.0 for the netmask.
Some people are having trouble with an MTU of 9000. I suggest leaving the MTU at 1500 and make sure everything works there before testing an MTU of 9000. Also, if you run into networking issues look at disabling TSO offloading (see comments).
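One way to disable TSO offloading, if you do hit those issues, is from the ESXi shell. This is a general ESXi knob rather than anything specific to this guide, and a reboot is required for it to take effect:

```shell
# Turn off hardware TSO for IPv4 and IPv6 traffic
esxcli system settings advanced set -o /Net/UseHwTSO -i 0
esxcli system settings advanced set -o /Net/UseHwTSO6 -i 0

# Confirm the current values
esxcli system settings advanced list -o /Net/UseHwTSO
```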
Under vSwitch1 go to Properties, select vSwitch, Edit, change the MTU to 9000. Answer yes to the no active NICs warning.
Then select the Storage Kernel port, edit, and set the MTU to 9000.
Create a new VM, choose custom, put it on one of the drives on the SATA ports, Virtual Machine version 11, Guest OS type is FreeBSD 64-bit, 1 socket and 2 cores. Try to give it at least 8GB of memory. On Networking give it two adapters, the 1st NIC should be assigned to the VM Network, 2nd NIC to the Storage network. Set both to VMXNET3.
SCSI controller should be the default, LSI Logic Parallel.
Choose Edit the Virtual Machine before completion.
If you have a second local drive (not one that you’ll use for your zpool) here you can add a second boot drive for a mirror.
Before finishing the creation of the VM click Add, select PCI Devices, and choose the LSI 2308.
And be sure to go into the CD/DVD drive settings and set it to boot off the FreeNAS iso. Then finish creation of the VM.
7.1 FreeNAS VM Resource allocation
Also, since FreeNAS will be driving the storage for the rest of VMware, it’s a good idea to make sure it has a higher priority for CPU and Memory than other guests. Edit the virtual machine, under Resources set the CPU Shares to “High” to give FreeNAS a higher priority, then under Memory allocation lock the guest memory so that VMware doesn’t ever borrow from it for memory ballooning. You don’t want VMware to swap out ZFS’s ARC (memory read cache).
8. Install FreeNAS.
Boot up the VM and install it to your SATA drive (or two of them to mirror boot).
After it’s finished installing reboot.
9. Install VMware Tools.
SKIP THIS STEP. As of FreeNAS 9.10.1, installing VMware Tools may no longer be necessary–you can skip step 9 and go to 10. Just leaving this for historical purposes.
In VMware right-click the FreeNAS VM, Choose Guest, then Install/Upgrade VMware Tools. You’ll then choose interactive mode.
Mount the CD-ROM, extract the tools tarball, and copy the vmxnet3 driver into place:
# mkdir /mnt/cdrom
# mount -t cd9660 /dev/iso9660/VMware\ Tools /mnt/cdrom/
# tar xzf /mnt/cdrom/vmware-freebsd-tools.tar.gz -C /tmp
# cd /tmp/vmware-tools-distrib/lib/modules/binary/FreeBSD9.0-amd64
# cp vmxnet3.ko /boot/modules
Once installed, navigate to the WebGUI. It starts out presenting a wizard; I usually set my language and timezone then exit the rest of the wizard.
Under System, Tunables… add a Tunable. The Variable should be vmxnet3_load, the Type should be Loader, and the Value YES.
Reboot FreeNAS. On reboot you should notice that the VMXNET3 NICs now work (except the NIC on the storage network can’t find a DHCP server, but we’ll set it to static later), and also that VMware is now reporting that VMware Tools are installed.
If all looks well shutdown FreeNAS (you can now choose Shutdown Guest from VMware to safely power it off), remove the E1000 NIC and boot it back up (note that the IP address on the web gui will be different).
10. Update FreeNAS
Before doing anything let’s upgrade FreeNAS to the latest stable under System Update.
This is a great time to make some tea.
Once that’s done it should reboot. Then I always go back and check for updates again to make sure there’s nothing left.
11. SSL Certificate on the Management Interface (optional)
On my DHCP server I’ll give FreeNAS a static/reserved IP, and setup an entry for it on my local DNS server. So for this example I’ll have a DNS entry on my internal network for stor1.b3n.org.
If you don’t have your own internal Certificate Authority you can create one right in FreeNAS:
System, CAs, Create internal CA. Increase the key length to 4096 and make sure the Digest Algorithm is set to SHA256.
Click on the CA you just created, hit the Export Certificate button, and click on the file to install the Root certificate you just created on your computer. You can either install it just for your profile or for the local machine (I usually do local machine), and you’ll want to make sure to store it in the Trusted Root Certificate Authorities store.
Just a warning: you must keep this Root CA guarded. If an attacker were to access it, he could generate certificates to impersonate anyone (including your bank) and initiate a MITM attack.
Also Export the Private Key of the CA and store it some place safe.
Now create the certificate…
System, Certificates, Create Internal Certificate. Once again bump the key length to 4096. The important part here is that the Common Name must match your DNS entry. If you are going to access FreeNAS via IP then you should put the IP address in the Common Name field.
System, Information. Set the hostname to your DNS name.
System, General. Change the protocol to HTTPS and select the certificate you created. Now you should be able to use HTTPS to access the FreeNAS WebGUI.
12. Setup Email Notifications
Account, Users, Root, Change Email, set to the email address you want to receive alerts (like if a drive fails or there’s an update available).
Show console messages in the footer: enable it (I find it useful).
Fill in your SMTP server info… and send a test email to make sure it works.
13. Setup a Proper Swap
FreeNAS by default creates a swap partition on each drive, and then stripes the swap across them so that if any one drive fails there’s a chance your system will crash. We don’t want this.
Under System, Advanced, find “Swap size on each drive in GiB, affects new disks only. Setting this to 0 disables swap creation completely (STRONGLY DISCOURAGED).” Despite the warning, set this to 0–we’ll create a proper swap file instead.
Open the shell. This will create a 4GB swap file (based on https://www.freebsd.org/doc/handbook/adding-swap-space.html)
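The commands themselves aren’t shown above; a sketch following the handbook’s memory-disk approach would look something like this (the file path and md unit number are arbitrary choices):

```shell
# Create a 4GB file to back the swap
dd if=/dev/zero of=/usr/swap0 bs=1m count=4096
chmod 0600 /usr/swap0

# Attach the file as memory disk md99 and enable swapping on it
mdconfig -a -t vnode -f /usr/swap0 -u 99
swapon /dev/md99
```

Since FreeNAS doesn’t persist changes to system files, you’d wrap the mdconfig/swapon part in a post-init script so it runs on every boot.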
The next time you reboot, click Display System Processes in the left navigation pane and make sure the swap shows up. If so, it’s working.
14. Configure FreeNAS Networking
Setup the Management Network (which you are currently using to connect to the WebGUI).
Network, Interfaces, Add Interface, choose the Management NIC, vmx3f0, and set to DHCP.
Setup the Storage Network
Add Interface, choose the Storage NIC, vmx3f1, and set to 10.55.1.2 (I setup my VMware hosts on 10.55.0.x and ZFS servers on 10.55.1.x), be sure to select /16 for the netmask. And set the mtu to 9000.
Open a shell and make sure you can ping the ESXi host at 10.55.0.2
Reboot. Let’s make sure the networking and swap stick.
15. Hard Drive Identification Setup
Label Drives. FreeNAS is great at detecting bad drives, but it’s not so great at telling you which physical drive is having an issue. It will tell you the serial number and that’s about it. But how confident are you in knowing which drive fails? If FreeNAS tells you that disk da3 (by the way, all these da numbers can change randomly) is having an issue how do you know which drive to pull? Under Storage, View Disks, you can see the serial number, this still isn’t entirely helpful because chances are you can’t see the serial number without pulling a drive. So we need to map them to slot numbers or labels of some sort.
There are two ways you can deal with this. The first, and my preference, is sas2ircu. Assuming you connected the cables between the LSI 2308 and the backplane in proper sequence sas2ircu will tell you the slot number the drives are plugged into on the LSI controller. Also if you’re using a backplane with an expander that supports SES2 it should also tell you which slots the drives are in. Try running this command:
# sas2ircu 0 display|less
You can see that it tells you the slot number and maps it to the serial number. If you are comfortable that you know which physical drive each slot number is in then you should be okay.
If not, the second method is to remove all the drives from the LSI controller, put in just the first drive, and label it Slot 0 in the GUI by clicking on the drive, Edit, and entering a Description.
Put in the next drive in Slot 1 and label it, then insert the next drive and label it Slot 2 and so on…
The Description will show up in FreeNAS and it will survive reboots. It will also follow the drive even if you move it to a different slot, so it may be more appropriate to make your description match a label on the removable trays rather than the bay number.
It doesn’t matter whether you label the drives or use sas2ircu; just make sure you’re confident that you can map a serial number to a physical drive before going forward.
16.1 Choose Pool Layout
For high performance the best configuration is to maximize the number of VDEVs by creating mirrors (essentially RAID-10). That said, with my 6-drive RAID-Z2 array with 2 DC S3700 SSDs for SLOG/ZIL my setup performs very well with VMware in my environment. If you’re running heavy random I/O mirrors are more important, but if you’re just running a handful of VMs RAID-Z / RAID-Z2 will probably offer great performance as long as you have a good SSD for the SLOG device. I like to start double parity at 5 or 6 disk VDEVs, and triple parity at 9 disks. Here are some sample configurations:
Example zpool / vdev configurations
2 disks = 1 mirror
3 disks = RAID-Z
4 disks = RAID-Z, or 2 mirrors
5 disks = RAID-Z, RAID-Z2, or 2 mirrors with a hot spare (don’t configure 5 disks as a 4-drive RAID-Z plus 1 hot spare–that’s just ridiculous; make it a RAID-Z2 to begin with)
6 disks = RAID-Z2, or 3 mirrors
7 disks = RAID-Z2, or 3 mirrors plus a hot spare
8 disks = RAID-Z2, or 4 mirrors
9 disks = RAID-Z3, or 4 mirrors plus a hot spare
10 disks = RAID-Z3, 2 vdevs of 5-disk RAID-Z2, or 5 mirrors
11 disks = RAID-Z3, 2 vdevs of 5-disk RAID-Z2 plus a hot spare, or 5 mirrors with a hot spare
12 disks = 2 vdevs of 6-disk RAID-Z2, or 5 mirrors with 2 hot spares
13 disks = 2 vdevs of 6-disk RAID-Z2 plus a hot spare, or 6 mirrors with one hot spare
14 disks = 2 vdevs of 7-disk RAID-Z2, or 6 mirrors plus 2 hot spares
15 disks = 3 vdevs of 5-disk RAID-Z2, or 7 mirrors with 1 hot spare
16 disks = 3 vdevs of 5-disk RAID-Z2 plus a hot spare, or 7 mirrors with 2 hot spares
17 disks = 3 vdevs of 5-disk RAID-Z2 plus hot spares, or 7 mirrors with 3 hot spares
18 disks = 2 vdevs of 9-disk RAID-Z3, 3 vdevs of 6-disk RAID-Z2, or 8 mirrors with 2 hot spares
19 disks = 2 vdevs of 9-disk RAID-Z3, 3 vdevs of 6-disk RAID-Z2 plus a hot spare, or 8 mirrors with 3 hot spares
20 disks = 2 vdevs of 10-disk RAID-Z3, 4 vdevs of 5-disk RAID-Z2, or 9 mirrors with 2 hot spares
Anyway, that gives you a rough idea. The more vdevs the better random performance. It’s always a balance between capacity, performance, and safety.
16.2 Create the Pool.
Storage, Volumes, Volume Manager.
Click the + next to your HDDs and add them to the pool as RAID-Z2.
Click the + next to the SSDs and add them to the pool. By default the SSDs will be on one row and two columns. This will create a mirror. If you want a stripe just add one Log device now and add the second one later. Make certain that you change the dropdown on the SSD to “Log (ZIL)” …it seems to lose this setting anytime you make any other changes so change that setting last. If you do not do this you will stripe the SSD with the HDDs and possibly create a situation where any one drive failure can result in data loss.
Back to Volume manager and add the second Log device…
I have on numerous occasions had the Log get changed to Stripe after I set it to Log, so just double-check by clicking on the top level tank, then the volume status icon and make sure it looks like this:
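If you prefer to double-check from the shell, zpool status gives the same confirmation: the SLOG devices must appear under a separate logs section rather than alongside the raidz vdev. The layout below is illustrative (pool and device names will differ):

```shell
zpool status tank
#   pool: tank
#  state: ONLINE
# config:
#         NAME        STATE
#         tank        ONLINE
#           raidz2-0  ONLINE
#             da1     ONLINE
#             da2     ONLINE
#             ...
#         logs
#           da7       ONLINE
#           da8       ONLINE
```

If an SSD shows up at the same level as raidz2-0 instead of under logs, it has been striped into the pool and you should fix it before writing any data.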
17. Create an NFS Share for VMware
You can create either an NFS share, or iSCSI share (or both) for VMware. First here’s how to setup an NFS share:
Storage, Volumes, Select the nested Tank, Create Data Set
Be sure to disable atime.
Sharing, NFS, Add Unix (NFS) Share. Add the vmware_nfs dataset, and grant access to the storage network, and map the root user to root.
Answer yes to enable the NFS service.
In VMware, Configuration, Add Storage, Network File System and add the storage:
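The same mount can be scripted from the ESXi command line if you prefer. This assumes the addressing used earlier in the guide (FreeNAS storage NIC on 10.55.1.2, dataset named vmware_nfs):

```shell
# Mount the FreeNAS NFS export as a datastore called "vmware_nfs"
esxcfg-nas -a -o 10.55.1.2 -s /mnt/tank/vmware_nfs vmware_nfs

# List NAS datastores to confirm the mount
esxcfg-nas -l
```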
And there’s your storage!
18. Create an iSCSI share for VMware
WARNING: Note that at this time, based on some of the comments below with people having connection drop issues on iSCSI I suggest testing with heavy concurrent loads to make sure it’s stable. Watch dmesg and /var/log/messages on FreeNAS for iSCSI timeouts. Personally I use NFS. But here’s how to enable iSCSI:
Storage, select the nested tank, Create Zvol. Be sure compression is set to lz4. Check Sparse Volume. Choose advanced mode and optionally change the default block size. I use a 64K block size based on some benchmarks I’ve done comparing 16K (the default), 64K, and 128K. 64K blocks didn’t really hurt random I/O but helped some on sequential performance, and also give a better compression ratio. 128K blocks had the best compression ratio but random I/O started to suffer, so I think 64K is a good middle ground. Various workloads will probably benefit from different block sizes.
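For reference, the equivalent zvol can also be created from the FreeNAS shell (the size and dataset name here are just examples):

```shell
# Sparse 500G zvol with 64K blocks and lz4 compression
zfs create -s -V 500G -o volblocksize=64K -o compression=lz4 tank/vmware_iscsi

# Confirm the properties took effect
zfs get volblocksize,compression tank/vmware_iscsi
```

Note that volblocksize can only be set at creation time; it can’t be changed on an existing zvol.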
Sharing, Block (iSCSI), Target Global Configuration.
Set the base name to something sensible like: iqn.2011-03.org.b3n.stor1.istgt. Set the Pool Available Space Threshold to 60%.
Portals tab… add a portal on the storage network.
Initiator. Add Initiator.
Targets. Add Target.
Extents. Add Extent.
Associated Targets. Add Target / Extent.
Under Services enable iSCSI.
In VMware: Configuration, Storage Adapters, Add Adapter, iSCSI.
Select the iSCSI Software Adapter in the adapters list and choose properties. Dynamic discovery tab. Add…
Close and re-scan the HBA / Adapter.
You should see your iSCSI block device appear…
Configuration, Storage, Add Storage, Disk/LUN, and select the FreeBSD iSCSI Disk.
19. Setup ZFS VMware-Snapshot coordination.
This will coordinate with VMware to take clean snapshots of the VMs whenever ZFS takes a snapshot of that dataset.
Storage, VMware-Snapshot, Add VMware-Snapshot. Map your ZFS dataset to the VMware data store.
ZFS / VMware snapshots of NFS example.
ZFS / VMware snapshots of iSCSI example.
20. Periodic Snapshots
Add periodic snapshot jobs for your VMware storage under Storage, Periodic Snapshot Tasks. You can setup different snapshot jobs with different retention policies.
21. ZFS Replication
If you have a second FreeNAS server (say stor2.b3n.org) you can replicate the snapshots over to it. On stor1.b3n.org: Replication Tasks, View Public Key. Copy the key to the clipboard.
On the server you’re replicating to, stor2.b3n.org, go to Account, View Users, root, Modify User, and paste the public key into the SSH public Key field. Also create a dataset called “replicated”.
Back on stor1.b3n.org:
Add Replication. Do an SSH keyscan.
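If the built-in keyscan gives you trouble, you can always fetch the remote host key by hand from the FreeNAS shell and paste it into the replication task:

```shell
# Retrieve stor2's RSA host key
ssh-keyscan -t rsa stor2.b3n.org
```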
And repeat for any other datasets. Optionally you could also just replicate the entire pool with the recursive option.
22. Automatic Shutdown on UPS Battery Failure (Work in Progress)
The goal is that on power loss, before the battery dies, all the VMware guests including FreeNAS get shut down. So far all I have gotten working is the APC with VMware. Edit the VM settings and add a USB controller, then add a USB device and select the UPS, in my case an APC Back-UPS ES 550G. Power FreeNAS back on.
This will tell you where the APC device is. In my case it’s showing up on ugen0.4. I ended up having to grant world access to the UPS…
For some reason I could not get the GUI to connect to the UPS. I selected ugen0.4, but under the drivers dropdown I just have hyphens —— … so I set it manually in /usr/local/etc/nut/ups.conf
However, this file gets overwritten on reboot, and also the rc.conf setting doesn’t seem to stick. I added this tunable to get the rc.conf setting…
And I created my ups.conf file in /mnt/tank/ups.conf. Then I created a script to stop the nut service, copy my config file and restart the nut service in /mnt/tank/nutscript.sh
#!/bin/sh
service nut stop
cp /mnt/tank/ups.conf /usr/local/etc/nut/ups.conf
service nut start
Then under tasks, Init/Shutdown Scripts I added a task to run the script post init.
Next step is to configure automatic shutdown of the VMware server and all guests on it… I have not done this yet.
There are a couple of approaches to take here. One is to install a NUT client on the ESXi host, and the other is to have FreeNAS ssh into VMware and tell it to shut down. I may update this section later if I ever get around to implementing it.
Before going live make sure you have adequate backups! You can use ZFS replication with a fast link. For slow network connections rsync will work better (look under Tasks -> Rsync Tasks), or use a cloud service like CrashPlan. Here’s a nice CrashPlan on FreeNAS Howto.
BACKUPS BEFORE PRODUCTION. I can’t stress this enough, don’t rely on ZFS’s redundancy alone, always have backups (one offsite, one onsite) in place before putting anything important on it.
Here’s what I recommend, considering the balance of cost per TB, performance, and reliability. I prefer NAS class drives since they are designed to run 24/7 and are better at tolerating vibration from neighboring drives. I prefer SATA, but SAS drives would be better in some designs (especially when using expanders).
For a home or small business FreeNAS storage server I think these are the best options, and I’ve also included some enterprise class drives.
Updated July 19, 2015 – Added quieter HGST, and updated prices.
Updated July 30, 2016 – Updated prices, and added WL drives.
Updated July 15, 2017 – Updated prices, added larger drives, removed drives no longer being sold.
Updated September 17, 2018 – Added WD Gold drives.
Updated April 27, 2019 – Removed WL and HGST drives, added Seagate, updated all product lines.
Western Digital 3TB, 4TB, 5TB, 6TB, 8TB, 10TB, 12TB, and 14TB Drives
The highest rated and consistently available NAS class drives on the market today are made by Western Digital. The 3 product lines are:
WD Red are tried and true NAS class drives designed to run 24/7. Very stable and popular in FreeNAS systems.
Supported in up to 8 drive bays.
WD Red Pro designed for larger deployments suitable for small/medium businesses.
Supported in up to 24 drive bays
WD HGST Ultrastar DC datacenter class hard drives designed for heavy workloads (this lineup replaces WD Gold).
Supported in unlimited drive bays
Seagate IronWolf – up to 14TB drives
Seagate had a bad reputation because of high failure rates in the past, but the newer offerings are more reliable, and given the competitive prices they’re worth another look. I would consider them again if building a new server. Seagate’s product lines suitable for ZFS all run at 7200RPM:
Seagate IronWolf (up to 14TB) are NAS class drives targeted at smaller deployments.
Seagate Exos is the enterprise offering designed for enterprise workloads.
Supports unlimited bays
When reading reviews about failures, I discount negative reviews about DOAs or drives that fail within the first few days; you’ll be able to return those quickly. What you want to avoid is a drive that fails a year or two in, leaving you with the hassle of a warranty claim.
Higher RPMs mean faster seek times, and larger (denser) disks typically deliver higher sequential throughput.
Gone are the days when you need a 24-bay server for large amounts of storage. It’s far simpler to get a 4-bay chassis with 14TB drives. If you don’t need more capacity or IOPS keep it simple.
Or buy a TrueNAS Storage Server from iXsystems
I’m cheap and tend to go with a DIY approach most of the time, but when I’m recommending ZFS systems in environments where availability is important I like the TrueNAS servers from iX Systems which will of course come with drives in configurations that have been well tested. The prices on a TrueNAS are very reasonable compared to other storage systems and it can be setup in an HA cluster. Even a FreeNAS Certified Server is probably not going to cost much more than doing it yourself (more often than not it ends up being less expensive than DIY). And of course for a small server you can grab the 4-bay FreeNAS Mini (which ships with WD REDs).
Careful with “archival” drives
If you don’t get one of the drives above, some larger hard drives are using SMR (Shingled Magnetic Recording) which should not be used with ZFS if you care about performance until drivers are developed. Be careful about any drive that says it’s for archiving purposes.
The ZIL / SLOG and L2ARC
The ZFS Intent Log (ZIL) should be on an SSD with a battery-backed capacitor that can flush out the cache on power loss. I have done quite a bit of testing and like the Intel DC SSD series drives and also HGST’s S840Z. These are rated to have their data overwritten many times and will not lose data on power loss. They run on the expensive side, so for a home setup I typically try to find them used on eBay. From a ZIL perspective there’s no reason to get a large drive, but keep in mind larger drives generally perform better. In my home I use 100GB DC S3700s and they do just fine.
I generally don’t use an L2ARC (SSD read cache) and instead opt to add more memory. There are a few cases where an L2ARC makes sense, such as very large working sets.
Most drives running 24/7 start showing a high failure rate after 5 years; you might be able to squeeze 6 or 7 years out of them if you’re lucky. So a good rule of thumb is to estimate your growth and buy drives big enough that you won’t outgrow them for at least 5 years. The price of hard drives is always dropping, so you don’t want to buy much more than you’ll need before the drives start failing. Consider that with ZFS you shouldn’t run more than 70% full (80% being the max) for typical NAS applications, including VMs on NFS. But if you’re planning to use iSCSI you shouldn’t run more than 50% full.
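As a back-of-the-envelope check (the numbers here are made-up examples): say you have 4TB of data growing 30% a year and want to stay under 70% full at the 5-year mark:

```shell
# Project data growth over 5 years, then size the pool so the
# projected data stays under the 70% fill guideline.
awk 'BEGIN {
  data = 4 * (1.30 ^ 5)   # projected data after 5 years of 30%/yr growth
  pool = data / 0.70      # usable pool size needed to stay under 70% full
  printf "projected data: %.1f TB, usable pool needed: %.1f TB\n", data, pool
}'
# → projected data: 14.9 TB, usable pool needed: 21.2 TB
```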
ZFS Drive Configurations
My preference at home is almost always RAID-Z2 (RAID-6) with 6 to 8 drives, which provides a storage efficiency of 0.66 to 0.75. This scales pretty well as far as capacity is concerned, and with double parity I’m not that concerned if a drive fails. 6 drives in RAID-Z2 nets 8TB of capacity with 2TB drives, all the way up to 24TB with 6TB drives. For larger setups use multiple vdevs, e.g. with 60 bays use 10 six-drive RAID-Z2 vdevs (each vdev increases IOPS). For smaller setups I run 3 or 4 drives in RAID-Z (RAID-5). In all cases it’s essential to have backups… and I’d rather have two smaller servers with RAID-Z mirroring to each other than one server with RAID-Z2. The nice thing about smaller setups is the cost of upgrading 4 drives isn’t as bad as 6 or 8! For enterprise setups I like ZFS mirrored pairs (RAID-10) for fast rebuild times and performance, at a storage efficiency of 0.50.
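Creating that 6-drive RAID-Z2 pool from the shell would look roughly like this (device names are placeholders and will differ on your system):

```shell
# 6 x 2TB drives in RAID-Z2: 4 data + 2 parity, roughly 8TB usable
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
# Verify the vdev layout and pool health
zpool status tank
```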
If you must run desktop drives… On desktop class drives such as the HGST Deskstar, they’re typically not run in RAID mode so by default they are configured to take as long as needed (sometimes several minutes) to try to recover a bad sector of data. This is what you’d want on a desktop, however performance grinds to a halt during this time which can cause your ZFS server to hang for several minutes waiting on a recovery. If you already have ZFS redundancy it’s a pretty low risk to just tell the drive to give up after a few seconds, and let ZFS rebuild the data.
The basic rule of thumb. If you’re running RAID-Z, you have two copies so I’d be a little cautious about enabling TLER. If you’re running RAID-Z2 or RAID-Z3 you have three or four copies of data so in that case there’s very little risk in enabling it.
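On drives that support it, this error-recovery timeout (SCT ERC) can be set with smartctl; values are in tenths of a second, so 70 means seven seconds. The device name is a placeholder, and note the setting typically resets on power cycle:

```shell
# Check whether the drive supports SCT ERC and its current values
smartctl -l scterc /dev/ada0

# Tell the drive to give up on a bad sector after 7 seconds for both
# reads and writes, and let ZFS redundancy reconstruct the data
smartctl -l scterc,70,70 /dev/ada0
```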
2015-01-07: I’ve updated this post to reflect changes in FreeNAS 9.3.
I’ve been using OpenIndiana since late 2011 and switched to OmniOS in 2013. Lately I started testing FreeNAS. What drove me to do this is that I use CrashPlan to back up my pool, but recently Code 42 announced they’ll be discontinuing Solaris support for CrashPlan, so I needed to start looking for an alternative OS or an alternative backup solution. I decided to look at FreeNAS because it has a CrashPlan plugin that runs in a jail using Linux emulation. After testing it out for a while I am likely going to stay on OmniOS, since it suits my needs better, and instead switch out CrashPlan for ZnapZend as my backup solution. But after running FreeNAS for a few months, here are my thoughts on both platforms and their strengths and weaknesses as a ZFS storage server.
Update 2015-01-07: After a lot of testing, ZnapZend ended up not working for me. This is not its fault, but because I have limited bandwidth the snapshots don’t catch up and it gets further and further behind, so for now I’m continuing with CrashPlan on OmniOS. I am also testing FreeNAS and may consider a switch at some point.
CIFS / SMB Performance for Windows Shares
FreeNAS has a newer implementation of SMB, supporting SMB3, I think OmniOS is at SMB1. FreeNAS can actually function as an Active Directory Domain Controller.
OmniOS is slightly faster: writing a large file over my LAN gets around 115MBps vs 98MBps on FreeNAS. I suspect this is because OmniOS runs SMB at the kernel level while FreeNAS runs it in user space. I tried changing the FreeNAS protocol to SMB2, and even SMB1, but couldn’t get past 99MBps. This is on a Xeon E3-1240V3, so there’s plenty of CPU power; Samba on FreeNAS just can’t keep up.
CIFS / SMB Snapshot Integration with Previous Versions
Previous Versions Snapshot Integration with Windows is far superior in OmniOS. I always use multiple snapshot jobs to do progressive thinning of snapshots. So for example I’ll setup monthly snaps with a 6 month retention, weekly with two month retention, daily with two week, hourly with 1 week, and every 5 minutes for two days. FreeNAS will let you setup the snap jobs this way, but in Windows Previous Versions it will only show the snapshots from one of the snap jobs under Previous Versions (so you may see your every 5 minute snaps but you can’t see the hourly or weekly snaps). OmniOS handles this nicely. As a bonus Napp-It has an option to automatically delete empty snapshots sooner than their retention expiration so I don’t see them in Previous Versions unless some data actually changed.
Both platforms struggle with mapping drives to physical slots, though FreeNAS has a bit of an edge. Probably the best thing to do is write down the serial number of each drive along with its slot number. In FreeNAS drives are given device names like da0, da1, etc., but unfortunately the numbers don’t seem to correspond to anything and can even change between reboots. FreeNAS does have the ability to label drives, so you could insert one drive at a time and label each with the slot it’s in.
OmniOS drives are given names like c3t5000C5005328D67Bd0 which isn’t entirely helpful.
For LSI controllers the sas2ircu utility (which works on FreeBSD and Solaris) will map the drives to slots.
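For example (controller number 0 is assumed; the display output includes enclosure/slot numbers alongside model and serial for each attached drive):

```shell
# Enumerate LSI controllers, then dump enclosure/slot, model,
# and serial number for every drive on controller 0
sas2ircu list
sas2ircu 0 display
```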
The ZFS fault management daemon will automatically replace a failed drive with a hot spare… but it hasn’t been ported to FreeBSD yet so FreeNAS really only has warm spare capability. Update: FreeNAS added hot spare capability on Feb 27, 2015. To me this is a minor concern… if you’re going to use RAID-Z with a hot spare why not just configure the pool with RAID-Z2 or RAID-Z3 to begin with? However, I can see how the fault management daemon on OmniOS would reduce the amount of work if you had several hundred drives and failures were routine.
SWAP issue on FreeNAS
While I was testing I actually had a drive fail (this is why 3-year-old Seagate drives are great to test with) and FreeNAS crashed! The NFS pool dropped out from under VMware. When I looked at the console I saw “swap_pager: I/O error – pagein failed”. I had run into FreeNAS Bug 208, which was closed a year ago but never resolved. The default setting in FreeNAS is to create a 2GB swap partition on every drive, which acts like striped swap space (I am not making this up, this is the default setting). So if any one of the drives fails, it can take FreeNAS down. The argument from FreeNAS is that you shouldn’t be using swap, and perhaps that’s true, but I had a FreeNAS box with 8GB memory running only one jail with CrashPlan, and a single drive failure brought the entire system down. That’s not an acceptable default setting. Fortunately there is a way to disable automatic creation of swap partitions on FreeNAS; it’s best to disable the setting before initializing any disks.
In my three years of running an OpenSolaris / Illumos based OS, I’ve never had a drive failure bring the system down.
Running under VMware
FreeNAS is not supported running under a VM but OmniOS is. In my testing both OmniOS and FreeNAS work well under VMware under the best practices of passing an LSI controller flashed into IT mode to the VM using VT-d. I did find that OmniOS does a lot better virtualized on slower hardware than FreeNAS. On an Avaton C2750 FreeNAS performed well on bare metal, but when I virtualized it using vmdks on drives instead of VT-d FreeNAS suffered in performance but OmniOS performed quite well under the same scenario.
Both platforms have VMXNET3 drivers, neither has a Paravirtual SCSI driver.
Unfortunately Oracle did not release the source for Solaris 11, so there is no encryption support on OpenZFS directly.
FreeNAS can take advantage of FreeBSD’s GELI based encryption. FreeBSD’s implementation can use the AES instruction set, last I tested Solaris 11 the AES instruction set was not used so FreeBSD/FreeNAS probably has the fastest encryption implementation for ZFS.
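Outside the FreeNAS GUI, a plain FreeBSD GELI setup looks roughly like this (a sketch; da0 and the key path are placeholders, and FreeNAS manages its own keys, so don’t do this by hand on a FreeNAS box):

```shell
# Initialize GELI on the disk with 4K sectors and a key file,
# attach it, then build the pool on the encrypted provider
geli init -s 4096 -K /root/da0.key /dev/da0
geli attach -k /root/da0.key /dev/da0
zpool create securetank /dev/da0.eli
```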
There isn’t a good encryption option on OmniOS.
ZFS High Availability
Neither system supports ZFS high availability out of the box. OmniOS can use a third-party tool like RSF-1 (paid) to accomplish this. The commercially supported TrueNAS uses RSF-1, so it should also work in FreeNAS.
ZFS Replication & Backups
FreeNAS has the ability to easily setup replication as often as every 5 minutes which is a great way to have a standby host to failover to. Replication can be done over the network. If you’re going to replicate over the internet I’d say you want a small data set or a very fast connection–I ran into issues a couple of times where the replication got interrupted and it needed to start all over from scratch. On OmniOS Napp-It does not offer a free replication solution, but there is a paid replication feature, however there are also numerous free ZFS replication scripts that people have written such as ZnapZend.
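A ZnapZend plan for a dataset looks something like this (the retention schedules and destination are examples patterned on the ZnapZend documentation, reusing the stor2.b3n.org/replicated names from earlier):

```shell
# Keep 5-minute snaps for a day, hourly for a week, daily for a month
# locally; replicate to the backup host with a thinner schedule
znapzendzetup create SRC '1d=>5min,7d=>1h,30d=>1d' tank/data \
  DST:remote '7d=>1h,90d=>1d' root@stor2.b3n.org:replicated/data
```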
I did get the CrashPlan plugin to work under FreeNAS, however I found that after a reboot the CrashPlan jail sometimes wouldn’t auto-mount my main pool so it ended up not being a reliable enough solution for me to be comfortable with. I wish FreeNAS made it so that it wasn’t in a jail.
FreeNAS is a little more power hungry than OmniOS. For my 8TB pool a bare minimum for FreeNAS is 8GB while OmniOS is quite happy with 4GB, although I run it with 6GB to give it a little more ARC.
FreeNAS supports more hardware than OmniOS. I generally virtualize my ZFS server so it doesn’t matter too much to me but if you’re running bare metal and on obscure or newer hardware there’s a much better chance that FreeNAS supports it. Also in 9.3 you have the ability to configure IPMI from the web interface.
FreeNAS now has VAAI support for iSCSI. OmniOS has no VAAI support. As of FreeNAS 9.3 and Napp-It 0.9f4 both control panels have the ability to enable VMware snapshot integration / ESXi hot snaps. The way this works is before every ZFS snapshot is taken FreeNAS has VMware snap all the VMs, then the ZFS snapshot is taken, then the VMware snapshots are released. This is really nice and allows for proper consistent snapshots.
The FreeNAS GUI looks a little nicer and is probably a little easier for a beginner. The background of the screen turns red whenever you’re about to do something dangerous. I found you can setup just about everything from the GUI, where I had to drop into the command line more often with OmniOS. The FreeNAS web interface seems to hang for a few seconds from time to time compared to Napp-It, but nothing major. I believe FreeNAS will have an asynchronous GUI in version 10.
One frustration I have with FreeNAS is that it doesn’t play well with the CLI. For example, if you create a pool via the CLI, FreeNAS doesn’t see it; you have to import it using the GUI before you can use it there. Napp-It is essentially an interface that runs CLI commands, so you can seamlessly switch back and forth between managing things on the CLI and in Napp-It. This is a difference in philosophy: Napp-It is just a web interface meant to run on top of an OS, while FreeNAS is more than a webapp on top of FreeBSD; FreeNAS is its own OS.
I think most people experienced with the zfs command line and Solaris are going to be a little more at home with Napp-It’s control panel, but it’s easy enough to figure out what FreeNAS is doing. You just have to be careful what you do in the CLI.
On both platforms I found I had to switch into CLI from time to time to do things right (e.g. FreeNAS can’t set sync=always from the GUI, Napp-It can’t setup networking).
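For example, setting sync from the shell, since the FreeNAS GUI can’t (the dataset name is a placeholder):

```shell
# Force every write to be synchronous, so it goes through the SLOG
zfs set sync=always tank/vmstore
# Confirm the property took effect
zfs get sync tank/vmstore
```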
As far as managing a ZFS file system, both have what I want: email alerts when there’s a problem, scheduling for data scrubs, snapshots, etc.
FreeNAS has better security, it’s much easier to setup an SSL cert on the management interface, in fact you can create an internal CA to sign certificates from the GUI. Security updates are easier to manage from the web interface in FreeNAS as well.
FreeNAS and OmniOS both have great communities. If you post anything at HardForum chances are you’ll get a response from Gea and he’s usually quite helpful. Post anything on the FreeNAS forums and Cyberjock will tell you that you need more RAM and that you’ll lose all your data. There is a lot of info on the FreeNAS forums, and the FreeNAS Redmine project is open so you can see all the issues; it’s a great way to see what bugs and feature requests are out there and when they were or will be fixed. OmniOS has an active OmniOS Discuss mailman list, and Gea, the author of Napp-It, is active on various forums. He has answered my questions on several occasions over at HardForum’s Data Storage subforum. I’ve found the HardForum community a little more helpful… I’ve always gotten a response there, while several questions I posted on the FreeNAS forums went unanswered.
FreeNAS documentation is great, like FreeBSD’s. Just about everything is in the FreeNAS Guide
OmniOS isn’t as organized. I found some howtos here, but they’re not nearly as comprehensive as the FreeNAS documentation. Most of what I find on OmniOS I find in forums or on the Napp-It site.
FreeNAS does not have a way to mirror the ZFS boot device. FreeBSD does have this capability but it turns out FreeNAS is built on NanoBSD. The only way to get FreeNAS to have redundancy on the boot device that I know of is to set it up on a hardware RAID card.
Update: FreeNAS 9.3 can now install to a mirrored ZFS rpool!
Features / Plugins / Extensions
Napp-It’s extensions include:
AMP (Apache, MySQL, PHP stack)
Baikal CalDAV / CardDAV Server
MediaTomb (DLNA / UPnP server)
Owncloud (Dropbox alternative)
PHPvirtualbox (VirtualBox interface)
Bacula (Backup Server)
BTSync (Bittorrent Sync)
CouchPotato (NZB and Torrent downloader)
CrashPlan (Backup client/server)
Cruciblewds (Computer imaging / cloning)
Firefly (media server for Roku SoundBridge and Apple iTunes)
Headphones (automatic music downloader for SABnzbd)
LazyLibrarian (follow authors and grab metadata for digital reading)
Maraschino (web interface for XBMC HTPC)
MineOS (Minecraft control panel)
Mylar (Comic book downloader)
SABnzbd (Binary newsreader)
SickBeard (PVR for newsgroup users)
SickRage (Video file manager for TV shows)
Subsonic (music streaming server)
Syncthing (Open source cluster synchronization)
Transmission (BitTorrent client)
XDM (eXtendable Download Manager)
All FreeNAS plugins run in a jail so you must mount the storage that service will need inside the jail… this can be kind of annoying but it does allow for some nice security–for example CrashPlan can mount the storage you want to backup as read-only.
Protocols and Services
Both systems offer a standard stack of AFP, SMB/CIFS, iSCSI, FTP, NFS, rsync, and TFTP. FreeNAS also has WebDAV and a few extra services like Dynamic DNS, LLDP, and UPS (the ability to connect to a UPS unit and shut down automatically).
Performance Reporting and Monitoring
Napp-It does not have reports and graphs in the free version. FreeNAS has reports and you can look back as far as you want to see historical performance metrics.
As a Hypervisor
Both systems are very efficient running guests of the same OS. OmniOS has Zones, FreeNAS can run FreeBSD Jails. OmniOS also has KVM which can be used to run any OS. I suspect that FreeNAS 10 will have Bhyve. Also both can run VirtualBox.
Stability vs Latest
Both systems are stable, OmniOS/Napp-It seems to be the most robust of the two. The OmniOS LTS updates are very minimal, mostly security updates and a few bug fixes. Infrequent and minimal updates are what I like to see in a storage solution.
FreeNAS is pushing a little close to the cutting edge. They have frequent updates pushed out–sometimes I think they are too frequent to have been thoroughly tested. On the other hand if you come across an issue or feature request in FreeNAS and report it chances are they’ll get it in the next release pretty quickly.
Because of this, OmniOS is behind FreeNAS on some things like NFS and SMB protocol versions, VAAI support for iSCSI, etc.
I think this is an important consideration. With FreeNAS you’ll get newer features and later technologies, while OmniOS LTS is generally the better platform for stability. The commercial TrueNAS solution is also going to be robust. For FreeNAS you could always pick a stable version and not update very often. I really wish FreeNAS had an LTS, or at least a slower-moving stable branch that only did quarterly updates except for security fixes.
OmniOS has a slight edge on ZFS integration. As I mentioned earlier, OmniOS has multi-tiered snapshot integration into the Windows Previous Versions feature, where FreeNAS can only pick one snap frequency to show there. Also, in OmniOS NFS and SMB shares are stored as properties on the datasets, so you can export the pool, import it somewhere else, and the shares stay with the pool; you don’t have to reconfigure them.
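On OmniOS the shares ride along as dataset properties, e.g. (dataset and share names are examples):

```shell
# Share the dataset over SMB and NFS; these are stored in the pool
# itself, so they survive an export/import on another box
zfs set sharesmb=name=data tank/data
zfs set sharenfs=on tank/data
# Inspect the share properties
zfs get sharesmb,sharenfs tank/data
```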
On an All-in-one setup, I setup VMware ESXi 6.0, a virtual storage network and tested FreeNAS and OmniOS using iSCSI and NFS. On all tests MTU is set to 9000 on the storage network, and compression is set to LZ4. iSCSI volumes are sparse ZVOLs. I gave the ZFS server 2 cores and 8GB memory, and the guest VM 2 cores and 8GB memory. The guest VM is Windows 10 running Crystal Benchmark.
Supermicro X10SL7-F with LSI 2308 HBA flashed to IT firmware and passed to ZFS server via VT-d (I flashed the P19 firmware for OmniOS and then re-flashed to P16 for FreeNAS).
Intel Xeon E3-1240v3 3.40Ghz.
16GB ECC Memory.
6 x 2TB Seagate 7200 drives in RAID-Z2
2 x 100GB DC S3700s striped for ZIL/SLOG. Over-provisioned to 8GB.
On Crystal Benchmark I ran 5 each of the 4000MB, 1000MB, and 50MB size tests, the results are the average of the results.
On all tests every write was going to the ZIL / SLOG devices. On NFS I left the default sync=standard (which results in every write being a sync with ESXi). On iSCSI I set sync=always, ESXi doesn’t honor sync requests from the guest with iSCSI so it’s not safe to run with sync=standard.
So it appears, from these unscientific benchmarks that OmniOS on NFS is the fastest configuration, iSCSI performs pretty similarly on both FreeNAS and OmniOS depending on the test. One other thing I should mention, which doesn’t show up in the tests is latency. With NFS I saw latency on the ESXi storage as high as 14ms during the tests, while latency never broke a millisecond with iSCSI.
One major drawback to my benchmarks is it’s only one guest hitting the storage. It would be interesting to repeat the test with several VMs accessing the storage simultaneously, I expect the results may be different under heavy concurrent load.
I chose a 64K iSCSI block size because larger blocks result in a higher LZ4 compression ratio. I did several quick benchmarks and found 16K and 64K performed pretty similarly; 16K did perform better at random 4K write QD=1, but otherwise 64K was close to 16K depending on the test. I saw a significant drop in random performance at 128K. Once again, under different scenarios this may not be the optimal block size for all types of workloads.
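The sparse ZVOLs used for the iSCSI tests were created roughly like this (name and size are examples; volblocksize can only be set at creation time):

```shell
# Sparse (-s) 500G ZVOL with a 64K volblocksize for the iSCSI extent
zfs create -s -V 500G -o volblocksize=64K tank/iscsi0
# Check the block size and how well LZ4 is compressing it
zfs get volblocksize,compressratio tank/iscsi0
```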