RHEL/CentOS, Debian, Fedora, Ubuntu & FreeBSD Comparison

Over the years I've used a number of Linux distributions (and FreeBSD); these are my top 5 and how I rank them:

centos_debian_fedora_ubuntu_freebsd_score

Desktop

Gnome Screenshot
I'm not a big fan of Ubuntu's Unity, so Ubuntu-Gnome, Kubuntu, Debian, and Fedora are my top distros for desktop use.  If you want the latest Gnome features, Fedora gets them first.  For KDE I think Kubuntu does a great job with reasonable default settings (like having the Start button open the KDE menu; why do KDE programmers think that shouldn't be the default behavior?), whereas I have to do quite a bit more tweaking on other distros.  Ubuntu-Gnome also provides an optional PPA that tracks the latest version of Gnome, bringing it almost as up to date as Fedora.

Ugly fonts: for some reason, on FreeBSD, Fedora, CentOS, and Debian the fonts look ugly.  I don't know if those systems can't detect my video card properly or if there's something wrong with the font configuration itself, but on every system I've tried, fonts look much better on Ubuntu-based distributions.

If you're interested in FreeBSD for a desktop, PC-BSD is worth a look, but in my experience Linux runs a lot better on the desktop than FreeBSD.

Server

FreeBSD is historically my favorite server OS, but it tends to lag behind on some things and I have trouble getting some software working on it, so for the most part I use Ubuntu for servers since it seems to have the best out-of-the-box setup.  90% of the time I'm deploying in virtual environments, and open-vm-tools is now enabled by default in 16.04.

With the possible exception of Fedora, all of these distros make decent servers.

Packages

All the package management systems are pretty decent; I prefer apt just because I never have any problems with it and it's faster.  Debian and Ubuntu have the most packages available, and Ubuntu has PPA support, which makes it easy to manage 3rd party repositories.
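As a quick illustration, adding a PPA and installing from it only takes a couple of commands (the PPA name below is made up for the example; substitute a real one):

    # Add a third-party PPA (hypothetical name) and install from it
    sudo add-apt-repository ppa:example/some-package
    sudo apt-get update
    sudo apt-get install some-package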

One thing I don't like about Debian: while it does have a lot of packages, many of them are out of date.  A few months ago I tried to install Redmine from the repository, and even though the repository listed it at version 3.0, the version that actually got installed was 2.6.  Someone needs to do some cleanup.

CentOS hardly offers any packages, so you have to enable EPEL just to make it functional, and even then it's limited.  My main issue with CentOS is that if you want to do anything other than a very basic install you're dealing with packages you can't find (like rdiff-backup; why isn't that in the repos?), or needing packages from conflicting repositories, and sometimes having to download them manually.  It's a nightmare.
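For what it's worth, enabling EPEL itself is simple; on CentOS 7 the release package is in the base repos (what's available afterwards is still the limited set I'm describing):

    # Enable the EPEL repository on CentOS 7
    sudo yum install epel-release
    # Packages like rdiff-backup then become installable
    sudo yum install rdiff-backup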

One other thing I like about apt is the Debian and Ubuntu philosophy of setting up sensible default configurations and enabling the service when you install it.  After installing packages on Fedora, CentOS, or FreeBSD I'm often left manually creating configuration files.  CentOS is the most annoying; maybe it's just me, but if I install a service I want SELinux not to block me from running that service… and when I make a change in SELinux it should take effect immediately instead of arbitrarily taking a few minutes to come to its senses.
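If you do stick with CentOS, the usual dance for an SELinux denial looks something like this (a rough sketch; the httpd boolean is just a common example, not something specific to my setup):

    # Show recent SELinux denials
    sudo ausearch -m avc -ts recent
    # List and flip service-related booleans (example: let httpd make outbound connections)
    getsebool -a | grep httpd
    sudo setsebool -P httpd_can_network_connect on
    # Restore default file contexts after moving content into place
    sudo restorecon -Rv /var/www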

Free Software

Richard Stallman
By Thesupermat – CC BY-SA 3.0

While Richard Stallman wouldn't endorse any of the distributions I'm comparing, if he had to pick from these, Debian would likely be his choice.

Debian Logo
All the OSes include or provide ways of obtaining non-free software, but Debian is at the forefront of making the move to Free Software an explicit goal.  Fortunately, I think they do this in a smart way: they still include ways to install non-free drivers so you can at least make a system usable.  I think Debian does the best job of making it clear what's free and what isn't, and of letting the user make the choice.


Evilness

RedHat Logo
I used to be a big RedHat fan back in the RH 6 and 7 days.  Then one day my loyalty was rewarded when, out of the blue, RedHat decided to start charging for updates to their "Free" OS… RedHat's new free alternative was Fedora, which was so unstable it was unusable.  I was suddenly going to need to buy lots of licenses… this left me scrambling for a solution, and I eventually switched over to Ubuntu.  Since then I've been wary of anything related to RedHat.  CentOS is now the free version of RedHat, while Fedora is where all the new features show up first, and it's not so unstable these days.  And, yes, RedHat, I'm still bitter.

Ubuntu introduced Amazon ad-supported searches and, even worse, was by default sending search keywords from the Unity lens to Canonical.  I'd consider this an invasion of privacy, and it was really the first time since I switched from RedHat that I started looking for Ubuntu alternatives.  Fortunately the feature was easy to disable, and Ubuntu has since disabled it by default.
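On releases that shipped the shopping lens, turning it off came down to a single setting (a sketch; on current releases this key is already off or gone entirely):

    # Disable online/Amazon results in the Unity Dash
    gsettings set com.canonical.Unity.Lenses remote-content-search none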

Out of Box Hardware Support

Dell XPS 13 with Ubuntu
Ubuntu has the best out-of-box hardware support.  Dell's XPS 13 even comes in a developer edition that ships with Ubuntu 14.04 LTS.  Ubuntu works out of the box on just about every laptop I've tried it on.  It was also the first distro to support VMware's VMXNET3 and SCSI Paravirtual drivers in the default install, and I believe it's now the only distro that has open-vm-tools pre-installed.  All this cuts down on the amount of time and effort it takes to deploy.

I wish Debian did better here.  Debian excludes some non-free drivers, which is good for the FSF philosophy, but it also means I had no WiFi on a fresh Debian install.  Apparently you're supposed to download the drivers separately.  This is particularly bad when your laptop doesn't have an Ethernet port, so you have no way to download the WiFi drivers.  I suppose I could have re-installed Ubuntu, downloaded the Debian WiFi drivers, saved them off to a USB drive, re-installed Debian, and side-loaded the WiFi drivers… but what a hassle.

Automatic Security Updates

Ubuntu and Debian give the option of enabling automatic security updates at install time.  The other systems have ways of enabling automatic updates, but there isn't an option to turn them on by default at install time.  My opinion is that all operating systems should automatically install security updates by default.
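If you skipped the option at install time, enabling it afterwards on Debian or Ubuntu is two commands (a sketch; the configuration lands in /etc/apt/apt.conf.d/):

    # Install and enable unattended security updates
    sudo apt-get install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades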

Init System

FreeBSD Daemon
FreeBSD avoids the nonsense, for the win here.  I do not like systemd.  I'd rather spend my time not fighting systemd.  Maybe I can figure it out someday.  Why didn't we all switch to upstart?  I liked upstart.

Cutting Edge vs Stability

Fedora Linux
For cutting edge, Fedora or the standard Ubuntu releases (every 6 months) keep you up to date, which is great if you want to stay current with a desktop environment.

FreeBSD is the most stable OS I've ever used.  If I was told I was building a solution that would still be around in 30 years, I'd probably choose FreeBSD.  Changes to the base system are rare and well thought out.  If you wrote a program or script on FreeBSD 10 years ago, it would probably still work today on the latest version.  In the Linux world, Debian stable, Ubuntu LTS (after the first point release), and CentOS (also after the first point release) are great options.

Ubuntu provides the best of both worlds: its LTS releases are reasonably cutting edge, which I find very beneficial for having a stable environment while still getting relevant development tools and up-to-date server environments.  If you need something newer you have PPAs, but most of the time the standard packages are new enough.  Right now, for example, Ubuntu 16.04 LTS is the only distribution that ships with versions of OpenSSL and NGINX that support an http/2 implementation that works with Google Chrome.  To top it off, both the OpenSSL and NGINX packages fall under Ubuntu's 5-year support.  You don't have to add 3rd party repos or solve dependency issues.  Just one command: "apt install nginx" and you're good for 5 years.

Ubuntu 16.04 LTS is the only distro that supports http/2

(above screenshot from: https://www.nginx.com/blog/supporting-http2-google-chrome-users/)
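If you want to sanity-check that claim on your own 16.04 box, something along these lines should do it (a sketch; enabling http/2 for a site is then a matter of adding http2 to the listen directive in the NGINX config):

    # Install NGINX from the stock 16.04 repository
    sudo apt install nginx
    # Confirm the package was built with the HTTP/2 module
    nginx -V 2>&1 | grep -o http_v2_module
    # Chrome's http/2 needs ALPN, which requires OpenSSL 1.0.2 or newer
    openssl version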

Upgrading

FreeBSD Logo
FreeBSD is the best OS I've ever used when it comes to upgrading to a newer release.  You could probably start at FreeBSD 4 and upgrade all the way to 11 with no issues.  Debian and Ubuntu also have pretty good upgrade support… in all cases I test upgrades before doing them on a production system.
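On FreeBSD a binary release upgrade is a short, well-documented sequence (a sketch; the target release is just an example, and freebsd-update prompts you to re-run the install step after rebooting):

    # Upgrade a FreeBSD system to a newer release (run as root)
    freebsd-update -r 11.0-RELEASE upgrade
    freebsd-update install
    shutdown -r now
    # After the reboot, finish installing the updated userland
    freebsd-update install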

Long Term Support (LTS)

CentOS Logo
CentOS has the longest support offering at 10 years!  Combined with the EPEL repository (which aims to follow the same support period), I'd say RedHat/CentOS is the best distribution for a "deploy and forget" application that gets thrown in /opt, if you don't want to worry about changes or upgrades breaking the app for the next 10 years.  This is probably why enterprise applications like this distribution.

Debian is just starting a 5-year LTS program through a volunteer effort.  I’m looking forward to seeing how this goes.  I’m glad to see this change as lack of LTS was one of the main reasons I decided on Ubuntu over Debian.

Ubuntu offers a 5-year LTS.  Ubuntu's LTS not only covers the base system; the Ubuntu team also supports many packages beyond it (check with "apt-cache show packagename"), and if you see 5y you're good.
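For example, to check whether a specific package falls under the LTS window (a sketch; the Supported field shows up on packages in the Ubuntu main repository):

    # Check a package's support period on an Ubuntu LTS release
    apt-cache show nginx | grep Supported
    # A line like "Supported: 5y" means it's covered for the full five years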

Predictable Release Cadence

release-chart-desktop

Ubuntu has the most predictable release cadence.  They release every 6 months with a 5-year LTS release every 2-years.  Having been a sysadmin and a developer I like knowing exactly how long systems are supported.  I plan out development, deployments, and upgrades years in advance based on Ubuntu’s release cadence.

My Thoughts

When I was younger it was fun to build my entire system from scratch using Gentoo and compile FreeBSD packages from ports (I also compiled the kernel).  Linux wasn’t as easy back then.  I remember just trying to get my scroll wheel working in RedHat 7.

Screenshot of how to get the scroll wheel working
I found this old note.  I finally got the scroll wheel working in RedHat 7.1!

Linux distributions are tools.  At some point you have to stop trying to build the perfect hammer and start using it to put nails in things.

Nowadays I don't have time to compile from scratch, solve RPM dependency issues, or figure out why packages aren't the right version.  In the year 2000 I could understand having to fix ugly font issues and mess around with WiFi drivers.  But we should be beyond that now.  That was the past.

Calvin and Hobbes Comic Strip
By Bill Waterson, 1995-08-27, Fair Use – 17 U.S.C. § 107

Onward

Ben wearing RedHat
I used to wear the official RedHat Fedora

Fonts, automatic updates, scroll wheel, touchpad, Bluetooth, WiFi, printers, and hardware in general should be working out of the box by now; if they aren't, I'm not going to put a lot of effort into getting the distro working.  It's time to move forward and focus work on things beyond the distribution.  While I love all sorts of distros, I don't want to be like Calvin fighting the computer the whole way.  I actually do work on these machines, and I need something stable and up to date out of the box with sane default settings.  Having predictable release cycles also helps.  If I could combine the philosophy of Debian with the few extras that Ubuntu provides, I'd have the perfect distro.  But for the time being Ubuntu is close enough to what I want; I've been using it probably since 5.04 (Hoary Hedgehog) and standardized on it when they started doing LTS releases.  That doesn't mean it's for everyone: not everyone likes it, some people prefer the more vanilla feel of Debian, and others might want something easier like Mint.  If you prefer CentOS, Fedora, Arch, etc. and they work well for you, use them.

Actually, I don't use Ubuntu for everything.  For my production environment I've standardized on Windows 10 for desktops, ESXi for virtualization, FreeNAS for storage, pfSense for firewalls, and Ubuntu for servers.  Honestly, none of the above systems were my first choice… but I am where I am because my first choices let me down.  It will likely evolve in the future, but for the time being that's my setup and it works pretty well.

The great thing about modern day Linux distributions (and FreeBSD) is they’re all pretty good.  I haven’t had to hack an Xorg file to get the scroll wheel working in a long time.


VMware vs bhyve Performance Comparison

Playing with bhyve

Here's a look at Gea's popular All-in-one design, which allows VMware to run on top of ZFS on a single box using a virtual 10GbE storage network.  The design requires an HBA and a CPU that supports VT-d, so that the storage can be passed directly to a guest VM running a ZFS server (such as OmniOS or FreeNAS).  Then a virtual storage network is used to share the storage back to VMware.

vmware_all_in_one_with_storage_network
VMware and ZFS: All-In-One Design

bhyve can simplify this design: since it runs under FreeBSD, the host is already a ZFS server.  This not only simplifies the design, it could potentially allow the hypervisor to run on simpler, less expensive hardware.  The same design on bhyve eliminates the need for a dedicated HBA and a CPU that supports VT-d.

freebsd_bhyve
Simpler bhyve design

I've never understood the advantage of Type-1 hypervisors (such as VMware and Xen) over Type-2 hypervisors (like KVM and bhyve).  Type-1 proponents say the hypervisor runs on bare metal instead of on an OS… I'm not sure how VMware isn't considered an OS, except that it's a purpose-built OS and probably smaller.  It seems you could take a Linux distribution running KVM and take away features until at some point it becomes a Type-1 hypervisor.  Which is all fine, but it could actually be a disadvantage if you wanted some of those features (like ZFS).  A Type-2 hypervisor that supports ZFS appears to have a clear advantage (at least theoretically) over a Type-1 for this type of setup.

In fact, FreeBSD may be the best virtualization / storage platform.  You get ZFS and bhyve, and also jails.  You really only need to run bhyve when virtualizing a different OS.

bhyve is still pretty young, but I thought I’d run some tests to see where it’s at…

Environments

This is running on my X10SDV-F Datacenter in a Box Build.

In all environments the following parameters were used:

  • Supermicro X10SDV-F
  • Xeon D-1540
  • 32GB ECC DDR4 memory
  • IBM ServeRAID M1015 flashed to IT mode.
  • 4 x HGST Ultrastar 7K3000 2TB enterprise drives in RAID-Z
  • One DC S3700 100GB over-provisioned to 8GB used as the log device.
  • No L2ARC.
  • Compression = LZ4
  • Sync = standard (unless specified).
  • Guest (where tests are run): Ubuntu 14.04 LTS, 16GB disk, 4 cores, 1GB memory.
  • OS defaults are left as-is; I didn't try to tweak the number of NFS servers, sd.conf, etc.
  • My tests fit inside of ARC.  I ran each test 5 times on each platform to warm up the ARC.  The results are the average of the next 5 test runs.
  • I only tested an Ubuntu guest because it's the only distribution I run (in quantity, anyway) in addition to FreeBSD; a more thorough test would include other operating systems.  A sketch of the benchmark commands follows this list.
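The charts below come from sysbench.  Roughly, the invocations look like this (a sketch using the legacy 0.4-style syntax shipped with Ubuntu 14.04; thread counts, table size, and file sizes here are illustrative rather than the exact values used):

    # CPU test: finding primes
    sysbench --test=cpu --cpu-max-prime=20000 --num-threads=4 run
    # OLTP test against MariaDB (database name and credentials are placeholders)
    sysbench --test=oltp --mysql-db=sbtest --mysql-user=root --mysql-password=secret \
             --oltp-table-size=1000000 prepare
    sysbench --test=oltp --mysql-db=sbtest --mysql-user=root --mysql-password=secret \
             --num-threads=4 --max-requests=10000 run
    # File I/O tests (rndrd shown; other modes: rndwr, rndrw, seqrd, seqwr, seqrewr)
    sysbench --test=fileio --file-total-size=8G prepare
    sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrd --num-threads=4 run
    sysbench --test=fileio --file-total-size=8G cleanup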

The environments were set up as follows:

1 – VM under ESXi 6 using NFS storage from FreeNAS 9.3 VM via VT-d

  • FreeNAS 9.3 installed under ESXi.
  • FreeNAS is given 24GB memory.
  • HBA is passed to it via VT-d.
  • Storage shared to VMware via NFSv3, virtual storage network on VMXNET3.
  • Ubuntu guest given VMware para-virtual drivers

2 – VM under ESXi 6 using NFS storage from OmniOS VM via VT-d

  • OmniOS r151014 LTS installed under ESXi.
  • OmniOS is given 24GB memory.
  • HBA is passed to it via VT-d.
  • Storage shared to VMware via NFSv3, virtual storage network on VMXNET3.
  • Ubuntu guest given VMware para-virtual drivers

3 – VM under FreeBSD bhyve

  • bhyve running on FreeBSD 10.1-Release
  • Guest storage is file image on ZFS dataset.

4 – VM under FreeBSD bhyve sync always

  • bhyve running on FreeBSD 10.1-Release
  • Guest storage is file image on ZFS dataset.
  • Sync=always (see the storage setup sketch below)
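For the two bhyve environments, the storage side boils down to a ZFS dataset holding a file-backed disk image, with sync=always flipped on for environment 4 (a sketch; the pool, dataset, and image names are placeholders, and booting the Ubuntu guest itself with bhyve and grub2-bhyve is omitted here):

    # Dataset that holds the guest images
    zfs create tank/vms
    zfs set compression=lz4 tank/vms
    # Sparse file image used as the guest's disk
    truncate -s 20G /tank/vms/ubuntu.img
    # Environment 4 only: force every write to be synchronous
    zfs set sync=always tank/vms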

Benchmark Results

MariaDB OLTP Load

This test is a mix of CPU and storage I/O.  bhyve (yellow) pulls ahead in the 2-thread test, probably because it doesn't have to issue a sync after each write.  However, it falls behind on the 4-thread test even with that advantage, probably because it isn't as efficient at CPU-bound work as VMware (see the next chart on finding primes).
sysbench_oltp

Finding Primes

Finding prime numbers with a VM under VMware is significantly faster than under bhyve.

sysbench_primes

Random Read

bhyve has an advantage, probably because it has direct access to ZFS.

sysbench_rndrd

Random Write

With sync=standard, bhyve has a clear advantage.  I'm not sure why VMware can outperform bhyve with sync=always.  I am merely speculating, but I wonder if VMware over NFS is coalescing smaller writes into larger blocks (maybe 64k or 128k) before sending them to the NFS server.

sysbench_rndwr

Random Read/Write

sysbench_rndrw

Sequential Read

Sequential reads are faster with bhyve’s direct storage access.

sysbench_seqrd

Sequential Write

This is what not having to sync every write will gain you…

sysbench_seqwr

Sequential Rewrite

sysbench_seqrewr


Summary

VMware is a very fine virtualization platform that's been well tuned.  All the overhead of VT-d, virtual 10GbE switches for the storage network, VM storage over NFS, etc. is not hurting its performance, except perhaps on sequential reads.

For as young as bhyve is, I'm happy with its performance compared to VMware; it appears to be slower on the CPU-intensive tests.  I didn't intend to compare CPU performance, so I haven't done enough of a variety of tests to see where the difference comes from, but it appears VMware has an advantage.

One thing that is not clear to me is how safe running sync=standard is on bhyve.  The ideal scenario would be honoring fsync requests from the guest; however, I'm not sure bhyve has that kind of insight into the guest.  Probably the worst case under this scenario with sync=standard is losing the last 5 seconds of writes, but even that risk can be mitigated with battery backup.  With standard sync there's a lot of performance to be gained over VMware with NFS.  Even if you run bhyve with sync=always it does not perform badly, and it even outperforms the VMware All-in-one design on some tests.

The upcoming FreeNAS 10 may be an interesting hypervisor + storage platform, especially if it provides a GUI to manage bhyve.


FreeNAS vs. OmniOS / Napp-It

freenas      OmniOS_logo_200px

2015-01-07: I've updated this post to reflect changes in FreeNAS 9.3.

I've been using OpenIndiana since late 2011 and switched to OmniOS in 2013.  Lately I started testing FreeNAS.  What drove me to do this is that I use CrashPlan to back up my pool, but Code 42 recently announced they'll be discontinuing Solaris support for CrashPlan, so I needed to start looking for an alternative OS or an alternative backup solution.  I decided to look at FreeNAS because it has a CrashPlan plugin that runs in a jail using Linux emulation.  After testing it out for a while, I am likely going to stay on OmniOS since it suits my needs better, and instead switch out CrashPlan for ZnapZend as my backup solution.  But after running FreeNAS for a few months, here are my thoughts on both platforms and their strengths and weaknesses as a ZFS storage server.

Update 2015-01-07: After a lot of testing, ZnapZend ended up not working for me.  This is not its fault; because I have limited bandwidth, the snapshots don't catch up and it gets further and further behind, so for now I'm continuing with CrashPlan on OmniOS.  I am also testing FreeNAS and may consider a switch at some point.

CIFS / SMB Performance for Windows Shares

FreeNAS has a newer implementation of SMB, supporting SMB3; I think OmniOS is at SMB1.  FreeNAS can actually function as an Active Directory domain controller.

OmniOS is slightly faster: writing a large file over my LAN gets around 115MBps vs 98MBps on FreeNAS.  I suspect this is because OmniOS runs its SMB server in the kernel while FreeNAS runs Samba in user space.  I tried changing the FreeNAS protocol to SMB2, and even SMB1, but couldn't get past 99MBps.  This is on a Xeon E3-1240V3, so there's plenty of CPU power; Samba on FreeNAS just can't keep up.

CIFS / SMB Snapshot Integration with Previous Versions

Previous Versions snapshot integration with Windows is far superior in OmniOS.  I always use multiple snapshot jobs to do progressive thinning of snapshots.  For example, I'll set up monthly snaps with a 6-month retention, weekly with a two-month retention, daily with two weeks, hourly with 1 week, and every 5 minutes for two days.  FreeNAS will let you set up the snap jobs this way, but Windows will only show the snapshots from one of the snap jobs under Previous Versions (so you may see your every-5-minute snaps but you can't see the hourly or weekly snaps).  OmniOS handles this nicely.  As a bonus, Napp-It has an option to automatically delete empty snapshots sooner than their retention expiration, so I don't see them in Previous Versions unless some data actually changed.

previous_versions

Enclosure Management

Both platforms struggle here, though FreeNAS has a bit of an edge… probably the best thing to do is write down the serial number of each drive along with its slot number.  In FreeNAS drives are given device names like da0, da1, etc., but unfortunately the numbers don't seem to correspond to anything and they can even change between reboots.  FreeNAS does have the ability to label drives, so you could insert one drive at a time and label each with the slot it's in.

OmniOS drives are given names like c3t5000C5005328D67Bd0 which isn’t entirely helpful.

For LSI controllers the sas2ircu utility (which works on FreeBSD and Solaris) will map the drives to slots.
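Something like this (a sketch; controller 0 is an assumption, check the LIST output first):

    # List LSI SAS2 controllers, then show enclosure/slot and serial for each drive
    sas2ircu LIST
    sas2ircu 0 DISPLAY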

Fault Management

The ZFS fault management daemon will automatically replace a failed drive with a hot spare… but it hasn't been ported to FreeBSD yet, so FreeNAS really only has warm spare capability.  Update: FreeNAS added hot spare capability on Feb 27, 2015.  To me this is a minor concern… if you're going to use RAID-Z with a hot spare, why not just configure the pool with RAID-Z2 or RAID-Z3 to begin with?  However, I can see how the fault management daemon on OmniOS would reduce the amount of work if you had several hundred drives and failures were routine.
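Either way it's a one-liner at pool creation or afterwards (a sketch; pool and device names are placeholders):

    # Add a spare to an existing pool
    zpool add tank spare da6
    # ...or build the extra redundancy in up front with RAID-Z2
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5
    zpool status tank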

SWAP issue on FreeNAS

While I was testing I actually had a drive fail (this is why 3-year-old Seagate drives are great to test with) and FreeNAS crashed!  The NFS pool dropped out from under VMware.  When I looked at the console I saw "swap_pager: I/O error – pagein failed".  I had run into FreeNAS Bug 208, which was closed a year ago but never resolved.  The default setting in FreeNAS is to create a 2GB swap partition on every drive, which acts like striped swap space (I am not making this up, this is the default setting).  So if any one of the drives fails, it can take FreeNAS down.  The argument from FreeNAS is that you shouldn't be using swap, and perhaps that's true, but I had a FreeNAS box with 8GB memory running only one jail with CrashPlan, and it brought my entire system down because a single drive failed.  That's not an acceptable default setting.  Fortunately there is a way to disable automatically creating swap partitions on FreeNAS; it's best to disable the setting before initializing any disks.

In my three years of running an OpenSolaris / Illumos based OS, I've never had a drive failure bring the system down.

Running under VMware

FreeNAS is not supported running under a VM but OmniOS is.  In my testing, both OmniOS and FreeNAS work well under VMware when following the best practice of passing an LSI controller flashed to IT mode through to the VM using VT-d.  I did find that OmniOS does a lot better than FreeNAS when virtualized on slower hardware.  On an Avoton C2750, FreeNAS performed well on bare metal, but when I virtualized it using vmdks on drives instead of VT-d, FreeNAS suffered in performance while OmniOS performed quite well under the same scenario.

Both platforms have VMXNET3 drivers, neither has a Paravirtual SCSI driver.

Encryption

Unfortunately Oracle did not release the source for Solaris 11, so there is no encryption support on OpenZFS directly.

FreeNAS can take advantage of FreeBSD's GELI-based encryption.  FreeBSD's implementation can use the AES-NI instruction set; the last time I tested Solaris 11 the AES instructions were not used, so FreeBSD/FreeNAS probably has the fastest encryption implementation for ZFS.
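FreeNAS drives this from the GUI, but on plain FreeBSD the equivalent is roughly (a sketch; the device name is a placeholder and the sector size should match your drives):

    # Confirm the AES-NI driver is loaded
    kldload aesni
    dmesg | grep -i aesni
    # Initialize and attach an encrypted provider, then build the pool on the .eli device
    geli init -e AES-XTS -l 256 -s 4096 /dev/da0
    geli attach /dev/da0
    zpool create securepool /dev/da0.eli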

There isn’t a good encryption option on OmniOS.

ZFS High Availability

Neither system supports ZFS high availability out of the box.  OmniOS can use a third-party tool like RSF-1 (paid) to accomplish this.  The commercially supported TrueNAS uses RSF-1, so it should also work in FreeNAS.

ZFS Replication & Backups

FreeNAS has the ability to easily set up replication as often as every 5 minutes, which is a great way to have a standby host to fail over to.  Replication can be done over the network.  If you're going to replicate over the internet, I'd say you want a small data set or a very fast connection; I ran into issues a couple of times where the replication got interrupted and needed to start all over from scratch.  On OmniOS, Napp-It does not offer a free replication solution (there is a paid replication feature), but there are also numerous free ZFS replication scripts that people have written, such as ZnapZend.
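Under the hood all of these are wrapping zfs send/receive, which you can also just run by hand (a sketch; pool, dataset, host, and snapshot names are placeholders):

    # Initial full replication of a snapshot to another host over SSH
    zfs snapshot -r tank/data@2015-01-07
    zfs send -R tank/data@2015-01-07 | ssh backuphost zfs receive -F backup/data
    # Subsequent runs only send what changed since the previous snapshot
    zfs snapshot -r tank/data@2015-01-08
    zfs send -R -i tank/data@2015-01-07 tank/data@2015-01-08 | ssh backuphost zfs receive -F backup/data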

I did get the CrashPlan plugin to work under FreeNAS; however, I found that after a reboot the CrashPlan jail sometimes wouldn't auto-mount my main pool, so it ended up not being a reliable enough solution for me to be comfortable with.  I wish FreeNAS didn't run it inside a jail.

Memory Requirements

FreeNAS is a little more memory hungry than OmniOS.  For my 8TB pool, a bare minimum for FreeNAS is 8GB, while OmniOS is quite happy with 4GB, although I run it with 6GB to give it a little more ARC.

Hardware Support

FreeNAS supports more hardware than OmniOS.  I generally virtualize my ZFS server so it doesn’t matter too much to me but if you’re running bare metal and on obscure or newer hardware there’s a much better chance that FreeNAS supports it.  Also in 9.3 you have the ability to configure IPMI from the web interface.

VAAI (VMware vSphere Storage APIs – Array Integration)

FreeNAS now has VAAI support for iSCSI.  OmniOS has no VAAI support.  As of FreeNAS 9.3 and Napp-It 0.9f4, both control panels have the ability to enable VMware snapshot integration / ESXi hot snaps.  The way this works: before every ZFS snapshot is taken, the control panel has VMware snapshot all the VMs, then the ZFS snapshot is taken, then the VMware snapshots are released.  This is really nice and allows for properly consistent snapshots.


GUI

The FreeNAS GUI looks a little nicer and is probably a little easier for a beginner.  The background of the screen turns red whenever you're about to do something dangerous.  I found you can set up just about everything from the GUI, whereas I had to drop into the command line more often with OmniOS.  The FreeNAS web interface seems to hang for a few seconds from time to time compared to Napp-It, but nothing major.  I believe FreeNAS will have an asynchronous GUI in version 10.

One frustration I have with FreeNAS is that it doesn't play all that well with the CLI.  For example, if you create a pool via the CLI, FreeNAS doesn't see it; you actually have to import it using the GUI to use it there.  Napp-It is essentially an interface that runs CLI commands, so you can seamlessly switch back and forth between managing things on the CLI and in Napp-It.  This is a difference in philosophy: Napp-It is just a web interface meant to run on top of an OS, whereas FreeNAS is more than just a webapp on top of FreeBSD; FreeNAS is its own OS.

I think most people experienced with the zfs command line and Solaris are going to be a little more at home with Napp-It’s control panel, but it’s easy enough to figure out what FreeNAS is doing.  You just have to be careful what you do in the CLI.

On both platforms I found I had to switch into the CLI from time to time to do things right (e.g. FreeNAS can't set sync=always from the GUI, Napp-It can't set up networking).

As far as managing a ZFS file system goes, both have what I want: email alerts when there's a problem, scheduling for data scrubs, snapshots, etc.

FreeNAS has better security: it's much easier to set up an SSL cert on the management interface, and in fact you can create an internal CA to sign certificates from the GUI.  Security updates are also easier to manage from the web interface in FreeNAS.

Community

FreeNAS and OmniOS both have great communities.  If you post anything at HardForum, chances are you'll get a response from Gea, and he's usually quite helpful.  Post anything on the FreeNAS forums and Cyberjock will tell you that you need more RAM and that you'll lose all your data.  There is a lot of info on the FreeNAS forums, and the FreeNAS Redmine project is open so you can see all the issues; it's a great way to see what bugs and feature requests are out there and when they were or will be fixed.  OmniOS has an active OmniOS Discuss mailing list, and Gea, the author of Napp-It, is active on various forums.  He has answered my questions on several occasions over at HardForum's Data Storage subforum.  I've found the HardForum community a little more helpful… I've always gotten a response there, while several questions I posted on the FreeNAS forums went unanswered.

Documentation

FreeNAS documentation is great, like FreeBSD's.  Just about everything is in the FreeNAS Guide.

OmniOS documentation isn't as organized.  I found some howtos here, but they're not nearly as comprehensive as FreeNAS's.  Most of what I find for OmniOS I find in forums or on the Napp-It site.

Mirrored ZFS boot device / rpool

OmniOS can boot to a mirrored ZFS rpool.
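If you're setting this up by hand on OmniOS, it's essentially attaching a second disk to rpool and putting boot blocks on it (a sketch; the device names are placeholders, and the bootloader step below applies to the GRUB-based OmniOS releases):

    # Attach a second disk to the root pool, turning it into a mirror
    zpool attach -f rpool c2t0d0s0 c2t1d0s0
    # Install boot blocks on the newly attached disk (GRUB-based releases)
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t1d0s0
    # Wait for the resilver to complete
    zpool status rpool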

FreeNAS does not have a way to mirror the ZFS boot device.  FreeBSD does have this capability but it turns out FreeNAS is built on NanoBSD.  The only way to get FreeNAS to have redundancy on the boot device that I know of is to set it up on a hardware RAID card.

Update: FreeNAS 9.3 can now install to a mirrored ZFS rpool!

Features / Plugins / Extensions

Napp-It’s extensions include:

  • AMP (Apache, MySQL, PHP stack)
  • Baikal CalDAV / CardDAV Server
  • Logitech MediaServer
  • MediaTomb (DLNA / UPnP server)
  • Owncloud (Dropbox alternative)
  • PHPvirtualbox (VirtualBox interface)
  • Pydio Sharing
  • FTP Server
  • Serviio Mediaserver
  • Tine Groupware

FreeNAS plugins:

  • Bacula (Backup Server)
  • BTSync (Bittorrent Sync)
  • CouchPotato (NZB and Torrent downloader)
  • CrashPlan (Backup client/server)
  • Cruciblewds (Computer imaging / cloning)
  • Firefly (media server for Roku SoundBridge and Apple iTunes)
  • Headphones (automatic music downloader for SABnzbd)
  • HTPC-Manager
  • LazyLibrarian (follow authors and grab metadata for digital reading)
  • Maraschino (web interface for XBMC HTPC)
  • MineOS (Minecraft control panel)
  • Mylar (Comic book downloader)
  • OwnCloud (Dropbox alternative)
  • PlexMediaServer
  • s3cmd
  • SABnzbd (Binary newsreader)
  • SickBeard (PVR for newsgroup users)
  • SickRage (Video file manager for TV shows)
  • Subsonic (music streaming server)
  • Syncthing (Open source cluster synchronization)
  • Transmission (BitTorrent client)
  • XDM (eXtendable Download Manager)

All FreeNAS plugins run in a jail, so you must mount the storage that service will need inside the jail… this can be kind of annoying, but it does allow for some nice security; for example, CrashPlan can mount the storage you want to back up as read-only.

Protocols and Services

Both systems offer a standard stack of AFP, SMB/CIFS, iSCSI, FTP, NFS, RSYNC, and TFTP.
FreeNAS also has WebDAV and a few extra services like Dynamic DNS, LLDP, and UPS (the ability to connect to a UPS unit and shut down automatically).

Performance Reporting and Monitoring

Napp-It does not have reports and graphs in the free version.  FreeNAS has reports and you can look back as far as you want to see historical performance metrics.

freenas_stats

As a Hypervisor

Both systems are very efficient at running guests of the same OS: OmniOS has zones, and FreeNAS can run FreeBSD jails.  OmniOS also has KVM, which can be used to run any OS.  I suspect that FreeNAS 10 will have bhyve.  Both can also run VirtualBox.

Stability vs Latest

Both systems are stable; OmniOS/Napp-It seems to be the more robust of the two.  The OmniOS LTS updates are very minimal, mostly security updates and a few bug fixes.  Infrequent and minimal updates are what I like to see in a storage solution.

FreeNAS is pushing a little closer to the cutting edge.  They have frequent updates pushed out, sometimes so frequent that I think they can't have been thoroughly tested.  On the other hand, if you come across an issue or feature request in FreeNAS and report it, chances are they'll get it into the next release pretty quickly.

Because of this, OmniOS is behind FreeNAS on some things like NFS and SMB protocol versions, VAAI support for iSCSI, etc.

I think this is an important consideration.  With FreeNAS you'll get newer features and later technologies, while OmniOS LTS is generally the better platform for stability.  The commercial TrueNAS solution is also going to be robust.  For FreeNAS you could always pick a stable version and not update very often; I really wish FreeNAS had an LTS, or at least a slower-moving stable branch that maybe only did quarterly updates except for security fixes.

ZFS Integration

OmniOS has a slight edge on ZFS integration.  As I mentioned earlier, OmniOS has multi-tiered snapshot integration with the Windows Previous Versions feature, where FreeNAS can only pick one snap frequency to show up there.  Also, in OmniOS the NFS and SMB shares are stored as properties on the datasets, so you can export the pool, import it somewhere else, and the shares stay with the pool; you don't have to reconfigure them.
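That last point is just standard illumos ZFS behavior; the shares are dataset properties (a sketch; pool and dataset names are placeholders):

    # Shares are stored as ZFS properties on the dataset itself
    zfs set sharesmb=on tank/shares
    zfs set sharenfs=on tank/shares
    zfs get sharenfs,sharesmb tank/shares
    # After exporting the pool and importing it on another box, the shares come back automatically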

Commercial Support

OmniOS offers Commercial Support if you want it.

iX Systems offers supported TrueNAS appliances.

Performance

On an All-in-one setup, I set up VMware ESXi 6.0 with a virtual storage network and tested FreeNAS and OmniOS using iSCSI and NFS.  On all tests MTU is set to 9000 on the storage network, and compression is set to LZ4.  iSCSI volumes are sparse ZVOLs.  I gave the ZFS server 2 cores and 8GB memory, and the guest VM 2 cores and 8GB memory.  The guest VM is Windows 10 running Crystal Benchmark.

Environment:

  • Supermicro X10SL7-F with LSI 2308 HBA flashed to IT firmware and passed to ZFS server via VT-d (I flashed the P19 firmware for OmniOS and then re-flashed to P16 for FreeNAS).
  • Intel Xeon E3-1240v3 3.40Ghz.
  • 16GB ECC Memory.
  • 6 x 2TB Seagate 7200 drives in RAID-Z2
  • 2 x 100GB DC S3700s striped for ZIL/SLOG.  Over-provisioned to 8GB.

Latest stable updates on both operating systems:

FreeNAS 9.3  update 201503200528
OmniOS r151012 omnios-10b9c79

On Crystal Benchmark I ran 5 runs each of the 4000MB, 1000MB, and 50MB size tests; the numbers shown are the averages of those runs.

On all tests every write was going to the ZIL / SLOG devices.  On NFS I left the default sync=standard (which results in every write being a sync with ESXi).  On iSCSI I set sync=always; ESXi doesn't honor sync requests from the guest with iSCSI, so it's not safe to run with sync=standard.

Update: 7/30/2015: FreeNAS has pushed out some updates that appear to improve NFS performance.  See the newer results here: VMware vs bhyve Performance Comparison.  Original results below.

Sequential Read MBps

seqrd

Sequential Write MBps

seqwr2

Random Read 512K MBps

rndrd512

Random Write 512K MBps

rndwr512

Random Read 4K IOPS

randrd4kqd1

Random Write 4K IOPS

rndwr4kqd1

Random Read 4K QD=32 IOPS

rdndrd4kqd32

Random Write 4K QD=32 IOPS

rndwr4kqd32

Performance Thoughts

So it appears, from these unscientific benchmarks, that OmniOS on NFS is the fastest configuration; iSCSI performs pretty similarly on both FreeNAS and OmniOS depending on the test.  One other thing I should mention, which doesn't show up in the tests, is latency.  With NFS I saw latency on the ESXi storage as high as 14ms during the tests, while latency never broke a millisecond with iSCSI.

One major drawback to my benchmarks is that it's only one guest hitting the storage.  It would be interesting to repeat the test with several VMs accessing the storage simultaneously; I expect the results may be different under heavy concurrent load.

I chose a 64K iSCSI block size because the larger blocks result in a higher LZ4 compression ratio.  I did several quick benchmarks and found 16K and 64K performed pretty similarly; 16K did perform better at random 4K write QD=1, but otherwise 64K was close to 16K depending on the test.  I saw a significant drop in random performance at 128K.  Once again, under different scenarios this may not be the optimal block size for all types of workloads.
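For reference, creating a sparse ZVOL with that block size is a one-liner (a sketch; the pool and volume names are placeholders, and volblocksize can only be set at creation time):

    # Sparse 64K-block ZVOL used as an iSCSI extent
    zfs create -s -V 200G -o volblocksize=64K -o compression=lz4 tank/iscsi/vm01
    zfs get volblocksize,compressratio tank/iscsi/vm01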