Tim Cook, Gay American Hero Vs. Brendan Eich, Evil Christian Villain

I was waiting on someone to make this comparison.

Flashback: Mozilla CEO Steps Down Over Gay Marriage Views

FreeNAS vs. OmniOS / Napp-It


I’ve been using OpenIndiana since late 2011 and switched to OmniOS in 2013.  Lately I started testing FreeNAS.  What drove me to do this is that I use CrashPlan to back up my pool, but Code 42 recently announced they’ll be discontinuing Solaris support for CrashPlan, so I needed to start looking for either an alternative OS or an alternative backup solution.  I decided to look at FreeNAS because it has a CrashPlan plugin that runs in a jail using Linux emulation.  After testing it out for a while I am likely going to stay on OmniOS, since it suits my needs better, and instead swap CrashPlan for ZnapZend as my backup solution.  But after running FreeNAS for a few months, here are my thoughts on both platforms and their strengths and weaknesses as a ZFS storage server.

CIFS / SMB Performance for Windows Shares

FreeNAS has a newer implementation of SMB that supports SMB3; I think OmniOS is still at SMB1.

OmniOS is slightly faster: writing a large file over my LAN gets around 115MB/s vs 98MB/s on FreeNAS.  I suspect this is because OmniOS runs its CIFS/SMB server in the kernel while FreeNAS runs Samba in user space.  I tried changing the FreeNAS protocol to SMB2, and even SMB1, but couldn’t get past 99MB/s.  This is on a Xeon E3-1240V3, so there’s plenty of CPU power; Samba on FreeNAS just can’t keep up.

CIFS / SMB Snapshot Integration with Previous Versions

Previous Versions Snapshot Integration with Windows is far superior in OmniOS.   I always use multiple snapshot jobs to do progressive thinning of snapshots.  So for example I’ll set up monthly snaps with a 6 month retention, weekly with a two month retention, daily with two weeks, hourly with 1 week, and every 5 minutes for two days.   FreeNAS will let you set up the snap jobs this way, but Windows Previous Versions will only show the snapshots from one of the snap jobs (so you may see your every-5-minute snaps but you can’t see the hourly or weekly snaps).  OmniOS handles this nicely.  As a bonus, Napp-It has an option to automatically delete empty snapshots sooner than their retention expiration, so I don’t see them in Previous Versions unless some data actually changed.
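
If you wanted to script the same progressive thinning yourself outside of either GUI, the idea is just one snapshot job per tier, each with its own name prefix and its own expiration.  A rough sketch (tank/data and the prefixes are made-up examples, not what Napp-It or FreeNAS actually use):

# zfs snapshot tank/data@hourly-2014-09-14_1300
# zfs snapshot tank/data@daily-2014-09-14

Run one of those from cron per tier, then have the same job destroy the oldest snapshots of its own prefix once they pass the retention window:

# zfs list -H -t snapshot -o name -s creation | grep '@hourly-'
# zfs destroy tank/data@hourly-2014-09-07_1300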


Enclosure Management

Both platforms struggle here… probably the best thing to do is write down the serial number of each drive with the slot number.  In FreeNAS drives are given device names like da0, da1, etc. but unfortunately the numbers don’t seem to correspond to anything and they can even change between reboots.  FreeNAS does have the ability to label drives so you could insert one drive at a time and label them with the slot they’re in.

OmniOS drives are given names like c3t5000C5005328D67Bd0 which isn’t entirely helpful.

For LSI controllers the sas2ircu utility (which works on FreeBSD and Solaris) will map the drives to slots.
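
On an LSI SAS2 HBA something like this will dump the enclosure and slot numbers along with each drive’s serial, which you can match against the device names above (controller 0 is just an assumption, check LIST first):

# sas2ircu LIST
# sas2ircu 0 DISPLAY

The DISPLAY output lists every attached drive with its enclosure number, slot number, serial number, and state.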

Fault Management

The ZFS fault management daemon will automatically replace a failed drive with a hot spare… but it hasn’t been ported to FreeBSD yet so FreeNAS really only has warm spare capability.  To me this is a minor concern… if you’re going to use RAID-Z with a hot spare why not just configure the pool with RAID-Z2 or RAID-Z3 to begin with?  However, I can see how the fault management daemon on OmniOS would reduce the amount of work if you had several hundred drives and failures were routine.
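
For what it’s worth, the commands are the same on both platforms; the difference is just who swaps the spare in.  A quick sketch (device names are examples):

# zpool add tank spare da6

On OmniOS the fault management daemon will pull the spare in automatically when a drive faults; on FreeNAS you’d do the warm-spare swap yourself:

# zpool replace tank da2 da6
# zpool status tank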

SWAP issue on FreeNAS

While I was testing I actually had a drive fail (this is why 3-year old Seagate drives are great to test with) and FreeNAS crashed!  The NFS pool dropped out from under VMware.  When I looked at the console I saw “swap_pager: I/O error – pagein failed”.  I had run into FreeNAS Bug 208, which was closed a year ago but never resolved.  The default setting in FreeNAS is to create a 2GB swap partition on every drive, which acts like striped swap space (I am not making this up, this is the default setting).  So if any one of the drives fails it can take FreeNAS down.  The argument from FreeNAS is that you shouldn’t be using swap, and perhaps that’s true, but I had a FreeNAS box with 8GB of memory running only one jail with CrashPlan get brought down because a single drive failed.  That’s not an acceptable default setting.  Fortunately there is a way to disable the automatic creation of swap partitions on FreeNAS; it’s best to disable the setting before initializing any disks.
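
If you want to see what you’re dealing with on a box that’s already been initialized, FreeBSD’s swapinfo will list the per-drive swap partitions, and you can drop them on the fly with swapoff (the device path is whatever swapinfo shows, not literally this):

# swapinfo -h
# swapoff /dev/gptid/...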

In my three years of running an OpenSolaris / Illumos based OS I’ve never had a drive failure bring the system down.

Running under VMware

FreeNAS is not supported running under a VM but OmniOS is.  In my testing both OmniOS and FreeNAS work well under VMware following the best practice of passing an LSI controller flashed to IT mode into the VM using VT-d.  I did find that OmniOS does a lot better virtualized on slower hardware than FreeNAS.  On an Avoton C2750, FreeNAS performed well on bare metal, but when I virtualized it using vmdks on drives instead of VT-d, FreeNAS suffered in performance while OmniOS performed quite well under the same scenario.

Both platforms have VMXNET3 drivers; neither has a paravirtual SCSI driver.

Encryption

Unfortunately Oracle did not release the source for Solaris 11, so there is no encryption support on OpenZFS directly.

FreeNAS can take advantage of FreeBSD’s GELI-based encryption.  FreeBSD’s implementation can use the AES-NI instruction set; the last time I tested Solaris 11 the AES instructions weren’t being used, so FreeBSD/FreeNAS probably has the fastest encryption implementation for ZFS.
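
If you want to verify the hardware path is actually in use on FreeBSD/FreeNAS, something like this should do it (the aesni module may already be loaded):

# kldload aesni
# dmesg | grep -i aesni
# geli list | grep -i crypto

geli should report hardware crypto rather than software for each encrypted provider.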

There isn’t a good encryption option on OmniOS.

ZFS High Availability

Neither system supports ZFS high availability out of the box.  OmniOS can use a third-party tool like RSF-1 (paid) to accomplish this.

ZFS Replication & Backups

FreeNAS has the ability to easily set up replication as often as every 5 minutes, which is a great way to have a standby host to fail over to.  Replication can be done over the network.  If you’re going to replicate over the internet I’d say you want a small data set or a very fast connection; I ran into issues a couple of times where the replication got interrupted and had to start all over from scratch.  On OmniOS, Napp-It does not offer a free replication solution (there is a paid replication feature), but there are also numerous free ZFS replication scripts that people have written, such as ZnapZend.
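
Under the hood all of these tools are just wrapping zfs send/receive, so even without Napp-It’s paid feature you can replicate by hand.  A minimal sketch (pool, dataset, and host names are examples):

# zfs snapshot -r tank/data@rep-1
# zfs send -R tank/data@rep-1 | ssh backuphost zfs recv -F backup/data

Then each following run only sends the increment between the last two snapshots:

# zfs snapshot -r tank/data@rep-2
# zfs send -R -i tank/data@rep-1 tank/data@rep-2 | ssh backuphost zfs recv backup/data

ZnapZend essentially automates this along with the retention schedules.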

I did get the CrashPlan plugin to work under FreeNAS, however I found that after a reboot the CrashPlan jail sometimes wouldn’t auto-mount my main pool so it ended up not being a reliable enough solution for me to be comfortable with.  I wish FreeNAS made it so that it wasn’t in a jail.

Memory Requirements

FreeNAS is a little more memory hungry than OmniOS.  For my 8TB pool the bare minimum for FreeNAS is 8GB, while OmniOS is quite happy with 4GB, although I run it with 6GB to give it a little more ARC.

Hardware Support

FreeNAS supports more hardware than OmniOS.  I generally virtualize my ZFS server so it doesn’t matter too much to me but if you’re running bare metal and on obscure or newer hardware there’s a much better chance that FreeNAS supports it.

VAAI (VMware vSphere Storage APIs for Array Integration)

FreeNAS now has VAAI support for iSCSI.  OmniOS has no VAAI support.  I use NFS so it’s not an issue for me but it’s useful for people using iSCSI.
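
If you are on iSCSI and want to confirm the primitives are actually being picked up, you can check from the ESXi shell (the naa identifier below is a placeholder for your LUN):

# esxcli storage core device vaai status get -d naa.6589cfc000000...

NFS datastores won’t show up there, since VAAI for NFS requires a separate vendor plugin.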

GUI

The FreeNAS GUI looks a little nicer and is probably a little easier for a beginner.  The background of the screen turns red whenever you’re about to do something dangerous.  The FreeNAS web interface seems to hang for a few seconds from time to time compared to Napp-It, but nothing major.

I think most people experienced with the zfs command line are going to be a little more at home with Napp-It’s control panel but it’s easy enough to figure out what FreeNAS is doing.

As far as managing a ZFS file system, both have what I want: email alerts when there’s a problem, scheduling for data scrubs, snapshots, etc.

Community

FreeNAS and OmniOS both have great communities.  There is a lot of info on the FreeNAS forums and their Redmine project is open so you can see all the outstanding issues and even join the discussion.  I’ve found Redmine invaluable.  OmniOS has an active OmniOS Discuss mailman list and Gea, the author of Napp-It is active on various forums.  He has answered my questions on several occasions over at HardForum’s Data Storage subforum.  In general I’ve found the HardForum community a little more helpful…I’ve always gotten a response there while several questions I posted on the FreeNAS forums went unanswered.

Documentation

FreeNAS documentation is great, like FreeBSD’s.  Just about everything is in the FreeNAS Guide.

OmniOS isn’t as organized.  I found some howtos here, but they’re not nearly as comprehensive as the FreeNAS documentation.  Most of what I learn about OmniOS comes from forums or the Napp-It site.

Mirrored ZFS boot device / rpool

OmniOS can boot to a mirrored ZFS rpool.
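
On a BIOS/GRUB install the procedure to add the second half of the mirror is roughly this (device names are examples, and the second disk needs the same partition layout as the first):

# zpool attach rpool c3t0d0s0 c3t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0
# zpool status rpool

Wait for the resilver to finish and you can boot from either disk.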

FreeNAS does not have a way to mirror the ZFS boot device.  FreeBSD has this capability, but FreeNAS is built on NanoBSD.  The only way I know of to get redundancy on the FreeNAS boot device is to set it up on a hardware RAID card.

Extra Features

OmniOS can run just about anything available to Solaris.  Napp-It has some GUI tools to set up and configure MediaTomb (a UPnP / DLNA server) which can serve up media to your computers, iPad, Androids, and some newer TV sets.  Napp-It’s GUI can also set up AFP (Apple Filing Protocol), Apache, MySQL, PHP, FTP, Rsync, TFTPD, Bonjour, and Owncloud.

FreeNAS has Plugins (which are installed in FreeBSD jails) for bacula (backup server), couchpotato (NZB and torrent downloader), gamez (downloader for video games), maraschino (XBMC interface), mylar (Comic Book downloader), sabnzbd (binary newsreader), transmission (BitTorrent client), BTSync (peer to peer file syncing), crashplan (backup software), htpc-manager, minidlna (DLNA / UPnP server), owncloud (Dropbox replacement), plexmediaserver (media server), sickbeard (newsgroup reader), etc.

Commercial Support

OmniOS offers Commercial Support if you want it.

iX Systems does not offer support on standalone FreeNAS installs, but it does offer TrueNAS appliances.

Ben’s Banned Popcorn Recipe

You May Die From Eating This

Please read about how bad coconut oil is from the liberal CSPI organization and learn all about the danger of Coconut Oil from Michelle Obama and the CDC before proceeding.


The secret to great popcorn is coconut oil which despite being healthy for you has been banned in movie theater popcorn.  Now movie popcorn is worthless.  However, Kris and I have developed the perfect popcorn recipe…

You’ll Need

A good popper:  I like this Whirley Popper which Amazon has for $20.

Ingredients

Pretty much all these ingredients are cheaper at Miller’s… if you don’t have a Miller’s you can get them from Amazon:

  1. Coconut Oil (not the extra virgin, it tastes better if it’s got some impurities).
  2. Butter (I just grab a chunk off an Amish butter slab).
  3. Popcorn
  4. Salt
  5. Toppings!  Choose at least one topping:

What To Do

  1. Microwave a chunk of butter until it’s liquid.  Then let it sit (do not put it in the popper… just let it sit and go to step 2).
  2. Drop a few spoonfuls of coconut oil in the Whirley and start the heat.
  3. When coconut oil is melted put a handful of popcorn in the popper and turn the crank.
  4. When the popcorn is done, pour it into a bowl.
  5. By now the butter should have separated into three layers with the milk solids on the bottom.  Pour the upper portion of the butter on your popcorn but don’t use the milk solids.  If you do, the popcorn will get soggy.
  6. Add salt and your choice of garlic, yeast, or Slap Ya Mama.

Bonus: This is Gluten Free!

SSD ZFS ZIL SLOG Benchmarks – Intel DC S3700, Intel DC S3500, Seagate 600 Pro, Crucial MX100 Comparison

I ran some performance tests comparing the Intel DC S3700, Intel DC S3500, Seagate 600 Pro, and Crucial MX100 when being used as a ZFS ZIL / SLOG Device.  All of these drives have a capacitor backed write cache so they can lose power in the middle of a write without losing data.


Here are the results….

(Chart: ZFS ZIL / SLOG SSD comparison; the numbers are in the table below.)

The Intel DC S3700 takes the lead, with the Intel DC S3500 a strong second.  I am surprised at how much better the Intel SSDs performed than the Crucial and Seagate, considering Intel’s spec-sheet claims are not as impressive as the other two: Intel claims slower IOPS and slower sequential performance, yet its drives outperform the other two SSDs here.

SSD                       Seagate 600 Pro   Crucial MX100    Intel DC S3500         Intel DC S3700
Size (GB)                 240               256              80                     100
Model                     ST240FP0021       CT256MX100SSD1   SSDSC2BB080G4          SSDSC2BA100G3T
Sequential (MB/s)         500               550              Read 340 / Write 100   Read 500 / Write 200
IOPS Random Read          85,000            85,000           70,000                 75,000
IOPS Random Write         11,000            70,000           7,000                  19,000
Endurance (TB written)    134               72               45                     1874
Warranty                  5 years           3 years          5 years                5 years

ZFS ZIL / SLOG results:
oltp 2 thread             66                160              732                    746
oltp 4 thread             89                234              989                    1008
random r/w                208               543              1454                   3641
random write              87                226              1844                   1661
seq write (MB/s)          5                 14               93                     99

Drive Costs

Conclusion

  • For most workloads use the Intel DC S3500.
  • For heavy workloads use the Intel DC S3700.
  • The best performance for the dollar is the Intel DC S3500.

In my environment the best device for a ZFS ZIL is either the Intel DC S3500 or the Intel DC S3700.  The S3700 is designed to hold up to very heavy usage–you could overwrite the entire 100GB drive 10 times every day for 5 years!  With the DC S3500 you could write out 25GB/day every day for 5 years.  Probably for most environments the DC S3500 is enough.

I should note that both the Crucial and Seagate improved performance over not having a SLOG at all.

Unanswered Questions

I would like to know why the Seagate 600 Pro and Crucial MX100 performed so badly.  My suspicion is that it comes down to the way ESXi over NFS forces a cache sync on every write: the Seagate and Crucial may be obeying the sync command, while the Intel drives are ignoring it because they know they can rely on their power loss protection mechanism.  I’m not entirely sure this is the difference, but it’s my best guess.
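
One way to test that theory would be to take the sync path out of the equation entirely and see if the gap closes, e.g. (dataset name is an example; don’t leave it this way with data you care about):

# zfs set sync=disabled tank/vmware_nfs
(rerun the benchmark)
# zfs set sync=standard tank/vmware_nfs

If the Seagate and Crucial suddenly catch up, the difference really is in how each drive handles cache flushes.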

Testing Environment

This is based on my Supermicro 2U ZFS Server Build: Xeon E3-1240v3, with the ZFS server running FreeNAS 9.2.1.7 under VMware ESXi 5.5.  The HBA is the LSI 2308 built into the Supermicro X10SL7-F, flashed into IT mode.  The LSI 2308 is passed to FreeNAS using VT-d.  The FreeNAS VM is given 8GB of memory.

The zpool is 3 x 7200RPM Seagate 2TB drives in RAID-Z; in all tests an Intel DC S3700 is used as the L2ARC.  Compression = LZ4, deduplication off, sync = standard, encryption off.  The ZFS dataset is shared back to ESXi via NFS.  On that NFS share is a guest VM running Ubuntu 14.04, given 1GB of memory and 2 cores.  The ZIL device is swapped out between tests.  I ran each test seven times and averaged the results, discarding the first three runs to allow some data to get cached into ARC (I did not see any performance improvement after repeating a test three times, so I believe that was sufficient).
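
For reference, built by hand the equivalent pool would look roughly like this (device names are examples; on FreeNAS the GUI does this for you):

# zpool create tank raidz da1 da2 da3
# zpool add tank cache da4
# zpool add tank log da5
# zfs set compression=lz4 tank
# zfs set dedup=off tank
# zfs set sync=standard tank

with da4 standing in for the DC S3700 used as L2ARC and da5 for whichever SSD is being tested as the SLOG.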

Thoughts for Future Tests

I’d like to repeat these tests using OmniOS and Solaris sometime but who knows if I’ll ever get to it.  I imagine the results would be pretty close.  Also, of particular interest would be testing on  VMware ESXi 6 beta… I’d be curious to see if there are any changes in how NFS performs there… but if I tested it I wouldn’t be able to post the results because of the NDA.

Test Commands
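
The commands should be roughly the same sysbench runs as in my S3500 vs S3700 post below, plus a sequential write test:

# sysbench --test=fileio --file-total-size=6G --file-test-mode=rndrw --max-time=300 run
# sysbench --test=fileio --file-total-size=6G --file-test-mode=rndwr --max-time=300 run
# sysbench --test=fileio --file-total-size=6G --file-test-mode=seqwr --max-time=300 run
# sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=sbtest --mysql-user=root --mysql-password=test --num-threads=2 --max-time=60 run
# sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=sbtest --mysql-user=root --mysql-password=test --num-threads=4 --max-time=60 run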

Supermicro 2U ZFS Server Build

Here’s my latest VMware + ZFS VSAN all-in-one server build based on a Supermicro platform.  This borrows from the napp-it all-in-one concept, where you pass the SAS/SATA controller to the ZFS VM using VT-d and then share a ZFS dataset back to VMware using NFS or iSCSI.  For this particular server I’m using FreeNAS as the VSAN.

Chassis – 2U Supermicro 6 Bay Hotswap


For the case I went with the Supermicro CSE-822T-400LPB.  It comes with rails, 6 hotswap bays, a 400W PSU (80 PLUS efficiency), and a 5.25″ bay which works great for installing a mobile rack.

The 3400 RPM Nidec UltraFlo fans are too loud for my house (although quiet for a rack-mount server) and there is no way to keep the speed low in Supermicro’s BIOS so I replaced them with low noise Antec 80mm fans which are a couple dollars each.


Unfortunately these fans only have 3 wires (no RPM sensor) so I used freeipmi (apt-get install freeipmi) to disable monitoring of the fan speeds:

# ipmi-sensors-config -h 10.2.0.228 -u ADMIN -p ADMIN --filename=ipmi.config --checkout

Edit ipmi.config and under each FAN section set:

Enable_Scanning_On_This_Sensor                                              No

Then commit the change:

# ipmi-sensors-config -h 10.2.0.228 -u ADMIN -p ADMIN --filename=ipmi.config --commit

Probably a better solution is to use 4-wire fans and lower the sensor thresholds, but then you’re looking at around $10 per fan.  The server makes about as much noise as my desktop computer now.

Motherboard – Supermicro X10SL7-F with LSI-2308

Supermicro’s X10SL7-F has a built-in LSI 2308 controller with 8 SAS/SATA ports.  Using it across the two sets of hotswap bays works perfectly.  I used the instructions in a FreeNAS forum post to flash the controller into IT mode.  Under VMware the LSI 2308 is passed to the ZFS server VM, which is connected to all 6 x 3.5″ hotswap bays plus the first two hotswap bays of the mobile rack.  This allows for 6 drives in a RAID-Z2 configuration, plus one SSD for ZIL and another for L2ARC.  The remaining 2.5″ drives in the mobile rack are connected to the two 6Gbps ports on the motherboard and used by VMware.  I usually enable VMware’s host cache on the SSDs.
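
A quick way to confirm the passthrough actually took is to look for the controller on the PCI bus from inside the FreeNAS guest (on OmniOS you would look for mpt_sas in prtconf instead):

# pciconf -lv | grep -B 3 -i lsi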


Memory: Crucial

Crucial 8GB DDR3L (low voltage) ECC server memory; I got the 16GB kit.

CPU Xeon E3-1231v3 Quad Core

I think the Xeon E3-1231v3 is about the best bang for the buck in an E3-class CPU right now.  You get four 3.4GHz cores (3.8GHz turbo), VT-d, ECC support, and hyper-threading.  It’s about $3 more than the Xeon E3-1230v3 but 100MHz faster.   I’m actually using a Xeon E3-1240v3 (which is essentially identical to the 1231v3) because I saw a good price on a used one a while ago.

Mobile Rack 2.5″ SSD bay


ICY DOCK ToughArmor 4 x 2.5″ mobile rack.  The hotswap bays seem to be pretty high quality metal.

Hard Drives: I’m running 2TB Seagate 7200 RPM drives; although they may not be the most reliable drives, they’re cheap and I’m confident with RAID-Z2 (I also always have two off-site backups and one onsite backup).  For SSDs I use Intel’s DC S3700 (here’s some performance testing I did on a different system; hopefully I’ll get a chance to retest on this build).

Racking with Rails on a Lackrack

A while ago I discovered the Lackrack (these are $7 at Ikea)… I figured since I have rails I may as well install them.  The legs were slightly too close together, so I used a handsaw to cut a small notch for the rails and mounted the rails using particle board screws.  One thing to note: the legs are hollow just below where the top screw of the rail mount goes, so only that top screw is going to be able to bear any load.


Here’s the final product:

(Photo: the finished server racked in the Lackrack.)

What VMs run on the server?

  • ZFS – provides CIFS and NFS storage for VMware and Windows, Linux, and FreeBSD clients.  Basically all our movies, pictures, videos, music, and anything of importance goes on here.
  • Backup server – backs up some of my offsite cloud servers.
  • Kolab – Mail / Groupware server (Exchange alternative)
  • Minecraft Server
  • Owncloud (Dropbox replacement)
  • VPN Server
  • Squid Proxy Cache (I have limited bandwidth where I’m at so this helps a lot).
  • DLNA server (so Android clients can watch movies).

Eli and Screwdriver

Kris gave me a small powered screwdriver for our Anniversary (thanks Kris!)

Eli took an interest in it so I got out an old hard drive and a few screws… he sat there the next 15 minutes screwing and unscrewing…

Eli with Screwdriver

 

Intel DC S3500 vs S3700 as a ZIL SSD Device Benchmarks

I ran some benchmarks comparing the Intel DC S3500 vs the Intel DC S3700 when being used as a SLOG/ZIL (ZFS Intent Log).  Both SSDs are what Intel considers datacenter class and they are both very reasonably priced compared to some of the other enterprise class offerings.

Update: 2014-09-14 — I’ve done newer benchmarks on faster hardware here: https://b3n.org/ssd-zfs-zil-slog-benchmarks-intel-dc-s3700-intel-dc-s3500-seagate-600-pro-crucial-mx100-comparison/


SLOG Requirements

Since flushing writes out to the spindles is slow, ZFS uses the ZIL to safely acknowledge synchronous writes.  The ZIL is never read from except after a crash or power loss, to replay writes that were acknowledged but hadn’t yet been committed to the pool.  So to make a decent SLOG/ZIL the SSD must be able to sustain a power loss in the middle of a write without losing or corrupting data.  The ZIL also turns random writes into sequential writes, so it must be able to sustain fairly high throughput.  I don’t think random write IOPS is as important, but I’m sure it helps some.  Generally a larger SSD is better because larger models tend to offer more throughput.  I don’t have an unlimited budget so I got the 80GB S3500 and the 100GB S3700, but if you’re planning for some serious performance you may want to use a larger model, maybe around 200GB or even 400GB.
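
Adding (or removing) a dedicated log device is a one-liner, which makes it easy to swap SSDs in and out for testing (device names are examples):

# zpool add tank log gpt/slog0
# zpool remove tank gpt/slog0

You can also mirror the SLOG (zpool add tank log mirror gpt/slog0 gpt/slog1) if you’re worried about losing the device along with any in-flight transactions.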

Specifications of SSDs Tested

Intel DC S3500 80GB

  • Seq Read/Write: 340MB/s / 100MB/s
  • 70,000 / 7,000 4K Read/Write IOPS
  • Endurance Rating: 45TB written
  • Price: $113 at Amazon

Intel DC S3700 100GB

  • Seq Read/Write: 500MB/s / 200MB/s
  • 75,000 / 19,000 4K Read/Write IOPS
  • Endurance Rating: 1.83PB written (that is a lot of endurance).
  • Price: $203 at Amazon

Build Specifications

Virtual NAS Configuration

  • FreeNAS 9.2.1.7 VM with 6GB memory and 2 cores.  Using VMXNET3 network driver.
  • RAID-Z is from VMDKs on 3×7200 Seagates.
  • SLOG/ZIL device is a 16GB vmdk on the tested SSD.
  • NFS dataset on pool is shared back to VMware.  For more information on this concept see Napp-in-one.
  • LZ4 Compression enabled on the pool.
  • Encryption On
  • Deduplication Off
  • Atime=off
  • Sync=Standard (VMware requests a cache flush after every write so this is a very safe configuration).

Don’t try this at home: I should note that FreeNAS is not supported running as a VM guest, and as a general rule running ZFS off of VMDKs is discouraged.  OmniOS would be better supported as a VM guest as long as the HBA is passed to the guest using VMDirectIO.  The Avoton processor doesn’t support VT-d which is why I didn’t try to benchmark in that configuration.

Benchmarked Guest VM Configuration

  • Benchmark vmdk is installed on the NFS datastore.  A VM is installed on that vmdk running Ubuntu 14.04 LTS, 2 cores, 1GB memory.  Para-virtual storage.
  • OLTP tests run against MariaDB Server 5.5.37 (fork from MySQL).

I wiped out the zpool after every configuration change, ran each test three times for each configuration, and took the average.  (In almost all cases subsequent tests ran faster because the ZFS ARC was caching reads into memory, so I was very careful to run the tests in the same order on each configuration; if I made a mistake I rebooted to clear the ARC.)  I am mostly concerned with testing random write performance, so these benchmarks focus more on write IOPS than on throughput.
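
If you want to keep an eye on how much the ARC has absorbed between runs, the FreeBSD ZFS sysctls are visible from the FreeNAS shell, e.g.:

# sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses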

Benchmark Commands

Random Read/Write:
# sysbench --test=fileio --file-total-size=6G --file-test-mode=rndrw --max-time=300 run

Random Write:
# sysbench --test=fileio --file-total-size=6G --file-test-mode=rndwr --max-time=300 run

OLTP 2 threads:
# sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=sbtest --mysql-user=root --mysql-password=test --num-threads=2 --max-time=60 run

OLTP 4 threads:
# sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=sbtest --mysql-user=root --mysql-password=test --num-threads=4 --max-time=60 run

Test Results


 

SLOG                   Test             TPS Avg
none                   OLTP 2 Threads   151
Intel DC S3500 80GB    OLTP 2 Threads   188
Intel DC S3700 100GB   OLTP 2 Threads   189
none                   OLTP 4 Threads   207
Intel DC S3500 80GB    OLTP 4 Threads   271
Intel DC S3700 100GB   OLTP 4 Threads   266
none                   RNDRW            613
Intel DC S3500 80GB    RNDRW            1120
Intel DC S3700 100GB   RNDRW            1166
none                   RNDWR            273
Intel DC S3500 80GB    RNDWR            588
Intel DC S3700 100GB   RNDWR            569

 

Surprisingly the Intel DC S3700 didn’t offer much of an advantage over the DC S3500.  Real life workload results may vary but the Intel DC S3500 is probably the best performance per dollar for a SLOG device unless you’re concerned about write endurance in which case you’ll want to use the DC S3700.

Other Observations

There are a few other SSDs with power loss protection that would also work.  The Seagate 600 Pro, and for lighter workloads consumer SSDs like the Crucial M500 and Crucial MX100, would be decent candidates and still provide an advantage over running without a SLOG.

I ran a few tests comparing the VMXNET3 vs E1000 network adapter and there is a performance penalty for the E1000.  This test was against the DC 3500.


Network    Test             TPS Avg
E1000g     OLTP 2 Threads   187
VMXNET3    OLTP 2 Threads   188
E1000g     OLTP 4 Threads   262
VMXNET3    OLTP 4 Threads   271
E1000g     RNDRW            1101
VMXNET3    RNDRW            1120
E1000g     RNDWR            564
VMXNET3    RNDWR            588

I ran a few tests with Encryption on and off and found a small performance penalty for encryption.  This test was against the DC S3700.


Encryption   Test             TPS Avg
Off          OLTP 2 Threads   195
On           OLTP 2 Threads   189
Off          OLTP 4 Threads   270
On           OLTP 4 Threads   266
Off          RNDRW            1209
On           RNDRW            1166
Off          RNDWR            609
On           RNDWR            569