Solaris 11, OpenIndiana, and NexentaStor benchmarks

Purpose: Determine the fastest small NAS/SAN OS/appliance solution for a home server running under ESXi.
Constraints: Candidates must support ZFS, be relatively stable (I narrowed down my candidate list by choosing only solutions that other people have used successfully), and be able to run CrashPlan for data protection.

If you’re not familiar with ZFS (the Zettabyte File System), read ZFS – The Last Word on File Systems.

OS/Appliance candidates

OpenIndiana
Nexenta
Oracle Solaris
 
  1. Solaris 11 + napp-it.  The latest OS release from Oracle.  For this test the text installer was used.  Solaris 11 is free for development and personal use but a support license is required for commercial use.
  2. OpenIndiana Server 151a + napp-it.  The fork of OpenSolaris created when Oracle closed the project.  For this test the server installation ISO was used.  OpenIndiana is free to use with no restrictions.
  3. NexentaStor 3.1 Community Edition.  Based on OpenSolaris with a Debian-based userland.  Nexenta has built a great web GUI, making it easy to set up a ZFS system.  I used Nexenta’s built-in GUI instead of napp-it.  NexentaStor Community Edition is free to use with up to 18TB of net storage (a limit that would be pretty hard to hit for a small home SAN/NAS); after that a license is required.  Update 2/19: NexentaStor does require a license for anything more than development and evaluation.
 
Napp-It
 
napp-it is a web GUI that runs on top of OpenIndiana, Solaris 11, Nexenta, and a few others, written by Günther Alka (aka Gea).  It’s one command to install and you have a web interface to your OS for setting up your storage appliance.  I think Nexenta’s GUI is a little more polished and faster (you get stats, graphs, Ajax, etc.), but Gea’s napp-it is solid and very light-weight.  When joining a Windows Server 2008 R2 domain napp-it had no problems, but I had to drop to a command shell to tweak some settings to get Nexenta to join.  The ACLs were also a bit easier to get going in napp-it.  Overall I would say Nexenta is a little easier to use than napp-it (I usually don’t have to dig too far into the menus to find something in Nexenta), but napp-it won’t eat up as much memory.
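For reference, that one-command install is just a download piped to Perl, run as root on a plain OpenIndiana or Solaris install.  A sketch of what it looked like at the time (the exact URL and the web UI port may have changed since, so check napp-it.org for the current instructions):

  # download and run the napp-it installer as root
  wget -O - www.napp-it.org/nappit | perl
  # afterwards the web UI is typically reachable at http://<server-ip>:81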
 
 
Below is the testing environment:
 
 
Hardware Configuration
  • HP ProLiant Microserver
  • CPU: AMD Turion II Neo 1.5GHz
  • Memory: 8GB (2x4GB) PC3-10600 ECC
  • HD: 250GB Seagate 7200 RPM SATA (I really wanted to perform the test with more drives but with the prices of drives after the Thailand floods it wasn’t affordable.)
  • Everything else is stock configuration.

Virtualized Environment

  • VMware ESXi 5.0 installed on a 2GB SanDisk Cruzer
  • The SATA controller on the Microserver does not support pass-through on ESXi, so I wasn’t able to give the guest OS direct access to the drive.  Virtual disk files were used for the drives instead.
VM Configuration
Each OS was installed on the Seagate drive and given 4GB of memory, both CPU cores, 30GB of storage for the OS, and 40GB for the share/test drive.  During testing all other VMs were powered down.

 

Network
Gigabit network all around, using Cat-5e cables (sorry, I don’t have any Cat-6 cables) through a Cisco/Linksys E3000 (gigabit router) running TomatoUSB.  I shut down all programs that I thought would use a significant amount of network traffic.

 
NAS Share configuration
Shares were mounted via CIFS/SMB.  I did not test NFS since I don’t have a system outside the virtual server that can mount NFS shares.  CIFS shares used ZFS’s standard sync setting, the default compromise between safety and performance.
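On the Solaris-family systems a CIFS/SMB share is just a couple of ZFS properties.  A minimal sketch (the pool and dataset names are made up, and Solaris 11 changed the share syntax slightly, so treat this as illustrative rather than exact):

  # create a dataset and share it via the in-kernel CIFS/SMB server
  zfs create tank/share
  zfs set sharesmb=on tank/share
  # sync=standard is the default; shown only to make the tested setting explicit
  zfs set sync=standard tank/share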
 
SAN Share Configuration
The difference between a NAS and a SAN is that a NAS shares a file system, so multiple clients can access it at the same time, while a SAN shares volumes at the block level, which makes it more suitable for running a database on top of, but only one client can mount a SAN target (volume) at a time.  iSCSI targets were mounted from my Windows 7 desktop.  I set up the iSCSI volumes to use up to 128K blocks; after thinking about it I should have used 512-byte blocks since that’s what the OS would expect.  ZFS will automatically use the smallest block size it can to fit the file-system blocks, so the effective block size would have been whatever NTFS defaults to when formatting a 10GB drive.
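For context, on Solaris and OpenIndiana an iSCSI target is built from a ZFS volume (zvol) exported through the COMSTAR framework; NexentaStor and napp-it wrap these steps in their GUIs.  A rough sketch with made-up names (the 40GB size and 128K block size mirror the test setup):

  # create a 40GB zvol; -b sets the volume block size
  zfs create -V 40g -b 128k tank/iscsivol
  # enable the COMSTAR framework and the iSCSI target service
  svcadm enable stmf
  svcadm enable -r svc:/network/iscsi/target:default
  # register the zvol as a logical unit, make it visible, and create a target
  stmfadm create-lu /dev/zvol/rdsk/tank/iscsivol
  stmfadm add-view <LU-GUID-printed-by-create-lu>
  itadm create-target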
 
Testing Software
CrystalDiskMark 3.0.1 x64.  Tests set to repeat 3 times each.  Test size 100MB, test data set to random.

Tests Performed
  • Seq = Sequential Read/Write of 100MB.
  • 512K = Random Read/Write of 512K files (100MB worth).
  • 4K =  Random Read/Write of 4K files (100MB worth).
  • 4KQD32 =  Random Read/Write of 4K files (100MB worth) using a queue depth of 32.
 
ZFS Configurations
  • Standard – Typical ZFS share with nothing special.
  • Compression – LZJB Compression of each block.
  • Compress_Dedup – LZJB Compression of each block, and block level deduplication.
  • Gzip – Gzip-6 compression of each block.
  • Gzip9 – Gzip-9 compression of each block.
  • iSCSI, Compression – iSCSI target (block-level share) using LZJB compression.  (The ZFS settings behind these configurations are sketched below.)
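These configurations are all ordinary ZFS dataset properties.  A minimal sketch of how each would be set, using a hypothetical pool/dataset name:

  # Standard: a plain dataset, nothing special
  zfs create tank/test
  # Compression: LZJB compression of each block
  zfs set compression=lzjb tank/test
  # Compress_Dedup: LZJB compression plus block-level deduplication
  zfs set compression=lzjb tank/test
  zfs set dedup=on tank/test
  # Gzip: gzip at its default level (6)
  zfs set compression=gzip tank/test
  # Gzip9: gzip at maximum compression
  zfs set compression=gzip-9 tank/test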
Shortcomings with this test
  • My network should be on Cat-6, but I’m too cheap.
  • If my SATA controller supported pass-through with ESXi, ZFS would be able to use the disk cache, resulting in faster speeds.
  • I only had a single 7200RPM HD available; we could have seen different results (likely faster across the board) with 3 or 4 drives in a RAID-Z setup (see the sketch below), but that’s not going to happen until drive prices normalize.
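For what it’s worth, a small RAID-Z pool is a one-liner once the drives exist; a sketch with example device names:

  # single-parity RAID-Z across three disks (device names are illustrative)
  zpool create tank raidz c7t1d0 c7t2d0 c7t3d0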
 

Here are the test results.

 
Chart
 
 
Below are some specific OS Graphs (note that the scale changes with each one).
 
Nexenta Reads
Nexenta Writes
OpenIndiana Reads
OpenIndiana Writes
 
Solaris 11 Reads
Solaris 11 Writes
 
 
Pivot Data
 
All values are MB/s (CrystalDiskMark throughput); columns are NexentaStor, OpenIndiana, and Solaris 11.  The “Average” rows are the mean of the read and write results for each test.

Compress_Dedup        NexentaStor   OpenIndiana   Solaris 11
  Read     Seq             31.83         30.28        29.49
  Read     512K            32.34         30.5         33.33
  Read     4K               4.037         4.028        4.406
  Read     4KQD32           7.713         7.392        6.94
  Write    Seq             27.2          35.07        75.65
  Write    512K            37.48         22.59        67.54
  Write    4K               3.244         2.082        3.992
  Write    4KQD32           1.469         1.322        4.261
  Average  Seq             29.515        32.675       52.57
  Average  512K            34.91         26.545       50.435
  Average  4K               3.6405        3.055        4.199
  Average  4KQD32           4.591         4.357        5.6005

Compression           NexentaStor   OpenIndiana   Solaris 11
  Read     Seq             32.77         28.61        33.77
  Read     512K            31.85         31.24        30.3
  Read     4K               4.1072        3.88         4.576
  Read     4KQD32           8.035         4.582        7.892
  Write    Seq             48.9          73.21        76.56
  Write    512K            24.05         23.88        72.42
  Write    4K               2.499         2.763        4.022
  Write    4KQD32           1.997         2.816        4.667
  Average  Seq             40.835        50.91        55.165
  Average  512K            27.95         27.56        51.36
  Average  4K               3.3031        3.3215       4.299
  Average  4KQD32           5.016         3.699        6.2795

Gzip                  NexentaStor   OpenIndiana   Solaris 11
  Read     Seq             32.3          30.41        29.68
  Read     512K            32.07         30.43        34.5
  Read     4K               4.057         3.792        4.19
  Read     4KQD32           8.01          7.053        7.187
  Write    Seq             13.13         15.92        57.24
  Write    512K            12.81         30.7         44.96
  Write    4K               0.5           0.932        4.487
  Write    4KQD32           0.511         0.644        4.404
  Average  Seq             22.715        23.165       43.46
  Average  512K            22.44         30.565       39.73
  Average  4K               2.2785        2.362        4.3385
  Average  4KQD32           4.2605        3.8485       5.7955

Gzip-9                NexentaStor   OpenIndiana   Solaris 11
  Read     Seq             30.11         28.51        30.58
  Read     512K            32.27         31.46        31.42
  Read     4K               4.227         4.652        4.571
  Read     4KQD32           8.081         7.051        7.432
  Write    Seq             12.92         15.43        55.88
  Write    512K            14.84         16.65        37.9
  Write    4K               0.52          0.794        3.93
  Write    4KQD32           0.47          0.554        4.014
  Average  Seq             21.515        21.97        43.23
  Average  512K            23.555        24.055       34.66
  Average  4K               2.3735        2.723        4.2505
  Average  4KQD32           4.2755        3.8025       5.723

iSCSI, Compression    NexentaStor   OpenIndiana   Solaris 11
  Read     Seq             78.04         81.96        79.08
  Read     512K            71            73.9         68.94
  Read     4K               3.854         4.619        3.59
  Read     4KQD32          56.07         60.13        70.82
  Write    Seq             27.94         37.81        40.14
  Write    512K            23.5          45.85        46.35
  Write    4K               3.462         3.832        4.274
  Write    4KQD32          10.72          0.994       29.99
  Average  Seq             52.99         59.885       59.61
  Average  512K            47.25         59.875       57.645
  Average  4K               3.658         4.2255       3.932
  Average  4KQD32          33.395        30.562       50.405

Standard              NexentaStor   OpenIndiana   Solaris 11
  Read     Seq             29.93         31.53        32.84
  Read     512K            32.49         31.27        33.59
  Read     4K               4.114         4.619        4.687
  Read     4KQD32           8.168         7.482        7.634
  Write    Seq             50.17         11.81        60.99
  Write    512K            26.55          8.902       66.44
  Write    4K               4.05          4.28         4.501
  Write    4KQD32           3.121         4.465        4.431
  Average  Seq             40.05         21.67        46.915
  Average  512K            29.52         20.086       50.015
  Average  4K               4.082         4.4495       4.594
  Average  4KQD32           5.6445        5.9735       6.0325
 
 
 
