TomatoUSB as Internal DNS Server

I just found out TomatoUSB can be used to manage internal DNS.

In the Dnsmasq custom configuration box add:

Tomato dnsmasq settings screenshot
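The screenshot showed the actual entries, but for reference, internal DNS records in dnsmasq look something like this (the hostnames and addresses below are made-up examples, not the ones from the screenshot):

```
# answer queries for a name with an internal IP
address=/nas.home.lan/192.168.1.10
# or add an individual A record
host-record=server.home.lan,192.168.1.20
```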

That should have been in the documentation somewhere…

Installing CrashPlan on OpenIndiana

CrashPlan is a great way to back up your ZFS server; here’s a quick install recipe for OpenIndiana:

Download the Solaris CrashPlan client from CrashPlan’s download page.

Move the file to /var/spool/pkg then run:

# pkgadd

(follow the prompts, do what’s obvious; with no source argument pkgadd installs from /var/spool/pkg).

Set up the CrashPlan service:
# svccfg import /opt/sfw/crashplan/bin/crashplan.xml
# svcadm enable crashplan

Obviously you’ll want to move the backupArchive folder to your ZFS storage pool.  I like to move the entire crashplan folder to the pool and create a symbolic link to it so I’ve got a backup of the CP config files… 

# svcadm disable crashplan
# mkdir -p /tank/crashplan/opt/sfw
# cd /opt/sfw
# mv crashplan /tank/crashplan/opt/sfw/
# ln -s /tank/crashplan/opt/sfw/crashplan .
# svcadm enable crashplan

The Internet Is Down

Some of it anyway…
Packet Loss


Why does your computer slow down as it gets older?

One reason I don’t see addressed very often: your drive is too full…

A hard drive has a head that works like the needle on a record player, or the laser in a CD drive.  It has to position itself over a track, then wait for the sector holding your data to spin into position.  The nature of circles is that an inner track is shorter than an outer one, so it holds fewer sectors and passes under the head at a lower linear speed, even though both make the same number of revolutions per minute.  That means more data can be read from an outer track than from an inner track in the same period of time.  Drive manufacturers and operating systems know this, so data gets written from the outside in (and during defragmentation some operating systems try to move frequently used data onto the outer tracks).  Also, if all your data is on the outer tracks, the head doesn’t have to travel to the inner tracks, which means faster seek times.  Fill the drive up, though, and your newest data lands on the slow inner tracks.
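To put rough numbers on it, here’s a back-of-the-envelope sketch (the sector counts are made up for illustration; real drives use many recording zones between the two extremes):

```shell
# At a fixed RPM, sequential throughput scales with sectors per track.
# Hypothetical counts: an outer track holding twice the sectors of an
# inner track on the same platter.
rpm=7200
outer_sectors=1500   # assumed sectors on an outer track
inner_sectors=750    # assumed sectors on an inner track
# sectors passing under the head per second = sectors * (rpm / 60)
outer_rate=$(( outer_sectors * rpm / 60 ))
inner_rate=$(( inner_sectors * rpm / 60 ))
echo "outer track: $outer_rate sectors/s"
echo "inner track: $inner_rate sectors/s"
```

Same spindle speed, but half the sequential throughput once the head is stuck on the short inner tracks.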

Hard Drive Tracks

The good news is the fix is easy, just buy a new drive.  And while you’re at it make sure you’re getting a higher RPM drive than you had previously (the faster consumer drives are around 7200RPM).  Or you can avoid this problem entirely with a Solid State Drive (and get a different set of problems).

Who’s locking up those tables?

Here’s a simple query that reports SQL Server locks along with the program name, host, sql process id, and the user that went off to lunch in the middle of their transaction… 

select * from sys.dm_tran_locks l
join sys.dm_exec_sessions s on l.request_session_id = s.session_id
join sys.sysobjects o on l.resource_associated_entity_id = o.id
where l.resource_type = 'OBJECT'
-- note: resource_associated_entity_id holds an object id only for
-- OBJECT-level locks; page and row locks carry a hobt_id instead

Crashplan Deduplication slowing you down?

I’ve noticed that CrashPlan isn’t saturating my 400kbps uplink (Northland Cable… just barely faster than satellite) even though I’ve given it no upload restrictions.  I think the problem is CrashPlan’s de-duplication running too intense an algorithm for my mere AMD Phenom II black quad core processor to handle efficiently.

Setting data de-duplication to minimal allowed me to saturate that upload…
Crashplan Backup Settings
Maybe if I were on dialup, or had quad dual-core Xeon processors, I might get better speeds by bumping data de-dup back to full.
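De-duplication works by splitting the backup into blocks and hashing each one so duplicates only get uploaded once; all that hashing is what eats the CPU. A toy sketch of the general technique (CrashPlan’s actual algorithm is proprietary, so this is just an illustration):

```shell
# Split sample data into fixed-size blocks, hash each block, and count
# how many unique blocks would actually need to be uploaded.
dir=$(mktemp -d)
printf 'hellohellohelloworld' > "$dir/data"   # 4 five-byte blocks, 3 identical
split -b 5 "$dir/data" "$dir/blk_"
unique=$(md5sum "$dir"/blk_* | awk '{print $1}' | sort -u | wc -l)
echo "$unique of 4 blocks are unique"
rm -r "$dir"
```

Hashing every block before upload trades CPU for bandwidth, which is exactly the trade that falls apart on a slow processor with a slow-but-not-that-slow uplink.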

Crashplan can’t see your ZFS pool on Nexenta? Here’s how to fix it…

NexentaStor 3.1 uses an OpenSolaris kernel with a Debian userland, so neither the Solaris nor the Linux version of CrashPlan works out of the box.  But Chris Moates has an excellent post on getting CrashPlan running on Nexenta by combining files from the Linux and Solaris versions of CrashPlan.  This works, except that on later versions of NexentaStor CrashPlan can’t see your ZFS pool (which is probably what you are trying to back up).  I’m not sure why, but here’s how to add it manually:

Connect to the service using the GUI and select a random directory:


Then stop the service:

/etc/init.d/crashplan stop

Edit /usr/local/crashplan/conf/my.service.xml

And change the path of the directory you just selected to your pool’s path, e.g. /tank/ (the trailing slash is important).

Start the service back up:

/etc/init.d/crashplan start

Reconnect with the GUI and you’ll see your ZFS pool.



Solaris 11, OpenIndiana, and NexentaStor benchmarks

Purpose: Determine fastest small NAS/SAN OS/Appliance software solution for home running under ESXi.
Constraints: Candidates must support ZFS, be relatively stable (I narrowed down my candidate list by only choosing solutions that other people have used successfully), and be able to run CrashPlan for data protection.

If you’re not familiar with ZFS (Zettabyte File System), read ZFS – The Last Word on File Systems.

OS/Appliance candidates

  1. Solaris 11 + napp-it.  The latest OS release from Oracle.  For this test the text installer was used.  Solaris 11 is free for development and personal use but a support license is required for commercial use.
  2. OpenIndiana Server 151a + napp-it.  The fork of OpenSolaris created when Oracle closed the project.  For this test the server installation ISO was used.  OpenIndiana is free to use with no restrictions.
  3. NexentaStor 3.1 Community Edition.  Based on OpenSolaris with a Debian-based userland.  Nexenta has built a great web GUI making it easy to set up a ZFS system; I used Nexenta’s built-in GUI instead of napp-it.  NexentaStor Community Edition is free to use with up to 18TB of net storage (a limit that would be pretty hard to hit for a small home SAN/NAS).  Update 2/19: it does require a license for anything more than development and evaluation.
napp-it is a web GUI that runs on top of OpenIndiana, Solaris 11, Nexenta, and a few others, written by Günther Alka (aka Gea).  It’s one command to install and you have a web interface to your OS for setting up your storage appliance.  I think Nexenta’s GUI is a little more polished and faster (you get stats, graphs, Ajax, etc.), but Gea’s napp-it is solid and very lightweight.  When joining a WinServer 2008 R2 domain napp-it had no problems, but I had to drop to a command shell to tweak some settings to get Nexenta to join.  The ACLs were also a bit easier to get going in napp-it.  Overall I would say Nexenta is a little easier to use than napp-it (I usually don’t have to dig too far into the menus to find something in Nexenta), but napp-it won’t eat up as much memory.
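For reference, napp-it’s documented install really is a one-liner, run as root on a fresh OS (check napp-it.org for the current command before trusting this one):

```shell
# downloads and runs the napp-it installer (requires root and network)
wget -O - www.napp-it.org/nappit | perl
```

When it finishes, the web GUI should be reachable on port 81 of the server.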
Below is the testing environment:
HP Microserver

Hardware Configuration
  • HP ProLiant Microserver
  • CPU: AMD Turion II Neo 1.5GHz
  • Memory: 8GB (2x4GB) PC3-10600 ECC
  • HD: 250GB Seagate 7200 RPM SATA (I really wanted to perform the test with more drives but with the prices of drives after the Thailand floods it wasn’t affordable.)
  • Everything else is stock configuration.

Virtualized Environment

  • VMware ESXi 5.0 installed on a 2GB SanDisk Cruzer
  • The SATA controller on the Microserver does not support pass through on ESXi so I wasn’t able to give the OS direct access to the drive.  VM files were used for the drives.
VM Configuration
Each OS was installed on the Seagate drive and given 4GB of memory, 30GB of storage for the OS, 40GB for the share/test drive, and both CPU cores.  During testing all other VMs were powered down.

Gigabit network all around, using Cat-5e cables (sorry, I don’t have any Cat-6 cables) through a Cisco/Linksys E3000 (gigabit router) running TomatoUSB.  I shut down all programs that I thought would use a significant amount of network traffic.

NAS Share configuration
Shares were mounted via CIFS/SMB.  I did not test NFS since I don’t have a system outside the Virtual Server that can mount NFS shares.  CIFS shares were using standard sync, the default compromise between safety and performance.
SAN Share Configuration
The difference between a NAS and a SAN: a NAS shares a file system, and multiple clients can access it at the same time; a SAN shares volumes at the block level, which makes it more suitable for running a database on top of, but only one client can mount a SAN target (volume) at a time.  iSCSI targets were mounted from my Windows 7 desktop.  I set up the iSCSI pools to use up to 128K blocks; in hindsight I should have used 512-byte blocks since that’s what the OS would expect.  ZFS will automatically use the smallest block size it can to fit the file system’s blocks, so the effective block size would have been whatever NTFS defaults to when formatting a 10GB drive.
Testing Software
CrystalDiskMark 3.0.1 x64.  Tests set to repeat 3 times each.  Test size 100MB, test data set to random.

Tests Performed
  • Seq = Sequential Read/Write of 100MB.
  • 512K = Random Read/Write of 512K files (100MB worth).
  • 4K =  Random Read/Write of 4K files (100MB worth).
  • 4KQD32 =  Random Read/Write of 4K files (100MB worth) using a queue depth of 32.
ZFS Configurations
  • Standard – Typical ZFS share with nothing special.
  • Compression – LZJB Compression of each block.
  • Compress_Dedup – LZJB Compression of each block, and block level deduplication.
  • Gzip – Gzip-6 compression of each block.
  • Gzip9 – Gzip-9 compression of each block.
  • iSCSI, Compression – iSCSI target (block-level share) using LZJB compression.
Shortcomings with this test
  • My network should be on Cat-6, but I’m too cheap.
  • If my SATA-controller supported pass-through with ESXi, ZFS would use the disk cache resulting in faster speeds.
  • I only had a single 7200RPM HD available; we could have seen different results (likely faster across the board) with 3 or 4 drives in a RAID-Z setup, but that’s not going to happen until prices normalize.

Here are the test results.

Below are some specific OS Graphs (note that the scale changes with each one).
Nexenta Reads


Nexenta Writes


OpenIndiana Reads


OpenIndiana Writes



Solaris 11 Reads


Solaris 11 Writes
Pivot Data

All figures are MB/s as reported by CrystalDiskMark; columns are NexentaStor, OpenIndiana, and Solaris 11.  For each ZFS configuration, the first block of rows is the read runs, the second the write runs, and the third the average of the two.

Compress_Dedup        NexentaStor  OpenIndiana  Solaris 11
  Read     Seq        31.83        30.28        29.49
  Read     512K       32.34        30.5         33.33
  Read     4K         4.037        4.028        4.406
  Read     4KQD32     7.713        7.392        6.94
  Write    Seq        27.2         35.07        75.65
  Write    512K       37.48        22.59        67.54
  Write    4K         3.244        2.082        3.992
  Write    4KQD32     1.469        1.322        4.261
  Average  Seq        29.515       32.675       52.57
  Average  512K       34.91        26.545       50.435
  Average  4K         3.6405       3.055        4.199
  Average  4KQD32     4.591        4.357        5.6005

Compression           NexentaStor  OpenIndiana  Solaris 11
  Read     Seq        32.77        28.61        33.77
  Read     512K       31.85        31.24        30.3
  Read     4K         4.1072       3.88         4.576
  Read     4KQD32     8.035        4.582        7.892
  Write    Seq        48.9         73.21        76.56
  Write    512K       24.05        23.88        72.42
  Write    4K         2.499        2.763        4.022
  Write    4KQD32     1.997        2.816        4.667
  Average  Seq        40.835       50.91        55.165
  Average  512K       27.95        27.56        51.36
  Average  4K         3.3031       3.3215       4.299
  Average  4KQD32     5.016        3.699        6.2795

Gzip                  NexentaStor  OpenIndiana  Solaris 11
  Read     Seq        32.3         30.41        29.68
  Read     512K       32.07        30.43        34.5
  Read     4K         4.057        3.792        4.19
  Read     4KQD32     8.01         7.053        7.187
  Write    Seq        13.13        15.92        57.24
  Write    512K       12.81        30.7         44.96
  Write    4K         0.5          0.932        4.487
  Write    4KQD32     0.511        0.644        4.404
  Average  Seq        22.715       23.165       43.46
  Average  512K       22.44        30.565       39.73
  Average  4K         2.2785       2.362        4.3385
  Average  4KQD32     4.2605       3.8485       5.7955

Gzip-9                NexentaStor  OpenIndiana  Solaris 11
  Read     Seq        30.11        28.51        30.58
  Read     512K       32.27        31.46        31.42
  Read     4K         4.227        4.652        4.571
  Read     4KQD32     8.081        7.051        7.432
  Write    Seq        12.92        15.43        55.88
  Write    512K       14.84        16.65        37.9
  Write    4K         0.52         0.794        3.93
  Write    4KQD32     0.47         0.554        4.014
  Average  Seq        21.515       21.97        43.23
  Average  512K       23.555       24.055       34.66
  Average  4K         2.3735       2.723        4.2505
  Average  4KQD32     4.2755       3.8025       5.723

iSCSI, Compression    NexentaStor  OpenIndiana  Solaris 11
  Read     Seq        78.04        81.96        79.08
  Read     512K       71           73.9         68.94
  Read     4K         3.854        4.619        3.59
  Read     4KQD32     56.07        60.13        70.82
  Write    Seq        27.94        37.81        40.14
  Write    512K       23.5         45.85        46.35
  Write    4K         3.462        3.832        4.274
  Write    4KQD32     10.72        0.994        29.99
  Average  Seq        52.99        59.885       59.61
  Average  512K       47.25        59.875       57.645
  Average  4K         3.658        4.2255       3.932
  Average  4KQD32     33.395       30.562       50.405

Standard              NexentaStor  OpenIndiana  Solaris 11
  Read     Seq        29.93        31.53        32.84
  Read     512K       32.49        31.27        33.59
  Read     4K         4.114        4.619        4.687
  Read     4KQD32     8.168        7.482        7.634
  Write    Seq        50.17        11.81        60.99
  Write    512K       26.55        8.902        66.44
  Write    4K         4.05         4.28         4.501
  Write    4KQD32     3.121        4.465        4.431
  Average  Seq        40.05        21.67        46.915
  Average  512K       29.52        20.086       50.015
  Average  4K         4.082        4.4495       4.594
  Average  4KQD32     5.6445       5.9735       6.0325
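Each labeled pivot row appears to be the mean of the matching read and write runs; e.g. the Compress_Dedup sequential figure for NexentaStor:

```shell
# mean of the NexentaStor Compress_Dedup sequential read (31.83)
# and write (27.2) runs
awk 'BEGIN { printf "%.3f MB/s\n", (31.83 + 27.2) / 2 }'   # prints 29.515 MB/s
```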


Nearly five months later…

Finally finished my 189.7GB backup to CrashPlan’s servers.  Started uploading on July 25th and completed on December 17th.  Didn’t notice too much of an internet slowdown during the process, thanks to Tomato’s QoS rules prioritizing traffic.

Tomato Screenshot of Bandwidth Usage
Tomato Outbound QoS rules to limit CrashPlan
Outbound QoS rules




This is a great example of someone recognizing something we all hate and making it better.  We all despise our thermostats.  And for good reason.  A simple task like changing the temperature shouldn’t be very confusing, but the thermostat designers have made it complicated.

My first apartment had this Energy Star programmable thermostat.  I think you had to flip the far-right switch to heat or cool, the next switch to auto, turn the knob to “run”, press up or down until the desired temperature displayed, then press “hold”, which turns the air on and causes an “Override” message to appear.  “Override”, as far as I can tell, means this setting will be overridden in about an hour and will have to be set again.

When we bought a house that came with an electromechanical thermostat I thought it was the best thing in the world.  Simple to program, it sure beat the digital thermostat.  But if you don’t have a consistent schedule (say you take a day off, or have different climate control needs on the weekends) it can be a little annoying to reprogram for exceptions.

Now there’s a company called Nest Labs, started by a former Apple employee who hated his confusing thermostat, that’s come out with a better one.  Really, all you need to do is turn it up and down; it learns from your patterns and starts adjusting the temperature itself (you can always manually override it).  Turn the heat down before you go to bed and it learns to turn itself down automatically.

It connects to your Wi-Fi network so you can adjust the temperature and view reports from your computer or an iPhone/Android app…