I switched to Duplicati for Windows Backups and Restic for Linux Servers

So long, CrashPlan! After I had used it for 5 years, CrashPlan decided, with less than a day's notice, to delete many of the files I had backed up. Once again, the deal got altered. Deleting files with no advance notice is something I might expect from a totalitarian leader, but it isn't acceptable for a backup service.

Darth Vader altering the deal
I am altering the deal. Pray I don’t alter it any further.

CrashPlan used to be by far the best offering for backups, but those days are gone. I needed to find something else. To start with, I noted my requirements for a backup solution:

  1. Fully Automated. I am not going to remember to do something like take a backup on a regular basis. Between the demands from all aspects of life, I already have trouble doing the thousands of things I should be doing; I don't need another thing to remember.
  2. Should alert me on failure. If my backups start failing, I want to know. I don't want to have to check on the status periodically.
  3. Efficient with bandwidth, time, and price.
  4. Protect against my backup threat model (below).
  5. Not Unlimited. I’m tired of “unlimited” backup providers like CrashPlan not being able to handle unlimited and going out of business or altering the deal. I either want to provide my own hardware or pay by the GB.

Backup Strategy

Relayed Backups

This also gave me a good opportunity to review my backup strategy. I had been using a strategy where all local and cloud devices backed up to a NAS on my network, and those backups were then relayed to a remote (formerly CrashPlan) backup service. The other model is a direct backup. I like direct backups a little better because, living in North Idaho, I don't have a good upload speed; in several cases my remote backups from the NAS would never complete because I didn't have enough bandwidth to keep up.

Now if Ting could get permission to run fiber under the railroad tracks and to my house I’d have gigabit upload speed, but until then the less I have to upload from home the better.

Direct Backups

Backup Threat Model

It’s best practice to think through all the threats you are protecting against. If you don’t do this exercise you may not think about something important… like keeping your only backup in the same location as your computer. My backup threat model (these are the threats which my backups should protect against):

  1. Disasters. If a fire sweeps through North Idaho burning every building but I somehow survive, I want my data, so I must have offsite backups in a different geographic location. We can assume that all keys and hardware tokens will be lost in a disaster, so those must not be required to restore. At least one backup should be in a geographically separate area from me.
  2. Malware or ransomware. Must have an unavailable or offline backup.
  3. Physical theft or data leaks. Backups must be encrypted.
  4. Silent Data Corruption. Data integrity must be verified regularly and protected against bitrot.
  5. Time. I do not ever want to lose more than a day's worth of work, so backups must run on a daily basis and must not consume too much of my time to maintain.
  6. Fast and easy targeted restores. I may need to recover an individual file I have accidentally deleted.
  7. Accidental Corruption. I may have a file corrupted or accidentally overwrite it and not realize it until a week later or even a year later. Therefore I need versioned backups so I can restore a file from points in time going back several years.
  8. Complexity. If something were to happen to me, the workstation backups must be simple enough that Kris would be able to get to them. It’s okay if she has to call one of my tech friends for help, but it should be simple enough that they could figure it out.
  9. Non-payment of backup services. Backups must persist on their own in the event that I am unaware of failed payments or unable to pay for backups. If I'm traveling and my credit card gets compromised, I don't want to be left without backups.
  10. Bad backup software. The last thing you need is your backup software corrupting all your data because of some bug (I have seen this happen with rsync) so it should be stable. Looking at the git history I should be seeing minor fixes and infrequent releases instead of major rewrites and data corruption bug fixes.
Raspberry Pi and 4TB drive on wooden shelf
Raspberry Pi 4TB WD Backup

My friend Meredith had contacted me about swapping backup storage. We're geographically separated, so that covers local disasters. So that's what we did: each of us set up an SSH/SFTP server for the other to back up to. I had plenty of space in my Proxmox environment, so I created a VM for him and put it in an isolated DMZ. He had a Raspberry Pi and bought a new 4TB Western Digital external USB drive that he set up at his house for me.
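For reference, a backup-only SFTP account can be locked down in sshd_config. This is just a minimal sketch; the user name and chroot path are made up for illustration, not my actual setup:

    # /etc/ssh/sshd_config on the machine receiving backups (sketch only;
    # user name and chroot path are examples)
    Match User backupbuddy
        ForceCommand internal-sftp
        ChrootDirectory /srv/backups/backupbuddy
        AllowTcpForwarding no
        X11Forwarding no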

Duplicati Backup Solution for Workstations

For Windows desktops I chose Duplicati 2. It also works on Mac and Linux, but for my purposes I only evaluated Windows.

Duplicati screenshot of main page

Duplicati has a nice local web interface. It's simple and easy to use. Adding a new backup job is straightforward and gives plenty of options for my backup sets and destinations (this allows me to back up not only to a remote SFTP server, but also to any cloud service such as Backblaze B2 or Amazon S3).
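The same kind of job can also be driven from Duplicati's command-line tool. I configure mine in the GUI, so treat this as a rough sketch: the URL, paths, and credentials are placeholders, and the exact option names are worth checking against Duplicati.CommandLine.exe help:

    REM rough sketch only -- placeholders throughout
    Duplicati.CommandLine.exe backup ^
      "ssh://backup.example.com/backups/desktop" ^
      "C:\Users\Ben\Documents" "C:\Users\Ben\Pictures" ^
      --auth-username=ben --auth-password=sftp-password ^
      --passphrase=encryption-passphrase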

Animation of setting up a duplicati backup job

Duplicati 2 has status icons in the system tray that quickly indicate any issues. On the first few runs I was seeing a red icon indicating the backup had an error. Looking at the log, it was because I had left programs open that were locking files it was trying to back up. I like that it warns about this instead of silently not backing up those files.

Green play icon
Grey paused icon
Black idle icon
Red error icon

Green=In Progress, Grey=Paused, Black=Idle, Red=Error on the last backup.

Duplicati 2 seems to work well. I have tested restores and they come back pretty quickly. I can back up to my NAS as well as to a remote server and a cloud service.

There are two things I don't care for about Duplicati 2:

  1. It is still labeled Beta. That said it is a lot more stable than some GA software I’ve used.
  2. There are too many projects with similar names. Duplicati, Duplicity, Duplicacy. It’s hard to keep them straight.

Other considerations for workstation backups:

  • rsync – no GUI
  • restic – no GUI
  • Borg backup – Windows not officially supported
  • Duplicacy – the license only allows personal use

Restic Backup for Linux Servers

I settled on Restic for Linux servers. I have used Restic on several small projects over the years and it is a solid backup program. Once the environment variables are set, it's one command to back up or restore, which can be run from cron.
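In practice it looks roughly like this (the repository path and password file are examples):

    # point restic at the repository once (example paths)
    export RESTIC_REPOSITORY="sftp:backup@backup.example.com:/backups/$(hostname)"
    export RESTIC_PASSWORD_FILE=/root/.restic-password

    restic backup /etc /home /var/www             # one command to back up
    restic restore latest --target /tmp/restore   # one command to restore the latest snapshot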

Screenshot of restic animation

It's also easy to mount any point-in-time snapshot as a read-only filesystem.
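For example (the mount point is arbitrary, and this needs FUSE available):

    mkdir -p /mnt/restic
    restic mount /mnt/restic
    # every snapshot shows up as a read-only directory you can browse or copy from
    ls /mnt/restic/snapshots/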

Borg backup came in pretty close to Restic; the main reason I chose Restic is its support for backends other than SFTP. The cheapest storage these days is object storage such as Backblaze B2 and Wasabi. If Meredith's server goes down, with Borg backup I'd have to redo my backup strategy entirely. With Restic I have the option to quickly add a new cloud backup target.

Looking at my threat model there are two potential issues with Restic:

  1. A compromised server would have access to delete its own backups. This can be mitigated by storing the backups on a VM that is backed by storage configured with periodic immutable ZFS snapshots (see the sketch after this list).
  2. Because restic uses a push model instead of a pull model, a compromised server would also have access to other servers' backups, increasing the risk of data exfiltration. At the cost of some deduplication benefits, this can be mitigated by setting up one backup repository per host, or at the very least by creating separate repos for groups of hosts (e.g. one restic repo set for Minecraft servers and a separate restic repo for web servers).
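A minimal sketch of the first mitigation, run from cron on the backup storage host (the dataset name is an example): ZFS snapshots are read-only, and a hold keeps them from being destroyed by anything short of deliberately releasing the hold on the storage host itself.

    # on the backup storage host, not on the clients (dataset name is an example)
    SNAP="tank/backups@daily-$(date +%Y-%m-%d)"
    zfs snapshot "$SNAP"
    zfs hold keep "$SNAP"   # the hold must be released before the snapshot can be destroyed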

Automating Restic Deployment

Obviously it would be ridiculous to configure 50 servers by hand. To automate this I used two Ansible Galaxy roles. I created https://galaxy.ansible.com/ahnooie/generate_ssh_keys which automatically generates SSH keys and copies them to the restic backup target. The second role, https://galaxy.ansible.com/paulfantom/restic, automatically installs restic and configures a backup job on each server to run from cron.
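Both roles install from Galaxy in the usual way:

    ansible-galaxy install ahnooie.generate_ssh_keys paulfantom.restic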

Utilizing the above roles, I put together an Ansible playbook to configure restic backups across all my servers, setting things up so that each server is backed up once a day at a random time.
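A minimal sketch of that kind of playbook is below. The variable names passed to the roles are hypothetical, so check each role's README for the real ones:

    # sketch only -- the role variable names below are hypothetical
    - hosts: all
      become: true
      vars:
        backup_target: backup@backup.example.com                                          # hypothetical
        restic_repository: "sftp:{{ backup_target }}:/backups/{{ inventory_hostname }}"   # hypothetical
        # spread the daily runs out by deriving a pseudo-random time per host
        restic_cron_hour: "{{ 24 | random(seed=inventory_hostname) }}"
        restic_cron_minute: "{{ 60 | random(seed=inventory_hostname) }}"
      roles:
        - ahnooie.generate_ssh_keys   # generate an SSH key and copy it to the backup target
        - paulfantom.restic           # install restic and set up the cron job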

Manual Steps

I’ve minimized manual steps but some still must be performed:

  1. Backup to cold storage. This means archiving everything to an external hard drive and then leaving it offline. I do this manually once a year on World Backup Day and also after major events (e.g. doing taxes, taking awesome photos, etc.). This is my safety net in case online backups get destroyed.
  2. Test restores. I do this once a year on World Backup Day.
  3. Verify backups are running. I have a reminder set to do this once a quarter. With Duplicati I can check in the web UI, and with a single Restic command I can get a list of hosts with the most recent backup date for each.
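The check on the Restic side is something like the following (exact flags vary a bit between restic versions; newer versions also accept --latest 1 to show only the most recent snapshot per host):

    # list snapshots grouped by host to spot anything that has stopped backing up
    restic snapshots --group-by host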

Cast your bread upon the waters,
for you will find it after many days.
Give a portion to seven, or even to eight,
for you know not what disaster may happen on earth.

Solomon
Ecclesiastes 11:1-2 ESV

FreeNAS vs. OmniOS / Napp-It

freenas      OmniOS_logo_200px

2015-01-07: I've updated this post to reflect changes in FreeNAS 9.3.

I've been using OpenIndiana since late 2011, and switched to OmniOS in 2013. Lately I started testing FreeNAS. What drove me to do this is that I use CrashPlan to back up my pool, but Code 42 recently announced they'll be discontinuing Solaris support for CrashPlan, so I needed to start looking for an alternative OS or an alternative backup solution. I decided to look at FreeNAS because it has a CrashPlan plugin that runs in a jail using Linux emulation. After testing it out for a while I am likely going to stay on OmniOS, since it suits my needs better, and instead switch out CrashPlan for ZnapZend as my backup solution. But after running FreeNAS for a few months, here are my thoughts on both platforms and their strengths and weaknesses as a ZFS storage server.

Update 2015-01-07: After a lot of testing, ZnapZend ended up not working for me. This is not its fault, but because I have limited bandwidth the snapshots don't catch up and it gets further and further behind, so for now I'm continuing with CrashPlan on OmniOS. I am also testing FreeNAS and may consider a switch at some point.

CIFS / SMB Performance for Windows Shares

FreeNAS has a newer implementation of SMB, supporting SMB3; I think OmniOS is still at SMB1. FreeNAS can actually function as an Active Directory Domain Controller.

OmniOS is slightly faster: writing a large file over my LAN gets around 115MBps vs 98MBps on FreeNAS. I suspect this is because OmniOS runs NFS and SMB at the kernel level while FreeNAS runs SMB (Samba) in user space. I tried changing the FreeNAS protocol to SMB2, and even SMB1, but couldn't get past 99MBps. This is on a Xeon E3-1240V3, so there's plenty of CPU power; Samba on FreeNAS just can't keep up.

CIFS / SMB Snapshot Integration with Previous Versions

Previous Versions snapshot integration with Windows is far superior in OmniOS. I always use multiple snapshot jobs to do progressive thinning of snapshots. So for example I'll set up monthly snaps with a six-month retention, weekly with a two-month retention, daily with two weeks, hourly with one week, and every 5 minutes for two days. FreeNAS will let you set up the snap jobs this way, but Windows Previous Versions will only show the snapshots from one of the snap jobs (so you may see your every-5-minute snaps but you can't see the hourly or weekly snaps). OmniOS handles this nicely. As a bonus, Napp-It has an option to automatically delete empty snapshots sooner than their retention expiration, so I don't see them in Previous Versions unless some data actually changed.

previous_versions

Enclosure Management

Both platforms struggle here, though FreeNAS has a bit of an edge… probably the best thing to do is to write down the serial number of each drive along with its slot number. In FreeNAS drives are given device names like da0, da1, etc., but unfortunately the numbers don't seem to correspond to anything and they can even change between reboots. FreeNAS does have the ability to label drives, so you could insert one drive at a time and label each with the slot it's in.

OmniOS drives are given names like c3t5000C5005328D67Bd0 which isn’t entirely helpful.

For LSI controllers the sas2ircu utility (which works on FreeBSD or Solaris) will map the drives to slots.
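For example (controller index 0 is simply the first adapter):

    sas2ircu LIST         # list LSI controllers and their index numbers
    sas2ircu 0 DISPLAY    # show enclosure/slot, serial number, and state for each attached drive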

Fault Management

The ZFS fault management daemon will automatically replace a failed drive with a hot spare… but it hasn’t been ported to FreeBSD yet so FreeNAS really only has warm spare capability.  Update: FreeNAS added hot spare capability on Feb 27, 2015.   To me this is a minor concern… if you’re going to use RAID-Z with a hot spare why not just configure the pool with RAID-Z2 or RAID-Z3 to begin with?  However, I can see how the fault management daemon on OmniOS would reduce the amount of work if you had several hundred drives and failures were routine.

SWAP issue on FreeNAS

While I was testing I actually had a drive fail (this is why 3-year-old Seagate drives are great to test with) and FreeNAS crashed! The NFS pool dropped out from under VMware. When I looked at the console I saw "swap_pager: I/O error – pagein failed". I had run into FreeNAS Bug 208, which was closed a year ago but never resolved. The default setting in FreeNAS is to create a 2GB swap partition on every drive, which acts like striped swap space (I am not making this up, this is the default setting). So if any one of the drives fails it can take FreeNAS down. The argument from FreeNAS is that you shouldn't be using swap, and perhaps that's true, but I had a FreeNAS box with 8GB of memory running only one jail with CrashPlan, and a single drive failure brought my entire system down. That's not an acceptable default setting. Fortunately there is a way to disable automatically creating swap partitions on FreeNAS; it's best to disable the setting before initializing any disks.

In my three years of running an OpenSolaris / Illumos based OS, I've never had a drive failure bring the system down.

Running under VMware

FreeNAS is not supported running under a VM but OmniOS is. In my testing both OmniOS and FreeNAS work well under VMware when following the best practice of passing an LSI controller flashed to IT mode through to the VM using VT-d. I did find that OmniOS does a lot better virtualized on slower hardware than FreeNAS. On an Avoton C2750, FreeNAS performed well on bare metal, but when I virtualized it using VMDKs on drives instead of VT-d, FreeNAS suffered in performance while OmniOS performed quite well under the same scenario.

Both platforms have VMXNET3 drivers; neither has a paravirtual SCSI driver.

Encryption

Unfortunately Oracle did not release the source for Solaris 11, so there is no encryption support on OpenZFS directly.

FreeNAS can take advantage of FreeBSD's GELI-based encryption. FreeBSD's implementation can use the AES instruction set; the last time I tested Solaris 11 the AES instruction set was not used, so FreeBSD/FreeNAS probably has the fastest encryption implementation for ZFS.

There isn’t a good encryption option on OmniOS.

ZFS High Availability

Neither system supports ZFS high availability out of the box. OmniOS can use a third-party tool like RSF-1 (paid) to accomplish this. The commercially supported TrueNAS uses RSF-1, so it should also work in FreeNAS.

ZFS Replication & Backups

FreeNAS has the ability to easily set up replication as often as every 5 minutes, which is a great way to have a standby host to fail over to. Replication can be done over the network. If you're going to replicate over the internet I'd say you want a small data set or a very fast connection; I ran into issues a couple of times where the replication got interrupted and needed to start over from scratch. On OmniOS, Napp-It does not offer a free replication solution (there is a paid replication feature), but there are also numerous free ZFS replication scripts that people have written, such as ZnapZend.
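A ZnapZend plan looks roughly like this; the retention schedules and destination here are examples patterned after its documentation, not my actual configuration:

    # keep local snapshots and replicate them to a remote pool over SSH (example values)
    znapzendzetup create --recursive \
      SRC '7d=>1h,30d=>1d' tank/data \
      DST:a '7d=>1h,30d=>1d,1y=>1w' root@backuphost:backup/data
    svcadm enable znapzend   # on OmniOS it runs as an SMF service (service name may vary)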

I did get the CrashPlan plugin to work under FreeNAS; however, I found that after a reboot the CrashPlan jail sometimes wouldn't auto-mount my main pool, so it ended up not being a reliable enough solution for me to be comfortable with. I wish FreeNAS made it so that it wasn't in a jail.

Memory Requirements

FreeNAS is a little more memory-hungry than OmniOS. For my 8TB pool the bare minimum for FreeNAS is 8GB, while OmniOS is quite happy with 4GB, although I run it with 6GB to give it a little more ARC.

Hardware Support

FreeNAS supports more hardware than OmniOS.  I generally virtualize my ZFS server so it doesn’t matter too much to me but if you’re running bare metal and on obscure or newer hardware there’s a much better chance that FreeNAS supports it.  Also in 9.3 you have the ability to configure IPMI from the web interface.

VAAI (VMware vSphere Storage APIs – Array Integration)

FreeNAS now has VAAI support for iSCSI; OmniOS has no VAAI support. As of FreeNAS 9.3 and Napp-It 0.9f4, both control panels have the ability to enable VMware snapshot integration / ESXi hot snaps. The way this works is that before every ZFS snapshot is taken, the control panel has VMware snapshot all the VMs, then the ZFS snapshot is taken, then the VMware snapshots are released. This is really nice and allows for proper, consistent snapshots.

 

GUI

The FreeNAS GUI looks a little nicer and is probably a little easier for a beginner. The background of the screen turns red whenever you're about to do something dangerous. I found you can set up just about everything from the GUI, whereas I had to drop into the command line more often with OmniOS. The FreeNAS web interface seems to hang for a few seconds from time to time compared to Napp-It, but nothing major. I believe FreeNAS will have an asynchronous GUI in version 10.

One frustration I have with FreeNAS is that it doesn't quite do things in a way that is compatible with the CLI. For example, if you create a pool via the CLI, FreeNAS doesn't see it; you actually have to import it using the GUI to use it there. Napp-It is essentially an interface that runs CLI commands, so you can seamlessly switch back and forth between managing things on the CLI and in Napp-It. This is a difference in philosophy. Napp-It is just a web interface meant to run on top of an OS, whereas FreeNAS is more than just a webapp on top of FreeBSD; FreeNAS is its own OS.

I think most people experienced with the zfs command line and Solaris are going to be a little more at home with Napp-It’s control panel, but it’s easy enough to figure out what FreeNAS is doing.  You just have to be careful what you do in the CLI.

On both platforms I found I had to switch into the CLI from time to time to do things right (e.g. FreeNAS can't set sync=always from the GUI, and Napp-It can't set up networking).
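The sync property itself is a one-liner from the shell on either platform (the zvol name below is an example):

    zfs set sync=always tank/vmware-iscsi   # force synchronous writes for the iSCSI zvol
    zfs get sync tank/vmware-iscsi          # confirm the setting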

As far as managing a ZFS file system, both have what I want: email alerts when there's a problem, scheduling for data scrubs, snapshots, etc.

FreeNAS has better security: it's much easier to set up an SSL cert on the management interface, and in fact you can create an internal CA to sign certificates from the GUI. Security updates are also easier to manage from the web interface in FreeNAS.

Community

FreeNAS and OmniOS both have great communities. If you post anything at HardForum, chances are you'll get a response from Gea, and he's usually quite helpful. Post anything on the FreeNAS forums and Cyberjock will tell you that you need more RAM and that you'll lose all your data. There is a lot of info on the FreeNAS forums, and the FreeNAS Redmine project is open so you can see all the issues; it's a great way to see what bugs and feature requests are out there and when they were or will be fixed. OmniOS has an active OmniOS Discuss mailman list, and Gea, the author of Napp-It, is active on various forums. He has answered my questions on several occasions over at HardForum's Data Storage subforum. I've found the HardForum community a little more helpful… I've always gotten a response there, while several questions I posted on the FreeNAS forums went unanswered.

Documentation

FreeNAS documentation is great, like FreeBSD's. Just about everything is in the FreeNAS Guide.

OmniOS isn't as organized. I found some howtos here, but they're not nearly as comprehensive as the FreeNAS documentation. Most of what I find for OmniOS I find on forums or the Napp-It site.

Mirrored ZFS boot device / rpool

OmniOS can boot to a mirrored ZFS rpool.

FreeNAS previously did not have a way to mirror the ZFS boot device. FreeBSD does have this capability, but it turns out FreeNAS is built on NanoBSD. The only way I knew of to get redundancy on the FreeNAS boot device was to set it up on a hardware RAID card.

Update: FreeNAS 9.3 can now install to a mirrored ZFS rpool!

Features / Plugins / Extensions

Napp-It’s extensions include:

  • AMP (Apache, MySQL, PHP stack)
  • Baikal CalDAV / CardDAV Server
  • Logitech MediaServer
  • MediaTomb (DLNA / UPnP server)
  • Owncloud (Dropbox alternative)
  • PHPvirtualbox (VirtualBox interface)
  • Pydio Sharing
  • FTP Server
  • Serviio Mediaserver
  • Tine Groupware

FreeNAS plugins:

  • Bacula (Backup Server)
  • BTSync (Bittorrent Sync)
  • CouchPotato (NZB and Torrent downloader)
  • CrashPlan (Backup client/server)
  • Cruciblewds (Computer imaging / cloning)
  • Firefly (media server for Roku SoundBridge and Apple iTunes)
  • Headphones (automatic music downloader for SABnzbd)
  • HTPC-Manager
  • LazyLibrarian (follow authors and grab metadata for digital reading)
  • Maraschino (web interface for XBMC HTPC)
  • MineOS (Minecraft control panel)
  • Mylar (Comic book downloader)
  • OwnCloud (Dropbox alternative)
  • PlexMediaServer
  • s3cmd
  • SABnzbd (Binary newsreader)
  • SickBeard (PVR for newsgroup users)
  • SickRage (Video file manager for TV shows)
  • Subsonic (music streaming server)
  • Syncthing (Open source cluster synchronization)
  • Transmission (BitTorrent client)
  • XDM (eXtendable Download Manager)

All FreeNAS plugins run in a jail, so you must mount the storage that service will need inside the jail… this can be kind of annoying, but it does allow for some nice security – for example, CrashPlan can mount the storage you want to back up as read-only.

Protocols and Services

Both systems offer a standard stack of AFP, SMB/CIFS, iSCSI, FTP, NFS, rsync, and TFTP.
FreeNAS also has WebDAV and a few extra services like Dynamic DNS, LLDP, and UPS (the ability to connect to a UPS unit and shut down automatically).

Performance Reporting and Monitoring

Napp-It does not have reports and graphs in the free version.  FreeNAS has reports and you can look back as far as you want to see historical performance metrics.

freenas_stats

As a Hypervisor

Both systems are very efficient at running guests of the same OS. OmniOS has Zones, while FreeNAS can run FreeBSD jails. OmniOS also has KVM, which can be used to run any OS. I suspect that FreeNAS 10 will have bhyve. Both can also run VirtualBox.

Stability vs Latest

Both systems are stable; OmniOS/Napp-It seems to be the more robust of the two. The OmniOS LTS updates are very minimal, mostly security updates and a few bug fixes. Infrequent and minimal updates are what I like to see in a storage solution.

FreeNAS is pushing a little closer to the cutting edge. They have frequent updates pushed out – sometimes I think they are too frequent to have been thoroughly tested. On the other hand, if you come across an issue or feature request in FreeNAS and report it, chances are they'll get it into the next release pretty quickly.

Because of this, OmniOS is behind FreeNAS on some things like NFS and SMB protocol versions, VAAI support for iSCSI, etc.

I think this is an important consideration. With FreeNAS you'll get newer features and later technologies, while OmniOS LTS is generally the better platform for stability. The commercial TrueNAS solution is also going to be robust. For FreeNAS you could always pick a stable version and not update very often – I really wish FreeNAS had an LTS, or at least a slower-moving stable branch that maybe only did quarterly updates except for security fixes.

ZFS Integration

OmniOS has a slight edge on ZFS integration. As I mentioned earlier, OmniOS has multi-tiered snapshot integration into the Windows Previous Versions feature, where FreeNAS can only pick one snapshot frequency to show up there. Also, in OmniOS the NFS and SMB shares are stored as properties on the datasets, so you can export the pool, import it somewhere else, and the shares stay with the pool so you don't have to reconfigure them.
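For example (dataset names are examples):

    # on OmniOS the share configuration lives on the dataset itself,
    # so it follows the pool through an export/import
    zfs set sharesmb=on tank/data
    zfs set sharenfs=on tank/vmware
    zfs get sharesmb,sharenfs tank/data tank/vmware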

Commercial Support

OmniOS offers Commercial Support if you want it.

iX Systems offers supported TrueNAS appliances.

Performance

On an all-in-one setup, I set up VMware ESXi 6.0 with a virtual storage network and tested FreeNAS and OmniOS using iSCSI and NFS. On all tests the MTU is set to 9000 on the storage network, and compression is set to LZ4. iSCSI volumes are sparse ZVOLs. I gave the ZFS server 2 cores and 8GB memory, and the guest VM 2 cores and 8GB memory. The guest VM is Windows 10 running Crystal Benchmark.

Environment:

  • Supermicro X10SL7-F with LSI 2308 HBA flashed to IT firmware and passed to ZFS server via VT-d (I flashed the P19 firmware for OmniOS and then re-flashed to P16 for FreeNAS).
  • Intel Xeon E3-1240v3 3.40Ghz.
  • 16GB ECC Memory.
  • 6 x 2TB Seagate 7200 drives in RAID-Z2
  • 2 x 100GB DC S3700s striped for ZIL/SLOG.  Over-provisioned to 8GB.

Latest stable updates on both operating systems:

FreeNAS 9.3  update 201503200528
OmniOS r151012 omnios-10b9c79

In Crystal Benchmark I ran 5 runs each of the 4000MB, 1000MB, and 50MB size tests; the reported numbers are the average of those runs.

On all tests every write was going to the ZIL / SLOG devices. On NFS I left the default sync=standard (which results in every write being a sync with ESXi). On iSCSI I set sync=always; ESXi doesn't honor sync requests from the guest with iSCSI, so it's not safe to run with sync=standard.

Update: 7/30/2015: FreeNAS has pushed out some updates that appear to improve NFS performance.  See the newer results here: VMware vs bhyve Performance Comparison.  Original results below.

Sequential Read MBps

seqrd

Sequential Write MBps

seqwr2

Random Read 512K MBps

rndrd512

Random Write 512K MBps

rndwr512

Random Read 4K IOPS

randrd4kqd1

Random Write 4K IOPS

rndwr4kqd1

Random Read 4K QD=32 IOPS

rdndrd4kqd32

Random Write 4K QD=32 IOPS

rndwr4kqd32

Performance Thoughts

So it appears from these unscientific benchmarks that OmniOS on NFS is the fastest configuration, while iSCSI performs pretty similarly on both FreeNAS and OmniOS depending on the test. One other thing I should mention, which doesn't show up in the tests, is latency. With NFS I saw latency on the ESXi storage as high as 14ms during the tests, while latency never broke a millisecond with iSCSI.

One major drawback to my benchmarks is that only one guest is hitting the storage. It would be interesting to repeat the test with several VMs accessing the storage simultaneously; I expect the results may be different under heavy concurrent load.

I chose a 64K iSCSI block size because the larger blocks result in a higher LZ4 compression ratio. I did several quick benchmarks and found 16K and 64K performed pretty similarly; 16K did perform better at random 4K write QD=1, but otherwise 64K was close to 16K depending on the test. I saw a significant drop in random performance at 128K. Once again, under different scenarios this may not be the optimal block size for all types of workloads.

Restore Zpool with CrashPlan

I messed up my zpool with a stuck log device, and rather than try to fix it I decided to wipe out the zpool and restore the ZFS datasets using CrashPlan.

CrashPlan Restore

It took CrashPlan roughly a week to restore all my data (~1TB), which is making me consider local backups. All permissions came back intact. Dirty VM backups seemed to boot just fine (I also have daily clean backups using GhettoVCB just in case, but I didn't need them). One nice thing is I could prioritize certain files or folders on the restore by creating multiple restore jobs. I had a couple of high-priority VMs, then I wanted the kid movies to come in next, and then the rest of the data and VMs.

Installing CrashPlan on OmniOS

OmniOS CrashPlan

I'm in the process of switching my ZFS server from OpenIndiana to OmniOS, mainly because OmniOS is designed only to be a server system so it's a little cleaner, it has a stable production release that's commercially supported, and it has become Gea's OS of choice for Napp-It. One of the last things I had to do was get CrashPlan up and running, so here's a quick little howto…

Unfortunately, I got the error below:

So I went to look at the checkinstall script…

I'm not entirely sure how I fixed it. I modified the checkinstall script to look for java in /usr/java/bin, but then when I ran pkgadd the CrashPlan installer refused to run because it detected the file had been modified, so I undid my change, re-ran pkgadd, and it worked…

Or, if you have a secure network, you can change serviceHost in /opt/sfw/crashplan/conf from 127.0.0.1 to 0.0.0.0, and then on your client change serviceHost in C:\Program Files\CrashPlan\conf\ui.properties to your OmniOS IP address.
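On the client side that's a one-line change (the address below is an example; the port shown is CrashPlan's default service port):

    # C:\Program Files\CrashPlan\conf\ui.properties on the Windows client
    # (example address -- use your OmniOS server's IP)
    serviceHost=192.168.1.50
    servicePort=4243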
I also like to move my CrashPlan install and config onto the main pool where all my storage is… I had already created a dataset called /tank/crashplan for this.
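The general idea is something like the following sketch; the SMF service name is an assumption, so adjust it to whatever the CrashPlan installer registered on your system:

    # sketch only -- "crashplan" as the SMF service name is an assumption
    svcadm disable crashplan                   # stop the service first
    cp -rp /opt/sfw/crashplan/. /tank/crashplan/
    rm -rf /opt/sfw/crashplan
    ln -s /tank/crashplan /opt/sfw/crashplan   # the old path now points at the pool
    svcadm enable crashplan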

On a side note my ZFS server CrashPlan backup passed the 1TB mark today!
Here’s a video…