Ben’s Phone Guide (2016 edition)

Phones depreciate in value fast, and their useful life is shorter than their lifespan.  Not because old phones stop working, but because manufacturers stop providing security updates after about 3 years (at best!)


What If I Told You a Hacker Can Take over Your Phone with One Text… And You Don’t Even Have to Open It?

You might be hacked now and not even know it.

Exploits like these are real.  Vulnerabilities have been found and exploited in the past, and they will be found and exploited in the future.  Some exploits require you to do nothing but receive (not even open, just receive) an SMS message, and a hacker can do what he wants with your phone.  He can install malware, use your phone to launch a DDoS attack against Krebs on Security, or spy on you (or your kids, if your kids have phones), activating the camera and microphone at will, listening in on your conversations, and reading every message passing through the device.

The only protection against this is to either (1) not have a phone (more secure), or (2) if you must have a phone, keep it constantly up to date (not foolproof, but it blocks all but the most sophisticated attackers).

One of the big problems with phones is security updates.  For iPhones you get your updates straight from Apple.  For Android, things aren’t as clean.  The Android OS itself gets security updates, but then they have to trickle down through the manufacturer (who often doesn’t provide updates) and then through the carrier you bought the phone from.

Calculating Remaining Life Before You Buy

To calculate the real cost of a phone, find out how long the manufacturer and carrier will provide security updates for it.  Divide the cost of the phone by the number of months of security updates remaining, and that’s the monthly cost of the phone.

monthly cost = cost of phone / remaining life in months

cost of phone: $500
remaining life for security updates: 29 months
monthly cost: $500 / 29 = $17.24
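If you want to double-check the math at a shell prompt, bc works fine (the numbers here are from the example above):

  $ echo "scale=2; 500/29" | bc
  17.24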

Oddly, the price of phones doesn’t usually drop that much after the 1st year even though they have lost 1/3rd of their useful life!

There Are Only Two Options

A lot of phone manufacturers / carriers don’t even provide updates for their phones.  They’re unsupported from the moment you buy them!

For the sake of security, I only recommend two phone manufacturers: Google and Apple.  Both have a track record of providing timely security updates.  Google pushes out a security update every month; Apple doesn’t have a set schedule but does a good job getting updates out quickly.  I recommend Apple with the caveat that you have to trust them, because iOS is a proprietary, closed-source OS.  You are trusting them to do the right thing and have decent security.

Google Nexus Devices

Nexus 5X

Google has stopped selling the Nexus line, but these phones still have 2 years of updates left and are reasonably priced on Amazon.

Google guarantees security patches on Nexus devices for 3 years from the release date, or at least 18 months from when the Google Store last sold the device (whichever is longer).

As of October 2016, here is the cost per month as I calculate it:

Nexus 5X – security updates until October 2018.  $332 – 16GB.
Ben’s cost over remaining life: $332/24mos = $13.83/mo
Nexus 6P – security updates until October 2018.  $450 – 32GB.
Ben’s cost over remaining life: $450/24mos = $18.75/mo

(If you get a Nexus, note that there are U.S. and international versions of the phone; if you live in the U.S. you’ll want the U.S. version.)

Google has not committed to EOL dates on the Pixel line, but if it’s similar to Nexus you’re looking at:

Google Pixel – $650 – 32GB – probably until October 2019
Ben’s cost over remaining life: $650/36mos = ~$18.06/mo

Google Pixel XL – $770 – 32GB – probably until October 2019
Ben’s cost over remaining life: $770/36mos = ~$21.39/mo

Apple Devices

iPhone 7

iOS is closed source, so I consider it less secure and less open than Android, but Apple does a pretty decent job at keeping hackers out.  Most compromises I hear about come through hooking your iPhone up to a service like iCloud, not through the iPhone itself.  I used to use an iPhone; at the time it was the best phone available (better than BlackBerry).  Now that we have Android I don’t see a huge need to use a closed proprietary system.  However, it’s always good to have competition.

Here’s a comparison of iPhone models currently getting security updates, with a guess (not a guarantee) that each gets security updates for 3 years from release.

iPhone 7 – probably until September 2019
Ben’s cost over remaining life: $650/35mos = ~$18.57/mo

iPhone 7 Plus – probably until September 2019
Ben’s cost over remaining life: $770/35mos = ~$22.00/mo

iPhone 6S – probably until September 2018
iPhone 6 / 6 Plus – probably until September 2017
iPhone 5S / 5C – probably until the next major iOS update

Where Not to Buy a Phone

Mobile carriers typically install a lot of battery-sucking bloatware, which can’t be deleted, and often delay pushing out security updates by months, even years, leaving your phone vulnerable to hackers.  Not only that, some of the extra software they install introduces vulnerabilities of its own.

Also, phones bought from a mobile carrier are usually locked to that carrier so you can’t switch to someone else without purchasing a new phone.

Mobile Carriers

Since I have an unlocked phone, I avoid the main carriers and instead use MVNOs (Mobile Virtual Network Operators).  MVNOs use the same networks as Verizon, AT&T, Sprint, and T-Mobile, but most often at a better price.  For great service and prices I like Google Fi (Sprint & T-Mobile networks), Ting (Sprint or T-Mobile), and TracFone (Verizon or AT&T), and there are plenty of other MVNOs to choose from.  You can find one that offers the best plan for your situation.  Using TracFone (which is a pre-paid service) we pay less than $10/month for a voice/data/text plan for a Nexus 5X on Verizon’s network.

Don’t Save Money with a Used Phone

I used to buy used phones off eBay to save money, but with the recent USB firmware hacks and the amount of malware out there I no longer think it’s a good idea.  Used phones are a security risk–you have no idea if a used phone has been compromised, or if it’s been plugged into a compromised USB device that rewrote its firmware.  Physical security is paramount.  To be safe, I always buy my phones new.

Personal Data on Work Phones and Work Data on Personal Phones

Think carefully before using your personal phone for work.  If you connect your phone to work email it almost always gives your employer complete control of the device.  They can wipe your phone when you leave, track your location, install software on your phone, and have access to all your personal data.

And similarly, if you put your personal information or your personal email account on a work phone your employer has access to that data.

What Phone Do I Have?

Kris and I both use the Nexus 5X.  I’ve reviewed the Nexus 5X here.  I will likely replace them both when security updates go EOL, which should be in 2018.  Pixel phones are a bit expensive, so I’m hoping they release some new phones in the Nexus line again next year.

Phone Safety Tips

  1. Always use a phone that’s getting regular (monthly) security updates.  As soon as the phone goes out of support, get a new phone.
  2. Minimize the number of apps you install.  Limit yourself to the official Google Play Store or iOS App Store and avoid 3rd party stores like the Amazon Appstore, where authors don’t do as good a job at keeping things updated.
  3. Favor installing well known apps with lots of downloads as they’re more likely to be reviewed and have better security practices.
  4. Uninstall apps that you don’t use.
  5. Always buy a new phone.
  6. Don’t use a phone at all.
  7. If you have a Samsung Note 7, you might want to return it before you catch on fire.


How to Encrypt Your Email

So, you want to hide your email from the NSA’s prying eyes?  It’s impossible… but here are some steps you can use to make it harder.

This isn’t theoretical.  The NSA has intercepted this traffic in the past and continues to do so.

Common Points of NSA Interception

The NSA has practically unlimited resources to compromise your communications.  You’re not going to stop them, but that doesn’t mean it should be easy for them.  Below are the easy points of NSA interception.  In this example of an email from Mom to Ben, the NSA can intercept the email at Mom’s ISP, Mom’s email provider, Ben’s email provider, Ben’s ISP, and any internet hop in between.



I’m going to skip over a lot of important details.  This guide is not intended for security experts or sysadmins of email systems (preventing downgrade attacks, etc.); it’s meant to be a post about what the average American should do to protect their email.

Step 1. Client to Server TLS Encryption

Ensure your email client (e.g. Thunderbird) or browser is using a TLS connection to the server.  If you’re using any major provider like Gmail, Office 365, etc., they will be enforcing TLS.  All email providers should be enforcing TLS, so if yours is not, that’s a good sign you should be switching.

If using webmail, your browser should show https; if using Thunderbird, you should be using STARTTLS for both inbound and outbound connections.
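If you want to verify that a mail server really offers STARTTLS, you can test it by hand with openssl’s s_client (Gmail’s submission server here is just an example; substitute your provider’s server and port):

  $ openssl s_client -connect smtp.gmail.com:587 -starttls smtp

Look at the certificate chain it prints and the “Verify return code” at the end; you may need to point openssl at your CA bundle with -CAfile for verification to succeed.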

Note, the entire CA (Certificate Authority) system is broken: the NSA could obtain a fraudulent certificate from a cooperative CA, mount a MITM (man-in-the-middle) attack, and still intercept the email, but now they have to expend some effort to do so.  The point is that security comes in layers, and we need to start at the basics; we’ll get to more advanced security below.

Step 2. Make sure Your Email Provider is Encrypting Server to Server Traffic

In 2013 Google was outraged to find out the NSA was intercepting its server-to-server traffic.  As a result Google started encrypting all internal traffic between servers (good for Google).  Most major email providers encrypt server-to-server traffic.  The problem is that not all providers use encryption, so it doesn’t do much good if you send an email from a secure service like Gmail to a small-town ISP that has no security whatsoever.  Probably the best way to check is to enter a recipient’s email address here: and if their email provider’s MX servers pass all the tests, they’re probably secure.
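You can also poke at a recipient’s mail servers yourself.  Look up the MX records with dig, then see whether the server offers STARTTLS (gmail.com is just an example, and note that many residential ISPs block outbound port 25):

  $ dig +short mx gmail.com
  5 gmail-smtp-in.l.google.com.
  $ openssl s_client -connect gmail-smtp-in.l.google.com:25 -starttls smtp

If the second command hands you a certificate, the receiving server supports encrypted server-to-server delivery.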

Step 3.  PGP Encrypt Your Emails


Now, the NSA can still potentially intercept your emails at rest through a court order, through PRISM, or through hacking into ISPs.  Your email should be encrypted not only in transit, but also at rest.  The best way to do that is to encrypt it using OpenPGP.  This means even if the NSA gets a hold of your email they can’t read it (at least not without spending some serious time and money).

PGP (Pretty Good Privacy) isn’t foolproof.  It doesn’t encrypt the metadata (the NSA can still see that you sent me an email, when you sent it, and where you were), but it does encrypt the content.

How do you get OpenPGP?  Right here: It’s free, open source, and there are plugins for just about everything.  It works with webmail, Thunderbird, Outlook, etc.  Check the link above for a complete list, but here are two common options:

If you use Thunderbird I suggest Enigmail; if you use Gmail through the webmail interface, Mailvelope is a great plugin.

Here’s a very quick getting-started guide for Mailvelope below.  If you’re not going to use Mailvelope, the concept is pretty much the same no matter what plugin you choose: you’ll generate a public/private keypair, obtain the public key of the person you’re sending an email to, and send them an encrypted email.
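If you’d rather skip browser plugins entirely, the same flow works from the command line with GnuPG.  A minimal sketch (the email address is a placeholder):

  $ gpg --gen-key                                    # generate your public/private keypair
  $ gpg --search-keys friend@example.com             # find and import your friend’s public key
  $ gpg --encrypt --armor -r friend@example.com message.txt   # writes message.txt.asc

The resulting message.txt.asc is ASCII-armored ciphertext you can paste into any email client.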

How to Setup Mailvelope for Gmail and Chrome

Here’s a quick walk-through to set it up.  After installing the plugin you should see the Mailvelope icon on the top-right in Chrome.  Right-click on it and choose Options.

Next, generate a key…

I should note that what Mailvelope calls a “Password” is traditionally called a passphrase.  It should be long, but you don’t ever want to forget it or you won’t be able to read any encrypted messages sent to you.  I strongly suggest writing it down and keeping it someplace safe.


Now, to send an encrypted email to me, you’ll need to import my key.  Go to “Import Keys”, type in my email address, and hit search.  Click on the keyID 13E708FC; when the key pops up, click on it to import my key.

Now you can send me an encrypted email.  Go to compose a new email in Gmail.  You’ll notice a Mailvelope button in the compose menu.  Click the button.

Write me a message…


When you receive an encrypted email, it will look like this.  Click on it and enter your passphrase to decrypt.


And there you have it.   I wouldn’t say this is foolproof…. it doesn’t protect against a lot of other attack vectors…

XKCD Comic
CC-By-NC 2.5

But I say if the NSA is going to intercept my communications it shouldn’t be easy.  I want them to spend some effort and money to do so.

For further reading I might suggest


ZFS Dataset Hierarchy | Data Hoarder Edition

ZFS is flexible and will let you name and organize datasets however you choose–but before you start building datasets, there are some ways to make management easier in the long term.  I’ve found the following convention works well for me.  It’s not “the” way by any means, but I hope you’ll find it helpful.  I wish tips like these had been written down when I built my first storage system 4 years ago.

Here are my personal ZFS best practices and naming conventions to structure and manage ZFS data sets.

ZFS Pool Naming

I never give two zpools the same name, even if they’re in different servers, on the off chance that sometime down the road I’ll need to import two pools into the same system.  I generally like to name my zpools tank[n], where n is an incremental number that’s unique across all my servers.

So if I have two servers, say stor1 and stor2, I might have two zpools: tank1 and tank2.
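For example (a minimal sketch–the disk device names are placeholders for whatever drives you actually have):

  # on stor1
  zpool create tank1 mirror /dev/ada0 /dev/ada1
  # on stor2
  zpool create tank2 mirror /dev/ada0 /dev/ada1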

Top Level ZFS Datasets for Simple Recursive Management

Create a top level dataset called ds[n], where n is a unique number across all your pools, just in case you ever have to bring two separate datasets onto the same zpool.  The reason I like to create one main top-level dataset is that it makes it easy to manage high-level tasks recursively on all sub-datasets (such as snapshots, replication, backups, etc.).  If you have more than a handful of datasets you really don’t want to be configuring replication on every single one individually.  So on my first server I have:

 | - tank1/ds1

I usually mount tank1/ds1 as read-only from my CrashPlan VM for backups.  You can configure snapshot tasks, replication tasks, and backups all at this top level and be done with it.
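That makes the day-to-day operations one-liners.  A sketch of what recursive snapshots and pruning look like from the command line (snapshot names are my own convention):

  zfs create tank1/ds1
  zfs snapshot -r tank1/ds1@2016-10-21    # snapshot ds1 and every dataset below it
  zfs destroy -r tank1/ds1@2016-10-21     # prune that snapshot everywhere when it expires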

ZFS snaps and pruning recursively managed at the top level dataset

Name ZFS Datasets for Replication

One of the reasons to have a top level dataset is if you’ll ever have two servers…
   | - tank1/ds1
   | - tank2/ds2

I replicate them to each other for backup.  Having that top level ds[n] dataset lets me manage ds1 (the primary dataset on stor1) completely separately from the replicated dataset (ds2) that stor2 sends to stor1:

stor1:
 | - tank1/ds1
 | - tank1/ds2 (replicated)

stor2:
 | - tank2/ds2
 | - tank2/ds1 (replicated)
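With zfs send/receive, the replication itself is simple.  A sketch of seeding stor2 with ds1 and then shipping incrementals (hostnames and snapshot names are placeholders):

  zfs snapshot -r tank1/ds1@repl-1
  zfs send -R tank1/ds1@repl-1 | ssh stor2 zfs receive -F tank2/ds1

  # later, send only what changed since repl-1
  zfs snapshot -r tank1/ds1@repl-2
  zfs send -R -i repl-1 tank1/ds1@repl-2 | ssh stor2 zfs receive -F tank2/ds1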

Advice for Data Hoarders.  Overkill for the Rest of Us


The ideal is to back up everything.  But in reality storage costs money, and WAN bandwidth isn’t always available to back everything up remotely.  I like to structure my datasets so I can manage them by importance.  So under the ds[n] dataset, create sub-datasets:
 | - tank1/ds1/kirk – very important – family pictures, personal files
 | - tank1/ds1/spock – important – ripped media, ISO files, etc.
 | - tank1/ds1/redshirt – scratch data, tmp data, testing area
 | - tank1/ds1/archive – archived data
 | - tank1/ds1/backups – backups

Kirk – Very Important.  Family photos, home videos, journal, code, projects, scans, crypto-currency wallets, etc.  I like to keep four to five copies of this data using multiple backup methods and multiple locations.  It’s backed up to CrashPlan offsite, rsynced to a friend’s remote server, snapshots are replicated to a local ZFS server, plus an annual backup to a local hard drive for cold storage.  That’s 3 copies onsite, 2 copies offsite, 2 different file-system types (ZFS, XFS), and 3 different backup technologies (CrashPlan, rsync, and ZFS replication).  I do not want to lose this data.

Multiple Backup Locations Across the World
Important data is backed up to multiple geographic locations

Spock – Important.  Important data that would be a pain to lose, might cost money to reproduce, but it isn’t catastrophic.  If I had to go a few weeks without it I’d be fine.  For example, rips of all my movies, downloaded Linux ISO files, Logos library and index, etc.  If I lost this data and the house burned down I might have to repurchase my movies and spend a few weeks ripping them again, but I can reproduce the data.  For this dataset I want at least 2 copies, everything is backed up offsite to CrashPlan and if I have the space local ZFS snapshots are replicated to a 2nd server giving me 3 copies.


Redshirt – This is my expendable dataset.  This might be a staging area to store MakeMKV rips until they’re transcoded, or a place to do video editing or test out VMs.  This data doesn’t get backed up, though I may run snapshots with a short retention policy.  Losing this data would mean losing no more than a day’s worth of work.  I might also set sync=disabled to get maximum performance here.  And typically I don’t do ZFS snapshot replication to a 2nd server.  In many cases it will make sense to pull this out from under the top level ds[n] dataset and have it be by itself.

Backups – This dataset contains backups of workstations, servers, and cloud services.  I may back up the backups to CrashPlan or some online service, and usually that is sufficient since I already have multiple copies elsewhere.

Archive – This is data I no longer use regularly but don’t want to lose: old school papers that I’ll probably never need again, backup images of old computers, etc.  I set this dataset to compression=gzip-9, back it up to CrashPlan plus a local backup, and try to have at least 3 copies.

Now, you don’t have to name the datasets Kirk, Spock, and Redshirt… but the idea is to identify importance so that you’re only managing a few datasets when configuring ZFS snapshots, replication, etc.  If you have unlimited cheap storage and bandwidth it may not be worth it to do this–but it’s nice to have the option to prioritize.
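The tier-specific tuning mentioned above is just ZFS properties, set once per tier, e.g.:

  zfs set compression=gzip-9 tank1/ds1/archive    # squeeze rarely-touched data
  zfs set sync=disabled tank1/ds1/redshirt        # speed over safety for scratch data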

Now… once I’ve established that hierarchy I start defining the datasets that actually store data, which may look something like this:
| - tank1/ds1/kirk/photos
| - tank1/ds1/kirk/git
| - tank1/ds1/kirk/documents
| - tank1/ds1/kirk/vmware-kirk-nfs
| - tank1/ds1/spock/media
| - tank1/ds1/spock/vmware-spock-nfs
| - tank1/ds1/spock/vmware-iso
| - tank1/ds1/redshirt/raw-rips
| - tank1/ds1/redshirt/tmp
| - tank1/ds1/archive
| - tank1/ds1/archive/2000
| - tank1/ds1/archive/2001
| - tank1/ds1/archive/2002
| - tank1/ds1/backups
| - tank1/ds1/backups/incoming-rsync-backups
| - tank1/ds1/backups/windows
| - tank1/ds1/backups/windows-file-history


With this ZFS hierarchy I can manage everything at the top level of ds1 and just set up the same automatic snapshots, replication, and backups for everything.  Or, if I need to be more precise, I have the ability to handle Kirk, Spock, and Redshirt differently.


Intranet SSL Certificates Using Let’s Encrypt | DNS-01

Let's EncryptLet’s Encrypt is a great service offering the ability to generate free SSL certs.  The way it normally works is using http-01 challenge…  to respond to the Let’s Encrypt challenge the client (typically Certbot) puts an answer in the webroot.  Let’s Encrypt makes an http request and if it finds the response to the challenge it issues the cert.


Certbot is great for public web-servers.

Generating Intranet SSL Certs Using DNS-01 Challenge

But what if you’re generating an SSL certificate for a mail server, or a Mumble server, or anything but a webserver?  You don’t want to spin up a webserver just for certificate verification.

Or what if you’re trying to generate an SSL certificate for an intranet server?  Many homelabs, organizations, and businesses need publicly signed SSL certs on internal servers.  You may not even want external A records for these services, much less a webserver for validation.

ACME DNS Challenge

Fortunately, Let’s Encrypt introduced the DNS-01 challenge in January of 2016.  Now you can respond to a challenge by creating a TXT record in DNS.

ACME Let's Encrypt DNS-01 Challenge Diagram


Lukas Schauer wrote dehydrated (formerly letsencrypt.sh), which can be used to automate the process.  Here’s a quick guide for Ubuntu 16.04, but it should work on any Linux distribution (or even FreeBSD).

Install dehydrated
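Roughly like this (a sketch–I’m assuming the GitHub location and picking my own paths; run as root):

  git clone https://github.com/lukas2511/dehydrated.git /opt/dehydrated
  ln -s /opt/dehydrated/dehydrated /usr/local/bin/dehydrated
  mkdir -p /etc/dehydrated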

Hook for DNS-01 Challenge

At this point, you need to install a hook for your DNS provider.  If your DNS provider doesn’t have a hook available you can write one against their API, or switch to a provider that has one.

If you need to pick a new provider with a proper API, my favorite DNS providers are CloudFlare and Amazon Route53.  CloudFlare is what I use for this blog.  It gets consistently low latency lookup times according to SolveDNS, and it’s free (I only use CloudFlare for DNS; I don’t use their proxy caching service, which can be annoying for visitors from some regions).  Route53 is one of the most advanced DNS providers.  It’s not free, but it usually ends up cheaper than most other options and is extremely robust.  The access control, APIs, and advanced routing work great.  I’m sure there are other great DNS providers, but I haven’t tried them.

Here’s how to set up a CloudFlare hook as an example:

In letsencrypt-cloudflare-hook/ change the top line to point at python3:
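  #!/usr/bin/env python3

Getting the hook in place first looks roughly like this (the repository location, install path, and credential variables are assumptions–check the hook’s README):

  git clone https://github.com/kappataumu/letsencrypt-cloudflare-hook /etc/dehydrated/hooks/cloudflare
  pip3 install -r /etc/dehydrated/hooks/cloudflare/requirements.txt
  # the hook reads your CloudFlare credentials from the environment
  export CF_EMAIL='you@example.com'
  export CF_KEY='your-cloudflare-api-key'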

Config File

Edit the “/etc/dehydrated/config” file… add or uncomment the following lines:
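Something like this (the hook path matches where I cloned it above; adjust to your own layout):

  CHALLENGETYPE="dns-01"
  HOOK="/etc/dehydrated/hooks/cloudflare/hook.py"
  CONTACT_EMAIL="you@example.com"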


Create an /etc/dehydrated/domains.txt file, something like this:
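Something like this will do (hypothetical domains–each line becomes one certificate, and a line with several names becomes one cert covering all of them):

  mail.example.com
  mumble.example.com
  git.example.com
  wiki.example.com
  example.com www.example.com blog.example.com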

The first four lines will each generate their own certificate; the last line creates a multi-domain or SAN (Subject Alternative Name) cert with multiple entries in a single SSL certificate.

Finally, run dehydrated -c.

The first time you run it, it should get the challenge from Let’s Encrypt and provision a DNS TXT record with the response.  Once validated, the certs will be placed under the certs directory, and from there you can distribute them to the appropriate applications.  The certificates are valid for 90 days.

On subsequent runs, dehydrated will check whether the certificates have less than 30 days left and attempt to renew them.


It would be wise to run dehydrated -c from cron once or twice a day and let it renew certs as needed.
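For example, a cron entry along these lines (paths and schedule are my own choices):

  # /etc/cron.d/dehydrated
  15 4,16 * * * root /usr/local/bin/dehydrated -c >> /var/log/dehydrated.log 2>&1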

To deploy the certs to the respective servers I suggest using an IT automation tool like Ansible.  You can configure an Ansible playbook to run from a daily cron job that copies updated certificates to remote servers and automatically reloads services when the certificates have changed.  Here’s an example of an Ansible playbook which could be called daily to copy certs to all web servers and reload nginx if the certs were updated or renewed:
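Something along these lines–a sketch, not my exact playbook; the host group, domain, and destination paths are placeholders:

  ---
  - hosts: webservers
    become: true
    tasks:
      - name: copy renewed certificate and key
        copy:
          src: "/etc/dehydrated/certs/www.example.com/{{ item }}"
          dest: "/etc/ssl/www.example.com/{{ item }}"
        with_items:
          - fullchain.pem
          - privkey.pem
        notify: reload nginx
    handlers:
      - name: reload nginx
        service:
          name: nginx
          state: reloaded

Because the copy module only reports “changed” when the file contents actually differ, nginx gets reloaded only when a cert was really renewed.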


PSD is not my favourite file format

This programmer does not like the PSD File Format:


At this point, I’d like to take a moment to speak to you about the Adobe PSD format.

PSD is not a good format. PSD is not even a bad format. Calling it such would be an insult to other bad formats, such as PCX or JPEG. No, PSD is an abysmal format. Having worked on this code for several weeks now, my hate for PSD has grown to a raging fire that burns with the fierce passion of a million suns.

If there are two different ways of doing something, PSD will do both, in different places. It will then make up three more ways no sane human would think of, and do those too. PSD makes inconsistency an art form. Why, for instance, did it suddenly decide that *these* particular chunks should be aligned to four bytes, and that this alignement should *not* be included in the size? Other chunks in other places are either unaligned, or aligned with the alignment included in the size. Here, though, it is not included. Either one of these three behaviours would be fine. A sane format would pick one. PSD, of course, uses all three, and more.

Trying to get data out of a PSD file is like trying to find something in the attic of your eccentric old uncle who died in a freak freshwater shark attack on his 58th birthday. That last detail may not be important for the purposes of the simile, but at this point I am spending a lot of time imagining amusing fates for the people responsible for this Rube Goldberg of a file format.

Earlier, I tried to get a hold of the latest specs for the PSD file format. To do this, I had to apply to them for permission to apply to them to have them consider sending me this sacred tome. This would have involved faxing them a copy of some document or other, probably signed in blood. I can only imagine that they make this process so difficult because they are intensely ashamed of having created this abomination. I was naturally not gullible enough to go through with this procedure, but if I had done so, I would have printed out every single page of the spec, and set them all on fire.

Were it within my power, I would gather every single copy of those specs, and launch them on a spaceship directly into the sun.

PSD is not my favourite file format.


— code comment from



RHEL/CentOS, Debian, Fedora, Ubuntu & FreeBSD Comparison

Over the years I’ve used a number of Linux distributions (and FreeBSD).  These are my top 5 and how I rank them:



I’m not a big fan of Ubuntu’s Unity, so Ubuntu-Gnome, Kubuntu, Debian, and Fedora are my top choices for desktop distros.  If you want the latest Gnome features, Fedora gets them first.  For KDE, I think Kubuntu does a great job at reasonable default settings (like, say, having the Start button open the KDE menu–why is it KDE programmers think that shouldn’t be default behavior?) where I have to do quite a bit more tweaking on other distros.  Ubuntu-Gnome also provides an optional PPA which tracks the latest version of Gnome, bringing it almost as up to date as Fedora.

Ugly fonts – for some reason, on FreeBSD, Fedora, CentOS, and Debian the fonts look ugly.  I don’t know if they can’t detect my video card properly or if there’s something wrong with the fonts themselves, but on every system I’ve tried, fonts look much better on Ubuntu-based distributions.

If you’re interested in FreeBSD for a desktop, PC-BSD is worth a look, but in my experience Linux runs a lot better on the desktop than FreeBSD.


FreeBSD is historically my favorite server OS, but it tends to lag behind on some things and I have trouble getting some software working on it, so for the most part I use Ubuntu for servers, as it seems to have the best out-of-the-box setup.  90% of the time I’m deploying in virtual environments, and open-vm-tools is now enabled by default in 16.04.

With perhaps the exception of Fedora all the distros make decent servers.


All the package management systems are pretty decent; I prefer apt just because I never have any problems with it and it’s faster.  Debian and Ubuntu have the most packages available, and Ubuntu has PPA support, which makes it easy to manage 3rd party repositories.

One thing I don’t like about Debian: while it does have a lot of packages, a lot of them are out of date.  A few months ago I tried to install Redmine from the repository, and even though the repository listed it at version 3.0, the actual version that was installed was 2.6.  Someone needs to do some cleanup.

CentOS hardly offers any packages, so you have to enable EPEL just to make it functional, and even then it’s limited.  My main issue with CentOS is that if you want to do anything other than a very basic install you’re dealing with not finding packages (like rdiff-backup–why isn’t that in the repos?) or needing packages from conflicting repositories, and sometimes having to download them manually.  It’s a nightmare.

One other thing I like about apt is the Debian and Ubuntu philosophy of setting up a sensible default configuration and enabling the service.  After installing packages on Fedora, CentOS, or FreeBSD I’m often left manually creating configuration files.  CentOS is the most annoying–maybe it’s just me, but if I install a service I want SELinux to not block me from running that service… and when I make a change in SELinux it should take effect immediately instead of arbitrarily taking a few minutes to come to its senses.

Free Software

Richard Stallman
By – Thesupermat – CC BY-SA 3.0

While Richard Stallman wouldn’t endorse any of the distributions I’m comparing, if he had to choose from these Debian would likely be his choice.

All of these OSes include or provide ways of obtaining non-free software, but Debian is at the forefront of making it a goal to move to Free Software.  Fortunately, I think they do this in a smart way: they still include ways to install non-free drivers so you can at least make a system usable.  I think Debian does the best job of making it clear what’s free and what isn’t, and allowing the user to make the choice.



I used to be a big RedHat fan back in the RH 6 and 7 days.  Then one day my loyalty was rewarded when, out of the blue, RedHat decided to start charging for updates for their “free” OS.  RedHat’s new free alternative was Fedora, which was so unstable it was unusable.  I was suddenly going to need to buy lots of licenses… this left me scrambling for a solution, and I eventually switched over to Ubuntu.  Since then I’m wary about anything related to RedHat.  CentOS is now the free version of RedHat, while Fedora is where all the new features land, and it’s not so unstable these days.  And, yes, RedHat, I’m still bitter.

Ubuntu introduced Amazon ad-supported searches and, even worse, by default sent search keywords from the Unity lens to Canonical.  I’d consider this an invasion of privacy, and it was really the first time since I switched from RedHat that I started looking for Ubuntu alternatives.  Fortunately the feature was easy to disable, and Ubuntu has since disabled it by default.

Out of Box Hardware Support

Ubuntu has the best out-of-box hardware support.  Dell’s XPS 13 even comes in a developer edition that ships with Ubuntu 14.04 LTS.  Ubuntu works out of the box on just about every laptop I’ve tried it on.  It was also the first distro to support VMware’s VMXNET3 and SCSI paravirtual drivers in the default install, and I believe it’s now the only distro that has open-vm-tools pre-installed.  All this cuts down on the amount of time and effort it takes to deploy.

I wish Debian did better here.  Debian excludes some non-free drivers, which is good for the FSF philosophy, but it also means I had no WiFi on a fresh Debian install.  Apparently you’re supposed to download the drivers separately.  This is particularly bad when your laptop doesn’t have an Ethernet port, so you have no way to download the WiFi drivers.  I suppose I could have re-installed Ubuntu, downloaded the Debian WiFi drivers, saved them off to a USB drive, re-installed Debian, and side-loaded the WiFi drivers… but what a hassle.

Automatic Security Updates

Ubuntu and Debian give the option of enabling automatic security updates at install time.  The other systems have ways of enabling automatic updates but there isn’t an option to enable it by default at install time.  My opinion is all operating systems should automatically install security updates by default.

Init System

FreeBSD avoids the nonsense for the win here.  I do not like systemd.  I’d rather spend time not fighting systemd.  Maybe I can figure it out someday.  Why didn’t we all switch to upstart?  I liked upstart.

Cutting Edge vs Stability

For cutting edge, Fedora or Ubuntu’s standard (every 6 months) releases keep you up to date–great if you want to stay current on a desktop environment.

FreeBSD is the most stable OS I’ve ever used.  If I was told I was building a solution that would still be around in 30 years, I’d probably choose FreeBSD.  Changes to the base system are rare and well thought out.  If you wrote a program or script on FreeBSD 10 years ago it would probably still work today on the latest version.  In the Linux world, Debian stable, Ubuntu LTS (after the first point release), and CentOS (also after the first point release) are great options.

Ubuntu provides the best of both worlds with LTS releases, which I find very beneficial for having a stable environment that still has relevant development tools and up-to-date server environments.  If you need something newer you have PPAs, but most of the time the standard packages are new enough.  Right now, for example, Ubuntu 16.04 LTS is the only distribution that ships with versions of OpenSSL and NGINX that support an http/2 implementation that works with Google Chrome.  To top it off, both the OpenSSL and NGINX packages fall under Ubuntu’s 5-year support.  You don’t have to add 3rd party repos or solve dependency issues.  Just one command–“apt install nginx”–and you’re good for 5 years.

Ubuntu 16.04 LTS is the only distro that supports http/2

(above screenshot from:


FreeBSD is the best OS I’ve ever used at upgrading to a newer release.  You could probably start at FreeBSD 4 and upgrade all the way to 11 with no issues.  Debian and Ubuntu also have pretty good upgrade support.  In all cases I test upgrading before doing it on a production system.

Long Term Support (LTS)

CentOS has the longest support offering at 10 years!  Combined with the EPEL repository (which has the same goal), I’d say RedHat/CentOS is the best distribution for a “deploy and forget” application that gets thrown in /opt, if you don’t want to worry about changes or upgrades breaking the app for the next 10 years.  This is probably why enterprise applications like this distribution.

Debian is just starting a 5-year LTS program through a volunteer effort.  I’m looking forward to seeing how this goes.  I’m glad to see this change as lack of LTS was one of the main reasons I decided on Ubuntu over Debian.

Ubuntu offers a 5-year LTS.  Ubuntu’s LTS not only covers the base system; the Ubuntu team also supports many packages (check with “apt-cache show packagename”–if you see 5y under Supported, you’re good).
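For example, on a 16.04 box (nginx is in main, so it carries the full LTS support window):

  $ apt-cache show nginx | grep Supported
  Supported: 5y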

Predictable Release Cadence


Ubuntu has the most predictable release cadence.  They release every 6 months, with a 5-year LTS release every 2 years.  Having been a sysadmin and a developer, I like knowing exactly how long systems are supported.  I plan out development, deployments, and upgrades years in advance based on Ubuntu’s release cadence.

My Thoughts

When I was younger it was fun to build my entire system from scratch using Gentoo and compile FreeBSD packages from ports (I also compiled the kernel).  Linux wasn’t as easy back then.  I remember just trying to get my scroll wheel working in RedHat 7.

Screenshot of how to get the scroll wheel working
I found this old note.  I finally got the scroll wheel working in RedHat 7.1!

Linux distributions are tools.  At some point you have to stop trying to build the perfect hammer and start using it to put nails in things.

Nowadays I don’t have time to compile from scratch, solve RPM dependency issues, or find out why packages aren’t the right version.  In the year 2000 I could understand having to fix ugly font issues and mess around with WiFi drivers.  But we should be beyond that now.  That was the past.

Calvin and Hobbes Comic Strip
By Bill Watterson, 1995-08-27, Fair Use – 17 U.S.C. § 107


Ben wearing RedHat
I used to wear the official RedHat Fedora

Fonts, automatic updates, scroll wheels, touchpads, bluetooth, WiFi, printers, and hardware in general should be working out of the box by now–if they aren’t, I’m not going to put a lot of effort into getting the distro working.  It’s time to move forward and focus work on things beyond the distribution–while I love all sorts of distros, I don’t want to be like Calvin fighting the computer the whole way.  I actually do work on these machines and need something stable and up to date out of the box with sane default settings.  Having predictable release cycles also helps.  If I could combine the philosophy of Debian with the few extras that Ubuntu provides I’d have the perfect distro, but for the time being Ubuntu is close enough to what I want–I’ve been using it probably since 5.04 (Hoary Hedgehog) and standardized on it when they started doing LTS releases.  That doesn’t mean it’s for everyone; not everyone likes it.  Some people prefer the more vanilla feel of Debian, others might want something easier like Mint.  If you prefer CentOS, Fedora, Arch, etc. and they work well for you, use them.

Actually, I don’t use Ubuntu for everything.  For my production environment I’ve standardized on Windows 10 for desktops, ESXi for virtualization, FreeNAS for storage, pfSense for firewalls, and Ubuntu for servers.  Honestly, none of the above systems were my first choice… I am where I am because my first choices let me down.  It will likely evolve in the future, but for the time being that’s my setup and it works pretty well.

The great thing about modern day Linux distributions (and FreeBSD) is they’re all pretty good.  I haven’t had to hack an Xorg file to get the scroll wheel working in a long time.



Journey to Facebook

Week 1:

Number of Friends: 6.  (That’s probably enough)
Number of Likes: 0.
Species: Kind of like the Borg.

Defender (Star Trek USS Enterprise) of Freedom vs Facebook (Borg ship)

I see my home getting further into the distance.  My blog is in one of the most beautiful locations, nestled in the mountains between the Tech and Conservative Blogs–definitely more on the Tech side and well away from the Bay of Flame.  I can see the tech blogging area I’m most familiar with getting smaller and smaller.  A few minutes later I see Lifehacker passing by, and I’m flying over the Sea of Opinions.  And then it hit me.  I’ve left the Blogosphere.

After a long flight I stopped for a layover at Reddit, then I was back in the air and landed just north of the Data Mines: Facebook.  And I joined Facebook.  The reason for my travel?  I’m looking for information locked away in a closed Facebook group.

That was last week.

Map of Social Networks showing my travel from the Blogosphere to Facebook

Most of my friends left the Blogosphere for MySpace, and then moved further north to Facebook years ago (I’ve reunited with six of them so far).  My impression of Facebook so far: it’s like a bunch of mindless drones all talking at once–well, let me start over.  It’s like a bunch of ads all talking at once and mindless drones trying to shout above them.

Facebook is a land I’ve always avoided–it’s basically what AOL or Geocities should have become: a step back from freedom and individuality.

It’s Not Social Networking That’s the Problem

When you join Facebook, you have to abide by their rules and subject yourself to their censorship.  If you disagree with Facebook, you either comply or you’re out.  There’s no alternative.

Websites, blogging, and email, on the other hand, are based on what the internet should be–open protocols.  If I run my own email server I can send an email to anybody else, no matter what provider they use!  This blog is run on a server I control.  Currently it’s rented from DigitalOcean because I no longer have the bandwidth at my house to run it, but in the past I’ve run it from my dorm room, my bedroom closet, from right under my desk, and from Jeff’s house.  And the thing is, anybody can set up their own server–but they don’t have to.  They can use a provider like Blogger or Gmail if they prefer–and if you can get better service somewhere else, you can migrate to a different provider at will and not lose anything.

But Facebook isn’t open and federated.  Facebook users can only talk to other Facebook users and as long as you want to talk to your Facebook friends the only way is to be on Facebook yourself.  The content is all stored on their servers so you are at their mercy for control and privacy of your content.  Or is it your content?  On Facebook, you are not your own individual, or your own community.  You are part of the Borg.

I’m not against social networking, but Facebook is designed in a very centralized manner, which isn’t consistent with how internet services should be–distributed and federated.  Some social networks I might be more interested in are Friendica and Diaspora, but I don’t think they have much traction yet.

One More Thing

One particularly concerning thing about Facebook is that you don’t pay for it–which means you’re not Facebook’s customer.  No, indeed.  You, my liked Friend, are the product being sold.