Best Gifts for Computer Hackers 2016

Looking for a Christmas gift idea for your computer geek?  Here's a short gift guide with a few ideas I think would make great gifts.  Unlike a lot of other top gift lists written by non-tech people just to make a sale, I'm actually a developer and these are the sort of things I would enjoy (in fact, most of them I own, or at the very least have had a chance to play with).

Here are some gifts your geek, hacker, developer, programmer, tech enthusiast, etc. may enjoy:

WiFi RGB LED Light

MagicLight WiFi Smart LED Light Bulb ($).  This looks like a normal light bulb, but it connects to your WiFi network and can be controlled by your smartphone, through home automation software, or from Python scripts.  The bulb can change to any color; you can send it HTML hex color codes!  If you live up in North Idaho like I do, you can program your light to gradually get brighter in the morning to wake you up naturally in the months when the Sun doesn't rise until late in the day.  Or program it to redshift in the evenings before bedtime so the blue light isn't messing with your circadian rhythm.  Or have it turn red as a warning when you've left the garage door open after dark!  Put a few outside on your house and set them to certain colors during the holidays (Red & Green at Christmas, Orange during Halloween, Red, White, and Blue for Independence Day).

Raspberry Pi 3

Raspberry Pi 3 Starter Kit ($$).  Every technology enthusiast would enjoy a Raspberry Pi.  There are so many projects you could do: build your own weather station, automatic sprinkler system, home automation server, arcade, tiny server, thermometer, even a small desktop computer.

Python Book

Python Programming for Beginners ($) by Jason Cannon.  Yes, the name comes from Monty Python.  Python is a well-loved language that is growing fast; it's both fun to learn and practical, and I've been seeing it used more and more lately.  It's one of the best programming languages to learn, even if you're not a programmer.  This book is perfect for someone new to Python, or even for someone learning to code for the first time.

Mechanical Keyboard

MasterKeys Pro L Mechanical Keyboard ($$$).  (This is the latest model; I use an older version of this keyboard at work.)  If your hacker is on the young side there's a good chance he has never experienced the joy of typing on a mechanical keyboard and may not even know they exist!  Does your keyboard let you press every single key simultaneously and have them ALL register?  This keyboard does.  It comes in three switch options: Cherry MX Red, Brown, or Blue (I linked to the Cherry MX Brown version).  Cherry MX Reds have no tactile bump; they are linear, so they're great for FPS or RTS gaming where speed matters.  Cherry MX Blues provide an audible click and a tactile bump and are great for typing (unless noise is a concern).  Cherry MX Browns provide subtle tactile feedback with no audible click, making them a great all-purpose switch.  The MX Browns are my favorite Cherry switch and what I recommend starting with if you don't know what you want.

I should mention that by "no audible click" I mean no added click noise.  Kris tells me the "silent" Browns and Reds are loud compared to a typical keyboard.  The Blues are even louder.

Civilization

Civilization VI ($$).  Civilization is one of the longest-running series in gaming and, in my opinion, one of the best turn-based strategy games on the market.  Your gamer geek can play single-player or online with friends.  You start out with a single Settler and build an empire city by city.  What I like about Civilization is the variety of ways to win: most games are about world domination through force, but in Civilization that is just one of many paths.  In addition to Domination you can achieve Victory through Culture, Religion, or Science.

Chicory Coffee & Beignet Donuts

Cafe Du Monde – New Orleans

Chicory Coffee & Beignet Donuts ($).  If you're ever visiting New Orleans you should stop by Cafe Du Monde (open 24/7) for some beignets and café au lait.  The next best thing is giving the gift of coffee and donuts for those early-morning or late-night programming sessions.  This is one of my favorite coffees; it has a unique taste and everyone I've brewed it for loves it.

YubiKey

YubiKey Neo ($).  If your hacker is concerned about security you might consider getting him the YubiKey Neo.  It's a second-factor authentication device which works with Android (using NFC), Linux, Mac, and Windows.  Everyone should be locking down their accounts (Email, GitHub, etc.) with a YubiKey.  Yubico is one of the more reputable companies in this space: last year a security bug was discovered in the OpenPGP applet and they offered free replacements (including free shipping) for all affected devices, and their software for working with the key is open source on GitHub.  The YubiKey supports a large variety of MFA methods including FIDO U2F, HOTP, TOTP, Yubico OTP, PIV-compliant smart card, HMAC-SHA1 challenge-response, etc.  It's really the only authentication device you need; I can authenticate with just about any service and protocol using a single YubiKey.

ESV Bible

ESV MacArthur Study Bible Personal Size ($).  Of course, it would be remiss of me not to include a gift that has to do with the very reason we celebrate Christmas: from the Creation and Fall of man, to the Son of God coming to earth to die on the cross to take the penalty for our sins, and rising from the dead so that anyone who believes in Him will have eternal life.  I received this as a gift a few years ago and it's my favorite Bible to date.  I don't think you'll find a higher-quality Bible at this price point; it's even Smyth sewn, which surprised me!  MacArthur has some of the most scholarly and practical (easy enough for me to understand) study notes on the market today.  His notes are extensive enough to be helpful, yet the personal edition is still small enough to be portable.

Well, that’s my guide for this year.  Wishing everyone a Happy Thanksgiving and a Merry Christmas.

 

Is Your WiFi Unstable?

The most Frequently Asked Question from my Family, Friends, and FOAFs…

Laptop Buyer: What Kind of Laptop Should I buy?
Ben: Get one with an Intel Wireless card

WiFi Cards Matter

The first piece of advice I have is: make sure your wireless card is made by Intel.  Do not get anything else.  You might see other tempting wireless cards for much less from Dell, Broadcom, Ralink, Killer, Realtek, etc.  These WiFi cards might work with most WiFi hotspots, and they might work most of the time, but don't get them.  The problem is they aren't robust.  I've seen them drop connections randomly, fail to connect to certain wireless APs, drop the signal when the microwave is running, etc.  In the best case it works fine, until a later driver update makes it worse.  It is not worth saving a few bucks to deal with these issues.  Pay extra for an Intel-branded WiFi card.  It might cost you $20 more and save you months of frustration.  You'll thank me later when your card isn't disconnecting randomly.

This brings me to the 2nd most Frequently Asked Question….

My Wireless Keeps Disconnecting.  Help!

Laptop buyer: So, my wireless signal keeps dropping out.
Ben: Did you get an Intel Card like I told you?
Laptop buyer: No….
Ben: Were you trying to save money and went too cheap?
Laptop buyer: Yes…..

And the 3rd most Frequently Asked Question….

Can You Fix My WiFi Stability?

If Eli can fix it, you can fix it.


You will need to swap out your WiFi card.

If you’re in the situation where you bought a laptop with a flaky WiFi card, it’s easy to fix!  Grab an inexpensive Intel 7260 WiFi Card from Amazon.  On most laptops the WiFi card is easily accessible from behind the back cover, usually it’s not more work than a memory upgrade.  Unplug the antenna connectors from your unstable wireless card, pop it out, and put the new card in and hook it up.  Your WiFi connections will now be robust.

Back Story

I don't say this because I'm an Intel fan.  I just want things to work.  Every couple of years I give another brand a try just to make sure my "only Intel" advice is still relevant.  I've had the same experience with non-Intel brands for the last 15 years!

Last year I decided to buy a cheap laptop to watch movies on (we don’t have a TV) and it came with a Dell DW 1704 / Broadcom 4314 Wireless Card.  I bought it just to see if things had gotten better.  They haven’t.  This wireless NIC can’t stream a full length movie from my media server without losing the wireless signal several times.

And it’s not just me, earlier this year several of my colleagues bought Dell XPS laptops with Killer Branded WiFi cards.  They just don’t work reliably in scenarios that Intel chips do.  In their case they couldn’t connect to several APs.  In my case the connection would drop several times a day.  This was both in Windows 10 and in Linux.  And yes, I tried disabling power saving mode on the WiFi adapter.

I’ve had friends and family not be able to even connect to certain APs at all until they swapped out their Broadcom, Killer, or Ralinks for an Intel card.  Now, you might get lucky and find another brand that works.  To me it’s not worth the hassle.

The next time you buy a computer, get one with an Intel WiFi card.

 

 

Ben’s Phone Guide (2016 edition)

Phones depreciate in value fast; their useful life is shorter than their lifespan.  Not because old phones stop working, but because manufacturers stop providing security updates after about 3 years (at best!).


What If I Told You a Hacker Can Take over Your Phone with One Text… And You Don’t Even Have to Open It?

You might be hacked now and not even know it.

Exploits like this and like this are real.  Vulnerabilities have been found in the past and exploited, and they will be found in the future and exploited.  Some exploits require you to do nothing but receive (not even open, just receive) an SMS message, and a hacker can do what he wants with your phone.  He can install malware, use your phone to launch a DDoS attack against Krebs on Security, or spy on you (or your kids, if your kids have phones), activating the camera and microphone at will, listening in on your conversations, and reading every message passing through the device.

The only protection against this is either (1) not have a phone (more secure), or (2) if you must have a phone, keep it up to date constantly (not as foolproof but would block all but the most sophisticated hackers).

One of the big problems with phones is security.  For iPhones you get your updates through Apple.  For Android, things aren't as clean: the Android OS itself gets security updates, but then they have to trickle down through the manufacturer (who often doesn't provide the update) and then through the carrier you bought the phone from.

Calculating Remaining Life Before You Buy

To calculate the real cost of a phone, find out how long the manufacturer and carrier will provide security updates for it.  Divide the cost of the phone by the number of months of security updates left, and that's the monthly cost of the phone.

monthly cost = cost of phone / remaining life in months

e.g.
cost of phone: $500
remaining life for security updates: 29 months
monthly-cost: $500/29 = $17.24
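That arithmetic is easy to script; here's a small POSIX-shell helper (awk handles the floating-point division) you can reuse when comparing phones:

```shell
#!/bin/sh
# Monthly cost of a phone over its remaining security-update life.
# Usage: monthly_cost <price-in-dollars> <months-of-updates-left>
monthly_cost() {
  awk -v price="$1" -v months="$2" 'BEGIN { printf "%.2f\n", price / months }'
}

monthly_cost 500 29   # the example above: prints 17.24
```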

Oddly, the price of phones doesn’t usually drop that much after the 1st year even though they have lost 1/3rd of their useful life!

There Are Only Two Options

A lot of phone manufacturers / carriers don't provide updates to their phones at all.  They're unsupported from the moment you buy them!

For the sake of security, I only recommend two phone manufacturers: Google and Apple.  Both have a track record of providing timely security updates.  Google pushes out a security update every month; Apple doesn't have a fixed schedule but does a good job getting updates out in a timely manner.  I recommend Apple only with the caveat that you trust them, because iOS is a proprietary, closed-source OS: you are trusting them to do the right thing and have decent security.

Google Nexus Devices


Google stopped selling the Nexus, but they still have 2 years of updates left and are reasonably priced on Amazon.

Google guarantees security patches on Nexus devices for 3 years from the release date, or at least 18 months from when the Google Store last sold the device (whichever is longer).

As of October 2016, here is the cost per month as I calculate it:

Nexus 5X – security updates until October 2018.  $332. – 16GB.
Ben’s cost over remaining life:  $332/24mos = $13.83/mo
Nexus 6P – security updates until October 2018. $450 – 32GB.
Ben’s cost over remaining life: $450/24mos = $18.75/mo

(If you get a Nexus, note that there are U.S. and International versions of the phone, if you live in the U.S. you’ll want the U.S. version).

Google has not committed to EOL dates on the Pixel line but if it’s similar to Nexus you’re looking at:

Google Pixel – $650 – 32GB – probably until October 2019
Ben’s cost over remaining life: $650/36mos = ~$18.06/mo

Google Pixel XL – $770 – 32GB – probably until October 2019
Ben’s cost over remaining life: $770/36mos = ~$21.39/mo

Apple Devices


iOS is closed source, so I consider it less secure and less open than Android, but Apple does a pretty decent job at keeping hackers out.  Most compromises I hear about come through hooking your iPhone up to a service like iCloud, not through the iPhone itself.  I used to use an iPhone back when it was the best phone available (better than Blackberry).  Now that we have Android I don't see a huge need to use a closed, proprietary system.  However, it's always good to have competition.

Here's a comparison of iPhone models currently getting security updates, with a guess (not guaranteed) that each gets security updates for 3 years:

iPhone 7 – probably until September 2019
Ben’s cost over remaining life: $650/35mos = ~$18.57/mo

iPhone 7 Plus – probably until September 2019
Ben’s cost over remaining life: $770/35mos = ~$22.00/mo

iPhone 6S – probably until September 2018
iPhone 6 / 6 Plus – probably until September 2017
iPhone 5S / 5C – probably until the next major iOS update

Where Not to Buy a Phone

Mobile carriers typically install a lot of battery-sucking bloatware, which can't be deleted, and they often delay pushing out security updates by months, even years, leaving your phone vulnerable to hackers.  Not only that, some of the extra software they install introduces vulnerabilities of its own.

Also, phones bought from a mobile carrier are usually locked to that carrier so you can’t switch to someone else without purchasing a new phone.

Mobile Carriers

Since I have an unlocked phone, I avoid the main carriers and instead use MVNOs (Mobile Virtual Network Operators).  MVNOs use the same networks that Verizon, AT&T, Sprint, and T-Mobile run, but most often at a better price.  For great service and prices I like Google Fi (Sprint & T-Mobile networks), Ting (Sprint or T-Mobile), and TracFone (Verizon or AT&T), and there are plenty of other MVNOs to choose from, so you can find one that offers the best plan for your situation.  Using TracFone (a pre-paid service) we pay less than $10/month for a voice/data/text plan for a Nexus 5X on Verizon's network.

Don’t Save Money with a Used Phone

I used to buy used phones off eBay to save money, but I no longer think that's a good idea given the recent USB firmware hacks and the amount of malware out there.  Used phones are a security risk: you have no idea whether a used phone has been compromised, or whether it's been plugged into a compromised USB device that rewrote its firmware.  Physical security is paramount.  To be safe, I always buy my phones new.

Personal Data on Work Phones and Work Data on Personal Phones

Think carefully before using your personal phone for work.  If you connect your phone to work email it almost always gives your employer complete control of the device.  They can wipe your phone when you leave, track your location, install software on your phone, and have access to all your personal data.

And similarly, if you put your personal information or your personal email account on a work phone your employer has access to that data.

What Phone Do I Have?

Kris and I both use the Nexus 5X.  I've reviewed the Nexus 5X here.  I will likely replace them both when security updates go EOL, which will likely be in 2018.  Pixel phones are a bit expensive, so I'm hoping Google releases new phones in the Nexus line again next year.

Phone Safety Tips

  1. Always use a phone that’s getting regular (monthly) security updates.  As soon as the phone goes out of support, get a new phone.
  2. Minimize the number of apps you install.  Limit yourself to the official Google Play Store or Apple App Store and avoid 3rd-party stores like the Amazon Appstore, where authors don't do as good a job of keeping things updated.
  3. Favor installing well known apps with lots of downloads as they’re more likely to be reviewed and have better security practices.
  4. Uninstall apps that you don’t use.
  5. Always buy a new phone.
  6. Don’t use a phone at all.
  7. If you have a Samsung Note 7, you might want to return it before you catch on fire.

 

How to Encrypt Your Email

So, you want to hide your email from the NSA's prying eyes?  It's impossible… but here are some steps you can take to make it harder.

This isn't theoretical.  The NSA can and does intercept this traffic.

Common Points of NSA Interception

The NSA has unlimited resources to compromise your communications.  You’re not going to stop them.  But that doesn’t mean it should be easy. Below are the easy points of NSA interception.  In this example of an email from Mom to Ben the NSA can intercept the email at Mom’s ISP, Mom’s email provider, Ben’s email provider, Ben’s ISP, and any internet hop in between.

no-encryption

 

I'm going to skip over a lot of important stuff; this guide is not intended for security experts or sysadmins of email systems (how to prevent downgrade attacks, etc.).  It's meant to be a post about what the average American should do to protect their email.

Step 1. Client to Server TLS Encryption

Ensure your email client (e.g. Thunderbird) or browser is using a TLS connection to the server.  If you're using any major provider like Gmail, Office 365, etc., they will be enforcing TLS.  All email providers should be enforcing TLS, so if yours is not, that's a good sign you should switch.

If using webmail, your browser should show https; if using Thunderbird, you should be using STARTTLS for both inbound and outbound connections.

Note: the entire CA (Certificate Authority) system is broken.  The NSA could obtain a fraudulent certificate from a cooperative CA, perform a MITM attack, and still intercept the email, but now they have to expend some effort to do so.  The point is that security comes in layers, and we need to start with the basics; we'll get to more advanced security below.

Step 2. Make sure Your Email Provider is Encrypting Server to Server Traffic

In 2013 Google was outraged to find out the NSA was intercepting its server-to-server traffic.  As a result Google started encrypting all internal traffic between its servers (good for Google).  Most major email providers encrypt server-to-server traffic, but not all do, so it doesn't do much good if you send an email from a secure service like Gmail to a small-town ISP that has no security whatsoever.  Probably the best way to check is to enter a recipient's email address here: http://checktls.com/ and if their email provider's MX servers pass all the tests, they're probably secure.

Step 3.  PGP Encrypt Your Emails


Now, the NSA can still potentially intercept your emails at rest through a court order, through PRISM, or through hacking into ISPs.  Your email should be encrypted not only in transit, but also at rest.  The best way to do that is to encrypt it using OpenPGP.  This means even if the NSA gets a hold of your email they can’t read it (at least not without spending some serious time and money).

PGP (Pretty Good Privacy) isn’t foolproof.  It doesn’t encrypt the metadata (the NSA can still see that you sent me an email, they can see when you sent it and where you were) but it does encrypt the content.

How do you get OpenPGP?  Right here: http://openpgp.org/software/.  It's free, open source, and there are plugins for just about everything.  It works with webmail, Thunderbird, Outlook, etc.  Check the link above for a complete list, but here are two common options:

If you use Thunderbird I suggest Enigmail, and if you use Gmail with the webmail interface Mailvelope is a great plugin.

Here's a very quick getting-started guide for Mailvelope below.  If you're not going to use Mailvelope, the concept is pretty much the same no matter what plugin you choose: you'll generate a public/private keypair, obtain the public key of the person you're sending an email to, and send them an encrypted email.

How to Setup Mailvelope for Gmail and Chrome

Here's a quick walk-through to set it up.  After installing the plugin you should see the Mailvelope icon at the top-right in Chrome.  Right-click on it and choose Options.

Next Generate a Key….

I should note that "Password" is traditionally called a passphrase.  It should be long, but you don't ever want to forget it, or you won't be able to read any encrypted messages sent to you.  I strongly suggest writing it down and keeping it someplace safe.


Now, to send an encrypted email to me, you’ll need to import my key.  Go to “Import Keys” and type in my email address and hit search.  You should click on the keyID: 13E708FC.  A key will pop up, click on it to import my key.

Now, you can send me an encrypted email.  Go to compose a new email in Gmail.  You’ll notice a button in the compose menu.  Click the button.

Write me a message…


When you receive an encrypted email, it will look like this.  Click on it and enter your passphrase to decrypt.


And there you have it.   I wouldn’t say this is foolproof…. it doesn’t protect against a lot of other attack vectors…

XKCD Comic
CC-By-NC 2.5 https://creativecommons.org/licenses/by-nc/2.5/

But I say if the NSA is going to intercept my communications it shouldn’t be easy.  I want them to spend some effort and money to do so.

For further reading I might suggest https://futureboy.us/pgp.html

 

ZFS Dataset Hierarchy | Data Hoarder Edition

ZFS is flexible and will let you name and organize datasets however you choose, but before you start building datasets there are some conventions that make management easier in the long term.  I've found the following works well for me.  It's not "the" way by any means, but I hope you find it helpful; I wish tips like these had been written down when I built my first storage system 4 years ago.

Here are my personal ZFS best practices and naming conventions to structure and manage ZFS data sets.

ZFS Pool Naming

I never give two zpools the same name, even if they're in different servers, on the off-chance that sometime down the road I'll need to import two pools into the same system.  I generally like to name my zpools tank[n], where n is an incremental number that's unique across all my servers.

So if I have two servers, say stor1 and stor2, I might have two zpools:

stor1.b3n.org: tank1
stor2.b3n.org: tank2
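As a sketch (the device names are placeholders, and your vdev layout will differ), creating uniquely numbered pools on the two servers might look like:

```shell
# On stor1.b3n.org -- pool name is unique across all my servers
zpool create tank1 mirror /dev/ada0 /dev/ada1

# On stor2.b3n.org
zpool create tank2 mirror /dev/ada0 /dev/ada1

# If stor2's disks ever move into stor1, there's no name collision:
zpool import tank2
```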

Top Level ZFS Datasets for Simple Recursive Management

Create a top-level dataset called ds[n], where n is a number unique across all your pools, just in case you ever have to bring two separate datasets onto the same zpool.  The reason I like to create one main top-level dataset is that it makes it easy to manage high-level tasks recursively on all sub-datasets (snapshots, replication, backups, etc.).  If you have more than a handful of datasets, you really don't want to be configuring replication on every single one individually.  So on my first server I have:

tank1/ds1

I usually mount tank1/ds1 as read-only from my CrashPlan VM for backups.  You can configure snapshot tasks, replication tasks, and backups all at this top level and be done with it.
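At the command line, the payoff of a single top-level dataset is that one recursive command covers everything underneath it; a sketch (snapshot and target names are illustrative):

```shell
# One top-level dataset holds all the others
zfs create tank1/ds1

# A single recursive snapshot covers every sub-dataset
zfs snapshot -r tank1/ds1@2016-11-15

# And one replication stream sends the whole tree to the other server
zfs send -R tank1/ds1@2016-11-15 | ssh stor2.b3n.org zfs recv tank2/ds1
```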

ZFS snaps and pruning recursively managed at the top level dataset

Name ZFS Datasets for Replication

One of the reasons to have a top level dataset is if you’ll ever have two servers…

stor1.b3n.org
   | - tank1/ds1

stor2.b3n.org
   | - tank2/ds2

I replicate them to each other for backup.  Having that top level ds[n] dataset lets me manage ds1 (the primary dataset on the server) completely separately from the replicated dataset (ds2) on stor1.

stor1.b3n.org
 | - tank1/ds1
 | - tank2/ds2 (replicated)

stor2.b3n.org
 | - tank2/ds2
 | - tank1/ds1 (replicated)

Advice for Data Hoarders.  Overkill for the Rest of Us


The ideal is to back up everything.  But in reality storage costs money, and WAN bandwidth isn't always available to back up everything remotely.  I like to structure my datasets so I can manage them by importance.  So under the ds[n] dataset, create sub-datasets:

stor1.b3n.org
 | - tank1/ds1/kirk – very important – family pictures, personal files
 | - tank1/ds1/spock – important – ripped media, ISO files, etc.
 | - tank1/ds1/redshirt – scratch data, tmp data, testing area
 | - tank1/ds1/archive – archived data
 | - tank1/ds1/backups – backups

Kirk – Very Important.  Family photos, home videos, journal, code, projects, scans, crypto-currency wallets, etc.  I like to keep four to five copies of this data using multiple backup methods and multiple locations.  It's backed up to CrashPlan offsite, rsynced to a friend's remote server, snapshots are replicated to a local ZFS server, plus there's an annual backup to a local hard drive for cold storage.  That's 3 copies onsite, 2 copies offsite, 2 different file-system types (ZFS, XFS), and 3 different backup technologies (CrashPlan, rsync, and ZFS replication).  I do not want to lose this data.

Important data is backed up to multiple geographic locations

Spock – Important.  Data that would be a pain to lose and might cost money to reproduce, but losing it wouldn't be catastrophic; if I had to go a few weeks without it I'd be fine.  For example: rips of all my movies, downloaded Linux ISO files, my Logos library and index, etc.  If I lost this data and the house burned down, I might have to repurchase my movies and spend a few weeks ripping them again, but I can reproduce the data.  For this dataset I want at least 2 copies: everything is backed up offsite to CrashPlan, and if I have the space, local ZFS snapshots are replicated to a 2nd server, giving me 3 copies.


Redshirt – This is my expendable dataset.  It might be a staging area to store MakeMKV rips until they're transcoded; I might do video editing here or test out VMs.  This data doesn't get backed up, though I may run snapshots with a short retention policy.  Losing this data would mean losing no more than a day's worth of work.  I might also set sync=disabled to get maximum performance here.  And typically I don't replicate ZFS snapshots of it to a 2nd server.  In many cases it makes sense to pull this dataset out from under the top-level ds[n] dataset and have it stand by itself.

Backups – This dataset contains backups of workstations, servers, and cloud services.  I may back up the backups to CrashPlan or some online service, and usually that is sufficient since I already have multiple copies elsewhere.

Archive – This is data I no longer use regularly but don't want to lose: old school papers that I'll probably never need again, backup images of old computers, etc.  I set this dataset to compression=gzip-9, back it up to CrashPlan plus a local backup, and try to have at least 3 copies.

Now, you don't have to name the datasets Kirk, Spock, and Redshirt… but the idea is to identify importance so that you're only managing a few datasets when configuring ZFS snapshots, replication, etc.  If you have unlimited cheap storage and bandwidth it may not be worth it to do this, but it's nice to have the option to prioritize.
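The per-tier tuning mentioned above (sync disabled on the scratch tier, heavy compression on the archive) is just a property set on each dataset; for example:

```shell
# Redshirt is expendable: trade sync guarantees for speed
zfs set sync=disabled tank1/ds1/redshirt

# Archive is rarely read: trade CPU for space
zfs set compression=gzip-9 tank1/ds1/archive

# Verify what each dataset inherited vs. what's set locally
zfs get -r compression,sync tank1/ds1
```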

Now… once I’ve established that hierarchy I start defining my datasets that actually store data which may look something like this:

stor1.b3n.org
| - tank1/ds1/kirk/photos
| - tank1/ds1/kirk/git
| - tank1/ds1/kirk/documents
| - tank1/ds1/kirk/vmware-kirk-nfs
| - tank1/ds1/spock/media
| - tank1/ds1/spock/vmware-spock-nfs
| - tank1/ds1/spock/vmware-iso
| - tank1/ds1/redshirt/raw-rips
| - tank1/ds1/redshirt/tmp
| - tank1/ds1/archive
| - tank1/ds1/archive/2000
| - tank1/ds1/archive/2001
| - tank1/ds1/archive/2002
| - tank1/ds1/backups
| - tank1/ds1/backups/incoming-rsync-backups
| - tank1/ds1/backups/windows
| - tank1/ds1/backups/windows-file-history

 

With this ZFS hierarchy I can manage everything at the top level of ds1 and just setup the same automatic snapshot, replication, and backups for everything.  Or if I need to be more precise I have the ability to handle Kirk, Spock, and Redshirt differently.

 

Intranet SSL Certificates Using Let’s Encrypt | DNS-01

Let's Encrypt is a great service offering free SSL certificates.  The way it normally works is via the http-01 challenge: to respond to the Let's Encrypt challenge, the client (typically Certbot) puts an answer in the webroot.  Let's Encrypt makes an HTTP request, and if it finds the correct response to the challenge, it issues the cert.

Certbot

Certbot is great for public web-servers.

Generating Intranet SSL Certs Using DNS-01 Challenge

But, what if you’re generating an SSL certificate for a mail server, or mumble server, or anything but a webserver?  You don’t want to spin up a web-server just for certificate verification.

Or what if you're trying to generate an SSL certificate for an intranet server?  Many homelabs, organizations, and businesses need publicly signed SSL certs on internal servers.  You may not even want external A records for these services, much less a web server for validation.

ACME DNS Challenge

Fortunately, Let’s Encrypt introduced the DNS-01 challenge in January of 2016.  Now you can respond to a challenge by creating a TXT record in DNS.
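The provisioned record is an ordinary TXT record on the _acme-challenge label; it looks something like this (domain and token are illustrative):

```
_acme-challenge.example.com.  300  IN  TXT  "XtgH8qlEkd...token-from-lets-encrypt"
```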

ACME Let's Encrypt DNS-01 Challenge Diagram

 

Lukas Schauer wrote dehydrated (formerly letsencrypt.sh), which can be used to automate the process.  If you need to generate SSL certs for Windows, I've added the ability to output PFX / PKCS #12 in my fork.

Here’s a quick guide on Ubuntu 16.04, but it should work on any Linux distribution (or even FreeBSD).

Install dehydrated / letsencrypt.sh
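A typical install is just a couple of git clones (the /opt install location is my choice; adjust to taste):

```shell
# dehydrated is a pure-shell ACME client -- nothing to compile
git clone https://github.com/lukas2511/dehydrated.git /opt/dehydrated

# The CloudFlare DNS hook used in the example below
git clone https://github.com/kappataumu/letsencrypt-cloudflare-hook /opt/letsencrypt-cloudflare-hook
pip3 install -r /opt/letsencrypt-cloudflare-hook/requirements.txt

mkdir -p /etc/dehydrated
```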

Hook for DNS-01 Challenge

At this point, you need to install a hook for your DNS provider.  If your DNS provider doesn’t have a hook available you can write one against their API, or switch to a provider that has one.

If you need to pick a new provider with a proper API, my favorite DNS providers are CloudFlare and Amazon Route 53.  CloudFlare is what I use for b3n.org; it gets consistently low-latency lookup times according to SolveDNS, and it's free (I only use CloudFlare for DNS; I don't use their proxy caching service, which can be annoying for visitors from some regions).  Route 53 is one of the most advanced DNS providers.  It's not free, but it usually ends up cheaper than most other options and is extremely robust; the access control, APIs, and advanced routing work great.  I'm sure there are other great DNS providers, but I haven't tried them.

Here’s how to set up a CloudFlare hook as an example:

In letsencrypt-cloudflare-hook/hook.py change the top line to point at python3:
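If the hook's first line reads #!/usr/bin/env python, a one-line sed fixes it.  Demonstrated below on a scratch copy of hook.py; run the same sed against the real file in your letsencrypt-cloudflare-hook checkout:

```shell
# Scratch copy standing in for letsencrypt-cloudflare-hook/hook.py
printf '#!/usr/bin/env python\n' > hook.py

# Rewrite the shebang on line 1 to point at python3
sed -i '1s/python$/python3/' hook.py

head -n1 hook.py
```

The hook also expects your CloudFlare credentials in environment variables (CF_EMAIL and CF_KEY in the version I used; check the hook's README for the exact names).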

Config File

Edit the “/etc/dehydrated/config” file… add or uncomment the following lines:
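Something along these lines (CHALLENGETYPE, HOOK, and CONTACT_EMAIL are standard dehydrated config variables; the hook path is wherever you cloned the hook, and the email is a placeholder):

```
CHALLENGETYPE="dns-01"
HOOK="/etc/dehydrated/hooks/letsencrypt-cloudflare-hook/hook.py"
CONTACT_EMAIL="you@example.com"
```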

domains.txt

Create an /etc/dehydrated/domains.txt file, something like this:
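A hypothetical example with placeholder hostnames (one certificate per line; names on the same line share a certificate):

```
mail.example.com
mumble.example.com
git.example.com
wiki.example.com
example.com www.example.com cloud.example.com
```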

The first four lines will each generate their own certificate; the last line creates a multi-domain or SAN (Subject Alternative Name) cert with multiple entries in a single SSL certificate.

Finally, run
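(-c is dehydrated's "cron" mode: sign or renew whatever domains.txt calls for.)

```shell
dehydrated -c
```

Depending on your dehydrated version you may first need to register your account key with "dehydrated --register --accept-terms".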

The first time you run it, it should get the challenge from Let's Encrypt and provision a DNS TXT record with the response.  Once validated, the certs will be placed under the certs directory, and from there you can distribute them to the appropriate applications.  The certificates will be valid for 90 days.

For subsequent runs, dehydrated will check whether the certificates have fewer than 30 days left and attempt to renew them.

Automate

It would be wise to run dehydrated -c from cron once or twice a day and let it renew certs as needed.
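For example, a cron entry like this (the times are arbitrary; pick your own so everyone isn't hitting Let's Encrypt at the same minute):

```
# /etc/cron.d/dehydrated -- attempt cert renewal twice a day
17 3,15 * * *   root   /usr/local/bin/dehydrated -c >> /var/log/dehydrated.log 2>&1
```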

To deploy the certs to the respective servers I suggest using an IT Automation tool like Ansible.  I have a dedicated VM that runs Ansible.  You can configure an ansible playbook to run from a daily cron job to copy updated certificates to remote servers and automatically reload services if the certificates have been updated.  Here’s an example of an Ansible Playbook which could be called daily to copy certs to all web-servers and reload nginx if the certs were updated or renewed:

Create a file web-servers-nginx.yml
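A sketch of what that playbook might look like (the host group, destination paths, and file names are my assumptions; adjust for your layout):

```yaml
---
- hosts: web-servers-nginx
  become: true
  tasks:
    - name: Copy renewed certificates to the web server
      copy:
        src: "/etc/dehydrated/certs/b3n.org/{{ item }}"
        dest: /etc/nginx/ssl/
        mode: "0600"
      with_items:
        - fullchain.pem
        - privkey.pem
      notify: reload nginx

  handlers:
    - name: reload nginx
      service:
        name: nginx
        state: reloaded
```

The copy module only reports "changed" when file contents differ, so the reload nginx handler fires only when a cert was actually renewed.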

Add the below to your Ansible inventory file (mine is named 'production').  "b3n.org" matches the primary name of the certificate, found in /etc/dehydrated/certs/
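For example (the host names are placeholders):

```
[web-servers-nginx]
web1.b3n.org
web2.b3n.org
```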

Execute the playbook with:
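Assuming the inventory file is named 'production' and the playbook sits in the current directory:

```shell
ansible-playbook -i production web-servers-nginx.yml
```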

(Note that the user that runs this needs permission to read the certificates dehydrated generated.  The easiest way to do that is to run dehydrated under the same user account as Ansible.  Ansible will also need public/private key authentication set up so it can connect to the remote server without a password.)

Then obviously you would have something like this in nginx:
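A minimal sketch of the relevant server block (the file paths under /etc/nginx/ssl are my assumptions):

```nginx
server {
    listen 443 ssl http2;
    server_name b3n.org;

    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;
    ssl_dhparam         /etc/nginx/ssl/dhparam.pem;

    # ...the rest of your site configuration...
}
```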

(for the ssl_dhparam to work you’ll need to run the below command once on the web server):
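Generating the Diffie-Hellman parameters (2048 bits is a sensible minimum; this can take a few minutes):

```shell
openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048
```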

And after that nginx needs to be restarted.

If this is a public server I strongly suggest testing with SSL Labs to make sure chaining and security are set up correctly.

PSD is not my favourite file format

This programmer does not like the PSD File Format:

/*

At this point, I’d like to take a moment to speak to you about the Adobe PSD format.

PSD is not a good format. PSD is not even a bad format. Calling it such would be an insult to other bad formats, such as PCX or JPEG. No, PSD is an abysmal format. Having worked on this code for several weeks now, my hate for PSD has grown to a raging fire that burns with the fierce passion of a million suns.

If there are two different ways of doing something, PSD will do both, in different places. It will then make up three more ways no sane human would think of, and do those too. PSD makes inconsistency an art form. Why, for instance, did it suddenly decide that *these* particular chunks should be aligned to four bytes, and that this alignement should *not* be included in the size? Other chunks in other places are either unaligned, or aligned with the alignment included in the size. Here, though, it is not included. Either one of these three behaviours would be fine. A sane format would pick one. PSD, of course, uses all three, and more.

Trying to get data out of a PSD file is like trying to find something in the attic of your eccentric old uncle who died in a freak freshwater shark attack on his 58th birthday. That last detail may not be important for the purposes of the simile, but at this point I am spending a lot of time imagining amusing fates for the people responsible for this Rube Goldberg of a file format.

Earlier, I tried to get a hold of the latest specs for the PSD file format. To do this, I had to apply to them for permission to apply to them to have them consider sending me this sacred tome. This would have involved faxing them a copy of some document or other, probably signed in blood. I can only imagine that they make this process so difficult because they are intensely ashamed of having created this abomination. I was naturally not gullible enough to go through with this procedure, but if I had done so, I would have printed out every single page of the spec, and set them all on fire.

Were it within my power, I would gather every single copy of those specs, and launch them on a spaceship directly into the sun.

PSD is not my favourite file format.

*/

— code comment from https://github.com/gco/xee/blob/master/XeePhotoshopLoader.m#L108

RHEL/CentOS, Debian, Fedora, Ubuntu & FreeBSD Comparison

Over the years I’ve used a number of Linux distributions (and FreeBSD), these are my top 5 and how I rank them:

(Score chart: my rankings of CentOS, Debian, Fedora, Ubuntu, and FreeBSD)

Desktop

I'm not a big fan of Ubuntu's Unity, so Ubuntu-Gnome, Kubuntu, Debian, and Fedora are my top distros for desktop choices.  If you want the latest Gnome features, Fedora gets them first.  For KDE I think Kubuntu does a great job with reasonable default settings (like having the Start button open the KDE menu; why do KDE programmers think that shouldn't be the default?) where I have to do quite a bit more tweaking on other distros.  Ubuntu-Gnome also provides an optional PPA which tracks the latest version of Gnome, bringing it almost as up to date as Fedora.

Ugly fonts – for some reason, on FreeBSD, Fedora, CentOS, and Debian the fonts look ugly.  I don't know if they can't detect my video card properly or if there's something wrong with the fonts themselves, but on every system I've tried, the fonts look much better on Ubuntu-based distributions.

If you're interested in FreeBSD for a desktop, PC-BSD is worth a look, but in my experience Linux runs a lot better on the desktop than FreeBSD.

Server

FreeBSD is historically my favorite server OS, but it tends to lag behind on some things and I have trouble getting some software working on it, so for the most part I use Ubuntu for servers since it seems to have the best out-of-the-box setup.  90% of the time I'm deploying in virtual environments, and open-vm-tools is now enabled by default in 16.04.

With perhaps the exception of Fedora all the distros make decent servers.

Packages

All the package management systems are pretty decent; I prefer apt just because I never have any problems with it and it's faster.  Debian and Ubuntu have the most packages available, and Ubuntu has PPA support, which makes it easy to manage 3rd party repositories.

One thing I don't like about Debian: while it does have a lot of packages, a lot of them are out of date.  A few months ago I tried to install Redmine from the repository, and even though the repository listed it at version 3.0, the actual version that was installed was 2.6.  Someone needs to do some clean up.

CentOS hardly offers any packages, so you have to enable EPEL just to make it functional, and even then it's limited.  My main issue with CentOS is that if you want to do anything other than a very basic install, you're dealing with not finding packages (like rdiff-backup; why isn't that in the repos?) or needing packages from conflicting repositories, sometimes having to download them manually.  It's a nightmare.

One other thing I like about apt is the Debian and Ubuntu philosophy of setting up a sensible default configuration and enabling the service.  After installing packages on Fedora, CentOS, or FreeBSD I'm often left manually creating configuration files.  CentOS is the most annoying.  Maybe it's just me, but if I install a service I want SELinux to not block me from running that service… and when I make a change in SELinux it should take effect immediately instead of arbitrarily taking a few minutes to come to its senses.

Free Software

Richard Stallman
By – Thesupermat – CC BY-SA 3.0

While Richard Stallman wouldn’t endorse any of the distributions I’m comparing, if he had to choose from these Debian would likely be his choice.

All the OSes include or provide ways of obtaining non-free software, but Debian is at the forefront of making it a goal to move to Free Software.  Fortunately I think they do this in a smart way, still including ways to install non-free drivers so you can at least make a system usable.  I think Debian does the best job of making it clear what's free and what isn't, and allowing the user to make the choice.

Evilness

I used to be a big RedHat fan back in the RH 6 and 7 days.  Then one day my loyalty was rewarded when, out of the blue, RedHat decided to start charging for updates for their "Free" OS.  RedHat's new free alternative was Fedora, which was so unstable it was unusable.  I was suddenly going to need to buy lots of licenses… this left me scrambling for a solution, and I eventually switched over to Ubuntu.  Since then I'm wary about anything related to RedHat.  CentOS is now the free version of RedHat, while Fedora is where all the new features land, and it's not so unstable these days.  And, yes, RedHat, I'm still bitter.

Ubuntu introduced Amazon ad-supported searches and, even worse, by default sent search keywords from the Unity lens to Canonical.  I'd consider this an invasion of privacy, and it was the first time since switching from RedHat that I started looking for Ubuntu alternatives.  Fortunately the feature was easy to disable, and Ubuntu has since disabled it by default.

Out of Box Hardware Support

Ubuntu has the best out-of-box hardware support.  Dell's XPS 13 even comes in a Developer Edition that ships with Ubuntu 14.04 LTS.  It works out of the box on just about every laptop I've tried it on.  It was also the first distro to support VMware's VMXNET3 and SCSI Paravirtual drivers in the default install, and I believe it's now the only distro that has open-vm-tools pre-installed.  All this cuts down on the amount of time and effort it takes to deploy.

I wish Debian did better here.  Debian excludes some non-free drivers, which is good for the FSF philosophy, but it also means I had no WiFi on a fresh Debian install.  Apparently you're supposed to download the drivers separately.  This is particularly bad when your laptop doesn't have an Ethernet port, leaving you no way to download the WiFi drivers.  I suppose I could have re-installed Ubuntu, downloaded the Debian WiFi drivers, saved them off to a USB drive, re-installed Debian, and side-loaded the WiFi drivers… but what a hassle.

Automatic Security Updates

Ubuntu and Debian offer the option of enabling automatic security updates at install time.  The other systems have ways of enabling automatic updates, but there isn't an option to turn them on at install time.  My opinion is all operating systems should automatically install security updates by default.

Init System

FreeBSD avoids the nonsense, for the win here.  I do not like systemd.  I'd rather spend my time not fighting systemd.  Maybe I'll figure it out someday.  Why didn't we all switch to Upstart?  I liked Upstart.

Cutting Edge vs Stability

For cutting edge, Fedora or Ubuntu's standard (every 6 months) releases keep you up to date, which is great if you want to stay current with a desktop environment.

FreeBSD is the most stable OS I've ever used.  If I was told I was building a solution that would still be around in 30 years, I'd probably choose FreeBSD.  Changes to the base system are rare and well thought out.  If you wrote a program or script on FreeBSD 10 years ago it would probably still work today on the latest version.  In the Linux world, Debian stable, Ubuntu's LTS (after the first point release), and CentOS (also after the first point release) are great options.

Ubuntu's LTS releases provide the best of both worlds: a stable environment that still has relevant development tools and up to date server packages.  If you need something newer you have PPAs, but most of the time the standard packages are new enough.  Right now, for example, Ubuntu 16.04 LTS is the only distribution that ships with versions of OpenSSL and NGINX supporting an http/2 implementation that works with Google Chrome.  To top it off, both the OpenSSL and NGINX packages fall under Ubuntu's 5-year support.  You don't have to add 3rd party repos or solve dependency issues.  Just one command: "apt install nginx" and you're good for 5 years.

Ubuntu 16.04 LTS is the only distro that supports http/2

(above screenshot from: https://www.nginx.com/blog/supporting-http2-google-chrome-users/)

Upgrading

FreeBSD is the best OS I've ever used at upgrading to a newer release.  You could probably start at FreeBSD 4 and upgrade all the way to 11 with no issues.  Debian and Ubuntu also have pretty good upgrade support… in all cases I test upgrading before doing it on a production system.

Long Term Support (LTS)

CentOS has the longest support offering at 10 years!  Combined with the EPEL repository (which has the same goal), I'd say RedHat/CentOS is the best distribution for a "deploy and forget" application that gets thrown in /opt, if you don't want to worry about changes or upgrades breaking the app for the next 10 years.  This is probably why enterprise applications like this distribution.

Debian is just starting a 5-year LTS program through a volunteer effort.  I’m looking forward to seeing how this goes.  I’m glad to see this change as lack of LTS was one of the main reasons I decided on Ubuntu over Debian.

Ubuntu offers a 5-year LTS.  Ubuntu's LTS not only covers the base system; the Ubuntu team also supports many packages (check with "apt-cache show packagename"; if you see 5y you're good).

Predictable Release Cadence

(Chart: Ubuntu desktop release and support timeline)

Ubuntu has the most predictable release cadence.  They release every 6 months with a 5-year LTS release every 2-years.  Having been a sysadmin and a developer I like knowing exactly how long systems are supported.  I plan out development, deployments, and upgrades years in advance based on Ubuntu’s release cadence.

My Thoughts

When I was younger it was fun to build my entire system from scratch using Gentoo and compile FreeBSD packages from ports (I also compiled the kernel).  Linux wasn’t as easy back then.  I remember just trying to get my scroll wheel working in RedHat 7.

Screenshot of how to get the scroll wheel working
I found this old note.  I finally got the scroll wheel working in RedHat 7.1!

Linux distributions are tools.  At some point you have to stop trying to build the perfect hammer and start using it to put nails in things.

Nowadays I don't have time to compile from scratch, solve RPM dependency issues, or find out why packages aren't the right version.  In the year 2000 I could understand having to fix ugly font issues and mess around with WiFi drivers.  But we should be beyond that now.  That was the past.

Calvin and Hobbes Comic Strip
By Bill Waterson, 1995-08-27, Fair Use – 17 U.S.C. § 107

Onward

Ben wearing RedHat
I used to wear the official RedHat Fedora

Fonts, automatic updates, scroll wheel, touchpad, bluetooth, wifi, printers, and hardware in general should be working out of the box by now–if it isn’t I’m not going to put a lot of effort into getting the distro working.  It’s time to move forward and focus work on things beyond the distribution–while I love all sorts of distros, I don’t want to be like Calvin fighting the computer the whole way.  I actually do work on them and need something stable and up to date out of the box with sane default settings.  Having predictable release cycles also helps.  If I could combine the philosophy of Debian with the few extras that Ubuntu provides I’d have the perfect distro.  But for the time being Ubuntu is close enough to what I want–I’ve been using it probably since 5.04 (Hoary Hedgehog) and standardized on it when they started doing LTS releases.  That doesn’t mean it’s for everyone, not everyone likes it, some people prefer the more vanilla feel from Debian, others might want something easier like Mint.  If you prefer CentOS, Fedora, Arch, etc. and they work well for you, use them.

Actually I don’t use Ubuntu for everything.  For my production environment I’ve standardized on Windows 10 for desktops, ESXi for virtualization, FreeNAS for storage, pfSense for firewalls, and Ubuntu for servers.  Honestly, none of the above systems were my first choice… but I’m at where I am because my first choices let me down.  It will likely evolve in the future, but for the time being that’s my setup and it works pretty well.

The great thing about modern day Linux distributions (and FreeBSD) is they’re all pretty good.  I haven’t had to hack an Xorg file to get the scroll wheel working in a long time.

Journey to Facebook

Week 1:

Number of Friends: 6.  (That’s probably enough)
Number of Likes: 0.
Species: Kind of like the Borg.

Defender (Star Trek USS Enterprise) of Freedom vs Facebook (Borg ship)

I see my home, b3n.org, getting further into the distance.  My blog is in one of the most beautiful locations nestled in the mountains between the Tech and Conservative Blogs, definitely more on the Tech side and well away from the Bay of Flame.  I can see the tech blogging area I’m most familiar with getting smaller and smaller.  A few minutes later I see Lifehacker passing by and I’m flying over the Sea of Opinions.   And then it hit me.   I’ve left the Blogosphere.

After a long flight I stop for a layover at Reddit, then I was back in the air and landed just north of Data Mines, Facebook.  And I joined Facebook.  The reason for my travel?  I’m looking for information locked away in a closed Facebook group.

That was last week.

Map of Social Networks showing my travel from the Blogosphere to Facebook

Most of my friends left the Blogosphere for MySpace, and then moved further north to Facebook years ago (and I’ve re-united with six of them so far).   My impression of Facebook so far: It’s like a bunch of mindless drones all talking at once–well, let me start over.  It’s like a bunch of ads all talking at once and mindless drones trying to shout above them.

Facebook is a land I’ve always avoided–It’s basically what AOL or Geocities should have become–a step back from freedom and individuality.

It’s Not Social Networking That’s the Problem

When you join Facebook, you have to abide by their rules and subject yourself to their censorship.  If you disagree with Facebook, you either comply or you’re out.  There’s no alternative.

Websites, blogging, and email, on the other hand, are based on what the internet should be: open protocols.  If I run my own email server I can send an email to anybody else no matter what provider they use!  This blog is run on a server I control.  Currently it's rented from DigitalOcean because I no longer have the bandwidth at my house to run it, but in the past I've run it from my dorm room, my bedroom closet, from right under my desk, and from Jeff's house.  And the thing is, anybody can set up their own server, but they don't have to.  They can use a provider like Blogger or Gmail if they prefer, and if you can get better service somewhere else you can migrate to a different provider at will and not lose anything.

But Facebook isn’t open and federated.  Facebook users can only talk to other Facebook users and as long as you want to talk to your Facebook friends the only way is to be on Facebook yourself.  The content is all stored on their servers so you are at their mercy for control and privacy of your content.  Or is it your content?  On Facebook, you are not your own individual, or your own community.  You are part of the Borg.

I'm not against social networking, but Facebook is designed in a very centralized manner, which isn't consistent with how internet services should be: distributed and federated.  Some social networks I might be more interested in are Friendica and Diaspora, but I don't think they have much traction yet.

One More Thing

One particularly concerning thing about Facebook is that you don't pay for it, which means you're not Facebook's customer.  No, indeed.  You, my liked Friend, are the product being sold.

Automatic Ripping Machine | Headless | Blu-Ray/DVD/CD

The A.R.M. (Automatic Ripping Machine) detects the insertion of an optical disc, identifies the type of media and autonomously performs the appropriate action:

  • DVD / Blu-ray -> Rip with MakeMKV and Transcode with Handbrake
  • Audio CD -> Rip and Encode to FLAC and Tag the files if possible.
  • Data Disc -> Make an ISO backup

It runs on Linux, it's completely headless, and it's fully automatic, requiring no interaction or manual input to complete its tasks (other than inserting the disc).  Once it completes a rip it ejects the disc for you and you can pop in another one.

Flowchart of Ripping Process

Automatic Ripping Machine Features

  • Determines if disc is Video, Data, or Audio
    • If video get the Title
    • Determine if it’s a TV or Movie
    • Rip using MakeMKV
    • Send rip to Handbrake and eject disc asynchronously
    • When done transcoding, tell Emby to rescan its library, or send notifications using Pushbullet or IFTTT
  • If audio CD – rip to mp3 or flac using abcde and eject
  • If data disc make an ISO backup
  • Can rip from multiple optical drives
  • Completely headless design–no graphical interface.  The only interaction is inserting the disc and it takes it from there, ejecting it when done.

Free Software

I uploaded the scripts to GitHub under the MIT license.  As of version 1.1.0 (which pulls in muckngrind4’s changes) the ARM can rip from multiple drives simultaneously, and send push notifications to your phone when it’s complete using Pushbullet or IFTTT.

Instructions to get it installed on Ubuntu 16.04 LTS follow.

Automatic Ripping Machine (Supermicro MicroServer) under my desk

ARM Equipment & Hardware

Blu-Ray Hardware and VMware Settings

A WARNING ABOUT SOME BLU-RAY DRIVES

Most Blu-Ray drives have an anti-feature called "riplock" that purposefully cripples the read speed on DVDs and Blu-Rays to around 2X to 4X instead of the advertised drive speed (I believe this to be false advertising).  If you have a normal 5 1/4″ drive bay I suggest the LG WH16NS40 16X Blu-Ray drive, since it is known to not be speed limited.  LG seems to be one of the better drive manufacturers in my experience.

You will need a server.  You can use Ubuntu on bare metal or run it under VMware.  I am using my Datacenter in a Box Build and run the ARM on Ubuntu Linux 16.04 LTS under VMware.  At first I tried using an external USB Blu-Ray drive, but the VM didn't seem to be able to get direct access to it.  Unfortunately my server case only has a slim-DVD slot, so I purchased the Panasonic UJ160 Blu-Ray drive because it was one of the cheaper Blu-Ray drives.

I wasn't sure if VMware would recognize the Blu-Ray functions on the drive, but it does!  Once it's physically installed, edit the VM properties so it uses the host device as the CD/DVD drive, and then select the optical drive.

VMware Machine Properties, select CD/DVD drive, set Device Type to Host Device and select the optical drive.

Regions…

I kept getting this error while trying to rip a movie:

MSG:3031,0,1,”Drive BD-ROM NECVMWar VMware IDE CDR10 1.00 has RPC protection that can not be bypassed. Change drive region or update drive firmware from http://tdb.rpc1.org. Errors likely to follow.”,”Drive %1 has RPC protection that can not be bypassed. Change drive region or update drive firmware from http://tdb.rpc1.org. Errors likely to follow.”,”BD-ROM NECVMWar VMware IDE CDR10 1.00″

Defective By Design Logo

After doing a little research I found out DVD and Blu-Ray players have region codes that only allow them to play movies from the region they were intended for.  By default the Panasonic drive shipped with a region code set to 0.

World Map with DVD Region Codes
CC BY-SA 3.0 from https://en.wikipedia.org/wiki/DVD_region_code#/media/File:DVD-Regions_with_key-2.svg

Notice that North America is not 0.

Looking at http://tdb.rpc1.org/ it looks like it is possible to flash some drives so that they can play videos from all regions.  Fortunately, before I got too far down the firmware-flashing path, I discovered you can simply change the region code!  Since I'm only playing North American movies I set the region code to 1 using:
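The regionset utility (packaged as "regionset" on Ubuntu) handles this; it prompts interactively for the new region code (the /dev/sr0 device name is typical for the first optical drive):

```shell
sudo apt install regionset   # small utility for querying/setting the drive region
sudo regionset /dev/sr0      # answer the prompts; I chose region 1
```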

You can only change this setting 4 or 5 times before it gets stuck, so if you're apt to watch movies from multiple regions you'll want to look at getting a drive whose firmware you can flash.

Install MakeMKV, Handbrake, ABCDE and At

Note, the installation instructions here are a bit old, please follow the instructions in the README.md file.
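For reference, a rough sketch of what the install looked like on 16.04 (the MakeMKV PPA name is from memory; the README has the current method):

```shell
# MakeMKV isn't in the Ubuntu repos; heyarje's PPA packages the beta
sudo add-apt-repository ppa:heyarje/makemkv-beta
sudo apt update

# makemkv-bin/makemkv-oss: the ripper; handbrake-cli: transcoding;
# abcde: audio CD ripping; at: runs jobs asynchronously
sudo apt install makemkv-bin makemkv-oss handbrake-cli abcde at
```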

Mount Samba/CIFS Media Share

If you're ripping to the local machine, skip this section; if you're ripping to a NAS like I am, do something like this…

In FreeNAS I created a media folder on my data share at \\zfs\data\media

Edit /etc/fstab
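Add a line along these lines (the server, share, and credentials file are from my setup; adjust for yours):

```
//zfs/data/media  /mnt/media  cifs  credentials=/root/.smbcredentials,uid=root,iocharset=utf8  0  0
```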

Once that’s in the file mount the folder and create an ARM and an ARM/raw folder.
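For example:

```shell
sudo mkdir -p /mnt/media
sudo mount /mnt/media           # picks up the options from /etc/fstab
sudo mkdir -p /mnt/media/ARM/raw
```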

Install ARM Scripts

Create a folder to install the Automatic Ripping Scripts.  I suggest putting them in /opt/arm.
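Something like the following, with the clone URL taken from the ARM GitHub project page (shown as a placeholder here):

```shell
sudo mkdir -p /opt/arm
sudo git clone <ARM-repository-URL> /opt/arm   # URL from the GitHub project linked above
```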

You should look over the config file to make sure it suits your needs; if you want to add Android or iOS push notifications, that's where to do it.

Figure out how to restart udev, or reboot the VM (make sure your media folder gets mounted on reboot).  You should be set.

Automatic Ripping Machine Usage

  1. Insert Disc.
  2. Wait until the A.R.M. ejects the disc.
  3. Repeat

Test out a movie, an audio CD, and a data CD and make sure everything works as expected.  Check the output logs at /opt/arm/logs, and also syslog, if you run into any issues.  If you run into trouble feel free to post an issue here.

Install MakeMKV License

MakeMKV will run on a trial basis for 30 days.  Once it expires you'll need to purchase a key, or while it's in beta you can get a free key…  I would love to build this solution on 100% free open source software, but MakeMKV saves so much time and is more reliable than anything else I've tried.  I will most likely purchase a license when it's out of beta.

Grab the latest license key from: http://www.makemkv.com/forum2/viewtopic.php?f=5&t=1053

Edit the /root/.MakeMKV/settings.conf  and add a line:
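The line looks like this (paste the current beta key from the forum post in place of the placeholder):

```
app_Key = "PASTE-CURRENT-BETA-KEY-HERE"
```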

Get an OMDB API Key

Next you’ll want to get an OMDB API key and put it in your ARM config file.  A free key will let you do 1,000 API queries a day which should be more than enough: http://www.omdbapi.com/apikey.aspx

How It Works

When UDEV/systemd detects a disc insert, as defined by /lib/udev/rules.d/51-automedia.rules, it runs the wrapper, which in turn runs /opt/arm/identify.sh, which identifies the type of media inserted and then calls the appropriate scripts.  (If you ever need it, this is a great command to get info on a disc):
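I believe the command in question was udevadm's environment query, which dumps everything udev knows about the disc (media type, audio track count, filesystem label, and so on):

```shell
udevadm info -q env -n /dev/sr0
```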

Video Discs (Blu-Ray/DVD)

For video discs the first step is that ARM tries to obtain the disc title.  If it's a Blu-ray the title can often be extracted from the disc; if it's a DVD, ARM calculates a hash of the DVD and then queries the Windows Media Metaservices (what Windows Media Player queries when a disc is inserted) to get the title.

Once the title is obtained it's sent to the OMDB API, which tells us whether the video is a Movie or a TV Show.  If it's a Movie, ARM can usually determine the main feature title and rip that, optionally ripping all the other titles into an Extras folder.  Once done, ARM can automatically tell Emby to rescan the library.  If the video is a TV Show, ARM will rip all the titles and you'll need to use FileBot to rename the episodes.

All tracks get ripped using MakeMKV and placed in the /mnt/media/ARM/raw folder.  As soon as ripping is complete the disc ejects and transcoding starts, with HandBrakeCLI transcoding every track into /mnt/media/ARM/timestamp_discname.  You don't have to wait for transcoding to complete; you can immediately insert the next disc to get it started.

FileBot Screenshot Selecting files for rename

Most of the time everything just works, but in some cases, if ARM can't determine the title, some video file renaming needs to be done by hand.  ARM will name the folder using the disc title, but this isn't always accurate.  For a season of TV shows I'll name them using FileBot and then move them to one of the Movie or TV folders that my Emby Server watches.  Fortunately this manual part of the process can be done at any time; it won't hold up ripping more media.  The Emby Server then downloads artwork and metadata for the videos.

Screenshot of Emby's Movies Page

Audio CDs

If an audio disc is detected it is ripped to FLAC files using the abcde ripper.  I opted for FLAC because it's lossless, well supported, and non-proprietary; if you'd prefer a different format, abcde can be configured to rip to MP3, AAC, OGG, whatever you want.  I have it dropping the audio files in the same location as the video files, but I could probably just move them directly to the music folder where Emby is looking.

Screenshot of Emby showing Beethoven's Last Night

Data Disks (Software, Pictures, etc.)

If ARM determines there is no video on the disc, then a simple script is run to make a backup ISO image of the disc.

Screenshot of TurboTax ISO file

Morality of Ripping

Two Evils: Piracy vs. DRM

I am for neither Piracy nor DRM.  Where I stand morally: I make sure we own every CD, DVD, and Blu-Ray that we rip using the ARM.

I don't advocate piracy.  It is immoral for people to make copies of movies and audio they don't own.  On the other hand, there is a difference between piracy and copying for fair use, which publishers often wrongly lump together.

What really frustrates me is DRM.  It's a waste of time.  I shouldn't have to mess with region codes or use software like MakeMKV to decrypt a movie that I bought!  And unfortunately the copy-protection methods in place do nothing to stop piracy and everything to hinder legitimate customers.

For me it doesn't really even matter, because I don't really like watching movies anyway; there's not much more painful than sitting for an hour to get through a movie.  I just like making automatic ripping machines.

Well, hope you enjoy the ARM.

War Games DVD in Tray