Living up north, in the winter we have long hours of darkness because of the earth’s tilt away from the sun. This means getting up before sunrise, and it’s a bit annoying to be jolted awake at 6:00am by an alarm when it’s pitch black outside. Or, if I wake up before the alarm goes off, it’s dark and I can’t tell whether I should go back to sleep or get ready to get up without consulting a clock.
There are quite a few IoT (Internet of Things) WiFi light bulbs on the market. The reason I like these is that they don’t rely on the vendor’s software to control them, and they don’t need to connect out to some cloud service on the internet, which would increase one’s attack surface.
Connect to WiFi
When first powered on, the bulbs create their own WiFi hotspot; a phone app connects to it and programs the bulb to join your WiFi network. As with all IoT devices I suggest having a dedicated IoT WiFi SSID and VLAN to keep them off the main network. Each bulb should get an IP address from DHCP; I then give it a static DHCP assignment in pfSense.
Automate with Home Assistant
Next, install Home Assistant (a free, open-source home automation platform) on a server. I spun up an Ubuntu 16.04 VM.
The MagicLight / Flux bulbs aren’t smart enough to gradually turn on or off, but I used multiple automation tasks to simulate a gradual fade-on. The example below will gradually make the light brighter. It starts very dim at 5:15am and stays dim for a while. This won’t wake me up if I’m still asleep. Then around 5:40am it starts to get brighter at a faster rate until it reaches full brightness at 6:10am.
This wakes me up “naturally” every winter morning. I’m usually awake well before 6 and feel much better than if I had used an alarm.
The light stays on until 8am then it turns off and waits for the next day.
The nice thing about waking up to a gradual light is that if I’m already stirring I’ll get up sometime after 5:15am, but if I’m in a deep sleep it won’t wake me suddenly, so I get a few extra minutes of sleep until around 6am.
Here’s the part I added to configuration.yaml (the flux_led light platform), along with the aliases of my automation entries, one per brightness step:
- platform: flux_led
alias: Office Light 15 at 5:30am
alias: Office Light 65 at 5:40am
alias: Office Light 120 at 5:50am
alias: Office Light 145 at 6:00am
alias: Office Light 165 at 6:05am
alias: Office Light 200 at 6:08am
alias: Office Light 255 at 6:10am
alias: Turn off at 8am
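Only the aliases of my automations are listed above; for reference, here’s a sketch of what one full automation entry looks like. The entity name light.office and the exact trigger syntax are assumptions (both vary with your Home Assistant version and how your bulb was named):

```yaml
# One automation entry per brightness step. light.office is a placeholder;
# use the entity name Home Assistant assigned to your bulb.
automation:
  - alias: Office Light 15 at 5:30am
    trigger:
      - platform: time
        at: "05:30:00"
    action:
      - service: light.turn_on
        data:
          entity_id: light.office
          brightness: 15   # scale is 0-255, so 15 is very dim
```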
There are also some other things one could do, Home Assistant can also monitor the weather and sunrise times. I could probably spend a little more effort and make the script only activate the bulb if the sun hasn’t risen yet, or I could have the bulb wake me up earlier if there’s a lot of snow so I have more time to shovel. Maybe it could be blue when it’s raining so I know to grab my hat.
My home automation script could definitely use some improvements, but even in its present state it’s a big improvement over waking up to an alarm.
When all of your network devices regularly lose access to the internet at the same time throughout the day, there is not much to blame other than a bad network cable to your Wireless Access Point (AP), or the access point itself. It wasn’t the cable. My old Cisco-Linksys E3000’s days were numbered. Skype calls were dropping, Emby video streams were getting interrupted, websites weren’t loading. As with most technical things, the burden to set things right fell on my shoulders.
It was past time to upgrade to 802.11ac anyway. I use pfSense for my router, so all I want is a wireless AP, not a combo unit, and so I started my search. I don’t really like researching APs because consumer devices are pretty awful at security, and enterprise devices involve support contracts and enterprise software, and sometimes the security is just as bad. But WiFi router recommendations are one of the most frequently asked questions from friends and family, and I’ve never had a good answer… until now. I came across UniFi, made by Ubiquiti. These are the wireless APs that Linus Torvalds uses. The products appear to be marketed towards businesses and enterprises, but the software to run them is free, and pretty much everything I need for my home/SOHO environment can be configured through the web interface.
UniFi Access Points (AP)
I purchased the UAP-AC-PRO which is their high end model as well as the budget model, the UAP-AC-LITE. There’s also a “Long Range” model which sits in between them, the UAP-AC-LR which I did not get.
The UniFi AP (Wireless Access Point) looks more like a smoke detector than a wireless access point. A typical install is mounting them on the ceiling. Here’s mine mounted on a wall (the circular ring LED is normally blue which is too bright at night, but fortunately it can be turned off).
Power over Ethernet (PoE)
The AP is powered by PoE. This means you don’t need an AC-DC adapter; instead it gets its power over the Ethernet cable. This works on standard Cat 5e, Cat 6, or Cat 6a cable. Normally PoE devices require an expensive PoE-capable switch, and I was a bit hesitant about getting into the PoE world, but as long as you buy a single unit and not the bulk pack, UniFi APs usually come bundled with a PoE injector to get you started.
I had no idea what a PoE injector was, but it turns out to be really simple. It’s a little box with a power cable, and two Ethernet ports, LAN and PoE. Just plug the LAN port into your switch and your AP into the PoE port. Couldn’t be any simpler. Now, if you’re running a fleet of WiFi access points it probably makes sense to get a PoE switch. But for one or two in a house the PoE injector is fine.
Now, there are a couple of different kinds of PoE.
Here’s the difference: Passive PoE is as dumb as an electrical outlet. It just sends power through the Ethernet cable whether you need it or not… and this can damage devices not designed for passive PoE if you accidentally plug a powered Ethernet cable into them. The much better standard is 802.3af/802.3at PoE. With these, power isn’t provided until the device requests it, which means it’s very safe and you can plug non-PoE devices into PoE ports without blowing them up.
The UAP-AC-PRO uses 802.3af.
The UAP-AC-LITE and UAP-AC-LR products require passive PoE. However, I have seen possible signs that Ubiquiti is switching all their products to the IEEE 802.3af/at standards, so it may be worthwhile waiting for the newer models if you don’t want to spend the extra for the Pro model and can afford to wait.
The UniFi Controller
So, these access points don’t run a web server with a management interface. This is a business/enterprise-class solution, so it’s meant to be centrally controlled from a single controller. You will need to download the UniFi Controller (which is free). Once it’s running you can access it via web browser or the UniFi app for Android or iOS. The controller can be installed on Windows, Linux, or Mac OS X. If you don’t care about collecting stats it doesn’t need to be up and running all the time, so it can be run on a workstation, but if you have a server I recommend running it there. I created an Ubuntu 16.04 VM called “unifi.b3n.org” and gave it 1GB RAM, 30GB HDD, and 1 core, which seems to be plenty.
The install process is straightforward…
Create a file, /etc/apt/sources.list.d/100-ubnt.list, containing:
deb http://www.ubnt.com/downloads/unifi/debian unifi5 ubiquiti
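Then add Ubiquiti’s repository signing key and install the package. The key ID below is an assumption from memory, so verify it against Ubiquiti’s current install instructions:

```shell
# Add Ubiquiti's repository signing key (verify the key ID against
# Ubiquiti's documentation), then install the UniFi Controller.
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 06E85760C0A52C50
sudo apt-get update
sudo apt-get install unifi
```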
Go to https://unifi.example.com:8443 (See the bottom of this post for info on setting up a real certificate).
The first time you access it you get a wizard to set it up; after creating an account and such it will have you adopt the UniFi APs on your network. If they’re plugged in it will find them automatically. It not only manages APs but also UniFi-branded routers, switches, cameras, VoIP devices, etc.
I can see how it would help manage a fleet of wireless equipment across multiple sites. You can see all the devices connected, the AP they’re connected to, signal strength, connection speed, data they’ve used, how they’re authorized to be on the network, VLANs, etc. I’ve hidden a lot of columns in the screenshot below but it gives you an idea of the data you can get on wireless clients.
The UniFi Controller also keeps track of every wireless AP it has seen. My neighbors seem to have a lot of HP smart printers and TVs that need to waste RF spectrum running their own APs for some reason.
Cars Have APs?
It looks like a lot of cars have their own APs nowadays. At least I’m guessing these MitsumiE APs are automobiles that have driven by my house.
UniFi Android / iOS App
The Android app is just as capable (and I presume the iPhone app is as well). I didn’t do a thorough comb-through, but at a quick glance it appears every screen and configuration setting in the web interface is available in the app.
AP Models Comparison
The APs perform well. Since I installed the UniFi APs we have not had a single wireless connection drop, and even with the AP power settings at their lowest they have better range than my previous AP. I also set up both APs, and my devices had no trouble roaming between the two as needed while maintaining connections.
The three main models are:
UAP-AC-LITE – 2×2 MIMO on both bands (budget)
UAP-AC-LR – 3×3 MIMO on 2.4, 2×2 MIMO on 5GHz (middle)
UAP-AC-PRO – 3×3 MIMO on both bands (fastest)
Does 3×3 MIMO make a difference for 2×2 clients? You might get better reception, but probably not a noticeable difference. However, if you do have 3×3 capable clients you should see a benefit going to a 3×3 AP.
UAP-AC-PRO vs UAP-AC-LITE Performance and Coverage with 2×2 Clients
Most wireless clients are only 2×2 MIMO these days, and even though I tend to run the latest hardware I only have 2×2 devices which can connect at a maximum speed of 866.7Mbps. A 3×3 MIMO AP can improve performance of 2×2 MIMO clients because the extra antenna might provide a better signal. That’s the theory anyway.
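The arithmetic behind those numbers: at an 80MHz channel width with the top 802.11ac modulation (MCS 9) and a short guard interval, each spatial stream carries one third of the 3×3 rate of 1300Mbps. A quick sanity check:

```python
# 802.11ac PHY rates at 80 MHz, MCS 9, short guard interval:
# a 3x3 AP tops out at 1300 Mbps, so each spatial stream is 1300/3 Mbps.
per_stream_mbps = 1300 / 3

for streams in (1, 2, 3):
    rate = streams * per_stream_mbps
    print(f"{streams}x{streams}: {rate:.1f} Mbps")
# 1x1: 433.3, 2x2: 866.7 (the 2x2 client maximum), 3x3: 1300.0
```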
I can’t really tell a difference between the two APs in my house. In the Android app’s wireless test I get better upload speeds on the Pro than on the Lite, which might be due to its extra antenna, but I don’t see that performance benefit on our laptops when transferring files back and forth between them and my FreeNAS unit.
I do think I get slightly better upload speeds on the Pro model when I’m far away from the AP. This may be due to the extra antenna or it could just be subjective.
As far as real-life performance, on 5GHz with the channel width set to 80MHz I get about 50-60Mb/s down and 30-40Mb/s up pretty consistently throughout the house, and that’s with multiple wireless clients connected and a pretty saturated RF spectrum. Here’s an RF scan at my house… there’s really not a single empty channel, even on 5GHz.
UniFi Managed Switches
Ubiquiti also sells managed switches, ranging from 8 to 48 ports with a variety of PoE options. I’ve been wanting to try out managed switches, so I picked up their small 8-port model. Since I’m running these at home, low noise is extremely important. The two switches that fit the bill are the 8-port US-8-60W (with 4 PoE ports) and the 24-port US-24 (without PoE); both models are fanless and silent.
The US-24 doesn’t have PoE on it. The US-8-60W has four 802.3af PoE ports. I should note that this switch cannot do passive PoE so it won’t be able to power UniFi’s passive PoE equipment (such as the UAP-AC-LITE).
There are two banks of LEDs. The top row is only for the four PoE ports on the right and lights up orange when PoE is active. The bottom row lights up green for gigabit links and orange for 100Mbps links. There’s also a blue/white LED on the front left of the switch that’s off; I do not like blue or white LEDs. Fortunately, as soon as I provisioned it the UniFi Controller automatically turned it off based on my site preferences.
After getting a quick primer from a Network Engineer on how VLAN tagging works I decided to start VLAN tagging my network.
Under the UniFi Controller you can set up your VLANs; I programmed all of mine in above. Something that is a bit confusing is that there are two Network Purpose types that support VLAN tagging, “Corporate” and “VLAN-Only”. There is no difference between the two unless you are using the USG (UniFi Security Gateway), which can run a DHCP server for each “Corporate” network. Since I’m using pfSense instead of the USG I set mine up as VLAN-Only.
Then it’s fairly trivial to manage the ports, setting up trunking and access ports for certain VLANs. In my case port 2 is my trunk port and goes to my pfSense router. I also ran my Northland Cable connection through the switch so I could get some bandwidth insights.
As always, the UniFi Controller provides some pretty neat insights, it picked up devices not only connected to it but also found devices connected to other switches (notice most of the devices below were found on port 2 which connects up to my VMware vSwitch).
And UniFi provides great statistics and insights into traffic flow on the switch.
Appendix A: Setting an SSL Certificate on the UniFi Controller.
By default the UniFi Controller runs on port 8443 with a self-signed SSL certificate. It is ridiculously difficult to set a custom cert… I know how to work with Java keystores, but I just couldn’t get the ace.jar Java cert importer to accept my intranet cert. Then I read the CA cert had to be in DER format, which also didn’t work… arrgh. Suddenly it hit me that setting up certs on nginx is easy; it would be much simpler to set up an SSL certificate on an nginx reverse proxy on port 443. I want the UniFi Controller listening on 443 anyway, and even better, I don’t have to touch any UniFi configuration files or certs.
If you’re running an internal CA like I am you can just generate an internal Cert, or if you need a public cert Let’s Encrypt should work just as well. Here’s an example of generating one from FreeNAS.
Export the certificate and key and save them to /etc/nginx/cert.crt and /etc/nginx/cert.key. The configuration is a pretty standard nginx reverse proxy; the only issue I initially ran into was the UniFi Controller reporting a “WebSocket connection error” warning, so I enabled nginx’s proxy support for WebSockets. Other than that it’s a straightforward reverse proxy.
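Here’s a minimal sketch of that reverse proxy configuration, assuming the controller runs on the same host; unifi.example.com is a placeholder for your own hostname:

```nginx
server {
    listen 443 ssl;
    server_name unifi.example.com;   # placeholder, use your controller's name

    ssl_certificate     /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;

    location / {
        # The controller still speaks HTTPS with its self-signed cert
        proxy_pass https://localhost:8443;

        # WebSocket support (fixes the "WebSocket connection error" warning)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```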
When you request a website, say, b3n.org, your computer needs the IP address. So it sends out packets through your router/firewall and your modem, out to your ISP’s DNS servers. Your ISP’s DNS server will probably have it cached; if not, it queries recursively, starting with the root name servers, to find the authoritative DNS servers for the domain, and then queries those. It gets the IP address and sends it back to your computer. Your computer can then query the server IP for b3n.org. Any latency along this process results in delays. If you ever type a URL in the address bar and nothing happens for a few hundred milliseconds before the website suddenly starts to load, this is likely the problem.
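If you have the dig utility installed you can watch this whole process yourself; with +trace, dig performs the recursion itself instead of asking your configured resolver:

```shell
# +trace makes dig walk the delegation chain starting at the root servers:
# root -> .org servers -> b3n.org's authoritative servers.
dig +trace b3n.org
```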
Is Your DNS Hijacked by Your ISP?
It’s pretty easy for ISPs to hijack DNS queries. A small number of ISPs (Comcast, CenturyLink, Time Warner, Cox, Rogers, Charter, Verizon, Sprint, T-Mobile, Frontier, etc.) have been caught doing exactly that. Want to know why? Advertising revenue. When you misspell a domain, some ISPs, instead of returning an NXDOMAIN (does not exist) like any RFC-compliant DNS server would, will resolve the domain anyway, point it at a page they control, and advertise to you! This is a really bad idea. But there is a way to prevent your ISP from doing this…
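A quick way to check your own ISP: query a made-up domain (the name below is just an example) and see whether you get an honest NXDOMAIN back:

```shell
# An RFC-compliant resolver returns "status: NXDOMAIN" for a bogus name.
# If you get an A record pointing at an ad page instead, your ISP is
# hijacking your DNS.
dig thisdomainshouldnotexist-xyz123.com
```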
Using Google’s Nameservers
If you’re not tech savvy, using Google’s public nameservers, 8.8.8.8 and 8.8.4.4, is probably better than your ISP’s nameservers. It won’t hurt, and will probably help, but it may not… it’s trivial for an ISP to route those IPs to their own servers, and some do.
Even if your ISP is pure goodness and would never do that, someone could setup a rogue DNS server posing as theirs and intercept all your DNS traffic.
The only solution is to query the root name servers for the authoritative DNS servers yourself and use DNSSEC. Cut out any 3rd-party DNS provider and run your own DNS server locally.
Setup an Unbound Server on pfSense
Unbound is a high-performance caching DNS server. It recursively queries the authoritative DNS servers directly, completely bypassing your ISP, and it uses DNSSEC to make sure your queries haven’t been tampered with. Best of all, it caches DNS results (like your ISP would), but since it’s on your own network, the cached DNS queries are local!
You can setup a local FreeBSD server and run Unbound on it, but if you’re already using a router like pfSense or OPNsense you can setup an Unbound server in a few clicks.
Open up pfSense and first make sure the forwarder, under Services, DNS Forwarder, is disabled. Slowness warning: on a low-query network such as a home network, having the forwarder disabled may cause some lookups to be slower, because you’re regularly traversing the DNS hierarchy to get results… this can sometimes take a second or two and result in DNS timeouts. If you find that Unbound’s performance is slow, I’d suggest turning on forwarding mode, which will use the DNS servers specified in pfSense under System, General Setup; in that case I’d recommend pointing them at 8.8.8.8 and 8.8.4.4. If you run with forwarding enabled you should verify that your ISP is not hijacking your DNS results, and if they are, switch ISPs.
Go to Services, DNS Resolver.
Enable the DNS Resolver
Select the network interfaces that you want Unbound to listen on (do not select ALL; you’ll definitely want to select LAN).
System Domain Local Zone Type: Transparent
Enable DNSSEC Support
Do NOT enable Forwarding Mode
You can also choose to register DHCP addresses in the DNS Resolver which is very handy if you’re using pfSense to manage DHCP.
Under System, General Setup
Make sure all DNS Server fields are empty, and that “DNS Server Override” and “Disable DNS Forwarder” are unchecked.
Finally, Under Services, DHCP Server, set your DNS Server to your pfSense’s LAN IP. As your DHCP clients renew their lease they’ll start using pfSense for DNS.
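Once clients renew, you can verify that Unbound is resolving and that DNSSEC validation actually works; 192.168.1.1 below is a placeholder for your pfSense LAN IP:

```shell
# A DNSSEC-signed domain should come back with the "ad" (authenticated
# data) flag set in the response header.
dig @192.168.1.1 example.com +dnssec

# dnssec-failed.org is deliberately broken; a validating resolver
# should return SERVFAIL instead of an answer.
dig @192.168.1.1 dnssec-failed.org
```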
As far as performance goes, if you have low latency to your ISP’s DNS you probably won’t notice anything. But if you’re on a high-latency connection with 70ms pings like I am, this makes a big difference.
Amazon Lightsail has entered the VPS market, competing directly with DigitalOcean and Vultr. I for one welcome more competition in the $5 cloud server space. I wanted to see how they perform so I spun up 24 cloud servers, 8 for each provider and ran some benchmarks.
$5 Cloud Server Providers Compared
DigitalOcean, Vultr, and Amazon Lightsail offer more expensive plans, but this post is dealing with the low-end $5 plans. Here’s how they compare:
DigitalOcean
1 CPU Core
20GB HDD (extra block storage @ $0.10/GB/month)
1TB Bandwidth ($0.02/GB overage fee in U.S.).
Best team management – DigitalOcean lets you create multiple teams, and you can add and remove users from those teams.
Ubuntu, FreeBSD, Fedora, Debian, CoreOS, CentOS
Vultr
1 CPU Core
15GB HDD (extra block storage @ $0.10/GB/month)
1TB Bandwidth ($0.02/GB overage fee in U.S.)
Account sharing – allows you to setup multi-user access.
Floating IPs (currently can’t be set up automatically; requires contacting support)
Ubuntu, FreeBSD, Fedora, Debian, CoreOS, CentOS, Windows, or any OS with your Custom ISO.
Amazon Lightsail
1 CPU Core
20GB HDD (block storage not available)
1TB Bandwidth ($0.09/GB overage fee in U.S.)
3 Free DNS zones (redundancy across TLDs as well).
Amazon Linux or Ubuntu
All three providers have multiple geographic locations worldwide. Vultr has the most locations in the United States, while Amazon has more locations worldwide (although only Virginia is available for Lightsail at this point in time).
Vultr Global Locations
Amazon Lightsail Global Infrastructure
All providers offer an API. In practice DigitalOcean has been around the longest and thus is more likely to be supported in automation tools (such as Ansible). I expect support for the other APIs to catch up soon.
CPU Test – Calculating Primes
Number of seconds needed to compute prime numbers. On the CPU test Amazon Lightsail consistently outperformed the others, with Vultr coming in second and DigitalOcean last. CPU1 and CPU2 are 1 and 2 threads respectively calculating primes up to 10,000. CPU4 is a 4-threaded test calculating primes up to 100,000.
Lower is better.
Lower is better.
(I accidentally omitted the memory test from my parser script and didn’t realize it until the last test ran, so this is the average of 4 results per provider)
OLTP (Online transaction processing)
Higher is better.
The OLTP load test simulates a transactional database; in general it measures latency on random inserts, updates, and reads against a MariaDB database. CPU, memory, and storage latency can all affect performance, so it’s a good all-around indicator. This test measures the number of transactions per second. In this area Vultr outperformed DigitalOcean and Amazon Lightsail in the 2- and 4-thread tests, while Lightsail took the lead in the 8-thread test. I don’t know why Lightsail started to perform better under multi-threaded tests; my guess is that while Lightsail doesn’t offer the fastest single-threaded storage IOPS, it may have better multi-threaded IOPS, but I can’t say for sure without doing some different kinds of tests. DigitalOcean performed the worst in all tests, probably due to its slower CPU and memory speed.
Higher is better.
Transactions per second. In random IOPS Vultr provided the most consistent performance, DigitalOcean came in second place with wide variance, and Lightsail came in last, but it was by far the most predictable.
Sequential Reads / Writes / Re-writes
Higher is better.
This simply measures sequential read/write speeds on the hard drive. Vultr offers the most consistent high performance; DigitalOcean is all over the place but generally better than Lightsail, which comes in last.
Latency (Ping ms) U.S. Locations
Lower is better.
The U.S. latency is all close enough that it doesn’t matter.
Latency (Ping ms) Worldwide Locations
Lower is better.
International Latency, again the results are pretty close.
Download Speed Tests from U.S. Locations
Higher is better.
Downloading data from various locations. It’s really hard to draw any meaningful conclusions from this… the faster peering in New York probably has to do with DigitalOcean and Vultr being located in New York versus Lightsail’s location in Virginia.
Upload Speed Tests to U.S. Locations
Higher is better.
Due to the similarities in the test results I think the bandwidth constraints are on the other side, or at peering.
Download Speed Tests from Worldwide Locations
Higher is better.
Who knows what one could conclude from this, it seems like various providers have different quality peering to different worldwide locations, but there are so many variables it’s hard to say.
Upload Speed Tests to Worldwide Locations
Higher is better.
Similar groupings for the most part.
I spun up 24 x $5 servers, 8 for each VPS provider. I spun up 12 servers yesterday and ran tests, destroyed the VMs, then created 12 new servers today and repeated the tests. All tests were run in the Eastern United States. I chose that region because the only location available currently in Amazon Lightsail is Virginia, so to get as close as I could I deployed Vultr and DigitalOcean servers out of their New York (and New Jersey) data centers. New York is a great place to put a server if you’re trying to provide low latency to the major populations in the United States and Europe without using a CDN.
If the provider had multiple data centers in a region I tried to spread them out.
DigitalOcean – I deployed 4 servers in NYC3, 2 in NYC2, and 2 in NYC1.
Vultr – All 8 servers deployed in their New Jersey data center.
Amazon Lightsail – deployed in their Virginia location, 2 in each of their four AWS Availability Zones.
All the tests I ran are relatively short duration, I did not benchmark sustained loads which may produce different results. My general use case is a web-server or small build server with intermittent workloads. I often spin up servers for a few hours or days and then destroy them once they’re done with their tasks.
The testing scripts I used are available in my GitHub meta-vps-bench repository. The testing scripts are very rudimentary and could be improved. It runs sysbench and speedtest benchmarks. The following commands were run on each server as root:
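The exact script is in the repository, but the invocations were along these lines (shown in the older sysbench 0.4 flag syntax; treat the specifics as an approximation rather than the verbatim commands):

```shell
# CPU: primes up to 10,000 with 1 and 2 threads, and up to 100,000 with 4
sysbench --test=cpu --cpu-max-prime=10000 --num-threads=1 run
sysbench --test=cpu --cpu-max-prime=10000 --num-threads=2 run
sysbench --test=cpu --cpu-max-prime=100000 --num-threads=4 run

# OLTP against a local MariaDB instance
sysbench --test=oltp --mysql-user=root --mysql-password=... prepare
sysbench --test=oltp --mysql-user=root --mysql-password=... --num-threads=4 run

# Random and sequential file I/O
sysbench --test=fileio --file-test-mode=rndrw prepare
sysbench --test=fileio --file-test-mode=rndrw run
```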
I tried to stagger starting the tests so that multiple speedtests against the same location had a low risk of occurring at the same time… but it may not always work out that way. I ran all tests twice per server which gives 48 total results (16 for each provider).
This script is for testing. I do NOT recommend running this on production servers.
Looking for a Christmas gift idea for your computer geek? Here’s a short gift guide with a few ideas I think would make great gifts. Unlike a lot of other top gift idea lists written by non-tech people just to make a sale, I’m actually a developer and these are the sort of things that I would enjoy (in fact most of them I own or at the very least had a chance to play with).
Here are some gifts your geek, hacker, developer, programmer, tech enthusiast, etc. may enjoy:
WiFi RGB LED Light
MagicLight WiFi Smart LED Light Bulb ($). This looks like a normal light bulb, but it can connect to your WiFi network and be controlled by your smartphone, through home automation software, or with Python scripts. The bulb can change to any color. You can send it HTML hex color codes! If you live up in North Idaho like I do, you can program your light to gradually get brighter in the morning to wake you up naturally in the months when the sun doesn’t rise until late in the day. Or program it to redshift in the evenings before bedtime so the blue light isn’t messing with your circadian rhythm. Or have it turn red as a warning when you’ve left the garage door open after dark! Put a few outside on your house and set them to certain colors during the holidays (red & green at Christmas, orange during Halloween; red, white, and blue for Independence Day).
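Since the bulb accepts HTML hex color codes, a control script mostly just needs to turn a code like “#FF8800” into RGB values. A tiny helper (the function name is mine, not from any bulb library):

```python
def hex_to_rgb(code: str) -> tuple:
    """Convert an HTML hex color code like '#FF8800' to an (R, G, B) tuple."""
    code = code.lstrip('#')
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

# Holiday colors as hex codes
print(hex_to_rgb('#FF0000'))  # Christmas red    -> (255, 0, 0)
print(hex_to_rgb('#00FF00'))  # Christmas green  -> (0, 255, 0)
print(hex_to_rgb('#FFA500'))  # Halloween orange -> (255, 165, 0)
```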
Raspberry Pi 3
Raspberry Pi 3 Starter Kit ($$). Every technology enthusiast would enjoy a Raspberry Pi. There are so many projects you could do… build your own weather station, automatic sprinkler system, home automation server, arcade, even a small computer, tiny server, thermometer, etc.
Python Programming for Beginners ($) by Jason Cannon. Yes, the language’s name comes from Monty Python. Python is a well-loved language that’s growing fast; it’s fun to learn and practical, and I’ve been seeing it everywhere lately. It’s one of the best programming languages to learn, even if you’re not a programmer. This book is perfect for someone new to Python, or even for someone learning to code for the first time.
MasterKeys Pro L Mechanical Keyboard ($$$). (This is the latest model; I use an older version of this keyboard at work.) If your hacker is on the young side there’s a good chance he has never experienced the joy of typing on a mechanical keyboard and may not even know they exist! Does your keyboard let you press every single key simultaneously and have them ALL register? This keyboard does. It comes in three switch options: Cherry MX Red, Brown, or Blue (I linked to the Cherry MX Brown version). Cherry MX Reds have no tactile bump; they are linear, so great for FPS or RTS gaming where speed matters. Cherry MX Blues provide an audible click and a tactile bump and are great for typing (unless noise is a concern). Cherry MX Browns provide subtle tactile feedback with no audible click, making this a great all-purpose keyboard. The MX Browns are my favorite Cherry switch, and they’re what I recommend most people start with if you don’t know what you want.
I should mention, that by “no audible click” I mean no added click noise. Kris tells me the “silent” Browns and Reds are loud compared to a typical keyboard. The Blues are even louder.
Civilization VI ($$). Civilization is one of the longest-running series in gaming and, in my opinion, one of the best turn-based strategy games on the market. Your gamer geek can play single-player or online with friends, starting out with a single Settler and building cities. What I like about Civilization is the unique ways to win. Most games are about world domination through force, but in Civilization that is just one of many ways to win. In addition to Domination you can obtain victory through Culture, Religion, or Science.
Chicory Coffee & Beignet Donuts
Chicory Coffee & Beignet Donuts ($). If you are ever visiting New Orleans you should stop by the Cafe Du Monde (open 24/7) for some Beignet Donuts and Café au lait. But the next best thing is giving the gift of coffee and donuts for those early mornings or late night programming sessions. This is one of my favorite coffee flavors, it has a unique taste and everyone I’ve brewed it for loves it.
YubiKey Neo ($). If your hacker is concerned about security you might consider getting him the YubiKey Neo. It’s a second-factor authentication device which works with Android (using NFC), Linux, Mac, and Windows. Everyone should be locking down their accounts (email, GitHub, etc.) with a YubiKey. Yubico is one of the more reputable companies: last year a security bug was discovered in the OpenPGP applet and they offered free replacement (including free shipping) for all affected devices. Their software for working with the key is open source on GitHub. The YubiKey supports a large variety of MFA methods including FIDO U2F, HOTP, TOTP, Yubico OTP, PIV-compliant SmartCard, HMAC-SHA1 challenge-response, etc. It’s really the only authentication device you need; I can authenticate with just about any service and protocol using a single YubiKey.
ESV MacArthur Study Bible Personal Size ($). Of course, it would be remiss of me not to include a gift that has to do with the very reason we celebrate Christmas: from the Creation and Fall of man, to the Son of God coming to earth to die on the cross to take the penalty for our sins, and rising from the dead so that anyone who believes in Him will have eternal life. I received this as a gift a few years ago and it’s to date my favorite Bible. I don’t think you’ll find a higher-quality Bible at this price point; it’s even Smyth sewn, which surprised me! MacArthur has some of the most scholarly and practical (easy enough for me to understand) study Bible notes on the market today. His notes are extensive enough to be helpful, yet the personal edition is still small enough to be portable.
Well, that’s my guide for this year. Wishing everyone a Happy Thanksgiving and a Merry Christmas.
The most Frequently Asked Question from my Family, Friends, and FOAFs…
Laptop Buyer: What Kind of Laptop Should I buy? Ben: Get one with an Intel Wireless card
WiFi Cards Matter
The first piece of advice I have is to make sure your wireless card is made by Intel. Do not get anything else. You might see other tempting wireless cards for so much less from Dell, Broadcom, Ralink, Killer, Realtek, etc. These WiFi cards might work with most WiFi hotspots, and they might work most of the time, but don’t get them. The problem is they aren’t robust. I’ve seen them drop connections randomly, fail to connect to certain wireless APs, drop out when the microwave is running, etc. In the best case it works fine, but later on a driver update might make it worse. It is not worth saving a few bucks to deal with these issues. Pay extra for an Intel-branded WiFi card. It might cost you $20 more and save you months of frustration. You’ll thank me later when your card isn’t disconnecting randomly.
This brings me to the 2nd most Frequently Asked Question….
My Wireless Keeps Disconnecting. Help!
Laptop buyer: So, my wireless signal keeps dropping out. Ben: Did you get an Intel card like I told you? Laptop buyer: No…. Ben: Were you trying to save money and went too cheap? Laptop buyer: Yes…..
And the 3rd most Frequently Asked Question….
Can You Fix My WiFi Stability?
If Eli can fix it, you can fix it.
You will need to swap out your WiFi card.
If you’re in the situation where you bought a laptop with a flaky WiFi card, it’s easy to fix! Grab an inexpensive Intel 7260 WiFi Card from Amazon. On most laptops the WiFi card is easily accessible from behind the back cover, usually it’s not more work than a memory upgrade. Unplug the antenna connectors from your unstable wireless card, pop it out, and put the new card in and hook it up. Your WiFi connections will now be robust.
I don’t say this because I’m an Intel fan. I just want things to work. Every couple of years I give another brand a try just to make sure my “only Intel” advice is still relevant. I’ve had the same experience with non-Intel brands for the last 15 years!
Last year I decided to buy a cheap laptop to watch movies on (we don’t have a TV) and it came with a Dell DW 1704 / Broadcom 4314 Wireless Card. I bought it just to see if things had gotten better. They haven’t. This wireless NIC can’t stream a full length movie from my media server without losing the wireless signal several times.
And it’s not just me, earlier this year several of my colleagues bought Dell XPS laptops with Killer Branded WiFi cards. They just don’t work reliably in scenarios that Intel chips do. In their case they couldn’t connect to several APs. In my case the connection would drop several times a day. This was both in Windows 10 and in Linux. And yes, I tried disabling power saving mode on the WiFi adapter.
I’ve had friends and family not be able to even connect to certain APs at all until they swapped out their Broadcom, Killer, or Ralinks for an Intel card. Now, you might get lucky and find another brand that works. To me it’s not worth the hassle.
The next time you buy a computer, get one with an Intel WiFi card.
Phones depreciate in value fast; their useful life is shorter than their lifespan. Not because old phones stop working, but because manufacturers stop providing security updates after about 3 years (at best!).
What If I Told You a Hacker Can Take over Your Phone with One Text… And You Don’t Even Have to Open It?
You might be hacked now and not even know it.
Exploits like this and like this are real. Vulnerabilities have been found in the past and exploited. They will be found in the future and exploited. Some exploits require you to do nothing but receive (not even open, just receive) an SMS message, and a hacker can do what he wants with your phone. He can install malware, use your phone to launch a DDoS attack against Krebs on Security, or spy on you (or your kids, if your kids have phones), activating the camera and microphone at will, listening in on your conversations and reading every message passing through the device.
The only protection against this is either (1) not have a phone (more secure), or (2) if you must have a phone, keep it up to date constantly (not as foolproof but would block all but the most sophisticated hackers).
One of the big problems with phones is security. For iPhones you get your updates through Apple. For Android things aren’t as clean. The Android OS itself gets security updates, but then it has to trickle down through the manufacturer (who often doesn’t provide an update) and then the carrier you bought the phone from.
Calculating Remaining Life Before You Buy
To calculate the real cost of a phone, find out how long the manufacturer and carrier will provide security updates for it. Divide the cost of the phone by the number of months of security updates remaining, and that’s the real monthly cost of the phone.
monthly cost = cost of phone / remaining life in months
e.g.
cost of phone: $500
remaining life for security updates: 29 months
monthly cost: $500/29 = $17.24
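The formula is trivial to script; here’s a quick sketch (the function name is mine, not from any tool):

```python
def monthly_cost(price_usd, months_of_updates):
    """Purchase price spread over the phone's remaining security-update life."""
    return round(price_usd / months_of_updates, 2)

# The example above: a $500 phone with 29 months of updates left
print(monthly_cost(500, 29))  # → 17.24
```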
Oddly, the price of phones doesn’t usually drop that much after the 1st year even though they have lost 1/3rd of their useful life!
There Are Only Two Options
A lot of phone manufacturers and carriers don’t even provide updates to their phones. They’re unsupported from the moment you buy them!
For the sake of security, I only recommend two phone manufacturers: Google and Apple. Both have a track record of providing timely security updates. Google pushes out a security update every month; Apple doesn’t have a fixed schedule but does a good job of getting them out quickly. I recommend Apple only with the caveat that you must trust them, because iOS is a proprietary closed source OS. You are trusting them to do the right thing and have decent security.
Google Nexus Devices
Google stopped selling the Nexus, but they still have 2 years of updates left and are reasonably priced on Amazon.
Google guarantees security patches on Nexus devices for 3 years from the release date or at least 18 months from when the Google Store last sold the device (whichever is longer).
As of October 2016, here is the cost per month as I calculate it:
Nexus 5X – security updates until October 2018. $332 – 16GB. Ben’s cost over remaining life: $332/24mos = $13.83/mo
Nexus 6P – security updates until October 2018. $450 – 32GB. Ben’s cost over remaining life: $450/24mos = $18.75/mo
(If you get a Nexus, note that there are U.S. and International versions of the phone, if you live in the U.S. you’ll want the U.S. version).
Google has not committed to EOL dates on the Pixel line but if it’s similar to Nexus you’re looking at:
Google Pixel – $650 – 32GB – probably until October 2019. Ben’s cost over remaining life: $650/36mos = ~$18.06/mo
Google Pixel XL – $770 – 32GB – probably until October 2019. Ben’s cost over remaining life: $770/36mos = ~$21.39/mo
iOS is closed source so I consider it less secure and less open than Android, but Apple does a pretty decent job at keeping hackers out. Most compromises I hear about come through a connected service like iCloud and not the iPhone itself. I used to use an iPhone because at the time it was the best phone (better than BlackBerry). Now that we have Android I don’t see a huge need to use a closed proprietary system. However, it’s always good to have competition.
Here’s a comparison of iPhone models currently getting security updates, with a guess (not a guarantee) that security updates last 3 years.
iPhone 7 – probably until September 2019. Ben’s cost over remaining life: $650/35mos = ~$18.57/mo
iPhone 7 Plus – probably until September 2019. Ben’s cost over remaining life: $770/35mos = ~$22.00/mo
iPhone 6S – probably until September 2018
iPhone 6 / 6 Plus – probably until September 2017
iPhone 5S / 5C – probably until the next major iOS update
Where Not to Buy a Phone
Mobile carriers typically install a lot of battery-sucking bloatware, which can’t be deleted, and they often delay pushing out security updates by months, even years, leaving your phone vulnerable to hackers. Not only that, some of the extra software they install introduces vulnerabilities of its own.
Also, phones bought from a mobile carrier are usually locked to that carrier so you can’t switch to someone else without purchasing a new phone.
Since I have an unlocked phone I avoid the main carriers and instead use MVNOs (Mobile Virtual Network Operators). These MVNOs use the same networks that Verizon, AT&T, Sprint, and T-Mobile run, but most often at a better price. For great service and prices I like Google Fi (Sprint & T-Mobile networks), Ting (Sprint or T-Mobile), and TracFone (Verizon or AT&T), and there are plenty of other MVNOs to choose from. You can find one that offers the best plan for your situation. Using TracFone (which is a pre-paid service) we pay less than $10/month for a voice/data/text plan for a Nexus 5X on Verizon’s network.
Don’t Save Money with a Used Phone
I used to buy used phones off eBay to save money, but I no longer think it’s a good idea given the recent USB firmware hacks and the amount of malware out there. Used phones are a security risk: you have no idea if a used phone has been compromised, or if it’s been plugged into a compromised USB device that rewrote its firmware. Physical security is paramount. To be safe, I always buy my phones new.
Personal Data on Work Phones and Work Data on Personal Phones
Think carefully before using your personal phone for work. If you connect your phone to work email it almost always gives your employer complete control of the device. They can wipe your phone when you leave, track your location, install software on your phone, and have access to all your personal data.
And similarly, if you put your personal information or your personal email account on a work phone your employer has access to that data.
What Phone Do I Have?
Kris and I both use the Nexus 5X. I’ve reviewed the Nexus 5X here. I will likely replace them both when security updates go EOL, which will likely be in 2018. Pixel phones are a bit expensive, so I’m hoping they release some new phones in the Nexus line again next year.
Phone Safety Tips
Always use a phone that’s getting regular (monthly) security updates. As soon as the phone goes out of support, get a new phone.
Minimize the number of apps you install. Limit yourself to the official Google Play Store or iOS App Store and avoid 3rd-party stores like the Amazon store, where authors don’t do as good a job of keeping things updated.
Favor installing well known apps with lots of downloads as they’re more likely to be reviewed and have better security practices.
Uninstall apps that you don’t use.
Always buy a new phone.
Don’t use a phone at all.
If you have a Samsung Note 7, you might want to return it before you catch on fire.
So, you want to hide your email from the NSA’s prying eyes? It’s impossible… but here are some steps you can take to make it harder.
This isn’t theoretical. The NSA has and does intercept this traffic.
Common Points of NSA Interception
The NSA has unlimited resources to compromise your communications. You’re not going to stop them. But that doesn’t mean it should be easy. Below are the easy points of NSA interception. In this example of an email from Mom to Ben the NSA can intercept the email at Mom’s ISP, Mom’s email provider, Ben’s email provider, Ben’s ISP, and any internet hop in between.
I’m going to skip over a lot of important detail; this guide is not intended for security experts or sysadmins of email systems who worry about preventing downgrade attacks and the like. This is meant to be a post about what the average American should do to protect their email.
Step 1. Client to Server TLS Encryption
Ensure your email client (e.g. Thunderbird) or browser is using a TLS connection to the server. If you’re using any major provider like Gmail, Office 365, etc., they will be enforcing TLS. All email providers should be enforcing TLS, so if yours is not, that’s a good sign you should be switching.
If using webmail your browser should show https, if using Thunderbird you should be using STARTTLS for both inbound and outbound connections.
Note, the entire CA (Certificate Authority) system is broken; the NSA could obtain a fraudulent certificate from a cooperative CA, perform a MITM (man-in-the-middle) attack, and still intercept the email, but now they have to expend some effort to do so. The point is that security comes in layers, and we need to start with the basics. We’ll get to more advanced security below.
Step 2. Make sure Your Email Provider is Encrypting Server to Server Traffic
In 2013 Google was outraged after finding out the NSA was intercepting its server-to-server traffic. As a result Google started encrypting all internal traffic between servers (good for Google). Most major email providers encrypt server-to-server traffic. But the problem is not all providers use encryption, so it doesn’t do much good if you send an email from a secure service like Gmail to a small-town ISP that has no security whatsoever. Probably the best way to check is to enter a recipient’s email address here: http://checktls.com/ – if their email provider’s MX servers pass all the tests, they’re probably secure.
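You can also poke at a provider’s MX servers yourself with standard tools; a rough sketch (Gmail’s MX host is shown as an example):

```
# Find the MX hosts for the recipient's domain
dig +short MX gmail.com

# Connect to one and request STARTTLS; the output shows whether
# the server offers TLS and what certificate it presents
openssl s_client -starttls smtp -connect gmail-smtp-in.l.google.com:25
```

Services like checktls.com essentially automate this check and grade the results for you.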
Step 3. PGP Encrypt Your Emails
Now, the NSA can still potentially intercept your emails at rest through a court order, through PRISM, or through hacking into ISPs. Your email should be encrypted not only in transit, but also at rest. The best way to do that is to encrypt it using OpenPGP. This means even if the NSA gets a hold of your email they can’t read it (at least not without spending some serious time and money).
PGP (Pretty Good Privacy) isn’t foolproof. It doesn’t encrypt the metadata (the NSA can still see that you sent me an email, when you sent it, and where you were), but it does encrypt the content.
How do you get OpenPGP? Right here: http://openpgp.org/software/ It’s free, open source, and there are plugins for just about everything. It works with webmail, Thunderbird, Outlook, etc. Check the link above for a complete list, but here are two common options:
If you use Thunderbird I suggest Enigmail, and if you use Gmail with the webmail interface Mailvelope is a great plugin.
Here’s a very quick getting-started guide for Mailvelope below. If you’re not going to use Mailvelope, the concept is pretty much the same no matter what plugin you choose: you’ll generate a public/private keypair, obtain the public key of the person you’re sending an email to, and send them an encrypted email.
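If you’d rather work from a terminal, the same three steps look roughly like this with GnuPG (the email address is a placeholder; the key ID is the one used in the Mailvelope walkthrough below):

```
# 1. Generate your own public/private keypair (interactive prompts)
gpg --gen-key

# 2. Fetch the recipient's public key from a keyserver
gpg --recv-keys 13E708FC

# 3. Encrypt (and sign) a message for that recipient
gpg --encrypt --sign --recipient someone@example.com message.txt
```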
How to Setup Mailvelope for Gmail and Chrome
Here’s a quick walk-through to set it up. After installing the plugin you should see this icon on the top-right in Chrome. Right-click on it and choose Options.
Next Generate a Key….
I should note that “Password” here is traditionally called a passphrase; it should be long, and you don’t ever want to forget it or you won’t be able to read any encrypted messages sent to you. I strongly suggest writing it down and keeping it someplace safe.
Now, to send an encrypted email to me, you’ll need to import my key. Go to “Import Keys” and type in my email address and hit search. You should click on the keyID: 13E708FC. A key will pop up, click on it to import my key.
Now, you can send me an encrypted email. Go to compose a new email in Gmail. You’ll notice a button in the compose menu. Click the button.
Write me a message…
When you receive an encrypted email, it will look like this. Click on it and enter your passphrase to decrypt.
And there you have it. I wouldn’t say this is foolproof…. it doesn’t protect against a lot of other attack vectors…
But I say if the NSA is going to intercept my communications it shouldn’t be easy. I want them to spend some effort and money to do so.
ZFS is flexible and will let you name and organize datasets however you choose, but before you start building datasets there are some ways to make management easier in the long term. I’ve found the following convention works well for me. It’s not “the” way by any means, but I hope you will find it helpful; I wish tips like these had been written down when I built my first storage system 4 years ago.
Here are my personal ZFS best practices and naming conventions to structure and manage ZFS data sets.
ZFS Pool Naming
I never give two zpools the same name, even if they’re in different servers, on the off-chance that sometime down the road I’ll need to import two pools into the same system. I generally like to name my zpools tank[n], where n is an incremental number that’s unique across all my servers.
So if I have two servers, say stor1 and stor2, I might have two zpools:
stor1.b3n.org: tank1
stor2.b3n.org: tank2
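If I were creating them from scratch it might look like this (the disk device names and mirror layout are hypothetical):

```
# On stor1 – the pool name is unique across all my servers
zpool create tank1 mirror /dev/sda /dev/sdb

# On stor2 – a different name, so both pools could later be imported into one box
zpool create tank2 mirror /dev/sda /dev/sdb
```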
Top Level ZFS Datasets for Simple Recursive Management
Create a top-level dataset called ds[n], where n is a unique number across all your pools, just in case you ever have to bring two separate datasets onto the same zpool. The reason I like to create one main top-level dataset is that it makes it easy to manage high-level tasks recursively on all sub-datasets (such as snapshots, replication, backups, etc.). If you have more than a handful of datasets you really don’t want to be configuring replication on every single one individually. So on my first server I have:
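A sketch of that layout and the recursive management it enables (the snapshot name is just an example):

```
# The single top-level dataset on stor1
zfs create tank1/ds1

# High-level tasks can now be applied recursively in one shot,
# e.g. one recursive snapshot covers every sub-dataset:
zfs snapshot -r tank1/ds1@2016-10-21
```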
I usually mount tank1/ds1 as read-only from my CrashPlan VM for backups. You can configure snapshot tasks, replication tasks, and backups all at this top level and be done with it.
Name ZFS Datasets for Replication
One of the reasons to have a top level dataset is if you’ll ever have two servers…
stor1.b3n.org | - tank1/ds1
stor2.b3n.org | - tank2/ds2
I replicate them to each other for backup. Having that top level ds[n] dataset lets me manage ds1 (the primary dataset on the server) completely separately from the replicated dataset (ds2) on stor1.
Advice for Data Hoarders. Overkill for the Rest of Us
The ideal is to back up everything. But in reality storage costs money, and WAN bandwidth isn’t always available to back up everything remotely. I like to structure my datasets so that I can manage them by importance. So under the ds[n] dataset, create sub-datasets.
stor1.b3n.org
| - tank1/ds1/kirk – very important – family pictures, personal files
| - tank1/ds1/spock – important – ripped media, ISO files, etc.
| - tank1/ds1/redshirt – scratch data, tmp data, testing area
| - tank1/ds1/archive – archived data
| - tank1/ds1/backups – backups
Kirk – Very Important. Family photos, home videos, journal, code, projects, scans, crypto-currency wallets, etc. I like to keep four to five copies of this data using multiple backup methods and multiple locations. It’s backed up to CrashPlan offsite, rsynced to a friend’s remote server, snapshots are replicated to a local ZFS server, plus an annual backup to a local hard drive for cold storage. That’s 3 copies onsite, 2 copies offsite, 2 different file-system types (ZFS, XFS), and 3 different backup technologies (CrashPlan, Rsync, and ZFS replication). I do not want to lose this data.
Spock – Important. Data that would be a pain to lose and might cost money to reproduce, but losing it isn’t catastrophic. If I had to go a few weeks without it I’d be fine. For example, rips of all my movies, downloaded Linux ISO files, Logos library and index, etc. If I lost this data and the house burned down I might have to repurchase my movies and spend a few weeks ripping them again, but I can reproduce the data. For this dataset I want at least 2 copies: everything is backed up offsite to CrashPlan, and if I have the space, local ZFS snapshots are replicated to a 2nd server, giving me 3 copies.
Redshirt – This is my expendable dataset. This might be a staging area to store MakeMKV rips until they’re transcoded, or where I do video editing or test out VMs. This data doesn’t get backed up; I may run snapshots with a short retention policy. Losing this data would mean losing no more than a day’s worth of work. I might also run with zfs sync=disabled to get maximum performance here. And typically I don’t do ZFS snapshot replication to a 2nd server. In many cases it will make sense to pull this out from under the top-level ds[n] dataset and have it be by itself.
Backups – This dataset contains backups of workstations, servers, and cloud services. I may back up the backups to CrashPlan or some online service, and usually that is sufficient since I already have multiple copies elsewhere.
Archive – This is data I no longer use regularly but don’t want to lose. Old school papers that I’ll probably never need again, backup images of old computers, etc. I set this dataset to compression=gzip-9, back it up to CrashPlan plus a local backup, and try to have at least 3 copies.
Now, you don’t have to name the datasets Kirk, Spock, and Redshirt… but the idea is to identify importance so that you’re only managing a few datasets when configuring ZFS snapshots, replication, etc. If you have unlimited cheap storage and bandwidth it may not be worth doing this, but it’s nice to have the option to prioritize.
Now… once I’ve established that hierarchy I start defining my datasets that actually store data which may look something like this:
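For example (the leaf dataset names here are illustrative, not taken verbatim from my servers):

```
tank1/ds1
tank1/ds1/kirk
tank1/ds1/kirk/photos
tank1/ds1/kirk/documents
tank1/ds1/spock
tank1/ds1/spock/media
tank1/ds1/redshirt
tank1/ds1/redshirt/tmp
tank1/ds1/archive
tank1/ds1/backups
```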
With this ZFS hierarchy I can manage everything at the top level of ds1 and just setup the same automatic snapshot, replication, and backups for everything. Or if I need to be more precise I have the ability to handle Kirk, Spock, and Redshirt differently.
Let’s Encrypt is a great service offering the ability to generate free SSL certs. The way it normally works is via the http-01 challenge: to respond to the Let’s Encrypt challenge, the client (typically Certbot) puts an answer in the webroot. Let’s Encrypt makes an HTTP request, and if it finds the response to the challenge, it issues the cert.
Certbot is great for public web-servers.
Generating Intranet SSL Certs Using DNS-01 Challenge
But, what if you’re generating an SSL certificate for a mail server, or mumble server, or anything but a webserver? You don’t want to spin up a web-server just for certificate verification.
Or what if you’re trying to generate an SSL certificate for an intranet server? Many homelabs, organizations, and businesses need publicly signed SSL certs on internal servers. You may not even want external A records for these services, much less a web-server for validation.
ACME DNS Challenge
Fortunately, Let’s Encrypt introduced the DNS-01 challenge in January of 2016. Now you can respond to a challenge by creating a TXT record in DNS.
Lukas Schauer wrote dehydrated (formerly letsencrypt.sh), which can be used to automate the process. If you need to generate SSL certs for Windows, I’ve added the ability to output to PFX / PKCS #12 in my fork.
Here’s a quick guide on Ubuntu 16.04, but it should work on any Linux distribution (or even FreeBSD).
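A minimal sketch of the initial setup (the repository location, install path, and hook filename are assumptions; adjust to taste):

```
# Fetch dehydrated
git clone https://github.com/lukas2511/dehydrated.git /opt/dehydrated
cd /opt/dehydrated

# Use the DNS-01 challenge and point dehydrated at a hook script
cat > config <<'EOF'
CHALLENGETYPE="dns-01"
HOOK="./hook.py"
EOF
```

The hook script itself is the subject of the next step.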
At this point, you need to install a hook for your DNS provider. If your DNS provider doesn’t have a hook available you can write one against their API, or switch to a provider that has one.
If you need to pick a new provider with a proper API my favorite DNS Providers are CloudFlare and Amazon Route53. CloudFlare is what I use for b3n.org. It gets consistently low latency lookup times according to SolveDNS, and it’s free (I only use CloudFlare for DNS, I don’t use their proxy caching service which can be annoying for visitors from some regions). Route53 is one of the most advanced DNS providers. It’s not free but usually ends up cheaper than most other options and is extremely robust. The access control, APIs, and advanced routing work great. I’m sure there are other great DNS providers but I haven’t tried them.
Here’s how to set up a CloudFlare hook as an example:
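A rough sketch of the setup (the hook repository name, paths, and environment variable names are from memory; treat them as assumptions and check the hook’s README):

```
# Install a community CloudFlare DNS hook and its Python dependencies
git clone https://github.com/kappataumu/letsencrypt-cloudflare-hook hooks/cloudflare
pip install -r hooks/cloudflare/requirements.txt

# The hook reads CloudFlare API credentials from the environment
export CF_EMAIL='you@example.com'
export CF_KEY='your-cloudflare-api-key'
```

Then point the HOOK setting in the dehydrated config at the hook script.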
Create an /etc/dehydrated/domains.txt file, something like this:
b3n.org
www.b3n.org
dev.b3n.org
www-dev.b3n.org
b3n.org www.b3n.org dev.b3n.org www-dev.b3n.org
The first four lines will each generate their respective certificates; the last line creates a multi-domain or SAN (Subject Alternate Name) cert with multiple entries in a single SSL certificate.
The first time you run it, it should get the challenge from Let’s Encrypt and provision a DNS TXT record with the response. Once validated, the certs will be placed under the certs directory, and from there you can distribute them to the appropriate applications. The certificates are valid for 90 days.
For subsequent runs, dehydrated will check whether the certificates have less than 30 days left and attempt to renew them.
It would be wise to run dehydrated -c from cron once or twice a day and let it renew certs as needed.
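Something like this crontab entry would do (the schedule, install path, and log path are arbitrary):

```
# Attempt renewal twice a day; dehydrated only acts when <30 days remain
15 4,16 * * * /opt/dehydrated/dehydrated -c >> /var/log/dehydrated.log 2>&1
```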
To deploy the certs to the respective servers I suggest using an IT Automation tool like Ansible. I have a dedicated VM that runs Ansible. You can configure an ansible playbook to run from a daily cron job to copy updated certificates to remote servers and automatically reload services if the certificates have been updated. Here’s an example of an Ansible Playbook which could be called daily to copy certs to all web-servers and reload nginx if the certs were updated or renewed:
Create a file web-servers-nginx.yml
- hosts: web-servers-nginx
  tasks:
    - name: Copy SSL certificates
      copy: src=/etc/dehydrated/certs/{{ inventory_hostname }}/ dest=/etc/nginx/ssl/
      notify: Reload Nginx when certs change
  handlers:
    - name: Reload Nginx when certs change
      service: name=nginx state=reloaded
Add the below to your Ansible inventory file (mine is named ‘production’). “b3n.org” matches the primary name of the certificate, found in /etc/dehydrated/certs/
(Note that the user running this needs permission to read the certificates that dehydrated generated. The easiest way to do that is to use the same user account for dehydrated as you do for Ansible. Also, Ansible needs public/private key authentication set up so it can connect to the remote servers without a password.)
Then obviously you would have something like this in nginx:
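A minimal server block sketch (the paths are hypothetical; point them at wherever you deploy the certs, noting that dehydrated names its outputs fullchain.pem and privkey.pem):

```
server {
    listen 443 ssl;
    server_name b3n.org;

    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    # ...the rest of your site configuration...
}
```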