Cloudways Managed WordPress Hosting

Save Time Managing WordPress

Last week I moved b3n.org from DigitalOcean to Cloudways Managed WordPress Hosting. Why? Well, there's nothing wrong with DigitalOcean; they've been fantastic.

But my problem is I hardly have time to maintain the technology stack. A few weeks ago I was in the process of adding a couple of WordPress sites. This isn't difficult, but it's tedious. You have to create user accounts, modify NGINX site files, set up SSL cert automation, configure Varnish and Redis for caching, install WordPress itself, and then set it all up for security, auto-updates, caching, etc. Then a year from now I'm going to have to migrate everything to a new host when Ubuntu 16.04 goes EOL (End of Life) for security updates. As I was working on this I thought to myself… what am I doing!?

Logos of Apache, PHP, MariaDB, redis, WordPress, Memcache, Varnish, NGINX, and Let's Encrypt

Before: On DigitalOcean I spent a lot of time on research, testing, and setup, plus several hours a month maintaining the OS, technology stack, security updates, and performance tuning necessary to run WordPress.

After: Now I host WordPress on Cloudways and they take care of it for me. When I want a new WordPress instance or to make a change I push a button on a web interface. Done.

What did that time savings cost me? It cost me dearly. My monthly hosting went from $5 to $10.

Before finding Cloudways I had a bit of a journey. I started by looking into hosting options… and decided I wanted managed hosting. This is mostly because I feel like I’ve done a much better job at tuning WordPress than shared hosting providers I’ve used in the past.

Managed Hosting vs. Shared Hosting

Managed hosting typically differs from shared hosting in the service level offered. I say typically because many managed hosting providers fall short, and many shared hosting providers excel in these areas. But in general managed hosting providers are better at:

  • Automated backups
  • Multiple environments (Dev/Stage/Prod) and migration between them
  • Performance tuning
  • Caching and CDN
  • Security updates
  • Guaranteed or dedicated resources (CPU, memory, I/O, bandwidth)
  • Monitoring
  • Self-healing
  • Better control of when core components get upgraded (PHP, MySQL, MariaDB, etc.). This is useful because if you want to take advantage of the latest version of PHP, like 7.3, you can, but if you have a plugin that isn't compatible you can stay on an older version.

Managed Hosting Options

I had my shortlist: SiteGround, Bluehost, WPEngine, etc. Note that I was not looking at their cheaper shared hosting, but at their managed hosting plans.

All looked like they'd be great, but what irked me is they want you to pre-pay for several years in advance to get the advertised price. I'm used to hourly billing with DigitalOcean. The thing with technology is it changes fast, so I want flexibility. I don't ever want to be locked into a situation where I've prepaid 2 years of hosting.

The other concern is the affordable plans had monthly visitor limits, bandwidth limits, or limits on the number of WordPress installs. Most were under what b3n.org needs, which would push me into the $100+ plans. Maybe my DigitalOcean droplet isn't so bad after all!

So back to Google searching… I came across Cloudways. What’s the best way I can describe Cloudways? The DigitalOcean of WordPress.

What Separates Cloudways

What makes Cloudways unique is that when you deploy WordPress, you're not just getting a managed WordPress application. You're getting your own cloud server, and you can install as many WordPress instances under it as you want at no additional cost. So the hierarchy is:

  • Server
    • WordPress site 1
    • WordPress site 2
    • etc…

If you run out of capacity you can scale horizontally (deploy more servers) or vertically (more cores, memory, and SSD space).

Logos of DigitalOcean, Linode, Vultr, AWS, and Google Cloud Platform that Cloudways allows you to deploy to.

Cloudways doesn’t have their own infrastructure. Rather they partner with DigitalOcean, AWS, Google Cloud, Linode, and Vultr so you pick the underlying cloud vendor. So when you deploy a server on Cloudways you’re actually getting a managed cloud server.

Features I like from Cloudways

  • You can choose your desired cloud provider based on your needs.
  • Price is affordable ($10/month for a small DO droplet)
  • Per hour billing (no pre-paying years in advance).
  • Unlimited sites and WP instances; you can scale up as needed.
  • Choose any location you want
  • Staging Environments!
  • WordPress migration (mine migrated over flawlessly) from your old server
  • 24/7 Support … now when my server has trouble I don’t have to call myself.
  • Linux, Apache, NGINX, SSL cert automation, Varnish, Redis, security updates, and all of that stuff I used to maintain myself is now taken care of for me! :-)
  • Monitoring and Auto-healing can correct problems proactively.
  • There are a lot of checks for best practices and server health. I temporarily disabled the Breeze cache plugin and got an email the next day telling me it was still disabled. Similarly there are checks for load and performance.
  • You can choose which version of PHP and MariaDB to run on.
  • And now when Ubuntu 16.04 LTS goes EOL…. I don’t care!
  • WordPress Instances come pre-optimized (have Breeze caching plugin installed, Memcached, etc.).
  • It's not limited to WordPress; Drupal and other PHP applications are supported as well.

Where Cloudways Could do Better

  • I'm a bit unclear what happens when the server I deployed goes EOL for security updates. I can't imagine they would upgrade it automatically since that would be risky. I'm guessing it would fail a health check and I'd get a notification to upgrade? It's something I'll have to keep an eye on, but it could be made clearer. If the solution is to deploy a new server and move your WordPress instances over, that can be done with a few clicks from the web interface.
  • The Cloudways interface is not snappy. It can take a few seconds to bring up monitoring metrics.
  • Where are floating IPs?! With DigitalOcean I can get a floating IP that I can assign to one droplet and then reassign it to another droplet. With Cloudways it looks like moving to another server would require DNS changes.

Conclusion

In the chart below I have:

  • IaaS (Infrastructure as a Service)
  • PaaS (Platform as a Service)
  • SaaS (Software as a Service)

Cloudways would fall in PaaS. They manage everything that WordPress runs on (PHP, MariaDB, Varnish, Apache, NGINX, etc.). They do step into the SaaS world a bit since they will automatically deploy optimized WordPress instances for you with things like caching pre-configured, but for the most part you're still managing WordPress yourself.

Chart showing IaaS, PaaS, and SaaS.  Cloudways falls under PaaS

All in all, Cloudways Managed Cloud Hosting seems to be a decent offering. One side benefit is they're just better at performance tuning than I am. On DigitalOcean, where I was maintaining the platform myself, b3n.org was able to handle a sustained load test of 150 clients per second; on Cloudways it handles over 1,000 clients per second.

My First 3D Printer! Ender 3 Pro

Eli Assembling Ender 3 Pro

I'm not sure exactly how it started. It might have been when Eli and I were trying in vain to find Lego Technic sets with lots of gears, or when Kris was discussing with me purchasing learning aids for school… and I started to realize we could 3D print this stuff!

Just with the things we buy for school each year, a 3D printer will pay for itself in 2 years.

What is 3D Printing?

3D printing is also known as Additive Manufacturing (AM). Instead of injection molding, items are created by printing layers on top of layers. Injection molding is fine for mass production, but for small quantities it doesn't make sense because molds aren't cheap to make. A variety of methods and materials can be used for 3D printing. I use PLA (polylactic acid): the plastic is fed to the printer and heated to the point of melting, then comes out a nozzle where it cools and solidifies. The nozzle is controlled by X, Y, and Z axis stepper or servo motors, allowing it to be positioned anywhere in the print area.

Octopus with articulating legs… the 3D printer can print the leg segments in place interlocked. I don’t think this is possible in traditional manufacturing.

Of course, I know very little about 3D printing so I turned to my coworker, Brad, who has designed and printed out prototype aircraft components and has actually flown them. I asked him for the best quality budget 3D printer. He has a few of the larger fancier Creality printers and told me the next one he would likely buy for himself for smaller prints was a little Creality Ender 3 Pro. One thing I’ve learned: if the expert is willing to buy something for himself, that’s what you want.

The Ender 3 Pro comes with all the tools needed for someone new to 3D printing: Allen keys, wrench, screwdriver, pliers, SD card and USB adapter, nozzle cleaning needle, blade, etc. The Pro version adds a few features that I think make it worth the extra cost over the normal Ender 3: it is a bit more sturdy, has a better (quieter) PSU, can resume printing after a power failure, and has a magnetic flexible print bed which eliminates the need for glue or hairspray to get prints to stick. The 3D prints adhere very well during printing and peel right off when done. I hardly ever need to print with rafts or support structures. I don't even print a brim.

It arrived at noon on Friday. Eli couldn't wait, so he and Kris mostly had it together before I got home. We finished the assembly, and without even leveling the bed I popped in the SD card that came with the machine, selected the cat model that was already on the card, and it started printing, and printing, and printing… okay, so it took a long time. So we all went to bed.

Next morning I woke up to hearing, “It finished!” We had a cat! Which Eli promptly painted. …here’s our first print:

Not bad for a first try.

For our second print we decided to print something simple like the Eiffel Tower. I found one on Thingiverse and opened it up in the Creality Slicer (a slicer is a program that converts 3D models into a gcode file that the printer understands) that came with the printer. It took me 3 tries.

This was my last print using the Creality Slicer. I had to go crazy on the rafts and support structure, but this isn't needed with the default Ender 3 profiles that come with Cura.

Our first attempt ran for an hour or two, then one of the 4 legs fell over. I tried it again with a raft but it still fell! Then I made huge rafts and a support structure and it worked! But the print came out stringy. I was using Creality Slicer since it came with the printer. Then I remembered Brad told me to try Cura. So I downloaded that… and it was a night and day difference (even though Creality Slicer is based on Cura). I told Cura what printer I had and it came pre-loaded with sane defaults for everything from print speeds to head retraction. Now that I'm printing with Cura, I don't need support structures and get no stringing. I'm guessing most of the difference was in the default profiles.

Print Workflow

Business Card Holder from ThingiVerse

What does a 3D print workflow look like?

1. Go to Thingiverse and search for a 3D object. Thingiverse is a huge library of 1,500,000 3D printable models. I've found everything from Craftsman Versatrack compatible bike hangers to spare parts for my car. Download the STL file (this is essentially a CAD file).

2. Open the STL file with Cura (free open source), which is a slicer to convert the object into instructions the 3D printer can understand. Cura has well-tuned default profiles for the Ender 3. The instructions are output as a .gcode file. I popped one open and it is a text file with line-by-line instructions for the printer: go to these X, Y, Z coordinates at these speeds at this temperature, etc. (see the sample lines after this list). Essentially you copy this to an SD card, insert it into the printer, and print the object.

3. The printer will pre-heat the bed and PLA, then start printing. I would say we have a success 9 out of 10 times. Sometimes I won't have the bed quite level or the temperature won't be right for the specific PLA brand/color I'm using (even different colors print at different temperatures). But you can save those profiles in Cura, so once you have it dialed in it should work going forward. Generally if the first layer succeeds the print will be a success.

4. When done, let it cool for 30 seconds, bend the magnetic bed and the print peels off.
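
For the curious, here are a few lines of the kind of commands you'll find in a sliced .gcode file. This is an illustrative sketch with made-up values, not a snippet from an actual print; comments follow the semicolons:

    M104 S200            ; set nozzle temperature to 200°C
    M140 S60             ; set bed temperature to 60°C
    G28                  ; home all axes
    G1 Z0.2 F1200        ; lower nozzle to first layer height
    G1 X50 Y50 E5 F1500  ; move to (50,50) while extruding 5mm of filament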

Can you Design and Print Your Own 3D Models? Yes!

Gears Eli designed in Tinkercad

Just about everyone has asked me this question. You can.

Tinkercad (by Autodesk) is a free web CAD designer that makes it simple to design 3D objects. The very first thing Eli designed in Tinkercad was a set of gears.

Stepper Motor Noise

Okay, so one issue I had with the Ender 3 Pro is the noise the stepper motors make. The best way I can explain it is the printer sounds like R2-D2 and C-3PO are arm wrestling, and you hear it throughout the entire house. I ended up swapping out the control board for one with silent stepper drivers. Once I did that, the only noise you hear is the fans. Much better. We have it near the kitchen and I'd say it isn't silent: the fan is noisier than a typical computer fan but not nearly as loud as the dishwasher.

Motherboard

Infill Patterns

In Cura, you can choose from a number of infill patterns. Each has its advantages: some are designed to be stronger, some print faster, some save on material. One of the huge advantages of 3D printing is you can pick a pattern and density that provide the strength needed for a particular use. This greatly reduces the amount of plastic needed to fill in a part. Here are the infill patterns in Cura:

Infill Patterns in Cura

Left to right the infill patterns are:

  • Gyroid
  • Cross 3D
  • Cross
  • Zig Zag
  • Concentric
  • Quarter Cubic
  • Octet
  • Cubic-Subdivision
  • Cubic
  • Tri-Hexagon
  • Triangles
  • Lines
  • Grid

Infill Patterns Test

Cubic-Subdivision Infill Pattern

I usually use Lines for quick prints. If I have a larger shape that needs to bear stress in multiple directions I'll use either Gyroid (a 3D pattern found in creation) or Cubic-Subdivision, which uses more density around the perimeter and less in the middle (like bones).

Getting Started in 3D Printing

Here’s what I bought to get started.

  1. Creality Ender 3 Pro 3D Printer. The 3D printer along with an essential set of tools.
  2. Silent Stepper Drivers Motherboard Upgrade
  3. Gizmo Dorks PLA Filament 1.75mm, 200g 4 Color Pack. I wanted to try a few different colors. These were easy to start with.
  4. Hatchbox PLA 1 kg spools in various colors.

One thing I'd say about 3D printing is that at my budget it's not quite there when it comes to ease of use. There was nothing Kris, Eli, and I couldn't figure out and get working, but it took us a bit of time to get the bed leveled and the temperature settings dialed in. If you want something closer to "hit print and it just works" then you may want to pay a little more and get a Prusa Mini. It has auto bed leveling and a network interface which makes it much more user-friendly. But you will pay quite a bit more for those features.

The Future

3D printing is the future. In the home it is going to replace the need to run to the store to get something small, and allow for 3D printing small parts to repair items instead of tossing them. Just like printers moved from businesses to homes, so will the ability to manufacture small plastic items. 3D printing isn't instant, but it's already faster than Amazon Prime. And if it saves me from having to make a trip to Spokane to find some part, it's worth it.

For manufacturing, it greatly reduces tooling costs. Injection molding will still be used for mass-produced items, but 3D printing lowers the tooling costs for smaller runs and one-off items. Not to mention the agility: a factory of general-purpose 3D printers can start printing something else at a moment's notice to meet new demands and market changes.

Transformer Pumpkin parts. I am amazed the printer can do those overhangs.

Things We’ve Printed (So far)…

  • Cat
  • Eiffel Tower
  • Pumpkin
  • Octopi to hand out as prizes
  • Pumpkin Transformer
  • UniFi USG and Switch mini racks
  • 3D Topography Maps of the 7 Summits
  • Craftsman Versatrack compatible bike hook
  • Drawers to store tools for the Ender 3
  • Impossible 3D shapes
  • Jig for drilling axles in a Pinewood Derby car
  • 3D Luther Roses for Reformation Day prizes
  • Business Card Holder
  • Gears
  • Benchy Boat
  • Lego compatible bricks
  • Carabiner

I switched to Duplicati for Windows Backups and Restic for Linux Servers

So long, CrashPlan! After using it for 5 years, CrashPlan, with less than a day's notice, decided to delete many of the files I had backed up. Once again, the deal got altered. Deleting files with no advance notice is something I might expect from a totalitarian leader, but it isn't acceptable for a backup service.

Darth Vader altering the deal
I am altering the deal. Pray I don’t alter it any further.

CrashPlan used to be the best offering for backups by far, but those days are gone. I needed to find something else. To start with I noted my requirements for a backup solution:

  1. Fully automated. I am not going to remember to do something like take a backup on a regular basis. Between the demands from all aspects of life I already have trouble doing the thousands of things I should be doing; I don't need another thing to remember.
  2. Should alert me on failure. If my backups start failing, I want to know. I don't want to check on the status periodically.
  3. Efficient with bandwidth, time, and price.
  4. Protect against my backup threat model (below).
  5. Not Unlimited. I’m tired of “unlimited” backup providers like CrashPlan not being able to handle unlimited and going out of business or altering the deal. I either want to provide my own hardware or pay by the GB.

Backup Strategy

Relayed Backups

This also gave me a good opportunity to review my backup strategy. I had been using a strategy where all local and cloud devices backed up to a NAS on my network, and then those backups were relayed to a remote backup service (formerly CrashPlan). The other model is a direct backup. I like this a little better because, living in North Idaho, I don't have a good upload speed; in several cases my remote backups from the NAS would never complete because I didn't have enough bandwidth to keep up.

Now if Ting could get permission to run fiber under the railroad tracks and to my house I’d have gigabit upload speed, but until then the less I have to upload from home the better.

Direct Backups

Backup Threat Model

It’s best practice to think through all the threats you are protecting against. If you don’t do this exercise you may not think about something important… like keeping your only backup in the same location as your computer. My backup threat model (these are the threats which my backups should protect against):

  1. Disasters. If a fire sweeps through North Idaho burning every building but I somehow survive, I want my data. So I must have offsite backups in a different geo-location. We can assume that all keys and hardware tokens will be lost in a disaster, so those must not be required to restore. At least one backup should be in a geographically separate area from me.
  2. Malware or ransomware. Must have an unavailable or offline backup.
  3. Physical theft or data leaks. Backups must be encrypted.
  4. Silent Data Corruption. Data integrity must be verified regularly and protected against bitrot.
  5. Time. I do not ever want to lose more than a day's worth of work, so backups must run on a daily basis and must not consume too much of my time in maintenance.
  6. Fast and easy targeted restores. I may need to recover an individual file I have accidentally deleted.
  7. Accidental corruption. I may have a file corrupted or accidentally overwrite it and not realize it until a week later or even a year later. Therefore I need versioned backups to be able to restore a file from points in time going back several years.
  8. Complexity. If something were to happen to me, the workstation backups must be simple enough that Kris would be able to get to them. It’s okay if she has to call one of my tech friends for help, but it should be simple enough that they could figure it out.
  9. Non-payment of backup services. Backups must persist on their own in the event that I am unaware of failed payments or unable to pay for backups. If I'm traveling and my credit card gets compromised, I don't want to be left without backups.
  10. Bad backup software. The last thing you need is your backup software corrupting all your data because of some bug (I have seen this happen with rsync) so it should be stable. Looking at the git history I should be seeing minor fixes and infrequent releases instead of major rewrites and data corruption bug fixes.

Raspberry Pi and 4TB drive on wooden shelf
Raspberry Pi 4TB WD Backup

My friend Meredith had contacted me about swapping backup storage. We're geographically separated, so that covers local disasters. So that's what we did: each of us set up an SSH/SFTP server for the other to back up to. I had plenty of space on my Proxmox environment so I created a VM for him and put it in an isolated DMZ. He had a Raspberry Pi and bought a new 4TB Western Digital external USB drive that he set up at his house for me.

Duplicati Backup Solution for Workstations

For Windows desktops I chose Duplicati 2. It also works on Mac and Linux, but for my purposes I just evaluated Windows.

Duplicati screenshot of main page

Duplicati has a nice local web interface. It's simple and easy to use. Adding a new backup job is straightforward and gives plenty of options for my backup sets and destinations (this allows me to back up not only to a remote SFTP server, but also to any cloud service such as Backblaze B2 or Amazon S3).

Animation of setting up a duplicati backup job

Duplicati 2 has status icons in the system tray that quickly indicate any issues. The first few runs I was seeing a red icon indicating the backup had an error. Looking at the log it was because I had left programs open locking files it was trying to back up. I like that it warns about this instead of silently not backing up files.

Green play icon
Grey paused icon
Black idle icon
Red error icon

Green=In Progress, Grey=Paused, Black=Idle, Red=Error on the last backup.

Duplicati 2 seems to work well. I have tested restores and they come back pretty quickly. I can back up to my NAS as well as a remote server and a cloud server.

Two things I don't care for about Duplicati 2:

  1. It is still labeled Beta. That said it is a lot more stable than some GA software I’ve used.
  2. There are too many projects with similar names. Duplicati, Duplicity, Duplicacy. It’s hard to keep them straight.

Other considerations for workstation backups:

  • rsync – no GUI
  • Restic – no GUI
  • Borg Backup – Windows not officially supported
  • Duplicacy – license only allows personal use

Restic Backup for Linux Servers

I settled on Restic for Linux servers. I have used Restic on several small projects over the years and it is a solid backup program. Once the environment variables are set, it's one command to back up or restore, which can be run from cron.
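
To give an idea of how simple it is, here's a minimal sketch; the repository URL, password file path, and backup paths are placeholders for illustration:

    # Environment variables telling restic where the repo is and how to unlock it
    export RESTIC_REPOSITORY=sftp:backup@backuphost.example.com:/backups/$(hostname)
    export RESTIC_PASSWORD_FILE=/root/.restic-password

    # One command to back up...
    restic backup /etc /home /var/www

    # ...and one command to restore (here, the latest snapshot to /tmp/restore)
    restic restore latest --target /tmp/restore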

Screenshot of restic animation

It’s also easy to mount any point in time snapshot as a read-only filesystem.
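
For example (this requires FUSE; the mount point is arbitrary):

    mkdir -p /mnt/restic
    restic mount /mnt/restic
    # snapshots are now browsable under /mnt/restic/snapshots/<timestamp>/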

Borg Backup came in pretty close to Restic; the main reason I chose Restic is its support for backends other than SFTP. The cheapest storage these days is object storage such as Backblaze B2 and Wasabi. If Meredith's server goes down, with Borg Backup I'd have to redo my backup strategy entirely. With Restic I have the option to quickly add a new cloud backup target.

Looking at my threat model there are two potential issues with Restic:

  1. A compromised server would have access to delete its own backups. This can be mitigated by storing the backup on a VM that is backed by storage configured with periodic immutable ZFS snapshots.
  2. Because Restic uses a push instead of a pull model, a compromised server would also have access to other servers' backups, increasing the risk of data exfiltration. At the cost of some deduplication benefits this can be mitigated by setting up one backup repository per host, or at the very least by creating separate repos for groups of hosts (e.g., one restic repo for Minecraft servers and a separate repo for web servers).

Automating Restic Deployment

Obviously it would be ridiculous to configure 50 servers by hand. To automate, I used two Ansible Galaxy roles. I created https://galaxy.ansible.com/ahnooie/generate_ssh_keys which automatically generates SSH keys and copies them to the restic backup target. The second role, https://galaxy.ansible.com/paulfantom/restic, automatically installs and configures a restic job on each server to run from cron.

Utilizing the above roles, here is the Ansible playbook I used to configure restic backups across all my servers. This sets it up so that each server is backed up once a day at a random time:
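
The playbook boils down to applying those two roles to every host. Here's a minimal sketch; the role variables for the SFTP target, repository password, backup paths, and the randomized cron schedule are omitted since their exact names are defined by each role's README:

    ---
    - hosts: all
      become: true
      roles:
        # generate ssh keys and copy them to the restic backup target
        - role: ahnooie.generate_ssh_keys
        # install restic and configure the daily cron backup job
        - role: paulfantom.restic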

Manual Steps

I’ve minimized manual steps but some still must be performed:

  1. Backup to cold storage. This is archiving everything to an external hard drive and then leaving it offline. I do this manually once a year on world backup day and also after major events (e.g. doing taxes, taking awesome photos, etc.). This is my safety in case online backups get destroyed.
  2. Test restores. I do this once a year on world backup day.
  3. Verify backups are running. I have a reminder set to do this once a quarter. With Duplicati I can check in the web UI, and a single Restic command can get a list of hosts with the most recent backup date for each (see below).
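
For the Restic side, a command along these lines does the trick (the --group-by flag is available in recent restic versions):

    # show the most recent snapshot for each host backing up to this repo
    restic snapshots --group-by host --latest 1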

Cast your bread upon the waters,
for you will find it after many days.
Give a portion to seven, or even to eight,
for you know not what disaster may happen on earth.

– Solomon, Ecclesiastes 11:1-2 ESV

How to Get Longer Life Out of Your Dell Laptop Battery

In 2015 I bought myself and Kris Dell Latitude E5450 laptops. A year later her battery was fine; however, mine lasted 60 seconds on a full charge. I attribute this to Kris often using her computer on battery rather than having it plugged in all the time, while I always had my computer in the docking station, constantly charging.

60 seconds of run-time

I lived with a bad battery for 3 years… 60 seconds is enough to run from one outlet to the next without having to power down… which is really all I need. Although I’ll admit 120 seconds would be nice!

Battery Swelling Issue

A couple weeks ago I noticed a crack near my touchpad… and a bulge. My laptop was growing! Or rather, the battery was expanding! The battery pack is about 175% the height it should be!

That Dell battery pack on the left is a little swollen….

I quickly waited a few months, and decided that despite the battery still giving me my 60 seconds, this could be a safety or fire risk, and my laptop might break if it swells much more, so out of prudence I bought a new Dell G5M10 battery. After installing it I went into the BIOS and noticed settings to change how Dell manages the battery! You can opt for faster charging, more run-time, or more longevity.

Here are the battery life settings.

Charge Time, Run Time, or Lifespan. Pick any 1, sometimes 2.

  • ExpressCharge – Faster charging. This was the default! The problem is the faster you charge a battery, the sooner it wears out. This makes sense for people on the road who don't have a lot of time to recharge, but it doesn't make sense if you're almost always on AC power like me. This setting probably has a high charge stop at maximum capacity (100%?) and a high charge start (95%) so that it's always ready. I'm not an expert in batteries, but I believe batteries naturally lose charge over time, so each time it drops 5% it charges back up to 100%… those constant charge cycles cause a lot of wear, not to mention the battery is being held at full charge, which causes it to degrade faster. This setting gives you the best performance but pushes the battery to its limits.
  • Standard – As far as I can tell this is the same as ExpressCharge with a slightly slower charge rate. Other than that it's still going to wear the battery out fast.
  • Primary AC User – Designed to extend the battery lifespan for laptops that are usually plugged in. I assume it slows down the charge rate, sets the charge stop to a lower value like 70%, and sets the charge start to around 50% (I'm completely guessing at these numbers). This reduces the number of charge cycles needed to maintain the battery and generally charges it to levels suitable for long-term storage instead of maximum performance, giving you the best lifespan at the cost of run-time. If you want longevity at the cost of run-time, this is the setting you want.
  • Adaptive – This is what the default should be! It's a trade-off between the two: it optimizes battery settings based on how you typically use the computer. If you're running on AC power all the time it will act more like the Primary AC User setting, but if it sees you are using the battery a lot it will start behaving like ExpressCharge.
  • Custom – You can also set your own charge start and stop values.

Dell BIOS Settings for Battery Maintenance

Optimizing Battery for both performance and longevity depending on the time of day

If you have a fixed schedule, you can tell your Dell laptop what times of day you need more run-time. It will only drive your battery hard during the hours you might need the run-time and go easy on it the rest of the time, maximizing longevity outside those hours.

Dell BIOS Settings for Battery

Well, I’ll be changing my BIOS setting to Primary AC User.

And with my brand new battery I'm liking the new 4-hour run-time again. Nowadays I walk from outlet to outlet instead of running.

How to Get Longer Life Out of your ThinkPad Battery

If you use a ThinkPad read this KB on How to Increase Your Battery Life by changing the Battery Maintenance settings.

Running Chrony NTP under a VMware Guest

Here’s a quick guide to run an NTP (Network Time Protocol) server using Chrony with a GPS (optional) receiver on a VMware ESXi Guest running Ubuntu 18.04.  I should note this is experimental and something I setup in my homelab temporarily.  For production environments I would run NTP on physical hardware and not VMware.

Create and Configure VM

Be sure to disable VMware Tools time synchronization by editing the VM settings and unchecking Synchronize guest time with host.

Disable VM Tools Time Synchronization

Set the CPU shares to High… we want the NTP server to have priority if there is processor contention.

High CPU Shares

Install Chrony

I diversified between Ubuntu’s, NTP.org’s and NIST’s time server pools.
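
Here's roughly what that looks like in /etc/chrony/chrony.conf (the exact server selection is a matter of taste; iburst speeds up the initial sync):

    sudo apt install chrony

    # /etc/chrony/chrony.conf -- diversify across pools
    pool ntp.ubuntu.com iburst
    pool 0.pool.ntp.org iburst
    server time.nist.gov iburst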

That's it. After restarting the chrony service (systemctl restart chrony) you should be able to get time reports with chronyc commands such as these:
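
    chronyc tracking      # current offset and state of the system clock
    chronyc sources -v    # each time source with measured offset and jitter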

Why You Shouldn’t Run an NTP Server in a VM Guest

VMs can't keep accurate time

I've generally found that VMs keep great time inside of VMware. One thing that helps is setting the CPU shares to high so your time server always has priority. I ran Chrony in a VM for several weeks and compared it with Chrony on a Raspberry Pi. Both were acceptable, and both had a smaller standard deviation than public NTP servers over the internet, but the VM had a much smaller standard deviation than the Pi. That tells me VMs running on better hardware may keep time better than lesser bare-metal hardware under certain conditions, and a local NTP server in a VM can be more precise than grabbing time off the internet.

VMs can become out of sync during snapshots, suspend, failover, etc.

I ran a suspend test and this is true.  I paused a VM, waited 10 seconds, then resumed it.  It reported the wrong time to NTP clients for several minutes before it corrected itself from external NTP servers.  Here’s a screenshot of my NTP server being 11 seconds off after a pause!

Chrony after VMware Suspend

This is a valid reason to run an NTP server on physical hardware.  However, I think it is possible to run an NTP server under VMware with the following precautions:

  1. Your NTP servers under VMware should never be paused.  That means they should be excluded from failover (instead of failover it’s better to configure multiple NTP servers for your clients to connect to since it’s better for an NTP server to be down than report a wrong time).
  2. Have multiple NTP servers. At least three. You'll notice in the screenshot above that Chrony (running on a separate physical machine) flagged the server as not being accurate. This way if one of your VMs gets paused, chrony will switch to another time source automatically.
  3. Set makestep 1 -1 in the chrony.conf file (this tells chrony that any difference greater than one second will be stepped, which allows for faster correction after a resume), as shown below.
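
That last item is a single line in /etc/chrony/chrony.conf:

    # makestep <threshold> <limit>: step the clock when the offset exceeds
    # 1 second; a limit of -1 means no limit on how many times this can happen
    makestep 1 -1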

GPS Receiver

This is not really related to VMware, but I had a GPS receiver so I thought I'd see how it works with Chrony…

GlobalSat GPS Receiver

I have a GlobalSat BU-353S4 USB GPS receiver. This isn't the best GPS receiver for accuracy. For me it's accurate to within a few hundred milliseconds, which is good enough for my experimental purposes but worse than just grabbing time off the internet. Serious time-keepers will want something faster than USB and more accurate than a cheap GPS receiver can provide.

Configure gpsd
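
The gpsd side is minimal. Here's a sketch of the setup; the device path assumes the receiver enumerates as /dev/ttyUSB0, so check dmesg to confirm on your system:

    sudo apt install gpsd gpsd-clients

    # /etc/default/gpsd
    START_DAEMON="true"
    DEVICES="/dev/ttyUSB0"
    GPSD_OPTIONS="-n"

    # verify the receiver has a satellite fix
    cgps -s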

Configure Chrony for GPS
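
gpsd publishes time into a shared-memory segment that chrony can read as a reference clock. My refclock line looks like the sketch below; the offset value is explained next, while the delay and refid values are rough starting points rather than anything precise:

    # /etc/chrony/chrony.conf
    # SHM 0 is gpsd's shared-memory time source
    refclock SHM 0 offset 0.250 delay 0.2 refid GPS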

So, how did I get the values on the refclock line…

The way I came up with my offset of 0.250 is by initially setting the offset to 0.0, restarting chrony, and running chronyc sources -v several times, taking note of the offset. I'd get numbers like +249ms, +253ms, +250ms, etc.

Since my GPS is off by about 250ms I set the offset to 0.250.  Now it’s usually not off by more than 100ms.

Chrony Sources

The ±100ms variance is not a problem when combined with other sources, but if GPS were my only time source I'd be better off tolerating drift for a short period without access to the NTP pools than accepting its high variance. If I had no internet for several months, or an air-gapped network, then time via GPS would probably be better than nothing, but a better GPS receiver should be used in those scenarios.

For most networks, running chrony in a VM and using a GPS is unnecessary. It's better to keep it simple: I just use the NTP service on my pfSense router and point all the clients to it.

Don’t forget to watch your clocks adjust themselves next Sunday!

Programming Management & Leadership Books

There are plenty of books on managing people, but there are few books targeting management of software development, and even fewer aimed at people who got promoted into leadership positions with no management training. I've read countless books looking for resources in that area… I can find plenty of books about how to manipulate people or promote yourself (and I've had plenty of training to that effect), but those are not the books I'm looking for.

I want real, authentic leadership and practical management. Below you will find the best of what I've found over the last four years. And unlike some "Best Books for Programming Managers" and "Top 10 Books on Leadership" lists you'll find online… I actually read every book listed below.

I should also note that even if you aren’t in a position of management these books should be beneficial.  Whether you have the position or not, everyone has the opportunity to lead.

Managing the Unmanageable

Managing The Unmanageable Book

“Most successful programming managers are former programmers: They can quickly grasp whether a developer is on track through the most informal of conversations, without having to ferret out the assessment through long strings of questions that can feel pestering.”

Managing the Unmanageable by Mickey W. Mantle and Ron Lichty (2012)

Managing the Unmanageable is a comprehensive handbook offering a variety of insights and a tool set for managing software development teams. I didn't find it lacking coverage on any topic.

It rightly points out how managing programmers is like managing artists: programming is a creative job, so you can't manage it the same way you would manage most other jobs.

It goes over how to build relationships with and manage HR, your boss, other departments, etc.; how to define developer levels; how not to do incentives (which can often be more demotivating than motivating); job descriptions; how to conduct interviews; building culture; motivating developers; and more. The vastness of topics is unmatched by any other management book I've read. It may only devote a few pages to some subjects, but I haven't found an area it doesn't cover at all, and even where it doesn't go into great depth it references sources for further study.

I think this is the best resource for a new manager to get a comprehensive overview of every topic related to managing programmers. What I really like about the book is that, drawing on the experience of the authors, it anticipates and provides guidance on a lot of challenges I had to deal with; reading it helped me proactively plan how to handle those situations.

For me, reading Managing the Unmanageable is like sitting down at a coffee shop with some seasoned managers and listening to their experience and wisdom.  Today I still use it as reference book.

Peopleware

Peopleware Book on Productive Projects and Teams

“The major problems of our work are not so much technological as sociological in nature.”

“Most managers are willing to concede the idea that they’ve got more people worries than technical worries. But they seldom manage that way. They manage as though technology were their principal concern. They spend their time puzzling over the most convoluted and most interesting puzzles that their people will have to solve, almost as though they themselves were going to do the work rather than manage it.”

Peopleware: Productive Projects and Teams (3rd Edition) by Tom DeMarco & Timothy Lister (originally published in 1987, I read the 3rd edition published in 2013)

Peopleware, as its title suggests, is all about the people aspect of managing software developers. It's not a generic management book; most of it only applies to managing creative and intellectual workers. It covers why programmers are distinct from, and must be managed differently than, other types of jobs, such as accountants or manufacturing workers. The book covers topics like the importance of allowing time to think on the job, giving teams a sense of elitism to increase productivity, creating environments where teams can naturally form and jell, the importance of an interruption-free office environment, and why the surest way to improve productivity is to focus on quality.

I learned that environmental factors can cause a 10-to-1 performance difference between programmers. A large section deals with the work environment: office design, layouts, how bad cubicles are, the importance of natural light, office size, privacy, etc. This is a timeless classic. It would benefit any manager, executive, head of HR, architect, or programmer (even if you aren't in a management position, this book will help you manage yourself).

The Mythical Man-Month

The Mythical Man-Month

“Why is programming fun? What delights may its practitioner expect as his reward? First is the sheer joy of making things. As the child delights in his mud pie, so the adult enjoys building things, especially things by his own design. I think this delight must be an image of God’s delight in making things, a delight shown in the distinctness and newness of each leaf and each snowflake.”

The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition (2nd Edition) by Fred Brooks (originally published in 1975, I read the 20th Anniversary edition published in 1995)

This is a collection of essays about managing and organizing large software projects. Most important is Brooks' observation that adding more manpower to a late software project will make it even later. My favorite observation of his is that the most productive teams are smaller: because of communication overhead, you only get fractional gains by increasing the size of large teams. Although pre-Agile, many of his ideas influenced Agile project management. He was well ahead of his time. This is a classic.

“Adding manpower to a late software project makes it later.”

The Conviction to Lead

The Conviction to Lead

“Whenever Christian leaders serve, in the church or in a secular world, their leadership should be driven by distinctively Christian conviction.”

“Leadership is all about putting the right beliefs into action, and knowing, on the basis of convictions, what those right beliefs and actions are.  This book is written with the concern that far too much of what passes for leadership today is mere management.  Without convictions you might be able to manage, but you cannot really lead.”

The Conviction to Lead: 25 Principles for Leadership That Matters By Albert Mohler, 2014

This was not an easy find. I read fluffy leadership book after fluffy leadership book… and finally read Mohler's book at my dad's recommendation. It has far more substance on leadership than anything else I've read. Where others give you mechanics, tools, and methods, Mohler gives you conviction and motivation based on well-grounded beliefs. It is not written just for pastors, nor just for leaders of Christian institutions (although that appears to be the main focus), but also for Christians who happen to be leaders in secular organizations, which is quite rare for a book on leadership written by a devout Christian.

Mohler's book is practical because it provides the foundation for why and how Christians should lead, and the basis for leading in a secular world. I would say the book is primarily written for the C-level, but I was able to apply almost all of it at a lower level of management by limiting the scope to my area of influence. This is a good book for any Christian in a position of leadership.

Kindle vs Paper Books

I've been using a Kindle for about 6 years. And I have been reading paper books for longer than that! I have two Kindles: one is the discontinued Kindle Touch, and the other is the newer Kindle Paperwhite. Here are my thoughts on the Kindle and how eBooks compare to print books.

The Kindle Reading Experience

For much of the reading experience I prefer the Kindle. It's compact, lightweight, and easy to carry around. With a Kindle I don't have to awkwardly hold a book open while my other hand is trying not to spill my cup of coffee. Also, when it starts to dim outside and I don't quite have enough light, I can turn on the backlight instead of the house lights.

Kindle Paperwhite vs Book

Backlight

E-ink displays don't have as good contrast as real paper. The reason Amazon calls their latest Kindle the "Paperwhite" is it has a backlight that can sort of match the brightness of paper by supplementing the light from your environment; the idea is you turn the backlight on just enough so that the screen still looks like it's reflecting light like a book, but with just enough extra light to make it as readable as paper. This does work; however, I think the LED color Amazon chose is a failure. The pure white LED backlight is too far into the blue spectrum, and that's very obvious when I'm reading under incandescent lights. It's okay in natural light, but under incandescent lighting it should be warmer to match the surrounding atmosphere. This could affect health if reading right before going to bed. I hope Amazon fixes this in the next version… maybe it should have RGB bulbs and a sensor to match the ambient light.

In very bright light paper wins out, but if the ambient light is dim, as it often is in the fall in Idaho, the Kindle lets me read a little longer before turning on the house lights. This probably saves me 1 or 2 cents a year.

Physical Library Size

Kindle Library Size

The Kindle does have the advantage of being able to store my entire Kindle library wherever I am… not only is it smaller than 99% of my books, it can store all of my books in that space.

Fonts

90% of paper book publishers choose great fonts, but some don't. For some reason some publishers think their book needs a sans-serif font, or they pick a huge font, or too small a font, or the kerning is off. It bugs me! If you get the Kindle version you can override the publisher's horrible font decision. As an added bonus the font size is adjustable, so I can read anything without glasses.

Quality

I always prefer a good hardbound paper book to an eBook. However, I've noticed lately a lot of authors are using cheap (self-publishing?) services; it seems the books are printed on demand and the quality is sometimes bad. The best way I can describe it is that the book feels like some ad-hoc document put together at a business conference rather than a book. I'll often opt for an eBook if I see the author is using a self-publishing service (not all self-published books come this way; I think it's just a quality control issue, so it's hit and miss).

Enjoying Books with Others

Eli and Jon reading maps

The social aspect of eBooks is poor. Often when I'm on an airplane, or a friend is at my house, they'll show interest in a book I'm reading or have on the shelf, and it makes a great conversation starter. You just don't get that with Kindle books because nobody can see what you're reading. Kids love physical books and will spend hours poring over maps, illustrations, and pictures that would be boring on a tablet. I can easily give a paper book to a friend; while Amazon has some provision for lending, it's very limited and not as simple as handing your friend a book.

Highlighting and Taking Notes

For highlighting it's a wash. The Kindle is sometimes a bit finicky when I try to highlight a passage and sometimes gets the wrong portion, but for the most part I can get it. I always read a book with a pen or pencil, and I find underlining a passage without the line going through the words takes a little more effort. For taking notes in the margin nothing can beat pencil or pen on paper.

Diagrams and Illustrations

Diagrams and pictures are generally bad on eBooks. For simple graphics the Kindle does fine, but if the book has illustrations they don't look as great because the screen is smaller and you lose color.

Kindle Lack of Color

Also, the Kindle completely fails at tables… the table below has data that is illegible on the Kindle; it's too small to read and there's no way to rotate it into landscape mode.

Kindle Table Fail

Flipping Through Pages

The Kindle is useless here. Even in the page-flipping mode the e-ink display takes too long to refresh. A real book is much easier; plus, I remember the layout of a page and generally know that what I was looking for was in the first quarter of the book, so I can find it in seconds.

Searching

Here the Kindle shines.  If you are looking for a keyword or phrase you can find it very quickly.

Visual Indicators of Progress

Kindle Progress indicator

This is a big deal. I am very spatial and use the physical feel of how many pages I have read, and how far I have to go, as part of my memory. This is all lost on eBooks. With paper books it's easy to see your overall progress at a glance, and if you want to thumb a few pages ahead to see when the chapter ends it takes half a second. With an eBook I get something like "location 675" or "24%". That's meaningless to me. A progress bar might be nice! Something visual and not just numbers. Even web browsers have scrollbars!

Reading Books as a Group

When reading books for study with others, eBooks fail. I tried this once, but everyone else was referring to page numbers and I couldn't get page numbers out of my Kindle.

Free eBooks

Amazon has a lot of free Kindle books for Prime members. I've found the free books aren't really that good, so it's not much of a gain.

Free Classic Books

There are a number of great classic books you can download from Project Gutenberg; this may save you from purchasing a few paper books.

Updates to Books

Some of my more technical books have received free Kindle updates when the author chooses to update the text. This is a benefit in my mind, though I think it would be better if the Kindle would highlight the differences.

X-Ray

Kindle X-Ray People

One nice feature on the Kindle Paperwhite is X-Ray. You can enable it for the page you're on and it will tell you about the characters and give you some context (if you've forgotten or missed the previous chapters).

Kindle X-Ray Terms

Newspapers

You can read newspapers on the Kindle, but it's worthless. The Wall Street Journal digital subscription is completely separate from the Wall Street Journal Kindle digital subscription. I'm not going to buy a digital subscription for both my computer and my Kindle.

Synchronization

One great thing about eBooks is I can read them on my Kindle, then bring up the book on my computer to review my highlights while typing up notes. But it's hit and miss: this works for books I bought from the Amazon store, but if you buy Kindle-formatted books from somewhere other than Amazon, there's no way to get them to open in the Kindle for PC program (even though they are available in Kindle for Android). Very annoying.

So, What’s Better?  Kindle eBooks or Old Fashioned Physical Books?

It really depends. I like both for different reasons. I do have a preference for print books, mostly because I can visually track progress, see the layout of pages, and flip through them. Generally if it's a book I'll probably read once I'll just get whichever is cheaper… but obviously some I'm going to insist on getting in the physical version. One thing Amazon does for /some/ books is if you buy a physical copy, you can get the Kindle version for free or heavily discounted. I hope this becomes standard practice going forward; that's the best of both worlds.

Of making many books there is no end, and much study is a weariness of the flesh.   The end of the matter; all has been heard.  Fear God and keep his commandments, for this is the whole duty of man.  For God will bring every deed into judgment, with every secret thing, whether good or evil.

– Solomon, Ecclesiastes 12:12b-14

7 Homelab Ideas | Why You Should Have A Homelab

Why You Should Have a Homelab

In 1998 my friend gave me a Red Hat Linux CD. I spent hours each day experimenting with Linux; I loved it. Two years later I'm in a room with 30 other students at a university applying for the same computer lab assistant job, thinking my chances are grim. Partway through the mass interview a man walks to the front of the room and asks if anyone has ever used Linux. I raise my hand; I'm the only one. He takes me out of the interview for the lab assistant job and introduces me to the department director. They took me out to lunch. By the end of the day I had my first job as a systems administrator.

Learn things on your own and it will broaden your opportunities.

One of the best ways to learn about systems, applications, and technology is starting a homelab. A homelab can give you an enjoyable, low-stress, practical way to learn technology. It will also help you find the technical areas you're interested in. And it's practical in that you can use it to service your own home.

Here are 7 Ideas for Your Homelab

1. Router / Firewall

Ubiquiti EdgeRouter X

The most essential piece of equipment will be your router. I started out with consumer routers that I'd flash to DD-WRT or Tomato, but now I use a virtual pfSense router. Routers are great for learning about DHCP, DNS, VPNs, firewalls, etc. I discourage using the router provided by your ISP; they're usually not very capable and often not secure. In most cases you can buy a DSL or cable modem instead of the ISP-provided modem/router combo. One inexpensive physical router I'd recommend is the Ubiquiti EdgeRouter X. Ubiquiti provides free software updates (their model is you buy the hardware and the software is free), and you'll get a handful of advanced features; it's a very capable router and much better than a typical consumer router. To step up from Ubiquiti you'd be going to pfSense, Juniper, or Cisco.

2. Storage

Supermicro Storage

The main reason I started my homelab was storage. I was taking a lot of family pictures and videos and wanted to save them. I know there are cloud services, but at the time they were expensive, and you're sort of trusting that provider to not delete all your photos or get bought out by a larger company and shut down.

Then I started using VMware and needed faster storage with more IOPS. One of the best homelab storage solutions is ZFS. ZFS takes the best of filesystems and the best of RAID and combines them into a software-defined storage solution that I've not seen any hardware technology match. Two popular free ZFS appliances I like are Napp-It (based on OmniOS) and FreeNAS. OmniOS is a fork of OpenSolaris and is very robust with tight ZFS integration.

FreeNAS Logo

I'm currently using FreeNAS, which is the free open source version of iX Systems' TrueNAS, used by organizations of all sizes, from small businesses with a few TB of storage to large government agencies with PBs of storage. FreeNAS has done a great job at technology convergence. It is both a NAS and a SAN, allowing you to try both approaches to storage (I prefer NAS because it takes better advantage of ZFS, but many prefer SAN, and there are benefits and drawbacks to both). It also has many built-in storage protocols: FTP, iSCSI, NFS, rsync server, S3 emulator, SMB (Windows file server), TFTP, and WebDAV. It can join AD, it can even be an AD DC (if you like living on the edge), and it has a built-in hypervisor (bhyve) to run VMs for whatever you want. This is now marketed as hyper-converged storage. All of it is completely free. You can build your own FreeNAS server like I did, or get started with a FreeNAS Mini from iX Systems.

A few years after I learned ZFS at home, my employer was looking for a new storage solution, so having this knowledge and experience was helpful. I was able to determine that one vendor with a traditional RAID solution didn't handle the RAID-5 write-hole problem properly.

3. Virtualization

VMware

Virtualization allows you to run multiple virtual servers on the same piece of hardware. VMware is king in the small to mid-size business hypervisor market, and VMware offers their hypervisor for free. The free version is just like the paid versions except you won't be able to use some features (most involving high availability and fail-over across multiple servers). But you can learn most of the concepts and features of VMware. I've tried a number of hypervisors but I always come back to VMware. I consider VMware my basic infrastructure. From there you can learn about other things like networking and storage, and play with any OS or Linux distribution you want.

Knowing VMware was hugely beneficial. I've implemented it for several businesses and one of my previous employers, and knowing how it works means I can discuss the VMware stack intelligently with the ops team.

See my FreeNAS on VMware Guide if you’re interested in running a virtual FreeNAS server inside VMware.

4. Networking

A homelab without decent networking won't get you far. Fortunately, if you use VMware you can leverage its virtual network switches. For physical switches I really like the UniFi products. They are simple enough for non-network engineers like me; everything can be configured using the GUI. UniFi exposes you to managed switches, central management (with the UniFi Controller), VLANs, PoE (Power over Ethernet), port trunking, port mirroring, redundant paths with spanning tree, etc.

UniFi 8 Port Switch

I started with this little UniFi 8-port switch (4 are PoE ports). I also added a UniFi 24-port switch so I could learn how to set up a LAG and configure VLANs across multiple switches (which was really simple using the UniFi interface). I also like UniFi's philosophy: they sell you the hardware but the software is free, which means you don't pay for maintenance or support but continue to get free updates. In a homelab you may not need to go crazy on VLANs, but separating your main network from your IoT devices may be prudent.

Learning how to set up VLAN tagging and link aggregation, and understanding how networking works, helps me communicate better with the network engineers when discussing design and deployment options–they may be working on Juniper or Cisco equipment, but I know the concepts behind what they’re doing.

5. Wireless APs

Having a robust wireless setup is also a necessity for a homelab.  If you have a large house you get to set up multiple APs and make sure they can hand off connections.  If I were buying today I’d get a UniFi nanoHD AP.  I use an older model, the UniFi UAP AC Pro (I just have one because that’s all I need to cover my house, but if you can find an excuse to have 2 or more I’d recommend it since you can practice rolling updates without downtime, wireless handoff, etc.).  These are managed by the same UniFi controller as the switches.  I first gave them a try because I read Linus Torvalds uses UniFi APs, and they seem to be highly rated by tech professionals–and now I don’t think I’d go back to anything else.

I have written more about UniFi equipment here.

6. Network Monitoring

Icinga

It is hard to maintain a reliable network and application stack without monitoring for failures.  There are hundreds of network monitoring solutions, and the right one really depends on your needs.  The most widely deployed solution is Nagios.  I ran it on my Homelab for a while, but lately I’ve been using Icinga because it’s simple and it integrates with Ansible.  Both use the same simple plugin model, so a check written for one works with the other–see the sketch below.
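
A Nagios/Icinga check is just a program that prints one status line and signals state with its exit code: 0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN.  As a minimal hedged sketch (the URL and thresholds are hypothetical placeholders, not from my actual setup):

```python
#!/usr/bin/env python3
# Minimal Nagios/Icinga-style check plugin: the exit code conveys state.
# 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN
import sys
import time
import urllib.request

URL = "http://homelab.example.com/"  # hypothetical endpoint to monitor
WARN_SECONDS = 1.0
CRIT_SECONDS = 3.0

try:
    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=CRIT_SECONDS) as response:
        response.read()
    elapsed = time.monotonic() - start
except Exception as exc:
    print(f"CRITICAL - {URL} unreachable: {exc}")
    sys.exit(2)

if elapsed >= CRIT_SECONDS:
    print(f"CRITICAL - {URL} responded in {elapsed:.2f}s")
    sys.exit(2)
if elapsed >= WARN_SECONDS:
    print(f"WARNING - {URL} responded in {elapsed:.2f}s")
    sys.exit(1)

print(f"OK - {URL} responded in {elapsed:.2f}s")
sys.exit(0)
```

Point a check command at a script like this and Icinga (or Nagios) handles the scheduling, retries, and alerting for you.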

7. Infrastructure Automation

Automating your infrastructure may not make as much sense in a small Homelab, but it does make sense to automate any task you do repetitively, or any manual task that could be automated.  For me, this was installing updates, deploying servers, and renewing SSL certificates with Let’s Encrypt.  To manage this I use Ansible, which is one of the most well-thought-out infrastructure automation tools I’ve seen (a minimal playbook sketch is below).  Ansible can manage Linux and Windows servers.  Infrastructure automation, especially if you do it using version control and CI/CD tools like Azure DevOps (you can get a free account for up to 5 users with unlimited private repositories), is a great skill for your career if you’re interested in the DevOps world.  The book Ansible for DevOps by Jeff Geerling helped me get started.  I suggest getting the eBook since he has been known to provide updates to the book (not sure if he will continue to, but just in case).
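
To give a feel for what a playbook looks like, here is a minimal hedged sketch of the “install updates” task for Debian/Ubuntu hosts (the inventory group name is a hypothetical placeholder, not my actual configuration):

```yaml
# update.yml - apply pending package updates (illustrative sketch only)
- name: Patch all Linux servers
  hosts: homelab            # hypothetical inventory group
  become: true              # escalate to root for package management
  tasks:
    - name: Upgrade all apt packages
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true

    - name: Check whether a reboot is required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Reboot if required
      ansible.builtin.reboot:
      when: reboot_required.stat.exists
```

Run it with ansible-playbook -i inventory update.yml; the same pattern scales from a few homelab VMs to a whole fleet.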

At work we completely automated the deployment of Linux servers using Ansible–infrastructure as code.  It took a month of investment, but it paid off big time: developers can now deploy VMware VMs at will by making a Git pull request, our entire fleet of servers is updated automatically, and our server configurations are all consistent.  This replaced an old process of waiting several weeks for a VM to be provisioned and configured by hand.

Bonus homelab application server ideas…

  1. Minecraft Server — popular Java game–it’s like playing with Legos and a great way to get your friends together for some casual games.
  2. Mumble Server – one of the best low-latency voice chat servers for in-game communication.
  3. Emby Media Server — Anyone who has kids realizes those flimsy Blu-ray drives aren’t going to last long.  It’s great for storing and hosting movies, home videos, pictures, and audio.
  4. Asterisk PBX Server – VoIP phone server (use Twilio or Flowroute for SIP trunking).  Polycom makes great VoIP phones.  With Twilio SIP trunking you can have a real landline phone number with E911 capability for a few dollars a month–and if you get multiple phones you can use it as an intercom system.
  5. Web Server (maybe start a blog) — I hosted this blog from a server in my house for years–until my ISP couldn’t handle the bandwidth.  Nowadays you can also use a service like Cloudflare as a CDN, which really reduces your bandwidth usage.  Hosting your own blog is a great learning experience and gives you a place to log your homelab experiments and share solutions to problems.
  6. Automatic Ripping Machine — Get all your Blu-rays, DVDs, and CDs loaded onto your Emby server.
  7. Backup server — I use a CrashPlan Business subscription to back up my FreeNAS server to the cloud (one of the main reasons I use a NAS, as this would be less efficient with a SAN).  Backblaze B2 is another great option for backing up FreeNAS.

There are many more areas than I listed, but I think the above is a good baseline to get started.  Pick one area at a time–my homelab was built over many years.  Often I’ll improve an area after a piece of equipment fails or needs replacing for some other reason–that’s a great time to do research.  If you aren’t sure where to start, pick the area that you enjoy the most.  For areas you have no interest in, the best thing to do is something else–you’re probably not going to be great at something you don’t enjoy.  Certainly a homelab isn’t going to be a substitute for real-world work experience.  But it does provide an environment to learn, experiment, and enhance your abilities–and the great thing is, since it’s your own lab, you can learn things that interest you.

I think that’s the largest benefit of a homelab.  To me it’s a playground.  It’s a place to put the love of learning into practice.  It’s a place of freedom.  Nobody else is dictating what you do here.  It’s a place to have fun while enhancing your skills.

Do you see a man skillful in his work?
He will stand before kings;
he will not stand before obscure men.     – Proverbs 22:29 ESV

OpenDNS and CleanBrowsing | DNS Content Filtering

What is DNS Content Filtering?

A DNS-based content filtering service can prevent certain websites from loading on your network.  Most services can filter by specific categories like malware, phishing, pornography, etc.  Unlike some content filtering that intercepts traffic between you and the website you’re visiting (which can introduce security risks), DNS filtering never touches the traffic itself.  It doesn’t require installing any software on your computer or device, making it one of the safest ways to filter web content.

Google’s DNS server returns the IP address of the phishing site, while CleanBrowsing returns NXDOMAIN

If you accidentally typo a popular domain (such as typing .cm instead of .com) it could take you to a phishing site.  A DNS filtering service blocks this by returning NXDOMAIN (domain does not exist) instead of the IP address, effectively preventing the website from loading.  The same technique can be used to keep any undesirable category, such as malware, pornography, or adware, from loading on your network.
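
You can watch this happen yourself.  Here is a hedged sketch using the third-party dnspython library, comparing how Google’s public resolver and CleanBrowsing’s Security Filter (185.228.168.9, from the list below) answer the same query.  The typoed domain is a hypothetical placeholder–substitute a real one:

```python
# pip install dnspython
import dns.resolver

DOMAIN = "example-typo.cm"  # hypothetical typoed domain; substitute a real one

for name, server in [("Google", "8.8.8.8"), ("CleanBrowsing", "185.228.168.9")]:
    # configure=False ignores the OS resolver settings so we can pick a server
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    try:
        answer = resolver.resolve(DOMAIN, "A")
        print(f"{name}: resolved to {answer[0].address}")
    except dns.resolver.NXDOMAIN:
        print(f"{name}: NXDOMAIN (blocked or does not exist)")
```

For a domain on the filter’s block list, the filtering resolver raises NXDOMAIN while the unfiltered resolver happily returns an address.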

The other benefit of using a DNS filtering service is that it can force certain search and media services (like Google and YouTube) into safe mode, preventing anyone using your network from even seeing adult content in their search results.  This also works at the DNS level: queries for the search engine are answered with the address of its safe-search enforcement endpoint (Google documents forcesafesearch.google.com for this purpose, for example).

Why Should I Use One?

It’s a wise way to protect yourself from malware and temptation, and it’s also helpful when letting guests on your WiFi network–you don’t have to worry (as much) about what they’re doing–and a good idea when you start letting kids online.  DNS filtering doesn’t take the place of parenting, and anyone with a little technical skill can bypass it, but it may help prevent your family and anyone using your network from accidentally stumbling across bad sites.  If it prevents one CryptoLocker infection, it’s worth it.

I think families, churches, home networks, small businesses, organizations, schools, large enterprises, and governments could benefit from DNS filtering.   You may not want to go overboard blocking content about illegal drugs and gambling, but at the very least you probably don’t want malware on your network!

Two DNS Filtering Services

I use two DNS content filtering services: OpenDNS and CleanBrowsing.  Both have simple instructions to get started, so I won’t repeat them here.  Both are free and work well, and my decision to use one or the other on a particular network just depends on the situation–although in most cases either would be fine.  It’s nice to have multiple options.

OpenDNS

OpenDNS Logo

OpenDNS has been around since 2006 and was acquired by Cisco in 2015.  It offers several free plans and some paid options as well:

  • OpenDNS Family Shield (Free).  Very simple–just set your router’s DNS servers to 208.67.222.123 and 208.67.220.123 and it’s pre-configured to block malicious and adult content.
  • OpenDNS Home (Free).  For more advanced control, it allows granular category filtering.  If your ISP gives you a dynamic IP you will need to use a DDNS client to update OpenDNS with your public IP.  Below are some screenshots to show the granularity:

OpenDNS Filtering Categories

OpenDNS Filtering Security Categories

  • OpenDNS Home VIP ($20/year) — Very affordable and adds the ability to white-list specific domains if they’re on the block list.
  • Cisco Umbrella — For businesses and larger enterprises.

CleanBrowsing

CleanBrowsing Logo

CleanBrowsing is a fairly new service, starting in February of 2017.

It offers three free filtering plans and two paid plans:

  • Security Filter (Free) – Set your router’s DNS to 185.228.168.9 and 185.228.169.9 to block only malicious domains (phishing and malware).
  • Adult Filter (Free) – Set DNS to 185.228.168.10 and 185.228.169.11 to block adult domains and set search engines to safe mode (also includes the Security filter).
  • Family Filter (Free) – Set DNS to 185.228.168.168 and 185.228.169.168 to block VPN domains that could be used to bypass filters and mixed-content sites (like Reddit), and to set YouTube to safe mode (includes the Adult and Security filters as well).
  • Basic Plan ($5/month) – Allows you to set up custom filtering categories and whitelist or blacklist specific domains.
  • Professional ($9/month) – Targets small networks (fewer than 2,000 devices; for more than that you can get a custom quote).

CleanBrowsing DNS Filtering Map

OpenDNS and CleanBrowsing Comparison

OpenDNS has been around the longest, but CleanBrowsing is leading in innovation (note that my comparison covers the free or low-priced consumer services, not each provider’s enterprise offering):

OpenDNS advantages

  • The free account allows more control of specific categories
  • Blocked domains get redirected to a page explaining why they were blocked (for most people this gives a better understanding of what’s going on than an NXDOMAIN)
  • Been around longer, so more mature

CleanBrowsing advantages

  • Security – Supports DNSSEC (which prevents forgery of DNS results–some ISPs have been known to hijack DNS responses).  Also supports DNSCrypt, DNS over HTTPS, and DNS over TLS (see the sketch after this list).
  • Blocked domains return an NXDOMAIN (better practice than redirecting for technical/security folks)
  • Privacy policy: CleanBrowsing states that it does not log requests
  • Better Test Results on Adult content filtering: blocked 100% of adult content on a Porn Filter test by Nykolas Z (OpenDNS blocked 89%).
  • Much better test results blocking phishing sites: CleanBrowsing blocked 100% of phishing sites on 3 out of 4 tests, beating OpenDNS in every area.  On the real-time test it let 1 of 12 sites through, while OpenDNS only blocked 2 of 12.
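
Since CleanBrowsing supports DNS over HTTPS, you can send your queries encrypted too.  A hedged sketch with dnspython (DoH support needs the httpx or requests package; the endpoint URL below is my understanding of CleanBrowsing’s family-filter DoH address, so verify it against their documentation):

```python
# pip install dnspython[doh]
import dns.message
import dns.query

# CleanBrowsing family-filter DoH endpoint (assumed; check their docs)
DOH_URL = "https://doh.cleanbrowsing.org/doh/family-filter/"

# Build a standard A-record query and send it over HTTPS instead of UDP
query = dns.message.make_query("www.youtube.com", "A")
response = dns.query.https(query, DOH_URL)
for rrset in response.answer:
    print(rrset)
```

The answer should come back pointing at YouTube’s restricted-mode addresses, and nobody between you and CleanBrowsing can see or tamper with the lookup.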

Both OpenDNS and CleanBrowsing have very fast DNS resolution rates (probably faster than your ISP), with CleanBrowsing resolving slightly faster for me but within milliseconds of each other.  I think either service is worth using.

I have made a covenant with my eyes.
How then could I look at a young woman? — Job 31:1 CSB


MobaXterm Professional Review

I recently switched to MobaXterm Professional from PuTTY.  And I’m not looking back…

A PuTTY Alternative

I had just re-installed Windows 10 to fix an updating issue.  As I was downloading PuTTY I thought: there has got to be something better than PuTTY.  PuTTY is a good program, but there are four things it doesn’t do for me:

  1. Automatically save the SSH session
  2. Keep a list of recent servers I’ve SSHed into for a quick reconnect.  I know this is nitpicky on my part, but I don’t really remember all my server hostnames or IP addresses.
  3. SFTP.  I just want to drag and drop files between the terminal and File Explorer without having to open another program!
  4. Remember in-flight changes to a saved session.  If I change a setting (such as a keepalive) and don’t remember to save it, PuTTY forgets it.

I looked at and tried quite a few options.  KiTTY, MobaXterm, mRemoteNG, RoyalTS, SuperPuTTY, XShell6, Bitvise, SmarTTY, Solar-PuTTY, and SecureCRT.  I ended up buying MobaXterm.

What I Like About MobaXterm – A Quick Review

Start Screen

The start screen is simple and useful… open MobaXterm and start typing a hostname.  If you’ve connected to that server before it will auto-complete; if not, it creates a new session.


Along the left is a list of saved servers, which can be organized into folders with customizable icons.  The main screen shows the last 9 sessions for quick access.

New Sessions

MobaXterm supports a number of protocols:

  • SSH
  • Telnet
  • Rsh
  • Xdmcp
  • RDP (yes, it can even manage Windows RDP sessions)
  • VNC
  • FTP
  • SFTP
  • Serial
  • File
  • Local Shell (which includes Ubuntu Bash via WSL if you have it installed, PowerShell, Bash on Windows, and the normal DOS prompt)
  • Browser (opens a browser)
  • Mosh
  • S3

Integrated SFTP File Transfers on the Terminal

SSH into a server and the left pane shows an SFTP session which automatically follows where I am in the terminal and allows dragging and dropping files back and forth with File Explorer!  No more having to open up WinSCP just to transfer a quick file.

Files can also be opened directly and edited using a built-in or an external editor.

X11 Forwarding

X11 forwarding works out of the box with no setup.  Below, all I did was open an SSH session to my Linux VM running CrashPlan and run “CrashPlanDesktop” (a graphical program), and it opened the window locally in Windows.

One of my favorite programs in the world, Minesweeper, no longer comes with Windows 10.  It’s such a classic I don’t know what Microsoft was thinking by removing it.  But… no problem.  I can now run Gnome Mines on Windows via X11 forwarding!

Terminal

The terminal itself is actually PuTTY under the hood, but with some added features.  There’s a place to configure keywords that get highlighted in certain colors when they show up in the terminal; the defaults are useful when reviewing logs.  Terminals can be tabbed, or split horizontally, vertically, or into a grid of 4.  You can also open multiple MobaXterm windows.  Terminals can also be dragged off to float (more like PuTTY terminals do).  Right-click can be configured to paste like PuTTY or provide a menu (it also displays a warning when pasting multiple lines, which is nice).  If you don’t like the Windows 10 everything-is-flat look, want a dark theme, or want it to look like you’re on OSX, there are plenty of skins to choose from…

MobaXTerm Terminals

Setting up SSH tunnel port forwarding is easy…
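
If you ever need to script the same kind of tunnel outside MobaXterm, here is a hedged sketch using the third-party Python sshtunnel library (the hostnames, user, key path, and ports are all hypothetical placeholders):

```python
# pip install sshtunnel
from sshtunnel import SSHTunnelForwarder

# Forward local port 3306 to MySQL on a remote server, through SSH.
# All hostnames, users, and ports here are hypothetical placeholders.
with SSHTunnelForwarder(
    ("server.example.com", 22),
    ssh_username="ben",
    ssh_pkey="~/.ssh/id_ed25519",
    remote_bind_address=("127.0.0.1", 3306),
    local_bind_address=("127.0.0.1", 3306),
) as tunnel:
    print(f"Tunnel open on localhost:{tunnel.local_bind_port}")
    # ...point a MySQL client at localhost:3306 while this block runs...
```

MobaXterm’s tunnel builder does the same thing graphically, which is handy when you just need it ad hoc.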

MobaXterm can manage SSH key authentication and also securely save passwords (if you’re using something that relies on password authentication, which you shouldn’t be).  I use an external SSH agent and MobaXterm handled that well.

Extra Utilities

MobaXterm also comes with quite a few handy programs and utilities… a variety of servers, which is useful if you need to temporarily set up a quick iperf or TFTP server.  Also included are macros and a variety of miscellaneous tools such as a network scanner, port scanner, etc.

A fantastic feature is the ability to run local terminals.  I can run a DOS Prompt, PowerShell, and Ubuntu Bash (WSL) terminal inside MobaXterm.

What Could Be Better

A few features that are missing:

  • The SFTP pane should elevate to root when I “sudo su”.  Update: MobaXterm told me to use the SCP protocol instead of SFTP, and there’s a quick button in the pane to sudo su.  This works.
  • I’d love to be able to open up a VMware ESXi VM console from MobaXterm.
  • I’d like an option to use the integrated SFTP pane with Mosh
  • The cost structure is very reasonable at $69 for a perpetual license, but after the first year, support/maintenance is 80% of the license cost.  I think the price is more than worth it, but I’d love to see a lower maintenance price for home users or businesses under a certain size.
  • Some SSH settings can’t be defaulted and have to be explicitly set on each session.  I prefer to never lock the terminal title, and I always want the SFTP directory to follow the directory in the terminal, but neither of those can be set globally.  Fortunately each session remembers its settings, so you only have to set them once per host, but there should be a global default.
  • RDP settings should have configurable global defaults… I never want to share my local drives or printers during an RDP session, so I have to uncheck those when first setting up each session.

That said, it’s a good program and it works well.