In a previous post, I already briefly touched on Let’s Encrypt. It’s a fairly new but already well-established Certificate Authority that provides anyone with free SSL certificates for the sites and devices they own. This is a welcome change from the older CAs, who charge a premium to get that padlock into your visitors’ browsers. Thanks to Let’s Encrypt being free, those older CAs’ prices have come down over the last year as well, which is great!
A fairly major stumbling block for some people is that, for security reasons, Let’s Encrypt certificates are only valid for 90 days, while paid certificates are usually valid for up to 3 years, letting administrators keep that part on autopilot for quite a while without intervention.
However, since Let’s Encrypt certificate renewals can be fully automated, if you set things up right you may never have to touch an SSL certificate renewal manually again!
The ACME protocol
In that same previous post I also touched on the fact that I don’t much like the beginner-friendly software provided by Let’s Encrypt themselves. It’s nice for simple setups, but because it tries by default to mangle your Apache configuration to its liking, it breaks a lot of advanced set-ups. Luckily, the Let’s Encrypt system uses an open protocol called ACME (“Automated Certificate Management Environment”), so instead of the client they provide, we can use any other client that also speaks ACME. My client of choice is dehydrated, which is written in bash and allows us to manage and control a lot more things. Last but not least, it allows the use of the dns-01 challenge type, which validates ownership of the domain/host name with a DNS TXT record instead of a web server.
The dns-01 challenge
There are a few different reasons to use the dns-01 challenge instead of the http-01 challenge:
- Non-server hardware: not all devices supporting SSL are fully under your control. It might be a router, for example, or a management card of some sort, where you can’t just go in and install Let’s Encrypt’s ACME client, but you can (usually manually) upload SSL certificates to it. It would be nice to be able to request an “official” (non-self-signed) certificate for anything that can use one, as otherwise the value of SSL communication is debatable: users quickly learn to dismiss certificate warnings and errors if they are trained to expect them.
- Internally used systems: these don’t exist in public DNS and are likely not reachable from the internet on port 80 either, so the ACME server cannot contact the web server to validate the token.
- Centralized configuration management: most if not all of my server configuration is centrally managed by Puppet, including the distribution of SSL certificates and reloading daemons after certificate changes. I don’t feel much for running an ACME client on every single server, each managing its own certificates. Being able to retrieve all SSL certificates on this same system directly and coordinate redistribution from there is a big win, plus there’s only one ACME client on the entire network.
The DNS record creation challenge
When using the dns-01 challenge, the script needs to be able to update your public DNS server(s), to be able to insert (and remove) a TXT record for the zone(s) you want to secure with Let’s Encrypt. There are a few different ways of accomplishing this, depending on what DNS server software you use.
For example, if you use Amazon’s Route53, CloudFlare, or any other cloud-based system, you’ll have to use their API to manipulate DNS records. If you’re using PowerDNS with a database backend, you could modify the database directly (as this script by Joe Holden demonstrates for PowerDNS with a MySQL backend). Other types of server may require you to (re)write a zone file and have the software load it.
RFC2136 aka Dynamic DNS Update
Luckily, there’s also somewhat of a standard solution to remote DNS updates, as detailed in RFC2136. This allows for signed (or unsigned) updates to happen on your DNS zones over the network if your DNS server supports this and is configured to allow it. RFC2136-style updates are supported in ISC BIND, and since version 4.0 also in PowerDNS authoritative server.
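To give an idea of what such an update looks like, here is a hand-driven example using the ‘nsupdate’ tool (introduced further below); the server IP, host name, TTL and token value are all placeholders, and a real ACME validation token would come from the ACME server:

```
nsupdate <<EOF
server 10.1.1.53
update add _acme-challenge.example.com. 300 IN TXT "some-validation-token"
send
EOF
```

The hook script we will set up later performs essentially this exchange automatically for every certificate request.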
As I use PowerDNS for all my DNS needs, this next part will focus on setting up PowerDNS, but if you can configure your own DNS server to accept dynamic updates, the rest of the article will apply just the same.
Setting up PowerDNS for dynamic DNS updates
First things first, the requirements: RFC2136 support is only officially available since version 4.0 of the PowerDNS Authoritative Server – it was available as an experimental option in 3.4.x already, but I recommend running the latest incarnation. Also important is backend support: as detailed on the Dynamic DNS Update documentation page, only some backends can accept updates – this includes most database-based backends, but not the bind zone file backend, for example.
I will assume you already have a running PowerDNS server hosting at least one domain, and replication configured (database, AXFR, rsync, …) to your secondary name servers.
There are a number of ways in PowerDNS to secure dynamic DNS updates: you can allow specific IPs or IP ranges to modify either a single domain, or give them blanket authorization to modify records on all domains, or you can secure updates per domain with TSIG signatures.
In this example I went with the easiest route, giving my configuration management server full access for all domains hosted on the server.
Only 2 (extra) statements are required in your PowerDNS configuration:
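In pdns.conf, they should look something like this (using the 10.1.1.53 management server from my setup as the example):

```
dnsupdate=yes
allow-dnsupdate-from=10.1.1.53
```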
This will enable the Dynamic DNS Update functionality, and allow changes coming from the 10.1.1.53 server only. Multiple entries (separated by spaces) and netmasks (e.g. 10.1.53.0/24) are allowed.
Installing dehydrated
The dehydrated client is hosted on GitHub; we can install it into /root/dehydrated with the following commands:
# apt-get install git
# cd /root; git clone https://github.com/lukas2511/dehydrated
# cd /root/dehydrated
# echo HOOK=/root/dehydrated/hook.sh > config
The HOOK variable in the configuration above points to the hook script we will install for dns-01, so we don’t have to supply the path on every invocation.
Hook script requirements
As the hook script we will use is a simple bash script, it requires two binaries: ‘nsupdate’, which does the RFC2136-speaking for us, and ‘host’, which is used to check propagation. On Debian and derivatives, these are contained in the ‘dnsutils’ and ‘bind9-host’ packages, respectively.
# apt-get install dnsutils bind9-host
The hook script
I’ve uploaded the hook script to GitHub; download it and save it as /root/dehydrated/hook.sh.
Make sure the script is executable, as otherwise it won’t be run by dehydrated.
# chmod a+x hook.sh
This script will be called by dehydrated and will handle the creation and removal of the DNS entry using dynamic updates. It will also check if the record has correctly propagated to the outside world.
If you don’t have direct database replication between your master and its slaves (say you use AXFR with notifies), it will take a short while before all nameservers responsible for the domain are up to date and serving the new record.
I initially thought of iterating through all the NS records for the domain and checking that each of them serves the correct TXT record, but after seeing Joe’s PowerDNS/MySQL script run the check against Google’s 8.8.8.8, I decided to do the same. If it turns out there are too many failures in the end, I might update the script to check every nameserver individually before continuing.
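A check along those lines can be sketched roughly as follows; the function and variable names here are illustrative, not necessarily the ones the actual hook script uses:

```shell
# Illustrative sketch of a propagation check: ask a public resolver
# for the challenge TXT record and retry a few times before giving up.
# ATTEMPTS and SLEEP mirror the configurable values described below,
# but the names are assumptions for this sketch.
ATTEMPTS=3
SLEEP=1

wait_for_txt() {
    # $1 = record name, $2 = expected token, $3 = resolver to query
    i=0
    while [ "$i" -lt "$ATTEMPTS" ]; do
        # 'host -t TXT' prints the record once the resolver serves it
        if host -t TXT "$1" "$3" 2>/dev/null | grep -q "$2"; then
            return 0
        fi
        i=$((i + 1))
        sleep "$SLEEP"
    done
    return 1
}
```

The hook would call something like `wait_for_txt _acme-challenge.your.host.name <token> 8.8.8.8` after sending the dynamic update, and only let dehydrated proceed once the record is visible.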
The hook script will load the configuration file used by dehydrated itself (/root/dehydrated/config), so you can add a number of configuration values for the hook script in there:
- The DNS server IP to send the dynamic update to.
- The path to the nsupdate binary; the default is the correct path on Debian and derivatives.
- The number of times to ask Google whether the DNS record propagation succeeded.
- The time to wait (in seconds) before retrying the DNS propagation check.
- The DNS server port to send the dynamic update to.
- The TTL for the record we will be inserting; the default of 5 minutes should be fine.
DESTINATION="/etc/puppet/modules/letsencrypt/files"
CERT_OWNER=puppet
CERT_GROUP=puppet
CERT_MODE=0600
CERTDIR_OWNER=root
CERTDIR_GROUP=root
CERTDIR_MODE=0755
This block defines where to copy the newly created certificates to after they have been received from Let’s Encrypt. A new directory inside DESTINATION will be created (named after the hostname) and the 3 files (key, certificate and full chain) will be copied into it. Leaving DESTINATION empty will disable the copy feature.
The CERT_OWNER, CERT_GROUP and CERT_MODE fields define the new owner of the files and their mode. Leaving CERT_OWNER empty will disable the chown functionality, leaving CERT_GROUP empty will change group ownership to the CERT_OWNER’s primary group, and leaving CERT_MODE empty will disable the chmod functionality.
CERTDIR_OWNER, CERTDIR_GROUP and CERTDIR_MODE offer the same functionality for the certificate files’ directory created inside DESTINATION.
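The logic behind those variables can be sketched like this; install_cert is a hypothetical helper name for illustration, not necessarily what the hook script calls it:

```shell
# Sketch of the ownership/mode handling described above: each step is
# skipped when its variable is left empty.
install_cert() {
    # $1 = a certificate file copied into DESTINATION
    if [ -n "$CERT_OWNER" ]; then
        # an empty CERT_GROUP yields "owner:", which chown resolves
        # to the owner's primary group
        chown "$CERT_OWNER:$CERT_GROUP" "$1"
    fi
    if [ -n "$CERT_MODE" ]; then
        chmod "$CERT_MODE" "$1"
    fi
}
```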
I use this functionality to copy the files to the Puppet configuration directory, and I need to change ownership and/or mode because the generated certificates are readable by root only by default, which means my Puppet install cannot actually deploy them, as it runs as the ‘puppet’ user.
Requesting a certificate
To request a certificate, run:
# ./dehydrated --cron --challenge dns-01 --domain <your.host.name>
If everything goes well, you will end up with a brand new 90-day certificate from Let’s Encrypt for the host name you provided, copied into the destination directory of your choice.
Renewing your certificates automatically
The hook script adds every successfully created certificate to domains.txt. dehydrated uses this file to automatically renew certificates when you don’t pass the --domain parameter on the command line.
# ./dehydrated --cron --challenge dns-01
To do this fully automatically, just add the command to a cron job.
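A root crontab entry along these lines would do it (the 03:15 schedule is arbitrary); dehydrated only renews certificates that are approaching expiry, so running it daily is safe:

```
# m h dom mon dow  command
15 3 * * * /root/dehydrated/dehydrated --cron --challenge dns-01
```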