The objective of DevOps is simplicity. But with so many moving parts, getting the Rube Goldberg machine to run without a hitch is challenging even under the best circumstances. Recently, I needed to rapidly deploy a LetsEncrypt-enabled web service, and I was tired of doing, well, any manual steps. So I buckled down and figured out how to get an app deployed with a real domain and a real cert in a single move.
Here's how I got it done. Hopefully this process can be of use for you.
Here is a repo that demonstrates how to use DevOps automation to easily create web apps with a signed cert.
For this adventure, we'll be using the following resources:
- Azure as our cloud provider
- Terraform for our asset declaration/deployment
- Docker to host services on our VM
- Traefik, our application proxy, and the secret sauce in this setup
- WordPress as our sample app to deploy
Let's work backward from how we want this to go. With a single invocation of `terraform apply`, we want a virtual machine deployed to Azure, Docker installed, and a WordPress application spun up, along with a LetsEncrypt-signed cert for a given domain.
"Taggart, why aren't you using Azure Web Apps or Container Apps?"
Because they're overkill, and they cost too dang much! I know how to install Docker on a VM—heck, I even know how to install Docker in Swarm mode on multiple VMs. I don't feel like paying Microsoft any more than I absolutely have to. So while their dizzying array of cloud products seems appealing, I like to keep it as close to the metal as possible without sacrificing the automation I'm after.
And by the way, if that means I have to write a few lines of shell script as connective tissue, so be it. I accept that as a technical debt, yes, but a smaller one than aligning my build to a specific cloud product that may change at any moment. At least with virtual machines, I can be reasonably certain they'll always be around as an option.
...oof, at least I hope so.
Terraform is going to deploy a VM, but not just a VM, right? This is The Cloud™, after all. The virtual machine comes complete with:
- A storage disk
- A network interface
- A public IP
- A network security group
- A DNS A record
The Terraform plan must account for all this. It's a lot! But good news: I've made an example repo that shows you how. Make sure to change the domain name and other details in these files. More on that later.
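To give a feel for the shape of that plan, here is a rough HCL sketch. Every name, region, size, and image value below is an illustrative placeholder, not the repo's actual configuration, and the virtual network, subnet, and NSG are elided for brevity:

```hcl
# Sketch only: names, sizing, and image values are placeholders.
# The plan assumes an existing resource group (more on that below).
data "azurerm_resource_group" "rg" {
  name = "my-existing-rg"
}

resource "azurerm_public_ip" "web" {
  name                = "web-pip"
  resource_group_name = data.azurerm_resource_group.rg.name
  location            = data.azurerm_resource_group.rg.location
  allocation_method   = "Static"
}

resource "azurerm_network_interface" "web" {
  name                = "web-nic"
  resource_group_name = data.azurerm_resource_group.rg.name
  location            = data.azurerm_resource_group.rg.location

  ip_configuration {
    name = "primary"
    # Virtual network, subnet, and NSG omitted for brevity
    subnet_id                     = azurerm_subnet.web.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.web.id
  }
}

resource "azurerm_linux_virtual_machine" "web" {
  name                  = "web-vm"
  resource_group_name   = data.azurerm_resource_group.rg.name
  location              = data.azurerm_resource_group.rg.location
  size                  = "Standard_B2s"
  admin_username        = "azureuser"
  network_interface_ids = [azurerm_network_interface.web.id]

  admin_ssh_key {
    username   = "azureuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  # Ubuntu 22.04 LTS ("jammy")
  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}
```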
Terraform is great, but it can't do everything. Importantly, it can't register domain names for you. That's why, before any of this kicks off, you need to register your domain name, and create an Azure DNS Zone for the domain. Then, make sure the domain is configured to use Azure's nameservers, as per the instructions in the DNS Zone itself.
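If you prefer the Azure CLI to the portal for that one-time setup, the zone creation and nameserver lookup look roughly like this (the resource group name is a placeholder):

```shell
# One-time manual prerequisites; resource group name is a placeholder.
# Create the DNS zone for your registered domain:
az network dns zone create \
  --resource-group my-existing-rg \
  --name mydomain.com

# Show the Azure nameservers to configure at your registrar:
az network dns zone show \
  --resource-group my-existing-rg \
  --name mydomain.com \
  --query nameServers
```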
That's a lot of manual work for this automation, but hey: it's DNS. Did you expect it to be smooth?
With the domain name secured, Azure DNS Zone created, and the domain pointing to Azure's nameservers, we can examine the tools we use to hoist this thing into the sky.
Obviously, Terraform is how we bring resources up in the cloud. The primary resource in this deployment is a simple Linux VM—Ubuntu 22.04, to be precise—that will host our web app. We provision a Network Security Group that allows SSH from a provided IP address (your external IP), and HTTP/HTTPS from everywhere.
The NSG is attached to a new public IP, which in turn is attached to a new DNS A record for the configured domain.
The example also groups everything within a resource group it assumes already exists, but you could easily add a dynamic resource group for all of this to the Terraform plan.
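The A-record wiring, for instance, might look something like this. Again, a hedged sketch: the zone, record, and resource names are placeholders rather than the repo's actual identifiers:

```hcl
# Assumes the DNS zone was created manually beforehand, as described above.
data "azurerm_dns_zone" "site" {
  name                = "mydomain.com"
  resource_group_name = data.azurerm_resource_group.rg.name
}

# Points the apex of the domain at the VM's public IP.
# azurerm_public_ip.web is the public IP resource defined elsewhere in the plan.
resource "azurerm_dns_a_record" "root" {
  name                = "@"
  zone_name           = data.azurerm_dns_zone.site.name
  resource_group_name = data.azurerm_resource_group.rg.name
  ttl                 = 300
  records             = [azurerm_public_ip.web.ip_address]
}
```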
With the VM created, a lightweight setup script runs that performs the following:
- Installs auditd and Sysmon for Linux¹
- Installs Docker
- Brings up the service defined in the provided `docker-compose.yml`
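I don't have the repo's script in front of you here, so treat this as a hedged sketch of the shape such a script takes, not its actual contents. The Docker convenience script and a compose invocation are the standard moves; paths and package names are illustrative:

```shell
#!/bin/bash
# Sketch of a minimal post-provisioning script; paths and package
# names are illustrative, not the example repo's actual contents.
set -euo pipefail

# Auditing tools
apt-get update
apt-get install -y auditd
# (Sysmon for Linux comes from Microsoft's package repository; omitted here.)

# Docker, via the official convenience script
curl -fsSL https://get.docker.com | sh

# Bring up the services defined in the compose file shipped to the VM
docker compose -f /opt/app/docker-compose.yml up -d
```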
So obviously the next tool in the chain is...
Could we install these services directly? Probably, but `docker-compose.yml` is doing the work of post-provisioning for us, allowing us to keep that setup shell script as small as possible. This compose file comprises three containers: WordPress, naturally; a MySQL database; and finally, our most important piece of the puzzle: Traefik.
If you're unfamiliar with Traefik, it's a highly flexible reverse application proxy designed for containers. By hooking directly into the Docker socket, it can automatically detect new services that need proxying. That's a cool trick, but we're going to take advantage of another capability: automatic LetsEncrypt provisioning!
Well, sort of automatic. It's worth examining the `docker-compose.yml` file for the necessary configuration. In particular, the `command` section for the Traefik container:
command: - "--api.insecure=true" - "--providers.docker" - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true" - "--firstname.lastname@example.org" - "--certificatesresolvers.letsencrypt.acme.storage=acme.json" - "--entrypoints.wordpress.address=:80" - "--entrypoints.wordpress.http.redirections.entrypoint.to=websecure" - "--entrypoints.websecure.address=:443" - "--entrypoints.websecure.http.tls.domains.main=mydomain.com" - "--entrypoints.websecure.http.tls.domains.sans=*.mydomain.com"
The `entrypoints` configuration options allow us to customize how to handle services on specific ports. The `certificatesresolvers` configs tell Traefik how to handle any service that uses a TLS cert.
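For context, the Traefik service itself needs access to the Docker socket to detect containers, and somewhere persistent to keep `acme.json` so certificates survive container restarts. A hedged sketch of that service definition (the image tag and host paths are illustrative, not necessarily the repo's):

```yaml
services:
  traefik:
    image: traefik:v2.10   # illustrative tag
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Lets Traefik watch Docker for containers to proxy
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Persists issued certificates across container restarts
      - ./acme.json:/acme.json
    command:
      # (certificate resolver and entrypoint flags as shown above)
      - "--providers.docker"
```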
We combine these options with labels set on our WordPress container:
```yaml
labels:
  - traefik.enable=true
  - traefik.http.routers.wordpress.rule=Host(`mydomain.com`)
  - traefik.http.routers.wordpress.entrypoints=websecure
  - traefik.http.routers.wordpress.tls=true
  - traefik.http.routers.wordpress.tls.certresolver=letsencrypt
```
Notice that the `entrypoints` option is set to `websecure`, which is where we've defined our domain information above. We also set that router's `certresolver` to `letsencrypt`, which we've defined.
Putting all this together, and customizing the aspects relevant to our domain, we can now simply run `terraform apply`, and in no time you'll have a brand new WordPress site waiting for you.
But of course, WordPress is just one example. This same method could be used for any service that requires a signed cert. APIs, proxies...or even command-and-control servers. I'll leave that to your imagination.
Regardless of app, the ability to quickly provision properly-signed services feels like what DevOps always promises, but so rarely delivers. I'm quite happy with this setup, and intend to use it for quite a few projects.
¹ For some reason, this version of Ubuntu installs a critical library (`libcrypto.so.1.1`) as part of a snap, preventing most reasonable software—including Sysmon for Linux—from finding it. The setup script simply creates a symlink to that location in a more reasonable path.
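The exact snap path varies between systems, so a robust version of that fix locates the library first. Something along these lines, with both the search root and the destination path being assumptions rather than the script's actual values:

```shell
#!/bin/bash
# Hedged sketch: find the snap-packaged libcrypto.so.1.1 and symlink it
# into a standard library path so Sysmon for Linux can load it.
# Both paths here are illustrative and may differ on your system.
LIBCRYPTO="$(find /snap -name 'libcrypto.so.1.1' 2>/dev/null | head -n 1)"
if [ -n "$LIBCRYPTO" ]; then
  ln -sf "$LIBCRYPTO" /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
fi
```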