I can’t think of a good reason to set up SSL for a static blog, but once Amazon and Let’s Encrypt started giving out free certificates, it kind of felt like why the hell not. (Update: Ars Technica put out a great post on the value, or not, of the “s” in https.)
I’m a satellite systems engineer by day, so I have no idea what I’m doing when it comes to this sort of thing. Luckily, I found this great post on Dan Roncadin’s blog. I won’t rehash that article here, but there were a few additional fiddly bits required to make it work for me that seemed worth documenting.
For background, this is a static blog created with Pelican, hosted on S3, with Route 53 for DNS. The setup was pretty straightforward: I pointed my domain registrar to Route 53 DNS servers, gave Route 53 the URL to my S3 bucket, and synchronized a local directory with S3 whenever I needed to update something.
Setting up SSL using Amazon’s AWS Certificate Manager (ACM) adds a couple of steps. Here’s what I did based on Dan’s post.
- Request a certificate through the Certificate Manager page in the AWS Management Console.
- Wonder why I didn’t get the verification email, realize I had WHOIS privacy turned on, go to Hover to turn off WHOIS privacy, resend the verification request, confirm I got the email, and turn WHOIS back on.
- Validate the certificate request.
- Set up CloudFront to point to my CNAME (i.e. veridical.net), select the recently validated SSL certificate, and configure the cache behavior settings.
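Once the distribution was up, I wanted a quick way to confirm the new certificate was actually the one being served. Here’s a hedged sketch (the helper names and domain are mine, not from any AWS tooling) using only the Python standard library:

```python
import socket
import ssl

def cert_common_name(cert: dict) -> str:
    """Pull commonName out of the dict returned by SSLSocket.getpeercert()."""
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return ""

def fetch_cert(host: str, port: int = 443) -> dict:
    """Open a TLS connection and return the server certificate details."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# Network-dependent, so run by hand:
# cert_common_name(fetch_cert("veridical.net"))
```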
So far so good, but I had to do a few more things.
- Route 53 was still pointing to S3, so I had to update it with the CloudFront URL.
- I had to update the `SITEURL` variable in my Pelican configuration file from `http://veridical.net` to `https://veridical.net`. The `SITEURL` variable is expanded to create lots of hyperlinks when the pages are generated. Any plain-old http links would automatically redirect because of how I configured CloudFront, but the `<head>` block on every page was loading JavaScript and CSS via insecure URLs that some combination of CloudFront and my browser were blocking.
- I added the `--cf-invalidate` argument to the definition of `s3_upload` in my Pelican makefile. This forces CloudFront to invalidate its cache and fetch fresh files from S3. I didn’t think it was going to be that simple, but it worked on the first try.
- Finally, I rebuilt the site and sync’ed the content to S3. I could see in the CloudFront dashboard that the invalidation request had gone through, and when I refreshed, I was greeted with this.
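To make the `SITEURL` change concrete, here’s a minimal sketch (my domain; the template line is illustrative, not Pelican’s actual theme code) of how the setting propagates into the generated markup:

```python
# pelicanconf.py: switch the scheme so generated absolute URLs are secure
SITEURL = "https://veridical.net"  # was "http://veridical.net"

# Pelican themes expand SITEURL into asset links, so an http SITEURL
# produces insecure <head> references that browsers block on an https page.
css_link = f'<link rel="stylesheet" href="{SITEURL}/theme/css/main.css">'
```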