Evolution of my Website

The Wayback Machine and I go … well, way back. It’s got versions of my website dating back 20 years (as of February 2022, anyway), spanning more than a few variations on the theme.

I bought sowrey.org a little over 22 years ago, largely out of a need to keep in touch with others once I moved away from Southern Ontario (and my family). I had no particular vision in mind (despite my university education and the work I’d been doing in Toronto before moving to Vancouver) and started off with fairly minimal content.

And all of it was manual. Which was how things were in 1999 – no web content management systems, no fancy SaaS platforms, and Google was just coming out into the open. I think I used a simple hosting service that supported my domain, but you’ll have to excuse me if I can’t remember the specifics after so long, and so, so many changes.

My first record of sowrey.org in the Wayback Machine is February 2002. By that point, I was almost two years into working at Critical Mass, making websites. And, still, at that time, it was all manual. My site was put together with Apache’s Server Side Includes (SSI), which allowed me to have consistent headers and footers while dealing with unique page content. It was pretty basic, quite manual, and all the files had to be FTP’d when I did updates. Not exactly the most secure process, and fraught with more than one “oopsie” as I overwrote the wrong files.

The technical structure of the site remained more or less the same, style updates and improvements notwithstanding (we were still neck-deep in the initial Browser Wars and their aftermath), until early 2005, when I was turned onto WordPress.

For those of you who have never heard of the product, WordPress is one of the earliest widespread blogging platforms. (The first blog of note was Links.net, which predated WordPress by nearly a decade. Several other platforms followed, but most were services; WordPress was among the first widely available pieces of software you could run yourself.) I can’t remember who specifically turned me onto it, either industry news or a coworker, but I set to work learning PHP and understanding MySQL, got my own hosting service (probably on something like BlueHost, though records don’t seem to go back that far), and stood up my first CMS-powered website (even the ones we did at Critical Mass were still largely code updates).

My first few years with WordPress were a love-hate relationship. There were times I loved it, and times I screamed at it. A simple update would completely detonate my site, and I’d have to figure out how to undo the trauma. But it taught me about backups and restores, which came to be a massively useful skill. Understanding how to write code that would survive updates became very useful, as well. Not to mention playing around with APIs, hooking into other platforms (e.g. Flickr), and even learning how to make it work well on mobile (before we really adopted responsive design).

In 2012, I moved my domain from … well, wherever it was that I was managing it, to CloudFlare. I’m fairly certain that was a recommendation from a friend (again, I’m not sure whom), but it provided a virtual wall between the public IP and my actual server. It was my first foray into real web security that I had my fingers directly in. It was about that time I also moved all my communications services to Google, which CloudFlare made ridiculously easy to do.

But WordPress persisted. I moved hosts several times, going from shared hosting like BlueHost and HostGator, to self-managed services on Digital Ocean and Google Cloud. This was still the “very hands-on” period of my career, when I wanted to know what was going on, what my code was doing, watch for performance, and tweak the living bejeezus out of everything. Y’know, play.

Then 2015 hit and … well, you might notice a significant drop in blog entries around that time. I just didn’t want to write. Not publicly, anyway. One thing WordPress did offer was private posts that no-one else could see. Helpful for my own needs, which were largely tracking my mental health at the time. But it also put a large stick through the spinning spokes of my “hands-on” behaviour. And I became hands-off.

Things lagged for … well, years. I just didn’t want to. I kept the lights on (including the final move to Google Cloud for self-hosting) and put in the odd post that I felt I really should write, but generally I didn’t do much of anything. Even doing the “one click” WordPress updates was a challenge. And I was struggling with the server timing out, which was turning into a big problem.

I had tweaked my server – Apache, PHP, and MySQL – trying to maximize its performance on a hideously small footprint. I wanted to keep this cheap, only a few bucks a month. And, by and large, I was succeeding. But between CloudFlare, Google’s search bots, DDoS attacks (yes, I got DDoS’d more than a few times), and updates in the tech stack, I was losing the battle of “keeping it small” – CPU usage regularly spiked to several hundred percent, and the site would go offline (even with CloudFlare’s “Always Online” feature). No amount of caching a dynamic site was helping.

Back to old-school techniques: I moved to try to make the site static. But I wasn’t thinking, nor was I really investing the time to understand what I needed to do. I went with WordPress plugins that generated static pages. Which, great, but it also messed up all my URIs. Problem was: I didn’t really care. Google had already largely abandoned me (whether due to relevance, algorithm updates, my prolonged absence, or a combination thereof, I’ll never know) so I didn’t care too much about URIs changing.

It didn’t help nearly enough. The server was still being taken down by massive cache requests. And bumping up the CPU level of the VM would triple my costs. So I took the extra step of moving WordPress to my own computer, and turned the VM into nothing more than an Apache server. It had the benefit of serving only static content, which minimized the CPU load.

I also punted my blog from the main domain (sowrey.org) to a new hostname: https://geoff.sowrey.org. I did this for a few reasons, not the least of which was understanding that my daughters might want a website of their own, someday, and Daddy can’t hog the root domain for himself. (Though I would eventually recant that and turn the root domain into a new site where I post stories.)

Somewhere, inside me, this felt wrong. The world sees my website through CloudFlare’s services. My server provided content to CloudFlare. It seemed a superfluous step – why couldn’t I just serve content via CloudFlare?

Funny thing: you can. My friend (and DevOps mentor) Jeremy has been at me for years with all the things he’s learned. I struggle sometimes to understand how he’s managed to learn all that he has, but I’m very glad he has … and that he has the patience to teach me (even if it is at the highest possible level). He turned me onto CloudFlare Workers, which are dead-simple request-respond services that spit back a static result. You can use them for just about anything, really, but in this particular case, the request is a URI, and the response is a website.
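
To give you an idea of just how dead-simple that is, here is a rough sketch of a Worker. It’s not my actual Worker, just the general shape of one:

```js
// A sketch of a minimal CloudFlare Worker: a request comes in, a static response goes out.
// (Illustrative only; this isn't the code behind my site.)
export default {
  async fetch(request) {
    const url = new URL(request.url);

    // Whatever URI was asked for, answer with some pre-built HTML.
    return new Response(`<h1>You asked for ${url.pathname}</h1>`, {
      headers: { "content-type": "text/html; charset=utf-8" },
    });
  },
};
```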

Now, you can do this with WordPress. But I felt that it was time for me to officially abandon the overhead. After all, I’d been using what is a fairly powerful CMS platform for dumping out text. And not even particularly metadata-heavy text, or much media. (There has been media over the years, but various WP updates and media service changes detonated most of it.) It was time to go a little more old school than that.

Back when I first blogged on sowrey.org, I did it with Server Side Includes and simple HTML. Once in the presence of a suitably-enabled Apache instance, it strung the files into a cohesive website. But Apache, let alone Server Side Includes, is now a waning technology, with people having moved to virtualized services, if not to Platform- or Software-as-a-Service solutions. Another approach was needed.

Now I’ve not been a fan of post-processing tools (e.g. Grunt) for a very long time – extra tools to do something that the application software can’t do natively seemed a waste. But then CI/CD processes came along, where the extra tools could be merged in automagically, and the overhead seemed less burdensome – or at least, they’d always be run, even if you “forgot” to do something.

I espouse being a lazy developer: don’t reinvent wheels, don’t do any more (boring, repetitive) work than is absolutely necessary, love your automation. Consistent processes lead to consistent (and predictable!) results. So if you have a pipeline that can turn text into a website, that suddenly sounds more along the lines of what I needed.

Hence, Markdown. It’s been around forever but didn’t really take off until the last several years, after everyone writing software documentation got really sick of the various documentation formats being flung about all the damned time and just wanted to settle on something simple. (Also about lazy development: simpler is always better.) No more embedded HTML (well, not much of it, anyway), you still get metadata (dates, tags), and all you need is a pipeline processor to turn a set of structured Markdown files into something else.
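
The core idea is small enough to sketch in a few lines of Node. This isn’t my pipeline (the gray-matter and marked packages, and the file names, are stand-ins for whatever processor you pick), but it shows the shape of the transform: metadata up front, Markdown below, HTML out the other end.

```js
// Sketch only: turn one Markdown entry (front-matter metadata + body) into HTML.
// gray-matter and marked are stand-ins here, not necessarily what my build uses.
const fs = require("fs");
const matter = require("gray-matter"); // splits the metadata block from the body
const { marked } = require("marked");  // converts the Markdown body to HTML

const raw = fs.readFileSync("src/evolution-of-my-website.md", "utf8");
const { data, content } = matter(raw); // data = { title, date, tags, ... }

fs.mkdirSync("build", { recursive: true });
fs.writeFileSync(
  "build/evolution-of-my-website.html",
  `<article>
    <h1>${data.title}</h1>
    <time>${data.date}</time>
    ${marked(content)}
  </article>`
);
```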

My first attempt was a “Markdown” CMS called Grav. It’s been around a while and isn’t terribly well-updated, but it worked fairly well. It had a major fault in my mind, though: it was still heavily PHP-based, and PHP had to run to serve every page. The content source was static, but the output was dynamic. Which ended up being no more of a solution than WordPress, really – I still had CPU spikes.

Then I spun the process around a bit – I wanted static output, not just static input. The “dynamic” part really only had to take all the inputs and spit out a final result – an intermediate process.

So almost two months ago, I started the next major life of my sites.

By and large, the process works. It’s nowhere near as efficient as I’d like, though. (Am I thinking about writing my own processor? Am I flying in the face of my “lazy” ethos? Yes. Yes, I am.) But it works something like this:

  1. Write a blog entry. Like, say, this one.
  2. Run a local test, which is effectively running “node build” at the command line (roughly the script sketched after this list). Assuming no errors…
  3. Add the new entry (and images, now that I have them, too) to the Git repo, and push to the origin.
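
For context, “node build” runs a Metalsmith chain that looks roughly like the sketch below. The plugins and folder names are illustrative, not a copy of my actual script:

```js
// build.js: roughly the shape of a Metalsmith build script.
// Plugin choices and folder names are illustrative, not copied from my real build.
const Metalsmith = require("metalsmith");
const markdown = require("@metalsmith/markdown");
const layouts = require("@metalsmith/layouts");

Metalsmith(__dirname)
  .source("./src")         // Markdown entries (and now images) live here
  .destination("./build")  // static HTML lands here
  .clean(true)             // wipe the output folder on every build
  .use(markdown())         // .md -> .html
  .use(layouts())          // wrap each page in a template
  .build((err) => {
    if (err) throw err;
    console.log("Site built.");
  });
```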

At this point in the process, I’m done. What happens afterwards is entirely automated:

  4. At the origin, which for me is GitHub, a YAML file tells GitHub what to do once the commit is synced, notably: build the site and deploy it.
  5. GitHub Actions kicks in and runs the build script: set up a Debian server, install Node, install Metalsmith and its dependencies, bring down the site files, and do the build. More or less, it’s rebuilding what I have locally. And, again, assuming no errors…
  6. Kick off Wrangler, which is CloudFlare’s handy little tool for copying the build output to a Worker Site.

Now here’s the fun part: it technically doesn’t matter where I write the blog entry. I can even go into GitHub, create a new file (in the proper source structure), save it, and the process resumes at Step 4. I run a higher risk of failure (because I mucked up the metadata, most likely), which will cause Step 5 to fail, and the process stops (and I get an email saying something went wrong).

But I get what I wanted: a static site, served without any hit to a virtual machine. In fact, I’ve just deleted my last virtual machine on Google Cloud. For the first time in a decade, I have no server to manage or update. It’s just a Worker, doing simple request/respond operations.
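
And that Worker isn’t doing anything exotic. For a Worker Site, it’s not much more than the request/respond sketch from earlier, with CloudFlare’s asset handler doing the file lookup (shown here in the older addEventListener style, and again a sketch rather than my deployed code):

```js
// Roughly what serving a Worker Site looks like (a sketch, not my deployed code).
// getAssetFromKV looks up the pre-uploaded static file that matches the request URI.
import { getAssetFromKV } from "@cloudflare/kv-asset-handler";

addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event));
});

async function handleRequest(event) {
  try {
    // Request comes in, the matching static asset goes out.
    return await getAssetFromKV(event);
  } catch (e) {
    // No matching file? Hand back a plain 404.
    return new Response("Not found", { status: 404 });
  }
}
```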

Now I just need to finish cleaning up a couple of decades of entries, find missing images, tweak the layouts…