‘The Whistle Song (E.K. 12” Mix)’ by Frankie Knuckles
Can’t say it better than @GhostType_GH: Heaven’s got a new resident DJ. #RIP

Faster, Cheaper, Docker

Before I completely recover from an almost sleepless Thursday night, I thought it’d be nice to write up a bit about what we did. The short story is that we took all our servers off Amazon, containerised them, and moved them from North Virginia to Germany.

Bit of background

This Is My Jam has always been an AWS shop, from when we started on a single m1.large to when we peaked at 50 or so EC2 instances, RDS, Cloudfront, S3, Route 53, email, you name it. It’s been great, mostly reliable, and extremely easy to work with. We were a team of four people in London, none of us with huge amounts of operations and architecture experience, yet we somehow built and maintained this pretty complex stack. We managed our infrastructure with a Fabric-based CLI issuing EC2 API calls and invoking Puppet to configure new servers. It’s not surprising we ended up with lots of machines when creating them was a one line shell command.

Or, maybe not always one line. Puppet would often take two or three goes to roll through, probably because I missed a few require statements here and there. If you hadn’t built a particular type of server in a couple of months, you’d be sure you’d have to tweak some manifests to deal with upgraded software versions. But all in all, it worked most of the time, and it allowed us to iterate quickly.

The biggest problem with AWS is cost. All those little CPU hours add up, and when Matt and Han took Jam indie a few months ago, the cost simply became prohibitive.

We bought three high-end servers on Hetzner in Germany. They’re famously cheap and pretty reliable, as long as you’re not getting DDOS’d. Now we just needed a way to get our stuff shipped over from Amazon, but the idea of resuscitating all those Puppet manifests made my eyes water.

Around the same time as this was happening, the tech blogs started exploding with articles about this new thing called Docker. I generally don’t like to do what the cool kids do, I still think PHP was a good choice of language for This Is My Jam, and I still think a phone should have a physical keypad. But Docker looked really good.

Starting out with Docker

If, hypothetically, one were to believe the hype, Docker should allow you to build a server in Amazon, start it anywhere else in the world, and it would work exactly the same. You wouldn’t even have to rebuild it. That’s exactly what we needed.

Another nice feature of running containers is that they are isolated, so you can start twenty different containers on a server, kill them, and you’re back with a vanilla server. We wanted to scale down the number of servers we run, putting many applications on the same machine, but we weren’t exactly sure what the best composition of apps per machine was. With Docker, we would be able to reshuffle containers between machines with very little hassle.

Now we had to find some central place to store the images, something like a GitHub for Docker images. My good friend Ben pointed us to quay.io, a hosted private Docker registry that had launched a couple of months earlier.

(A few weeks ago, Docker.io announced that they are offering private repos on the official Docker index, at about half the price of quay.io. Quay.io still has some nice features though, like being able to visualise branches and see which images take up most space. And their support is fantastic.)

After having set up all the peripheral stuff, I was finally on my way to writing Dockerfiles. Turns out, that’s the easy part! A Dockerfile is literally just a shell script with some extra stuff like adding files and exposing ports. Look at this Graphite one for example. Graphite is notoriously difficult to install, but when you see it like this, it’s not that intimidating. Just imagine what that would look like as a Puppet module.

For me, the biggest brain block was to figure out the difference between images and containers. I fixed that by thinking of it in terms of build-time and run-time. Images are built once, and are immutable, static, dead. Containers run on top of images. That also made sense of the RUN and CMD instructions. Every RUN becomes a new image layer in the stack of images that make up … an image (I might have to think a bit longer about that one). The single CMD gets run after you start the container. So a typical RUN instruction would be apt-get install figlet and a CMD could be figlet i love figlet.
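
To make that concrete, a toy Dockerfile along those lines might look something like this (the base image and packages are just for illustration):

    # Each RUN below is executed at build time and becomes a new image layer.
    # The single CMD is what runs when a container is started from the image.
    FROM ubuntu:12.04

    RUN apt-get update
    RUN apt-get install -y figlet

    CMD ["figlet", "i love figlet"]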

Building a new toolchain

As I was writing more and more Dockerfiles, I got sick of typing sudo docker run quay.io/thisismyjam/somelongname on the command line. It got particularly frustrating when you needed to expose ports and pass variables like sudo docker run -e FOO=BAR -p 80:8080 quay.io/thisismyjam/anevenlongername. Something had to be done.

My good friends Ben and Aanand had just released fig, a really neat little tool for declaratively configuring local dev environments. I liked fig’s YAML schema so much I spent the next few weeks writing headintheclouds, a combined server provisioning and Docker orchestration tool that builds on top of fig’s syntax. A headintheclouds manifest can look like this.
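
Fig's YAML is basically one block per container, with the image, ports and environment spelled out. Here's something in that spirit, with made-up image names and values (the real headintheclouds schema layers server provisioning on top of this, so it doesn't look exactly like plain fig):

    web:
      image: quay.io/thisismyjam/web
      ports:
        - "80:8080"
      environment:
        REDIS_HOST: 10.0.0.2

    redis:
      image: quay.io/thisismyjam/redis
      environment:
        MAXMEMORY: 1gb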

When the goal is to make portable Docker images, some things need to be configured at runtime. The Apache server needs to talk to the Redis server, but the Redis server might move, so you need to be able to set the Redis server IP at runtime. Same with the Redis server itself: if you move it to a smaller machine, you need to be able to set its MAXMEMORY variable dynamically. Docker environment variables are made for this, but unlike Puppet, Docker doesn't come with a template engine, so substituting parameters in config files becomes unnecessarily difficult. I built a simple command line template engine in Python that uses the jinja2 templating language. This Redis image has examples of it being used. Modelling these types of server-container dependencies also became a key feature of headintheclouds.
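
The template tool itself doesn't have to be much. A minimal sketch of the idea (not the actual code) reads a jinja2 template from stdin and fills in values from the environment:

    #!/usr/bin/env python
    # Sketch of a command-line config templater: render stdin as a jinja2
    # template, using environment variables as the template context.
    import os
    import sys

    from jinja2 import Template

    if __name__ == '__main__':
        sys.stdout.write(Template(sys.stdin.read()).render(**os.environ))

Inside a container's start script you'd then run something like MAXMEMORY=1gb python template.py < redis.conf.template > redis.conf before starting the actual process.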

The final piece of the puzzle was firewalls. Now that we weren’t going to sit behind EC2 security groups any more, we had to manage that ourselves. To get around that I built an iptables wrapper firewall thing into headintheclouds.
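
Under the hood that just means generating plain iptables rules, roughly of this shape (the ports and policy here are an example, not our actual rules):

    # Drop everything by default, then open up only what's needed.
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # SSH
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT   # web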

I wrote most of our new Docker images on a dev box in the same AWS cloud as our existing production stack. Two reasons for that: first, when testing a container you'll often need to talk to some actual servers to make sure everything is glued together in the right places. Second, you probably don't have as fast an Internet connection at home as Amazon does. That matters quite a lot when you're uploading and downloading entire operating systems.

Moving to Germany

Before moving to Germany, most of our stack was actually running on Docker containers in EC2. Our biggest concern was migrating the database, so we figured we'd do that in two steps. We built the new production stack in Hetzner, still talking to the old database, but it was slow as a dog. That's when we realised we had to do the whole migration in one go. Adding insult to injury, we remembered that our RDS instance was running MySQL 5.5, so we couldn't use this nice replication feature either. Unless we wanted to spend lots of time researching fancy MySQL migration tools, we were stuck with mysqldump.

We ended up scheduling a few hours of downtime on Thursday night. To begin with, we piped a mysqldump straight into the new mysql server. We'd had concerns about slowing down production by testing it live beforehand, but sitting there watching pv made me regret that decision.
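
The pipe itself was nothing fancy, conceptually something like this (hosts and flags are illustrative, not the exact command we ran):

    # pv in the middle shows how much data has flowed through the pipe.
    mysqldump -h old-rds-host --single-transaction thisismyjam \
      | pv \
      | mysql -h new-db-host thisismyjam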

And then, three and a half hours in, the mysql client got an error and broke the pipe. Instead of restarting the pipe, I just ran the mysqldump on its own, upped the max_allowed_packet variable, and sourced the dump like it was 2001, poured me a whiskey, watched YouTube videos, and waited for four hours.
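
Round two, from the mysql prompt on the new server, was roughly this (the value and path are placeholders, not what we actually used):

    -- Allow big statements from the dump, then replay it.
    SET GLOBAL max_allowed_packet = 1024 * 1024 * 1024;
    source /path/to/dump.sql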

When the import was done, I flipped the mysql host IP parameter in the headintheclouds manifest, restarted the stack, and switched DNS over. And that was that.

The first thing I noticed using the new stack was how snappy it was. The web and database servers are on the same machine, so there’s no network latency. The hardware is better than on EC2, and if a container has issues with CPU being stolen, we can just move the thief to a different machine. If we ever get a spike where we need more capacity, we can spin up a few Digital Ocean droplets in Amsterdam and run the same containers on them.

To wrap up, it's like we've moved out of our parents' house into our own flat: we couldn't afford anything too fancy, but what we have is ours and we have our own furniture and our friends can come over whenever they like and sometimes we get automated emails that say things like

DER VON IHNEN BEAUFTRAGTE AUTOMATISCHE RESET FÜR IHREN
SERVER #301204 WURDE SOEBEN AUSGEFÜHRT.
("The automatic reset you requested for your server #301204 has just been carried out.")

- Andreas

‘Johnny And Mary (feat. Bryan Ferry)’ by Todd Terje
We’re getting pretty excited about ‘It’s Album Time’ over here at the Jam Factory. Come on, April 8! Hurry up!

See ya soon, Austin!

Jam cookie succeed! (Lemon sablés with citrus buttercream and blackberry jam)
