
Pre-warming Memcache for fun and profit

by Oliver on Wednesday, August 12th, 2015.

One of the services my team runs in AWS makes good use of Memcached (via the ElastiCache product). I say “good” use as we manage to achieve a hit rate of something like 98% most of the time, although I now realise that this comes at a cost: when the cache goes away, the application takes a significant hit. Unlike applications that traditionally cache the results of MySQL queries, this particular application stores GOB-encoded binary metadata – but what the application does is outside the scope of this post. When the cached entries aren’t there, the application has to do a reasonable amount of work to regenerate them and store them back.

Recently I observed that when one of our ElastiCache nodes is restarted (which can happen for maintenance, or due to system failure) the application takes an undesirable hit. We could minimise this impact by having more instances in the cluster, each with less capacity, for the same overall cluster capacity. Going from, say, 3 nodes where we lose 33% of our cache capacity on a restart to 8 nodes where we would lose only 12.5% is a far better situation. I also realised we could upgrade to the latest generation of cache nodes at the same time, which sweetens the deal.

The problem that arises is: how can I cycle out the ElastiCache cluster with minimal impact to the application and user experience? To cut a long story short, there’s no way to change individual nodes in a cluster to a different type, and if you maintain your configuration in CloudFormation and change the instance type there, you’ll destroy the entire cluster and recreate it – losing your cache in the process (in fact you’ll be without any cache at all for a short period of time). So I decided to create a new CloudFormation stack altogether, pre-warm its cache and bring it into operation gently.

How can you pre-warm the cache? Ideally, you could dump the entire contents and simply insert them into the new cluster (much like MySQL dumps or backups), but with Memcached this is impossible. There is, however, the stats cachedump command, which is capable of dumping out the first 2MB of keys of a given slab. If you’re not aware of how Memcached stores its data: it breaks its memory allocation into various “slabs” of increasing sizes and stores each value in the closest-sized slab that will fit it (always rounding up), so internally the data is segmented. You can list stats for all of the current slabs with stats slabs, then perform a dump of the keys with stats cachedump {slab} {limit}.
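
For illustration, here is roughly what that looks like against a single node – the endpoint and slab number below are hypothetical, and since the protocol is plain text you can drive it with nc:

```bash
# List statistics for all slabs currently in use (endpoint is hypothetical;
# 11211 is the default memcached port)
printf 'stats slabs\r\nquit\r\n' | nc my-cache-node.example.com 11211

# Dump up to 100 keys from slab 5 (both numbers are examples only)
printf 'stats cachedump 5 100\r\nquit\r\n' | nc my-cache-node.example.com 11211
```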

There are a couple of problems with this. One is the aforementioned 2MB limit on the returned data, which in my case seriously limited how useful the approach was – some slabs had several hundred thousand objects and I was not able to retrieve anywhere near the whole keyspace. Secondly, the developer community around Memcached is opposed to the continued existence of this command, and it may be removed in a future release (perhaps it already has been, I’m not sure, but it still exists in 1.4.14 which I’m using) – I’m sure they have good reasons. I was also concerned that using the command would lock internal data structures and cause operational issues for the application accessing the server.

You can see the not-so-reassuring function comment here describing the locking characteristics of this operation. Sure enough, the critical section is properly locked with pthread_mutex_lock on the LRU lock for the slab, which I assumed meant that only cache evictions would be affected by taking the lock. Based on some tests (and common sense) I suspect that it is an LRU lock in name only, and more generally locks the data structure against writes (it does record cache access stats somewhere as well, perhaps in another structure). In any case, as mentioned before, I was able to retrieve only a small fraction of the total keyspace from my cluster, so as well as being a somewhat dangerous exercise, using the stats cachedump command was not useful for my original purpose.

Later in the day I decided instead to retrieve the Elastic Load Balancer (ELB) logs from the last few days, run awk over them to extract the request paths (for the requests that would trigger a cache fill) and simply make the same requests against the new cluster. This is more effort up-front since the ELB logs can be quite large and, unfortunately, are not compressed – but awk is very fast. The second part of this approach is using Vegeta to “attack” your new cluster of machines, replaying the requests you’ve pulled from the ELB logs.
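
As a rough sketch of what that could look like (the log file names, field positions, target endpoint and request rate are all illustrative – adjust them to your own ELB log format and capacity):

```bash
# Extract "METHOD path" pairs from classic ELB access logs. With default
# whitespace splitting, the quoted request lands in fields 12 (method, with a
# leading quote) and 13 (full URL); strip the scheme and host and point the
# requests at the new stack's endpoint instead.
awk '{ gsub(/"/, "", $12); sub(/^https?:\/\/[^\/]+/, "", $13);
       print $12, "http://new-cluster.example.com" $13 }' elb-access-*.log > targets.txt

# Replay the requests against the new stack at a modest rate with Vegeta.
vegeta attack -targets=targets.txt -rate=50 -duration=30m | vegeta report
```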

A more adventurous approach might be to use Elastic MapReduce to parse the logs, pull out the request paths and use the streaming API to call an external script that makes the HTTP requests to the ELB. That way you could quite nicely farm out the work of making a large number of parallel requests covering a much longer time period, in order to more thoroughly pre-warm the cache with historical requests. Or you could poll your log store frequently and replay ELB requests to the new cluster with just a short delay after they happen on your primary cluster. If you attempt either of these and enjoy some success, let me know!



CloudFormation and the data transformation nightmare

by Oliver on Friday, March 7th, 2014.

The background to this story is that I recently spent the bulk of one week getting a prototype service deployed with AWS CloudFormation, and the experience was still reasonably painful. My team has other services deployed with CloudFormation, which are working perfectly fine (now), but I had hoped there would have been some improvements to the process since the last time we went through it.

The Components

Firstly I’ll describe in a sentence what CloudFormation does, for those who aren’t familiar with it, and then list the components which need to go into making something deployable by CloudFormation. CloudFormation itself allows you to describe an entire collection of resources required to make a service runnable and accessible – for example web servers, database servers, a load balancer and perhaps a Memcached cluster.

You might consider the descriptive language it requires somewhat akin to the current generation of configuration management languages, but far more restrictive and minimal. It also tends to occupy itself with what happens on initial provisioning rather than keeping the whole “stack” consistent over time – tasks which are more or less delegated to other services such as CloudWatch + AutoScaling etc. The fundamental input is the CloudFormation template which requires you to provide the following:

  • Descriptive (for humans) metadata about the stack.
  • A list of all resources (basically other AWS services) required.
  • Static configuration parameters for all of those resources.
  • “Dynamic” parameters which can be overridden at stack creation or update time, and which can be either literal values or based on map lookups.
  • EC2 instance user data – mostly taking the form of a shell script used to bootstrap your service when the instances boot up.

The input is encoded as JSON – not a bad choice in that it is fairly easy to read, but it is also easy to break by hand-editing, with a missing quote or comma. Depending on your JSON parser of choice, it can be very hard to find the breaking change when that happens.
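
To make that concrete, a heavily trimmed template might look something like the following – every name and value here is purely illustrative, not taken from a real stack:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Hypothetical web service stack",
  "Parameters": {
    "InstanceType": { "Type": "String", "Default": "m3.medium" }
  },
  "Mappings": {
    "EnvironmentMap": {
      "staging":    { "MinSize": "1" },
      "production": { "MinSize": "4" }
    }
  },
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-12345678",
        "InstanceType": { "Ref": "InstanceType" }
      }
    }
  }
}
```

Even at this size you can see how a single missing comma between those blocks would be easy to introduce and, with an unhelpful parser, hard to pinpoint.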

Static configuration data is mixed with dynamic configuration data (parameters that can come from external sources, or mappings that depend on other inputs) which complicates the understanding of the template, but also vastly enriches its functionality. Here, we already have two possible origins of data coming into the system from sources other than the template itself – parameters passed on the command line to a tool, or read from a configuration file (local, stored in revision control, stored in S3, etc) to be merged in with other parameters.

Then there is the user data, which is really my main pain point.

Bootstrapping the Instance

We’ve described the stack, set up a template ranging in complexity from very simple to a very rich description of different environments and software version requirements – but even with a completely running stack we don’t have any instances actually running our software. You might have generated your own AMIs with your software built into them, configured to start on boot – but in this case you have simply moved your complexity from the provisioning stage to the build stage (and building an AMI as complete as this can take time both in preparation and on every build).

There are several methods available to you when delivering bootstrapping instructions to your newly started instances (most of which are described in this document from AWS).

  • User data script in CloudFormation template, verbatim, expressed as shell commands.
  • Externally-templated user data script (e.g. with ERB, Jinja or your engine of choice), rendered and delivered as part of the CloudFormation template. Like the above but allows some customisation for different environments or build versions.
  • Minimal shell bootstrap embedded in the CloudFormation template, pulling in an external script from a network source to continue the heavy lifting.
  • Puppet or Chef (which still require a minimal shell bootstrap, as above, to start their own processes).
  • CloudInit
  • CloudFormation helper scripts

None of these is a perfect solution, and choosing one or another is often simply a matter of selecting a point on a complexity scale. I won’t go into detail about how each works, but I will try to express what pained me most about each.

  • Verbatim shell script: Potential for errors due to how it must be encoded in the JSON template (see the sketch after this list). No choice for customisation based on environment or build versioning. Limited size.
  • Shell script with external templating: Introduces complexity by requiring data to be merged in from external sources. Risk of the external sources not being available or providing correct data at bootstrap time. Difficulty in tracking exactly which data went into a given build, and maintaining that historical timeline.
  • Minimal bootstrapping to initialize another script: Same risks as above, with shifted complexity from CloudFormation template to another file (or files) which may have further templating. It is also now more difficult to pass data between the initial bootstrap and secondary scripts.
  • Puppet or Chef: Much more complexity. You really need to have ongoing configuration management as a requirement to make this a worthwhile proposition.
  • CloudInit: Requires learning what is basically a scripting “shorthand” form. It requires you to write your instructions, then encode the data as a separate step and make the encoded data available to the CloudFormation process. Passing custom data around dependent on environment or build version is again more difficult. Complexity is shifted to the build stage. Since the data is still passed in the user data section of the template, it must adhere to the same size limitations.
  • CloudFormation helper scripts: Only available on Amazon Linux, unless you are prepared to use something like heat-cfntools, which you must install using pip. This dependency makes using the tools on something like Ubuntu Linux more complex again. This was actually one of the more tempting options for me when considering them recently, but the simplicity of the helper scripts and ease of expression in the template come at the price of them being fairly inflexible – not quite flexible enough for my software deployment.
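
To illustrate the first point about embedding a shell script verbatim in the template: each line of the script becomes a JSON string inside an Fn::Base64/Fn::Join construct, complete with escaping and explicit newlines. The URLs and paths below are hypothetical:

```json
"UserData": { "Fn::Base64": { "Fn::Join": ["", [
  "#!/bin/bash -e\n",
  "echo \"starting bootstrap\"\n",
  "curl -o /usr/local/bin/myservice https://example.com/builds/myservice\n",
  "chmod +x /usr/local/bin/myservice\n",
  "start myservice\n"
]]}}
```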

I’ll now briefly describe another part of the problem which also has an impact on the choice of bootstrapping methodology, before getting to a more complete conclusion.

Service Lifetime Management

If you have been paying attention, you will have read at least something recently about the wars raging over the decision to replace init in Debian. I won’t provide any links – Google has more than enough fodder on the subject. It’s a topic I’ve rarely concerned myself with (if you’ve been using RHEL or CentOS for any reasonable amount of time, any system other than the standard SysV-style init has barely been a blip on the radar), but nevertheless I’m aware that the current choice is systemd, for better or worse.

Be that as it may, a bunch of my services run on Ubuntu and hence use Upstart (which I’ve rarely had problems with, although from what I’ve read lately that experience seems to be rare). Attempting to integrate Upstart-based control of a service with the CloudFormation user data or other bootstrapping mechanisms comes with its own challenges:

  • Amazon Linux comes with a horribly outdated version of Upstart (0.6, if I recall correctly) which lacks many of the options that make Upstart desirable. Among them is file-based logging (this is largely why I decided against Amazon Linux, despite it having the CloudFormation helper scripts available).
  • Expressing the startup parameters of the service correctly often requires a large amount of quoting. Expressing correct quoting from another shell script, within JSON is at the very least an error-prone exercise.
  • I’d like to run my service as a non-root user. Upstart supposedly supports user services (which you can even configure from a script inside the user’s home directory); however, these are disabled by default, the documentation for enabling them is extremely poor, and enabling them would add yet more complexity to the bootstrapping process. This means handling the dropping of privileges within the Upstart service script itself.
  • Delegating runtime control of your service to a system service such as Upstart once again introduces complexity in passing data from the bootstrapping process. Since your service is now being executed by Upstart it inherits nothing from your bootstrap process directly, unlike if you simply forked a process from the bootstrap script, which would allow inheritance of environment variables etc. (which may or may not be a good thing, but it can at least be convenient).

Despite these points, Upstart still allows you to provide a very minimal startup script for your service (mine was about three lines – compare that to your typical init script), and some conveniences such as automatic logging. It’s still not perfectly smooth deploying a service unless you want to package everything up in a deb/rpm earlier in your build pipeline (and then still deal with the issue of full configuration of your service dependent on the environment and versions at hand) but to be fair, that’s not the fault or purpose of Upstart.
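
For reference, a minimal Upstart job along those lines might look something like this – the service name, user and paths are invented for illustration and not my actual configuration:

```
# /etc/init/myservice.conf
description "my service"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
# file-based logging under /var/log/upstart/ (needs a recent Upstart)
console log
# drop root privileges (setuid needs Upstart 1.4+; otherwise handle it in the exec line)
setuid appuser
exec /usr/local/bin/myservice --config /etc/myservice/config.json
```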

What did our implementation look like?

An initial implementation of ours had the vast majority of the work being done by Ruby scripts, some shell, the AWS Ruby SDK and ERB templating. Specifically, you would call a shell script with some parameters (either manually or from our build system), which would in turn call a Ruby script with those parameters and perhaps some other data divined from the filesystem; the Ruby script would then render the ERB template and make calls with the AWS Ruby SDK to S3 (for build artifacts and more CloudFormation parameters stored in files) and to CloudFormation to create or update the stack.

There were a lot of moving parts, data coming from a bunch of different places, and no consistent language or distribution of work between components. On top of configuration data being spread across several different sources, passing that data between components at runtime as command-line parameters, environment variables or values embedded in JSON templates is awkward at best.

How I manage my sanity

So, I’m pretty good at complaining about things, but what does my solution look like? One distinct advantage I have is that I tend to write software in Golang, and hence end up with a single binary artifact from which the service is run. Most of the time there are few or no external supporting files, so there is very little to ship and configure. But nevertheless…

  • Keep your data as simple as possible. Build runtime secrets (and as much static “configuration” as possible) into your binaries (you can use a trick like this with Golang) and keep the binaries secure.
  • Don’t use data in S3 or stored elsewhere to populate parameters. Use mappings and switch based on a pseudo-parameter such as AWS::StackName to make it super simple and obvious where the data is coming from (see the sketch after this list).
  • If you can tolerate Amazon Linux, and the cost of flexibility, do use the CloudFormation helper scripts.
  • There’s a temptation to make your build tooling run on multiple environments – especially if your software runs on Linux in AWS, and you use a Mac (or worse, Windows) as your working environment. Automate the most common tasks completely for Linux and run as much of them as possible in a build tool like Jenkins. Try to make the remaining tasks as platform-agnostic as possible (Ruby, Python or Java are all fairly good choices and have AWS SDKs).
  • Try to avoid the temptation of templating inside of templates!
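
As an example of the mapping-plus-pseudo-parameter approach mentioned above (the stack names, keys and values here are purely hypothetical):

```json
"Mappings": {
  "StackConfig": {
    "myservice-staging":    { "CacheEndpoint": "staging-cache.example.com" },
    "myservice-production": { "CacheEndpoint": "prod-cache.example.com" }
  }
}
```

Then, wherever the value is needed in the template:

```json
{ "Fn::FindInMap": [ "StackConfig", { "Ref": "AWS::StackName" }, "CacheEndpoint" ] }
```

Whoever reads the template can see at a glance exactly which values each named stack will receive, without chasing data through S3 buckets or external parameter files.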

When I next need to make some changes to this deployment system, I think I’ll attempt to remove as much of the Ruby code as possible and rely solely on the AWS CLI (which is Python-based). This will allow us to drop the Ruby SDK requirement (which, unless you are a developer, has very little utility in the command-line realm) in favour of the Python CLI, which can be used for many other things in addition to its integration with our deployment script. It’s also a lot lighter-weight than the previous JVM-based CLI utilities. I’ve already reduced the data sources down a lot, but they could potentially be reduced further; failing that, I can work to ensure there are suitable mechanisms for tracing data transformations through the system, so we know where data is coming from and when it has changed.

These are just a few realisations from the latest round of bootstrap refactoring I had to perform. What have your own experiences been like in bootstrapping nodes while keeping your sanity?

