
The learning gap in deploying Javascript apps

by Oliver on Sunday, August 17th, 2014.

I’ve recently been building a website for my wife using relatively modern tools – lots of client-side Javascript and relatively minimal CSS and HTML. A recent-ish email alerted me to the existence of a Berlin-based Famous framework meetup group, which initially made no sense to me, but after I checked out the actual framework website my interest was piqued. I’ve got next to no website-building experience (at least from the front-end point of view), and what I would only describe as barely competent Javascript skills. Nevertheless, learning more about this world appealed to me far more than simply building a generic WordPress-based site and customising some stylesheets.

There are some great tools out there these days for Javascript learners, for example Codecademy. I used this myself to brush up on the weird scoping rules, function hoisting, manipulating the DOM and so on. That’s enough to get you started with Javascript as a language, and also in the context of execution in the browser, which comes with additional constraints and possibilities. On top of that you have tutorials on using jQuery, which are usually quite understandable and make a lot of sense if you have already learned about the DOM and manipulating content on a page. So far so good.
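
For the uninitiated, here is a minimal sketch of the hoisting behaviour I mean – nothing framework-specific, just plain Javascript semantics:

```javascript
// Function declarations are hoisted: this call works even though the
// definition appears later in the file.
greet(); // logs "hello"

function greet() {
  console.log('hello');
}

// var declarations are hoisted too, but only the declaration, not the
// assignment -- so this logs "undefined" rather than throwing an error.
console.log(x); // logs "undefined"
var x = 42;
```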

The Famous framework introduces a new paradigm for creating responsive, dynamic content on a webpage for both desktop and mobile devices. Fortunately their website also provides a bunch of tutorials which give you a pretty good overview of the functionality and how to use the framework. Their starter pack, containing further examples for most of the library functions, also helps considerably. It took me a few passes to get through all of it, and I still find some aspects confusing, but ultimately I was able to build a website that actually worked.

Great – now I need to run this stuff on a real webserver. If you have had even tangential contact with real front-end developers you have probably heard such terms as “minification“, “Javascript obfuscation” and perhaps “RequireJS” or “CommonJS modules”. I was already somewhat aware of CommonJS modules, having encountered them in the front-end code of my company’s website, and could see that they provide a nice solution to the Javascript scoping and code-reuse problems. Fortunately, the scaffolding provided in the Famous starter kit gives you code with the CommonJS module pattern already built in, so my website code followed this pattern from the start. If this hadn’t been the case, I’m not sure where I would have found good examples for getting started with it.
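
For anyone who hasn’t seen the pattern, here is a hypothetical two-file sketch of what CommonJS modules buy you (the file names are mine, not from the starter kit):

```javascript
// math.js -- everything here is private to the module unless it is
// explicitly attached to module.exports.
function square(x) {
  return x * x;
}

module.exports = {
  square: square
};

// main.js -- consumers pull the module in with require(), so nothing
// leaks into the global scope.
var math = require('./math');
console.log(math.square(4)); // 16
```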

The website built, I was only vaguely aware that RequireJS had to be part of the solution, but left this part of the puzzle aside while I pondered how to minify the code. I could have just downloaded the famous.min.js file from their website and copied all the files to my webserver, but I felt like this wasn’t how it was meant to be done (while having no real way of knowing the right way to do it). This lack of knowledge resulted in a frustrated mailing list post but ultimately no better solution. There was a suggestion to use RequireJS, and a lot more Googling on my part, but I still didn’t make much progress. Eventually I decided I’d create a “vendor” directory, cloned the Famous repo into it and… got stuck again. I just didn’t know how to join the dots between the original Javascript code and a tidy single-file representation of the library dependencies and my own app code.
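
With hindsight, the missing piece was something like the RequireJS optimiser’s build file. A sketch of the kind of thing I eventually needed – all paths and module names here are illustrative, not the real project layout:

```javascript
// build.js -- run with: node r.js -o build.js
({
  baseUrl: 'src',              // where the unbuilt modules live
  cjsTranslate: true,          // wrap CommonJS-style modules as AMD
  name: 'main',                // the entry-point module
  paths: {
    famous: '../vendor/famous' // assumed location of the cloned library
  },
  out: 'dist/app.min.js'       // the single minified output file
})
```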

After a conversation with a coworker I was armed with links to such tools as Bower (which I played with a bit, before realising I still didn’t know what I was doing), Grunt, Gulp and Yeoman, and also found an interesting Hacker News thread that seemed to contain useful information but was still a little out of my reach. All these tools, and still I had no good idea of what I needed to accomplish or how to do it. In the end I decided to just use the Yeoman Famous generator module and generate a Famous site scaffolding to see what such a thing looked like. This contained only a tiny example of Famous code, but more importantly it had all of the minification tooling, Javascript linting, build preparation and deployment tasks already baked in, without my having to build them from scratch. I copied the relevant parts over to my existing repository, fixed all the linter errors and ended up with the minified, deployable Javascript files I’d been hoping for.
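
To give a flavour of what that generated tooling looks like, here is a rough sketch of a Gruntfile along the lines of what the scaffolding provided – the task configuration is trimmed and the paths are invented by me:

```javascript
// Gruntfile.js -- lint the source, then minify everything into one file.
module.exports = function (grunt) {
  grunt.initConfig({
    jshint: {
      all: ['src/**/*.js']    // fail the build on linter errors
    },
    uglify: {
      dist: {
        files: {
          'dist/app.min.js': ['src/**/*.js'] // minified, deployable output
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('build', ['jshint', 'uglify']);
};
```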

I’m happy with the end result, but saddened by the apparent gap in learning possibilities for this part of the Javascript world. There are plenty of opportunities to learn basic Javascript, figure out how to make a static website slightly more dynamic and perhaps use some fairly standard convenience libraries like jQuery or Underscore – but not much beyond that. Building reusable modules with RequireJS, or minifying, linting and building your code for deployment, addresses problems that were felt acutely by long-time Javascript developers and that fortunately now have a variety of good solutions, after a lot of rapid development in this area over the last few years. Sadly, getting learners up to speed with these problems and their solutions doesn’t seem to have kept pace with the technologies themselves.

My recommendation: if you’re in a similar position to me, check out Yeoman and the generators available for it. Use that, and build up your knowledge of how the individual components and libraries fit together by inspecting the scaffolding code it provides for you. I can only hope that we start to see more tutorials on Codecademy and similar sites covering Javascript deployment and the problems that the tools mentioned above solve for modern websites.


Cloudformation and the data transformation nightmare

by Oliver on Friday, March 7th, 2014.

The background to this story is that I spent the bulk of one week recently working on getting a prototype service deployed with AWS CloudFormation, and the experience was still reasonably painful. My team has other services deployed with CloudFormation, which are working perfectly fine (now), but I had hoped there would be some improvements available since the last time we went through the process.

The Components

Firstly I’ll describe in a sentence what CloudFormation does, for those who aren’t familiar with it, and then list the components which need to go into making something deployable by CloudFormation. CloudFormation itself allows you to describe an entire collection of resources required to make a service runnable and accessible – for example web servers, database servers, a loadbalancer and perhaps a Memcached cluster.

You might consider the descriptive language it requires somewhat akin to the current generation of configuration management languages, but far more restrictive and minimal. It also tends to occupy itself with what happens at initial provisioning rather than with keeping the whole “stack” consistent over time – tasks which are more or less delegated to other services such as CloudWatch and AutoScaling. The fundamental input is the CloudFormation template, which requires you to provide the following (a minimal example follows the list):

  • Descriptive (for humans) metadata about the stack.
  • A list of all resources (basically other AWS services) required.
  • Static configuration parameters for all of those resources.
  • “Dynamic” parameters which can be overridden at stack creation or update time, which can either be a literal value or based on a map lookup.
  • EC2 instance user data – mostly taking the form of a shell script used to bootstrap your service when the instances boot up.
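
To make that concrete, here is a deliberately minimal template showing each of those pieces except user data (more on that below). Every identifier and value here is invented:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Hypothetical minimal stack: a single web server.",
  "Parameters": {
    "InstanceType": {
      "Type": "String",
      "Default": "m1.small"
    }
  },
  "Mappings": {
    "AmiByRegion": {
      "eu-west-1": { "Ami": "ami-00000000" }
    }
  },
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": { "Ref": "InstanceType" },
        "ImageId": { "Fn::FindInMap": ["AmiByRegion", { "Ref": "AWS::Region" }, "Ami"] }
      }
    }
  }
}
```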

The input is encoded in JSON – not a bad encoding in that it is fairly easy to read, but it also tends to be fairly easy to break through human editing, with missing quotes or commas. Depending on your JSON parser of choice, it can be very hard to find the breaking change when that happens.

Static configuration data is mixed with dynamic configuration data (parameters that can come from external sources, or mappings that depend on other inputs) which complicates the understanding of the template, but also vastly enriches its functionality. Here, we already have two possible origins of data coming into the system from sources other than the template itself – parameters passed on the command line to a tool, or read from a configuration file (local, stored in revision control, stored in S3, etc) to be merged in with other parameters.

Then there is the user data, which is really my main pain point.

Bootstrapping the Instance

We’ve described the stack, set up a template ranging in complexity from very simple to a very rich description of different environments and software version requirements – but even with a completely running stack we don’t have any instances actually running our software. You might have generated your own AMIs with your software built into them, configured to start on boot – but in this case you have simply moved your complexity from the provisioning stage to the build stage (and building an AMI as complete as this can take time both in preparation and on every build).

There are several methods available to you when delivering bootstrapping instructions to your newly started instances (most of which are described in this document from AWS).

  • User data script in the CloudFormation template, verbatim, expressed as shell commands (sketched after this list).
  • Externally-templated user data script (e.g. with ERB, Jinja or your engine of choice), rendered and delivered as part of the CloudFormation template. Like the above but allows some customisation for different environments or build versions.
  • Minimal shell bootstrap embedded in the CloudFormation template, pulling in an external script from a network source to continue the heavy lifting.
  • Puppet or Chef (which still requires a minimal shell bootstrap as above to start their own process).
  • CloudInit
  • CloudFormation helper scripts
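
As an illustration of the first option (and of the encoding pain), here is roughly what a verbatim user data script looks like as a fragment of the instance resource – the bucket name and version parameter are invented:

```json
"UserData": {
  "Fn::Base64": {
    "Fn::Join": ["", [
      "#!/bin/bash\n",
      "aws s3 cp s3://my-bucket/app-", { "Ref": "AppVersion" }, ".tar.gz /tmp/app.tar.gz\n",
      "mkdir -p /opt/app && tar -xzf /tmp/app.tar.gz -C /opt/app\n",
      "start myapp\n"
    ]]
  }
}
```

Every line of shell becomes a JSON string fragment with explicit newlines, which is exactly where the missing-quote and missing-comma breakage tends to creep in.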

None of these is a perfect solution, and choosing one is often simply a matter of selecting a point on a complexity scale. I won’t go into detail about how each works, but will try to express what pained me most about each.

  • Verbatim shell script: Potential for errors due to how it must be encoded in the JSON template. No scope for customisation based on environment or build versioning. Limited size.
  • Shell script with external templating: Introduces complexity by requiring data to be merged in from external sources. Risk of the external sources not being available or providing correct data at bootstrap time. Difficulty in tracking exactly which data went into a given build, and maintaining that historical timeline.
  • Minimal bootstrapping to initialize another script: Same risks as above, with shifted complexity from CloudFormation template to another file (or files) which may have further templating. It is also now more difficult to pass data between the initial bootstrap and secondary scripts.
  • Puppet or Chef: Much more complexity. You really need ongoing configuration management to already be a requirement for this to be a worthwhile proposition.
  • CloudInit: Requires learning what is basically a scripting “shorthand” form: you write your instructions, then encode the data as a separate step and make the encoded data available to the CloudFormation process. Passing custom data around dependent on environment or build version is again more difficult, and complexity is shifted to the build stage. Since the data is still passed in the user data section of the template, it must adhere to the same size limitations.
  • CloudFormation helper scripts: Only available on Amazon Linux, unless you are prepared to use something like heat-cfntools, which you must install using Pip. This dependency makes using the tools on something like Ubuntu Linux more complex again. This was actually one of the more tempting options for me recently, but the simplicity of the helper scripts and the ease of expression in the template come at the price of being fairly inflexible – not quite flexible enough for my software deployment.

I’ll now briefly describe another part of the problem which also has an impact on the choice of bootstrapping methodology, before getting to a more complete conclusion.

Service Lifetime Management

If you have been paying attention, you will have read at least something recently about the wars raging over the decision to replace init in Debian. I won’t provide any links – Google has more than enough fodder on the subject. It’s a topic I’ve rarely concerned myself with (if you’ve been using RHEL or CentOS for any reasonable amount of time, any system other than the standard SysV-style init has barely been a blip on the radar), but nevertheless I’m aware that the current choice is systemd, for better or worse.

Be that as it may, a bunch of my services run on Ubuntu and hence use Upstart (which I’ve rarely had problems with, although my reading on the topic now suggests that such a trouble-free experience is rare). Attempting to integrate Upstart-based control of a service with the CloudFormation user data or other bootstrapping mechanisms comes with its own challenges:

  • Amazon Linux comes with a horribly outdated version of Upstart (0.6, if I recall correctly) which lacks many of the options that make Upstart desirable. Among them is file-based logging (this is largely why I decided against Amazon Linux, despite its having the CloudFormation helper scripts available).
  • Expressing the startup parameters of the service correctly often requires a large amount of quoting. Getting the quoting right when it is generated from another shell script, itself embedded in JSON, is at the very least an error-prone exercise.
  • I’d like to run my service as a non-root user. Upstart supposedly supports user services (which you can even configure from a script inside the user’s directory), but these are disabled by default and the documentation for re-enabling them is extremely poor. Enabling them would also add complexity to the bootstrapping process. This means handling the dropping of privileges within the Upstart service script itself.
  • Delegating runtime control of your service to a system service such as Upstart once again introduces complexity in passing data from the bootstrapping process. Since your service is now being executed by Upstart, it inherits nothing from your bootstrap process directly – unlike a process forked straight from the bootstrap script, which would inherit environment variables and so on (which may or may not be a good thing, but it can at least be convenient).

Despite these points, Upstart still allows you to provide a very minimal startup script for your service (mine was about three lines – compare that to your typical init script), along with some conveniences such as automatic logging. Deploying a service is still not perfectly smooth unless you want to package everything up in a deb/rpm earlier in your build pipeline (and then still deal with fully configuring your service according to the environment and versions at hand), but to be fair, that’s not the fault or purpose of Upstart.
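
For reference, a hypothetical job file of roughly the size I mean – the stanza names are real Upstart, everything else is invented:

```
# /etc/init/myapp.conf
description "myapp"
start on runlevel [2345]
stop on runlevel [016]
respawn
console log      # file-based logging; one of the features missing in Upstart 0.6
setuid appuser   # drop privileges within the job itself (needs Upstart 1.4+)
exec /opt/app/bin/myapp --config /etc/myapp.json
```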

What did our implementation look like?

An initial implementation of ours had the vast majority of the work being done using Ruby scripts, some shell, the Amazon Ruby SDK and ERB templating. Specifically, you would call a shell script with some parameters (either manually or from our build system), which would call a Ruby script with those parameters and perhaps some other data divined from the filesystem; the Ruby script would then render the ERB template and make calls using the AWS Ruby SDK to S3 (for build artifacts and further CloudFormation parameters stored in files) and to CloudFormation itself to create or update the stack.

There were a lot of moving parts, data coming from a bunch of different places, and no consistent language or distribution of work between components. On top of configuration data being spread amongst several different sources, passing this data between components at runtime – as command-line parameters, environment variables or values embedded in JSON templates – is awkward at best.

How I manage my sanity

So, I’m pretty good at complaining about things, but what does my solution look like? One distinct advantage I have is that I tend to write software in Golang, and hence end up with a single binary artifact from which the service is run. Most of the time there are few or no external supporting files, so there is very little to ship and configure. But nevertheless…

  • Keep your data as simple as possible. Build runtime secrets (and as much static “configuration” as possible) into the binaries (you can use a trick like this with Golang) and keep the binaries secure.
  • Don’t use data stored in S3 or elsewhere to populate parameters. Use mappings and switch on a pseudo-parameter, e.g. AWS::StackName, to make it super simple and obvious where the data is coming from (see the sketch after this list).
  • If you can tolerate Amazon Linux, and the cost in flexibility, do use the CloudFormation helper scripts.
  • There’s a temptation to make your build tooling run on multiple environments – especially if your software runs on Linux in AWS but you use a Mac (or worse, Windows) as your working environment. Automate the most common tasks completely for Linux and run as many of them as possible in a build tool like Jenkins. Try to make the remaining tasks as platform-agnostic as possible (Ruby, Python and Java are all fairly good choices and have Amazon SDKs).
  • Try to avoid the temptation of templating inside of templates!
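
The mapping trick from the second point looks something like this – the stack names and keys are hypothetical:

```json
"Mappings": {
  "StackConfig": {
    "myservice-staging":    { "LogLevel": "debug", "MinInstances": "1" },
    "myservice-production": { "LogLevel": "warn", "MinInstances": "3" }
  }
}
```

Anywhere in the template, { "Fn::FindInMap": ["StackConfig", { "Ref": "AWS::StackName" }, "LogLevel"] } then resolves based purely on the name the stack was created with – no external data sources involved.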

When I next need to make changes to this deployment system, I think I’ll attempt to remove as much of the Ruby code as possible and rely solely on the Amazon Python CLI. This will allow us to drop the Ruby SDK requirement (which, unless you are a developer, has very little utility in the command-line realm) in favour of the Python CLI, which can be used for many other things in addition to its integration in our deployment script. It’s also a lot lighter-weight than the previous JVM-based CLI utilities. I’ve already reduced the number of data sources a lot; these could potentially be reduced further, and failing that, I can work to ensure that there are suitable mechanisms for tracing data transformations through the system, so we know where data is coming from and when it has changed.

These are just a few realisations after the latest round of bootstrapping refactoring I had to perform recently. What have your own experiences been like in bootstrapping nodes and keeping your sanity?
