Running and energy gels

by Oliver on Monday, August 10th, 2015.

I’ve been taking part in longer-distance events of various kinds (running, obstacle racing, cycling and, longest ago, kayaking) for quite some time now and have tried various nutritional strategies over the years and for the different events. Something that has become more of a staple for me personally in cycling and running is energy gels, and if you’ve used them yourself you’ll know that they can be a hit-and-miss affair; the choice is often very personal. What works for one person may not work for another.

That being said, at the last couple of events in which I participated, I think I did a particularly poor job of choosing my gels. At the point of consumption it was purely a matter of getting the calories into my body, but it wasn’t particularly pleasant. I’m not a picky person when it comes to food and drink, but I am obsessed with details and quite thorough, so I decided to try as many different gels as I could get my hands on and record my thoughts on them. Again, the choice can be a personal thing, but perhaps it will be interesting to others, and also handy for my own memory’s sake.

A couple of notes before the first review: I’m consuming these after at least ~45 minutes of reasonable exercise, so these are fairly realistic impressions of how they’d come across “in the field”. Of course, if you are low on energy and haven’t had anything in a while, you’ll generally be ravenous and slurp down anything, but assuming you time everything correctly you’ll want something that is reasonably palatable.

Clif Shot Energy Gel – Razz flavour

96 calories from a compact, easy to carry and open 34 gram packet. It also has a “litter leash” on the side of the packet which I haven’t worked out how to use – I just roll it back up and put it in my pocket. I think I ate one of these at home late at night a week or so ago, in one of my more desperate fits of midnight-snacking, and remember the taste wasn’t that great, but after running for 10km it was actually not bad. Not sure if it was the fact that it was around 0°C outside, but it seemed very thick – probably too thick for my liking.

Ideally I would not have to follow this up immediately with water to get it down, but it was just so thick I felt compelled to drink afterwards. The flavour was ok – not great initially but by the time I’d finished it, I wasn’t too unimpressed. Would use again if there was nothing better. Raspberry, in that kind of synthetic-tasting raspberry you get in lollies, mixed with a bit of saltiness. But I guess that’s quite predictable – most gels are going to taste synthetic, sickly sweet and salty all at the same time.

Somewhat interesting note: the packaging has a small paper sticker with the word “shot” on it, that covers “90% organic” on the plastic packaging. I wonder if the formula is different in Germany, or if it is simply bullshit that wasn’t allowed to be advertised here (unlike perhaps in other markets).

XenoFit Carbohydrate – Redberry Gel

68 calories from one 25g “stick” (in reality a tear-to-open soft packet like all of the others). Noticeably smaller than the others, and when I opened it I didn’t taste anything initially – although I suspect this was due to my taste buds being slightly frozen (I was running in about 3°C). The flavour, when it came through, was pretty similar to the Clif Shot but perhaps not quite as sweet – which is good.

Quite thick consistency again, similar to the Clif Shot. Again, this could be due to the temperature. Overall ok, would use again but it didn’t blow me away. The relatively small amount of energy in one packet might be an issue for some – I would probably want more in one packet to avoid having to continually open them to stay energised.

Nutrixxion Energy Gel “Force” flavour/variety

40g easy-open sachet with 80mg caffeine, 500mg taurine, 118 calories.
This one really tasted quite foul. It doesn’t list the flavour on the packet and I couldn’t identify what it might have been from tasting it – generically sweet, with a goopy texture. I guess it is not trying to be user friendly, but it does have a shot of caffeine to redeem itself with. Not sure that is enough for me. Wouldn’t use this one again.

PowerBar L-Carnitine liquid drinking ampoule (before) and Amino Mega Liquid drinking ampoule (after)

25mL each.
Technically not energy gels, these were in the same area of the shop and caught my eye. They are intended as some vaguely snake-oil-like muscle enhancer, to be taken in combination with exercise. The L-Carnitine ampoule was pleasant enough tasting, the liquid being quite thin and having a nice (although somewhat unidentifiable) flavour, but outside of the main ingredient it has practically no energy content at all. The Amino Mega Liquid does contain 52 calories and a host of strange chemicals, but also a rather horrid flavour. For those reasons I couldn’t consider these worthwhile for energy during exercise, and probably wouldn’t use them for their supposed muscle building or regeneration effects either.

MultiPower MultiCarbo Gel, Lemon flavour

40g easy-open sachet with 104 calories.
Very similar to many of the others I’ve previously written about: thickish consistency and a sickly-sweet lemon flavour that was neither particularly pleasant nor unpleasant. Again, I wouldn’t avoid it if there were no other options, but I also wouldn’t necessarily pick it over other available gels.

Sponser Sport Food Liquid Energy BCAA, strawberry-banana flavour

70g “toothpaste tube” style container with 173 calories.
Contains 500mg of “BCAA” (branched-chain amino acids), and also calls out (on the front of the tube) the presence of additional sodium, potassium and taurine. Whether or not those elements actually helped me, I found two things about this one detrimental – the container and the flavour. The very artificial-tasting strawberry-banana combination is not exactly revolting, but it’s not great. Why would you combine these flavours?!? Secondly, the tube format with the cap was far more difficult to consume from than other gels, and that was just while running. They recommend that you consume half the tube every 20 minutes, combined with 200mL of water per “dose”, which doubles the difficulty. I couldn’t possibly recommend this, and would avoid it unless I literally had no other options.

Xenofit Carbohydrate for sport, citrus-mix gel

25g sachet with 68 calories.
This one surprised me twice. Firstly the flavour: citrus (although basically lemon) was actually quite pleasant, although the texture is more pasty/floury than most gels, which I hadn’t expected. When I first started swallowing the gel I was thinking “I’m not sure I like this texture”, but after just a few seconds this changed to “actually I quite like this”. It is definitely a bit on the thicker side and takes a little effort to get out of the small sachet, but not so thick as to be unpalatable like some of the other gels. Incidentally, Xenofit is also the brand of sports drink powder I have. The main downside to these gels is their smaller size, so you don’t get as much out of each sachet as some of the others. Would definitely buy again.

Dextro Energy Liquid Gel, Apple flavour

60mL packet with screw-off lid, 114 calories.
I was saving this one for last but couldn’t help myself and took it on my run today. I’m fairly certain this is my favourite of the whole lot of gels (or possibly one of the other flavours of the same product). The consistency is much more fluid than other gels, in fact it’s almost like drinking a very sweet shot of juice or cordial. They are quite sweet but not sickly sweet, and the flavour, while somewhat artificial (like most other gels) is not too bad. This combination makes these my favourite energy shot. The only thing against them is the fairly impractical screw-off top of the packet. It’s ok if you are running, but quite fiddly if you are on a bike.

BSc BodyScience Athlete Series Energy Gel, Super Berry flavour

35g easy-tear sachet, 97 calories.
Unfortunately I left my big bag of gels at home, forgetting to take them with me on vacation, but this gave me the opportunity to pick up something new in a different country. I haven’t seen this particular brand before but was interested to try it. The consistency was quite thick, even compared to others I’ve already tried, and the temperature is mid-20s Celsius at the moment, so in colder temperatures it might be difficult to consume! That being said, it was not quite as sickly sweet as other gels on the thicker side, although the “super berry” flavour is not as great as the name would have you believe. I didn’t have any water immediately available to me, so it was good that there wasn’t a terrible aftertaste. Quite a convenient size and easily tearable on a bike without slowing down much – would probably buy again over most of the other gels I’ve tried.

PowerGel Fruit Dual Source Carb Mix Gel, Mango Passionfruit flavour

41g easy-tear sachet, 108 calories.
I’ll admit up front that PowerGels are one of my least favourite tasting gels, and I’ve used them in several races before when nothing else was available. That being said, the “fruit” variety wasn’t nearly as bad as the non-fruit flavours I’ve had before. Almost pleasant, if I dare go so far. Still, the easy-tear sachet was anything but – unless I made a terrible mistake, the consistency of the gel made it practically impossible to get out of the sachet. If I’d been on a bike it would have been a disaster, but even running it made things tricky. Luckily I had to stop at some traffic lights and wrestled with the packet for a minute. Would possibly buy again if I could be sure the sachet would open properly.

MultiPower MultiCarbo Gel, Orange flavour

40g easy-open sachet with 104 calories.
Proving once again that enjoyment heavily depends on how tired and in need of a psychological energy boost you are, I didn’t mind consuming this gel – despite having had one before in lemon flavour and not really enjoying it. Given that the lemon flavoured gel of this kind was consumed quite a while ago I think it must have been in one of the colder months, and I can definitely confirm that on a slightly warmer day the consistency was not as thick. The flavour was reasonable, not quite as sickly sweet as the lemon (somehow, despite the orange still being quite artificial tasting).

I’d be tempted to have this one again – but only in orange flavour. It’s definitely one of the handier sizes and packaging formats to carry around on a run or ride, and it takes little effort to consume.

Nutrixxion Energy Gel, Cola-Lemon flavour

40g easy-open sachet with 40mg caffeine, 118 calories.
Contrary to the “Force” flavour of the same brand and type, I have a soft spot for cola-lemon flavour in general and so I kinda enjoyed this gel. The consistency was still quite goopy, even on a hot evening after sitting in my pocket. I’d already run out of water so it was not the best experience, but still not too bad. On flavour alone I’d probably use this again and consider it on par with everything but my favourite gel (currently the Dextro Energy Liquid).

PowerGel “Original” Dual Source Carb Mix Gel, Vanilla flavour

41g easy-tear sachet, 107 calories.
I had low expectations for this one as I tend to prefer the fruity or cola flavours of gels, but actually was surprised that I preferred this one over the other flavours of PowerGels that I’ve tried in the past and generally disliked. I could actually stomach it, and the consistency was not bad. However, unless I’m making a terrible mistake every single time I open one of these sachets, there is something about the consistency of the gel and the opening at the top of the packet that makes it very difficult to get anything out of it easily. Smaller sachets from other brands don’t seem to be as problematic.

In any case, if I remembered to be more careful opening it, I would probably try this one again.

Xenofit Carbohydrate, Maracuja (Passionfruit) flavour

60mL sachet, 103 calories.
Unlike the smaller gels in the same brand, this one was quite a bit bigger. The reason became evident when I opened it (and was also hinted at by the calorie content) – it’s much less dense and “gel-like”. Almost (but not quite) liquid. At the other end of the scale is the Dextro Energy Liquid, which is very liquid, but this one was about half-way in-between. Perhaps as a result, the gel itself was not as sweet (and I was anticipating some fairly awful synthetic-tasting passionfruit flavour) and not overpowering. I wouldn’t say it particularly tasted like actual passionfruit, but that wasn’t a problem.

Might try again, although like many of the other rip-open sachets it proved tricky to actually get it open without squirting the contents all over my hand.

Sponser Liquid Energy Long, Salty flavour

40g sachet, 94 calories.
Knowing that the flavour was described as “salty”, I was already dreading this one somewhat. This gel’s advantage is that it has 180mg of sodium to counteract the sodium lost during exercise through perspiration – hence most exercise drinks also contain a reasonable amount of sodium. In practice though, it was the consistency that was the killer for me. Despite it being quite a hot day (perhaps 30°C in the shade when I was running) the gel did not soften up much, and felt like a slug sitting in my mouth, it was so thick and unyielding. It was quite a battle to swallow the first mouthful, let alone continue through the rest of the meagre 40g.

To summarise: the flavour is not actually noticeably saltier than other gels or exercise drinks but the consistency is way too thick to be pleasant. I would not try this one again, even if it were the only option.

Xenofit Carbohydrate, Orange flavour

60mL sachet, 103 calories.
This was a sweet (literally) relief after the salty, thick gel I had consumed about an hour before. Not much more to say about the orange flavour that I didn’t already say about the passionfruit variety. The warm weather made this gel seem even more liquid-like compared to other gels, and it was easier to consume since I managed to tear the sachet properly this time. It was a bit more synthetic tasting than the passionfruit.

The only thing not in its favour is the size, and at 60mL (rather than perhaps 40mL/40g) it doesn’t fit so well in my pockets, but that’s probably more to do with my running kit than this gel. Would try again if my more favoured options are not available.

Dextro Energy Liquid Gel, Cola flavour

60mL packet with screw-off lid, 114 calories.
Just another flavour of the same one I’ve tried before (and wrote about above). I only realised it was cola flavour just now, when I started to write about it – it wasn’t obvious to me while I was out for my run, and I assumed it was some kind of very synthetic fruit flavour. In any case, as before, this is definitely my preferred energy source while exercising. The only things not in its favour for me personally are the additional size, due to it being a liquid and not a more-dense gel, and the screw-off top (although if you feel like having just half of it now and half later, it is useful).

So there you have it.
It took something like 8 months to actually get through all of these gels, since until recently I wasn’t doing enough long distances to warrant them. To sum up, my preferred gel is the Dextro Energy Liquid Gel (if that wasn’t already obvious), but since the amount of energy you get from each one isn’t as much as from others, I’d probably supplement it with a banana or energy bars. However, before you base your own selection on my reviews here, I’d recommend trying them for yourself – it’s a very personal choice and everyone will have their own preferences.

It’s 2015, and online shopping sites still suck at taking credit card details

by Oliver on Monday, June 15th, 2015.

This is a small rant, and the sentiment in the title should already be very familiar to you if you have paid for anything online in the last 15 or so years. Remarkably, nothing (or very little) seems to have changed in that time. We still seem to be affected by the same range of ridiculously trivial problems – all easily solvable with a tiny amount of Javascript, mind you – and are not making any further progress in the state of the art.

Since I live in Germany, practically my only use for my credit card is in online shopping (almost no physical shops or even taxis accept credit cards). Like many others, I suspect, I have my credit card details saved in a secure password store, not saved in my browser for convenient automatic purchases by accident or small child, so I tend to copy and paste the numbers into the text field(s) provided on the site in question. The interfaces usually exhibit one of these problems:

  • Four separate text fields which have to be entered individually, preventing you from pasting a single number in.
  • Single text field with a hard-limit of 16 characters, preventing you from pasting in a credit card number with spaces between each group of four digits.
  • Single text field with no character limit, but it will warn you and prevent further form submission until you remove any spaces or hyphens between groupings of digits.

I think I might have seen one website, ever, that actually managed to take the input you provided and massage it into the correct format. For the benefit of anyone reading that might have to implement a website like this, here are some hints (with a rough sketch after the list):

  • Provide a single text field, and don’t limit the width of the field.
  • If there are any white-space or hyphens in the input, just throw them away silently.
  • If you are left with 16 digits as you expect, continue. If not, you can legitimately warn the user about a problem.
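
For what it’s worth, here’s a minimal sketch of that normalisation in Go (the function name and the strict 16-digit check are my own illustrative assumptions; the same few lines translate directly into the tiny amount of Javascript mentioned above):

package main

import (
	"fmt"
	"strings"
	"unicode"
)

// normalizeCardNumber silently drops spaces and hyphens, then checks that
// exactly 16 digits remain; anything else is worth warning the user about.
func normalizeCardNumber(input string) (string, error) {
	cleaned := strings.Map(func(r rune) rune {
		if r == ' ' || r == '-' {
			return -1 // returning -1 from strings.Map drops the character
		}
		return r
	}, input)
	if len(cleaned) != 16 {
		return "", fmt.Errorf("expected 16 digits, got %d characters", len(cleaned))
	}
	for _, r := range cleaned {
		if !unicode.IsDigit(r) {
			return "", fmt.Errorf("unexpected character %q in card number", r)
		}
	}
	return cleaned, nil
}

func main() {
	fmt.Println(normalizeCardNumber("4111 1111-1111 1111")) // "4111111111111111" <nil>
}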

These are genuinely problems I would give to a programming intern on their first day and expect a solution within an hour. It’s really not acceptable for widely-used websites to do such a terrible job of input handling and validation as the majority do today.

It’s 2015, and I’m still writing config management commands wrapped in for loops.

by Oliver on Friday, April 10th, 2015.

Warning: this is a bit of a rant. Today my team had to migrate our ElasticSearch cluster from one set of instances in our EC2 VPC, to a smaller set of physical machines in our regular datacenter (yes, it’s actually cheaper). Both sets of machines/instances are Chef-controlled, which generally I don’t have to worry about, but in this case it was important due to the ordering of steps in the migration.

The general process was something like this:

  • Remove the Logstash role from the nodes we were collecting data from to avoid immediate reconfiguration when we started moving ElasticSearch nodes around.
  • Remove the ElasticSearch role from the old ES cluster, and alter the disk and memory layout settings to match the new hardware.
  • Add the ElasticSearch role to the new ES machines and let them bring up the new cluster.
  • Add the Logstash role back to the data collection nodes, which would then point to the new cluster.

For simplicity’s sake, I’ll just say we weren’t too interested in the data as we rotate it fairly regularly anyway, and didn’t bother migrating anything.

Last week I was already bitten by a classic Chefism when we realised that the way I’d set up the Logstash recipe was incorrect. To avoid weird and wonderful automatic reconfigurations of the ElasticSearch nodes that Logstash points to, I use a hard-coded array of node addresses in the role attributes, but in the recipe leave the default setting (in case you forgot to set up your array of ElasticSearch addresses) as a single-element array – ["127.0.0.1"]. Chef professionals will already know what is coming.

Of course, Chef has an attribute precedence ruleset that still fails to sink into my brain even now, and what compounds the problem is that mergeable structures (arrays, hashes) are merged rather than replaced. So I ended up with an array containing all my ElasticSearch node addresses, and 127.0.0.1. So for some indeterminate time we’ve been only receiving about 80% of the data we had been expecting, as the Logstashes configured with 127.0.0.1 have been simply dropping the messages. Good thing there wasn’t ElasticSearch running on those machines as well!

On attempting to fix this, I removed the default value that had been merged in, but decided it would be prudent to check that an array had been given for the list of ElasticSearch servers, to avoid Logstash starting up with no servers to send messages to and potentially crash-looping. The first attempt was something like this:


# Guard against a missing or mis-typed attribute before templating the config
if elasticsearch_servers.class != Array
  fail 'You need to set node["logstash"]["elasticsearch_servers"] with an array of hostnames.'
end

Then I found that this fail block was being triggered on every Logstash server. Prior experiences with Hashes and Mashes in Chef land made me suspect that some bizarro internal Chef type was being used instead of a plain Array, and a quick dip into Chef shell confirmed that – indeed I was dealing with an instance of Chef::Node::ImmutableArray. That was surprise number two.

The third thing that got us today was Chef’s eventual consistency model with respect to node data. If you are running Chef rather regularly (e.g. every 15 minutes) or have simply been unlucky, you may have attempted to add or remove a role from a machine while it was in the middle of a Chef run, only for it to write its node attributes back to the Chef server at the end of the run and wipe out your change. I’m sure there’s a better way of doing this (and I’m surprised there isn’t locking involved) but I managed to run into this not only at the start of our migration but also at the end. So we started out life for our new ElasticSearch nodes by accidentally joining them to the old cluster for a few minutes (thankfully not killing ourselves with the shard relocation traffic) since the old roles had not been completely removed. Then we managed to continue sending Logstash messages to the old cluster for a period of time at the end of the migration, until we figured out the Logstash nodes still thought the old cluster was in use.

The for loop referenced in the title was of course me, repeatedly attempting to add or remove roles from machines with the knife command, then restarting Logstash until it was clear the only connections to our old cluster were from the health checkers. (Of course I could have used PDSH, but I needed to space out the Logstash restarts, due to the usual JVM startup cycle.)

All of this makes me really glad that I don’t deal with configuration management on a daily (or even weekly, perhaps even monthly) basis. Writing and running 12 Factor apps and things easily deployable via CloudFormation (yes, even with its mountain of JSON config) feels like actually getting work done, whereas this just feels like pushing a boulder up a hill. Sorry for the Friday evening downer – happy weekend everyone!

MBTI and pair programming

by Oliver on Friday, March 27th, 2015.

I had a meeting this week where among other things we talked about our teams and team members and how things were doing, generally, in the sense of team health. Oh yeah, since I haven’t explicitly called it out on this blog, for the last 9 months I’ve been an engineering manager and since the beginning of the year took on a second team. So I’ve got two teams of developers to manage currently.

Within the discussion we touched on personalities of team members and how some people are more likely to engage in pair programming, but others generally not. This reminded me of my own habits. I aspire to pair program, but when the opportunity is there I usually avoid it. This isn’t setting a great example to my teams so I feel guilty about this, but in the moment we were discussing the topic I started to reflect on this tendency a little.

Part of the new management training programmes that are being explored at SoundCloud involves a reasonable amount (ok, a LOT) of self-discovery and self-awareness. One form this takes is in doing an MBTI test and getting familiar with your tendencies, preferences and communication styles. I’ve done this test at least twice in the past and nothing had changed this time around, but I’m more familiar now with the implications. I tend to live in the factual, data-based world and, without delving too much into my own personal MBTI type, prefer to plan things out and think them through in advance rather than acting spontaneously in the moment, talking out problems and making on-the-spot decisions.

This really reflects on my hesitation when it comes to pair programming. Innate in the process is talking out problems when the situation is not understood, and making on-the-spot decisions without much time to reflect internally or rely on internal thought processes (since there’s another person there waiting for you). This directly conflicts with my personal predispositions. No wonder I am not a willing pair programmer! It’s quite likely members of my teams may be the same, and hence taking a universal “everyone pair programs” approach may at best lead to poor results and at worst lead to a dysfunctional team full of unhappy members.

I’m sure this is not an original thought (in general) but it was a useful realisation for me. I think that taking into account different personality types and different personal motivators for your team members, and using this when planning how to work and what to work on, can be one potentially powerful tool in building a strong and happy team.

Golang, testing and HTTP router package internals

by Oliver on Sunday, January 18th, 2015.

We have an internal service that takes requests of the form something like /foo/{ID}/bar which basically is expected to generate data of the form “bar” about the ID entity within the collection “foo”. Real clear! Because this service has a bunch of similar routes that have a resource identifier as part of the request path, we use the Gorilla Mux package. There are a lot of different approaches to HTTP request routing, many different packages (and some approaches recommending you use no external packages) – this is out of scope of this article!
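
For context, wiring up such a route with the mux package looks roughly like this (the handler and response below are invented for illustration, not our actual service code):

package main

import (
	"net/http"

	"github.com/gorilla/mux"
)

// barHandler is a stand-in for the real handler; mux makes the {id} path
// segment available via mux.Vars().
func barHandler(w http.ResponseWriter, r *http.Request) {
	id := mux.Vars(r)["id"] // e.g. "id:12345"
	w.Write([]byte("bar data for " + id))
}

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/foo/{id}/bar", barHandler).Methods("GET")
	http.ListenAndServe(":8080", r)
}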

This week we realised that for a given endpoint, one other service that calls ours was URL-encoding the resource ID. There are two parts to this that need to be stated:

  • IDs are meant to be opaque identifiers like id:12345 – so they have at least one colon in them.
  • A colleague later pointed out that encoding of path parts is actually up to the application to define (although I haven’t read the part of the spec that says this, recently). URL-encoding thus is not strictly necessary except when it comes to query parameters, but in this case we have something being encoded and need to deal with it.

The fix is relatively simple. We already had some code pulling out the ID:

id := mux.Vars(request)["id"]


However, this was just passing the ID to another part of the code that validated that the ID is properly formed, with a regular expression. It was expecting, again, something like id:12345 and not id%3A12345. So we introduce a very simple change:

    encodedID := mux.Vars(request)["id"]
    id, err := url.QueryUnescape(encodedID)
    if err != nil {
        return err
    }


OK, this will work, but we should test it. We’ve introduced two new “happy paths” (a request with the ID not encoded, where the decoded version is exactly the same; and a request with the ID encoded, where the decoded version is what we expect for later validation) and one new “unhappy path” where the ID is malformed and doesn’t decode properly. The problem here is that we need to test this function and pass in a request that has an appropriate ID for the path we are trying to test.

Already, this sets us down the road to the yak shaving shop. We can assemble a request ourselves, and attempt to set up all of the right internal variables necessary to be able to pull them out again inside our own function in the way it expects. A slightly harder way would be to set up a test server with httptest, set up the routing with mux and make a real request through them, which will ensure the request that makes it to our function under test is as real as can be. However we are now also effectively testing the request handling and routing as well – not the minimum code surface area.

As it turns out, neither of these options is particularly good. I’ll start with the latter. You start up a test server and assemble a request like so (some parts have been deliberately simplified):

testID := "id%XX12345" // deliberately malform the encoded part
ts := httptest.NewServer(...) // NewServer returns a single *httptest.Server
reqURL := fmt.Sprintf("%s/foo/%s/bar", ts.URL, testID)
resp, err := http.Get(reqURL)


Uh-oh, the malformed encoded characters will be caught by http.Get() before the request is even sent by the client. The same goes for http.NewRequest(). We can’t test like this, but what if we assemble the request ourselves?

serverAddr := strings.Split(ts.URL, "http://")[1]
req := &http.Request{
	Method: "GET",
	URL: &url.URL{
		Host:   serverAddr,
		Scheme: "http",
		// Opaque is used verbatim as the request-URI, so the client won't
		// try to parse or re-encode the malformed path
		Opaque: "/foo/id%XX12345/bar",
	},
}
resp, err := http.DefaultClient.Do(req)


We can send this request and it will make it to the server, but now the request handling on the server side will parse the path and catch the error there – it still won’t make it to the function under test. We could write our own test server that has its own request handling wrapped around a TCP connection, but that’s far more work. We also have to determine whether the request has succeeded or failed via the test server’s response codes (and possibly response body text) which is really not ideal.

So, onto testing with a “fake” request. Looking back at our code, we notice that we are pulling the ID out of the request via mux.Vars(request)["id"]. When you don’t use any request routing package like this, basically all request path and query parameter variables are accessible directly on the request object, but my suspicion was that mux.Vars didn’t simply wrap data already in the request object but stored it elsewhere in a different way. Looking at the mux code, it actually uses a data structure defined in the context package, a very gnarly nesting of maps. The outer level keys off the request pointer, and each unique request pointer will have a map containing different “context keys” – either varsKey or routeKey depending on where the parameters are coming from (but I’ll not dive into that in this article).

The part of the request we are interested in is grouped under varsKey, which is the first iota constant, so it is 0. We can use context.Set() to set the appropriate data we want to fake out in our request, with a relatively bizarre invocation:

type contextKey int
const (
  varsKey contextKey = iota
  routeKey
)
context.Set(request, varsKey, map[string]string{"id": "id%XX12345"})


This appeared to be enough to work, but inevitably the test would fail due to the result of mux.Vars(request)["id"] being an empty string. I added some debugging Printf’s to the mux and context packages to print out what was being set and what was being accessed, and universally it looked like what was created should have been correct:

map[0x10428000:map[0:map[id:id%XX12345]]]


The request pointer keying into the top-level map was the same in both cases, but the map of parameter names to values was only there when setting it in the test – what mux.Vars() was accessing simply didn’t have them.

The problem is of course simple. The mux package is keying into the second-level map with a variable of value 0 but type mux.contextKey. I was attempting to fool it by keying into the map with a main.contextKey of the same value. The only reason this was possible at all is that the inner map of data is map[interface{}]interface{} – effectively untyped – so the two zero-valued keys of different types (and even then, only different by virtue of being in different packages) did not collide, and hence there was no way to get out the value I had previously set.

Since mux.contextKey is not exported, there is actually no way to fake out that request data (well, I’m sure it can be done with some reflect package magic, but that’s definitely indicative of code smell). The end result was that this small code change is untestable in the unhappy path. I’m still relatively sure nothing unexpected will happen at runtime since the request handling above the function will catch malformed encodings, and some alternatives do exist, such as doing this kind of decoding in its own handler wrapping around what we already have set up (sketched below), or not using the mux package in the first place and simplifying our request routes.
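
To sketch that first alternative (all of the names below are invented for illustration, and this is not the code we actually shipped): a wrapping handler can do the unescaping once, reject malformed IDs with a 400 before they reach the inner handler, and stash the decoded value under its own context key:

import (
	"net/http"
	"net/url"

	"github.com/gorilla/context"
	"github.com/gorilla/mux"
)

type decodedIDKey struct{}

// withDecodedID wraps an existing handler; the inner handler reads the
// already-decoded ID back out with context.Get(r, decodedIDKey{}).
func withDecodedID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id, err := url.QueryUnescape(mux.Vars(r)["id"])
		if err != nil {
			http.Error(w, "malformed id", http.StatusBadRequest)
			return
		}
		context.Set(r, decodedIDKey{}, id)
		next.ServeHTTP(w, r)
	})
}

It would be registered with something like r.Handle("/foo/{id}/bar", withDecodedID(http.HandlerFunc(barHandler))), which at least keeps the decoding and its error handling in one place instead of repeating it in every handler.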

It is, yet again, a great example of why sometimes the simplest changes can take the most time, discussion, agonising over testing methodologies and the greatest personal sacrifice of software development values. The only reason I didn’t spend any longer on it (and I definitely could have) was that it was blocking other teams from progressing on their own work (outside of the obvious wastage in my own productive time).

AWS AutoScaling group size metrics (or lack thereof)

by Oliver on Saturday, January 17th, 2015.

One of the notably lacking metrics from CloudWatch has been the current and previous AutoScaling group sizes – in other words, how many nodes are in the cluster. I’ve worked around this by using the regular EC2 APIs, querying the current cluster size and the desired size and logging this to Graphite. However, it only gives you the current values – not anything in the past, which regular CloudWatch metrics do (up to 2 weeks in the past).

My colleague Sean came up with a nice workaround – using the SampleCount statistic of the CPUUtilization metric, filtered on the AutoScalingGroupName dimension. Here’s an example, using the AWS CLI:


$ aws cloudwatch get-metric-statistics --dimensions Name=AutoScalingGroupName,Value=XXXXXXXXProdCluster1-XXXXXXXX --metric-name CPUUtilization --namespace AWS/EC2 --period 60 --statistics SampleCount --start-time 2015-01-17T00:00:00 --end-time 2015-01-17T00:05:00
{
    "Datapoints": [
        {
            "SampleCount": 69.0,
            "Timestamp": "2015-01-17T00:00:00Z",
            "Unit": "Percent"
        },
        {
            "SampleCount": 69.0,
            "Timestamp": "2015-01-17T00:01:00Z",
            "Unit": "Percent"
        },
        {
            "SampleCount": 69.0,
            "Timestamp": "2015-01-17T00:03:00Z",
            "Unit": "Percent"
        },
        {
            "SampleCount": 69.0,
            "Timestamp": "2015-01-17T00:02:00Z",
            "Unit": "Percent"
        },
        {
            "SampleCount": 67.0,
            "Timestamp": "2015-01-17T00:04:00Z",
            "Unit": "Percent"
        }
    ],
    "Label": "CPUUtilization"
}

Some things to note:

  • Ignore the units – it’s not a percentage!
  • You will need to adjust your --period parameter to match that of your metric sampling period on the EC2 instances in the AutoScale group – if you have regular monitoring enabled this will be one sample per 5 minutes (300 seconds), if you have detailed monitoring enabled it will be one sample per 1 minute (60 seconds).
  • The last point also means that if you want to gather less frequent data points for historical data, you’ll need to do some division – e.g. using --period 3600 will require you to divide the resulting sample count by 12 (regular monitoring) or 60 (detailed monitoring) before you store it (see the sketch after this list).
  • Going via CloudWatch in this way means you can see your cluster size history for the last two weeks, just like any other CloudWatch metric!
  • Unfortunately you will lose your desired cluster size metric, which is not captured. In practice I haven’t really required both desired and actual cluster size metrics.
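
As a sketch of that division (the helper function here is just for illustration): each instance contributes one CPUUtilization sample per monitoring interval, so the instance count is the SampleCount divided by period/interval:

// instanceCount converts a SampleCount datapoint back into an approximate
// number of instances. sampleIntervalSeconds is 300 for regular monitoring
// or 60 for detailed monitoring.
func instanceCount(sampleCount float64, periodSeconds, sampleIntervalSeconds int) float64 {
	samplesPerInstance := float64(periodSeconds) / float64(sampleIntervalSeconds)
	return sampleCount / samplesPerInstance
}

// With the detailed-monitoring output above: instanceCount(69, 60, 60) == 69 instances.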

We’ll start using this almost immediately, as we can remove one crufty metric collection script in the process. Hope it also helps some of you out there in AWS land!

Getting Tough – The Race (Rudolstadt, 06.12.14)

by Oliver on Sunday, December 7th, 2014.

As I mentioned in my last post, I did Getting Tough – The Race in Rudolstadt yesterday. Check out that post for a link to a first-person-style video shot at last year’s event. Having watched that video, and obsessively scoured the internet for photos and any other tidbits of information, I expected that I knew what I was getting myself into. I have to say first up, whoever “Iron Mike” (poster of the video) is, he’s quite an athlete! There were a few obstacles in this year’s race that weren’t in last year’s, but still his time of about 02:15 compared with mine of about 03:51 is dramatically faster!

Leading up to the race, I was primarily concerned about two things – the temperature and prospect of complete submersion in ice-cold water, and the fact that this year they snuck in the “electro-shock therapy” obstacle that many would be familiar with from Tough Mudder – basically wires hanging down from a frame that you have to run through, and suffer multiple significant shocks along the way. Ironically even though the longest distance I’ve run this far has been a half-marathon, the fact that Getting Tough is 24km didn’t bother me so much.

First up, my gear selection. Over the last few weeks my training runs were the test ground for combinations of clothing and I arrived at the following:

  • Nike dri-fit combat pro underwear
  • Nike dri-fit running pants that end just below the knee
  • ASICS Gel-FujiRacer 2 Trail shoes and the socks I got from the TrailBerlin race
  • Under Armour Cold Gear Compression EVO Mock long-sleeved top as the base layer
  • Some Helly-Hansen poly-propylene long-sleeved top I had from maybe 10 years ago over that
  • Another synthetic long-sleeved top from O’Neill which I got originally for kayaking, over that
  • A short-sleeved synthetic running top from the race itself as the top layer – mostly for blocking the wind and for looks 🙂
  • Under Armour ArmourVent Cold Gear running beanie
  • Under Armour Core Cold Gear Infrared gloves

I figured with all of the layers I was wearing I would be covered for the cold weather, and for the most part I was (more on that later). I was expecting the temperature to be a few degrees below zero, and some of the pre-race organisers’ photos actually showed snow on the starting grounds, but in the end it might have been hovering around 0ºC and any snow that was there before had melted. In the morning after registering, people started warming up in the “Walk of Fame” area (the final concentrated obstacle field) before the call was made to move to the starting field.

Everyone marched the 1km through town to the starting field where the warm-ups continued and some starting announcements were made. There were fireworks and amazingly a three-plane formation fly-by just before the start! I started at the very left-hand side of the starting field which worked to my advantage as they also had a fire engine spraying everyone with cold water as we crawled under a very long section of barbed wire – my side of the field was too close to the fire engine for them to aim down to, so I escaped the cold water for the moment. After exiting the barbed-wire crawling obstacle there was a short jog across the field until we hit the ditches. These are pretty deep and climbing out is difficult by yourself, but fortunately plenty of people were willing to pull you out on the other side. The water was pretty damn cold, but we were only in it briefly. A quick scramble over a dirt hill, and then another ditch, then out again.

Incidentally I was amazed at the pace at which everyone set off – considering that there were 24km to cover, I thought people would take the start a bit easier (I certainly was) but I guess adrenaline does that to people. After a few events where I used too much energy too early on and ran out of steam towards the end, I’m quite cautious about starting out too fast, and let myself warm up properly in the first 5km, also for the sake of my muscles. I assume the front-runners would have been galloping along at the start, which is no surprise.

If you check out any of the videos of last year’s event taken on drones, you’ll see that the next real obstacle is a river crossing. This is really the first time the coldness of the water hits home. The river is relatively fast-flowing although perhaps only knee-height at that point. Because the crowd hasn’t thinned out significantly at that point, there’s only so fast you can go, and by the time you hit the other side your legs are really starting to hurt. But at this point you aren’t too worried – you made it across and it’s more a refreshing dip in the river than anything significantly problematic.

From there, the first 15-20km was actually pretty uneventful. There were probably just a handful of obstacles along the way, with the large part of the difficulty coming from running that distance through the mountains. I say running, but actually when the terrain became very steep most people slowed down to a walk until it flattened out again. Given we still had a long way to go, I don’t think this is unreasonable in order to keep your energy up for the last couple of brutal kilometres. There were plenty of water and tea stops along the way, some cut-up pieces of banana, and I also had a couple of gels with me to keep up energy levels.

I think if the temperature had been a few degrees lower, we would have seen some decent snow coverage, but as it was there were just a few patches of it once we got up into the highest part of the mountains. The scenery was beautiful though and I’d recommend running around these parts just for that sake. There are some parts that go through little villages just outside of Rudolstadt which are quite charming and a lot of the locals turn out to wave, clap and cheer on the runners. I actually found it generally warm enough during the mountain running that I took my beanie off for the whole time.

Where things start to get serious is when you hit the ditches again – this time going along them length-ways. The water is by now up to about waist-height in some parts and you have to wade the whole length of the three-sided ditch that has been excavated. It probably took 2-3 minutes but it felt like an eternity. The water is so cold at this point that it stops feeling cold anymore, leaving just raw pain in your legs. I was about half-way along the first side when I started thinking I wouldn’t make it – the pain was just too great and I was sure my legs would just collapse and leave me sinking into the mud. The second side was marginally shallower which provided some relief for the upper legs, but by the third side I think some of the pain had subsided into throbbing numbness. Crawling out at the end of the ditch was a challenge, and my legs and feet felt like numb concrete stilts extending from my hips. I could jog but not with much dexterity.

That led to the first part of obstacle runs at the end, mostly made up of climbing over obstacles, crawling through or under concrete pipes, monkey bars (which I’m glad to say my training helped me with immensely) and a section where we all had to carry a heavy bag of rocks or something. Naturally there was jumping over fire and a few more ditches – all actually fun and a good chance to loosen up and warm up after the coldness of wading through the extended ditch.

But that warming up was short-lived – after a short run further, we came to the swimming pools. There are two of them – the first one with about seven logs that you have to dive underneath, and the second with a giant scaffolding erected on top of it, providing plenty of opportunities for climbing and potentially falling into the water below if you lose your grip. Being up to chest-height in the freezing water, and fully submerging yourself to get under the logs was definitely one of my fears coming into this race and I have to say it was warranted. The cold was unbelievable. The additional layers I had on did absolutely nothing to keep my body warm; the beanie got completely soaked as did my gloves, and in moments I was chilled to the bone. In this situation you just have to get through it as quickly as possible, which is exactly what I did. Between logs I only came up for air and then dived down again immediately. On getting out of the water I wrung out my gloves and beanie which made some difference, and fortunately my clothing selection did ensure that most of the water drained off.

The second pool’s scaffolding was a new obstacle for this year. Last year they had some kind of balancing-run over shipping pallets that were tied together and floating on top of the water, which actually would have been quite fun. This time around it was two sets of monkey bars, some balancing-beams and some length-ways pipes where you had to move along hand-over-hand. I can’t really describe them very well… basically just a single pipe aligned in the direction you are travelling in that you are suspended from, and have to climb along using only your hands. Again, thanks to all the monkey bar training I’ve been doing recently, I managed to get through this obstacle the whole way without falling off – and I was very glad!

That was followed by a few other simple but enjoyable obstacles – running over discarded car tyres, crawling under fences in sand, but then we hit a road block. There’s a sort of fake house that they build, fill with stage smoke and dance music and some small obstacles, but getting into it involves climbing over a garbage skip filled with water and jumping back down to the ground on the other side. The funnel into the house slowed people down enough that a huge crowd gathered waiting to get in. Since we were all still quite wet and cold, and now not moving, most started shivering – some quite violently. Fortunately there were so many people and the crush of newcomers behind us meant that we were all packed in quite tightly – sharing what body heat there was to go around. I probably lost about 15 minutes just sitting in this traffic jam, which definitely shows that it helps being among the faster runners through the first 20km of the race.

Most of the rest of the obstacles you can watch the videos for, but there was another new addition to this year as I mentioned before – the electric shock run. From the photos the organisers had put up I had assumed there was only going to be one, but there were actually three! I’m really not sure what their intention was, as the final result from their construction was a bunch of quite widely-spaced wires hanging down. A few runners ahead of me approached slowly and then weaved quite easily between the wires at walking pace – which I did myself – and escaped being shocked at all. The second of these was exactly the same. The third electric shock run, reasonably close to the end of the race, had wires that ended so far off the ground that it was possible to quite comfortably crawl underneath them entirely. So, much to my relief, I was able to avoid being shocked (with electricity at least) at all during this race.

Another obstacle I had not entirely understood from the pre-race photos – there is another brief river crossing of sorts as you approach the Walk of Fame obstacle section. Last year they just had people descend to the river from the bank, walk along the river under the bridge for a bit and then climb the bank again up to the field. This year, they had strung up two rows of tractor tyres that you had to climb over. Quite similar to chest-height wall climbs, but easier due to the knobbly bits on the tyres giving quite ample grip (again, to my relief). After getting out of the river, I was expecting the cold water treatments to be really over – but it was not to be. A scaffolding rigged up so you had to duck under one beam and then climb onto a platform before jumping out the other side had a bunch of fire hoses rigged up at the top shooting out water from the river. It was basically a freezing-cold river-water shower while you were also trying to climb onto a chest-height platform – not pleasant, but at least not too difficult to climb.

After this, again you can check the videos for the remaining obstacles as it was practically identical to last year’s layout. The wall climbs still got the better of me and I needed assistance in getting up despite the wall climbing training I’ve been doing. The extended crawling sections on gravel were quite painful on the knees and legs and you soon give up the idea of getting any kind of speed in these sections. All the other obstacles are actually quite fun, but at this point I was really reaching my physical limits which makes them all the more challenging. Around about this time I was thinking “why the hell did I do this” – the last 1km section of obstacles is just relentless and brutal torture after already subjecting yourself to a very active and punishing trail run and obstacles.

There was a minor traffic jam again as people slowed down to climb over the last few concrete barricades and crawl through the (very low) final tunnels, but at this point I doubt anybody was too concerned with speed and more had thoughts on finally finishing. The RFID reader was right after the exit of the last tunnel so I think I got my wrist-tag over it well before I crossed the finish line, but there were only a few seconds in it. After getting my medal and being wrapped up in a space-blanket, I was given a cup of some kind of hot lemon recovery drink and retreated back to the changing tent to collect my bag, towel off and get into some warmer clothes. I was feeling noticeably nauseous (hopefully not because of the recovery drink) and physically and mentally devastated. Even after completely changing into my dry, warm clothes I was also still shivering for about an hour afterwards.

So, there it is. It’s only a day and a bit after finishing but I know it’s too soon to ask myself if I’d like to do it again (and I’ve got other races coming up which I have to focus on instead). However I will say that the vast majority of it was truly enjoyable; it was an immense physical challenge and if you have done plenty of OCR-style events already and feel like you need to push the challenge to the next level then Getting Tough will certainly fit your requirements! I do wonder if next year might be a little colder, and the electro-shock wires a little closer together and if that might make it just that little bit more punishing (which I wouldn’t find that appealing)!

Getting back into fitness

by Oliver on Sunday, November 30th, 2014.

I’ve just created a new category on this blog – Health – and recategorised one of the older articles I wrote a while ago. Being a confirmed computer geek, fitness has never really been very high on my priority list until I got into kayaking when I was about 24 or so. I managed to keep that up for several years and ended up quite fit – completing a 111km kayaking marathon three times in consecutive years and also a 5-day 404km marathon at the end of 2006. Sadly around 2007 I moved and various other factors made it more difficult to keep up kayaking, and I gave it up completely when I moved to Germany in 2009.

Since having a child in 2010 it’s been harder to recover my fitness regime. For a while I was going to the gym, but I tend to get a bit bored doing that. We borrowed a bike trailer and for maybe 6 months I was cycling almost every morning about 20km with my son sitting in the trailer behind me – it was actually a lot of fun and good exercise, picking different routes around Berlin to explore. Another winter came and inevitably I stopped exercising again and really I didn’t find anything suitable to do for a couple of years. I tried running a few times but ended up with very sore knees after no more than 5km.

A couple of years ago I discovered the Berliner Mauerweg – the entire course of the original 160km Berlin Wall built in the early 60s. It is possible to walk and even ride a bike around the whole thing. I had cycled a few 20-30km sections from time to time but last year finally undertook to cycle around the whole thing in one day, and started preparing by cycling 30-50km sections until I had finished the whole thing and had familiarised myself with the entire length. Some sections I rode more than once, so I made sure to try both directions so that I could plan to ride the entire 160km in the most logical way possible. Largely this comes down to which direction is easiest to find your way along, since some parts are not well signposted and it is easy to lose your way. Finally, and fittingly, on Tag der Deutschen Einheit last year I rode around the whole thing. The weather was perfect, I set out in the dark at 6am and finished around 4:30pm, tired but happy. I took about 3-4L of water too much for the journey but had it been hotter I might have used it all up – probably good to have been on the safe side.

I’d like to ride it again but it’s a big undertaking (and that’s not on a road bike either – just a normal “Herrenrad”). Late last year I took a bamboo bike building course, with the intention of racing it in some of the amateur races that seem to be frequently happening around the country. Finally, in August of this year, I did race the bike and had a great time doing so. There are definitely plenty of cycle-nuts at SoundCloud so for fitness I could easily stick with them and make that my primary sport.

For reasons that I still don’t understand, earlier in the year I started getting interested in doing Tough Mudder. I suspect it was a banner ad or a suggested group while I was on Facebook – that shows you how powerful these messages can be without you even realising it! After watching a few videos and immersing myself in the subject I was hopelessly addicted to the idea of doing it, and signed up, not really knowing yet how I’d get to the point of physically achieving that level of fitness required. Right after finishing the Vattenfall Cyclassic race I started doing fitness training classes twice every week, and even managed to convince a couple of co-workers to sign up not only to the training sessions but Tough Mudder itself. Along the way to Tough Mudder I also ran a bunch of amateur 10km “trail run” races in preparation, such as the Volvo Tierparklauf, Potsdamer Herbstlauf and TrailRun Berlin.

Despite training quite a lot before Tough Mudder, I still feel I was unprepared. The 18km distance was not much of a problem, but I did find that any time I went significantly over 12km beforehand, my legs would be extremely sore for anywhere from a couple of days to a couple of weeks. I actually didn’t run much at all in the two weeks before Tough Mudder because of it – something I am doing or not doing is leading to excessive recovery time between exercises, and if I’m going to further improve my fitness this is something I’ll have to address. Upper body and core strength (really important when climbing high walls, I discovered) were also nowhere near what I’d like them to be. I recorded the whole thing on my GoPro and have a fun-size edit available for viewing here:

In the run-up to Tough Mudder I managed to sign up for the Müggelsee Halbmarathon as well, and at 21km is far and away the longest run I’ve ever done uninterrupted. Excluding Tough Mudder I had previously done the Sydney City to Surf a couple of times, which is 14km and I had always had extremely sore knees and legs in general after doing that. Unfortunately the Halbmarathon came only a week after Tough Mudder so I hadn’t fully recovered, and so did the whole thing in a reasonable amount of leg pain. Then I had to take another couple of weeks off running to recover again, which is really unfortunate as I had also signed up for Getting Tough which is on the 6th of December. The last few weeks I’ve been training on my own for this race, which is 23km and in the cold of winter. You should really check it out, it’s amazing:

I’m feeling far more unprepared for this one, mostly because of the extreme cold conditions. I’ve developed a reasonable training regime for it, which consists of a roughly 10-11km run to Volkspark Friedrichshain, a bit of hill running, various upper-body and core strength exercises like bear crawling, climbing walls, climbing nets, monkey bars, dips, pushups etc, with running in-between to recover. I’ve been getting up at 6am on at least two weekday mornings and doing this routine entirely in the dark before sunrise. Yesterday’s session was the first where the temperature was actually at freezing point and it was nice to know that I was at least clothed well enough to be comfortable at that temperature.

But even when this event is over I have others on the horizon. I’m looking into signing up for No Guts No Glory – both the “chicken run” race, a 6km night run on the Saturday, and the “No Way Out” race, a regular 17km day race on the Sunday. If I manage to complete Getting Tough and keep up my training, I figure that by February I should be able to tackle both of these races, given that the first one is relatively short. Not sure what my chances are of convincing anyone to join me, though!

On a similar note, I’ve been keeping my eye on various other OCR websites like Mudstacle and Nuclear Races, and other general running event websites that aggregate information on different race styles. It has made it clear that the bulk of these events are in the UK; I can certainly find things to do in Germany, but not nearly as many. The USA seems to have a lot more of the “flashier” events like Zombie Runs and Spartan Races (which seem to be much more competitive). I’m considering going over to the UK for a few events next year but haven’t yet identified any that I’d definitely want to do. I’d also like to do as many Tough Mudders as I can next year, and at least one Spartan Race to see what they’re like, but I’d need to be in much better shape for that.

On the complete opposite side to physical fitness, I’ve also been considering mental fitness. Since my recent large shift in career, towards engineering management and away from an individual technical contributor role, I’ve been doing a lot of thinking and self-reflection on where I want to be going and how I’m performing in my role. The main take-away has been that my mind is extremely cluttered at the moment and it is very hard to find my way through the fog. Part of the management training I’ve been doing at work has involved using Headspace – a meditation app designed to help you gain a bit more control over your presence of mind. I’ve made some progress with it, but not much.

Along similar lines, a former co-worker of mine mentioned that he was going to do a Vipassana meditation course. It is 10 days of no talking, frugal eating, and basically just intense meditation and self-reflection. The me from before Headspace would have laughed at the suggestion – I’ve never considered myself a “meditation person” (and I would use those scare-quotes). But at this point in my life and career, having realised that I can’t break through the fog on my own without enough self-reflection to identify and answer some fundamental questions about myself, I am very tempted to go on a Vipassana course. Finding 10 free days would really be the biggest challenge, but perhaps something I can work towards in the coming months. There’s nothing stopping me from starting with a single day, or even a whole weekend of meditation. Come to think of it, I should probably find something now and just book it, as has been my habit with OCR events over the last few months.


Sunday, November 30th, 2014 Health

Changing of the HTPC guard

by Oliver on Sunday, November 30th, 2014.

Way back in 2010 I wrote about my acquisition of a Zotac HD-ID11 ZBox “home theatre PC”. Since then it has served me fairly well, although I suspect I ran it too hot for the last year or two. After several Ubuntu and XBMC upgrades I noticed that it tended to idle pretty hot despite not really doing much most of the time, and I was able to ignore this until it started restarting itself frequently, presumably due to thermal cutoffs. I ordered some replacement heatsink thermal pads, replaced the dried-out old ones on the CPU and GPU, and was able to coax it back to life.

Until this week, unfortunately, when it stopped responding altogether and eventually would just sit there with the fan running at full speed and the power light stuck on red (which usually means it is powered off). So, good night, my sweet prince. It’s an annoyance for several reasons:

  • There was a time in my life (let’s say teenage years and early twenties) when I was obsessed with hardware, and would gladly give up weeks of free time for such mundane and ultimately pointless exercises as getting my ancient Sun Sparcstation 5 to run fully diskless by booting Debian Woody with a 2.2 kernel over BOOTP and using NFS for its root filesystem. I have practically no “free” time these days, and getting hardware to work is usually an exercise in cargo-culting and futility. I prefer writing software to fix problems.
  • Even in 2014, it seems as though picking the right hardware to run under Linux is still difficult. To be clear – I should be able to look at the Amazon page for a given piece of equipment and just see “Linux supported” printed there, then click the “buy it now” button. This is made harder by also sometimes needing ARM support.
  • The ZBox was doing way more than just serving as an HTPC. It was also our IPv6 tunnel gateway, fileserver, TimeMachine backup server, wiki server and Transmission torrent client + webserver, and it also copied backups to Amazon Glacier for remote offline storage.
  • I just dislike buying more stuff and having to fix broken things. There are so many little inconveniences in my daily life – fruitlessly trying to cancel our old internet connection, finally getting around to booking a doctor’s appointment to get a referral for physiotherapy – that it ends up feeling like death by a thousand paper cuts. The sum of all these little things builds up in my mind and paralyses me from actually doing any of them. Adding one more thing just adds to the burden.

But I digress… I’ve had a Raspberry Pi for some time now and planned to use it to experiment with OpenCV and a small webcam, but put that activity on the backburner and never got back to it. While I’m slightly concerned about its relatively low clock speed and on-board RAM (I’ve got the 512MB “B” version), it is still the easiest and cheapest replacement on hand for the ZBox. What was missing was a plastic case, a wireless adaptor (since my cable modem and wireless router are now on the other side of the living room), and a USB hub (since I’ve read that the Pi doesn’t deliver enough power on its USB ports, leading to problems with wifi adaptors). I purchased these items:

At about €24 total, that was much cheaper than replacing the ZBox with another current-generation HTPC, which typically would set me back €200-300. I of course checked beforehand on the Raspberry Pi forums that the wifi adaptor would work, so I was reasonably confident that I had a winning combination. I received the boxes yesterday and got to setting up all the new toys, expecting it (as many websites had mentioned) to all work flawlessly from the start.

Not so! Despite recognising the wifi adaptor and loading the correct kernel module, it just wouldn’t work. I fiddled with wpa_supplicant, as we all do when first trying to figure out what is going wrong, but to no avail. I found this blog post from another enterprising person who had managed to get it working – and all without much additional effort – but still wasn’t able to replicate the conditions for success. Eventually I downloaded images for OpenElec, Raspbian and the latest Raspbmc (I had been running an older Raspbmc image from much earlier in the year) and tried them all, none of which worked.
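
For anyone unfamiliar, that fiddling basically means hand-editing the wpa_supplicant configuration and pointing the interface at it. A minimal sketch of what I mean follows – the SSID and passphrase are placeholders, and exact file paths and interface names can differ between Raspbian releases:

    # /etc/wpa_supplicant/wpa_supplicant.conf -- minimal WPA2-PSK setup
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1

    network={
        # placeholder credentials -- substitute your own network details
        ssid="MyHomeNetwork"
        psk="not-my-real-passphrase"
    }

    # /etc/network/interfaces (wheezy-era Raspbian) -- bring wlan0 up using the config above
    allow-hotplug wlan0
    iface wlan0 inet dhcp
        wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf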

It was only when I read through the above blog post again that I realised something – the wifi adaptor actually lights up when it is working! I have the Pi, drive array, USB hub and a bunch of cables almost entirely hidden away under the TV on a shelf, so I couldn’t easily see any lights down there. On a whim, I decided to plug the adaptor directly into the Pi, as it is in the picture on that blog post. It worked! That was the lightbulb moment – and it brought back all the memories of trying to fix hardware problems earlier in my life.

After that it was pretty easy to identify the problem – the wifi adaptor apparently doesn’t like to work when plugged into the D-Link hub next to another device. The hub has four downstream ports (one of which is a “fast charge” port – presumably it outputs more than 500mA if necessary) – two on one side of the square and two on the neighbouring side. One other side is blank, and the rear has the upstream port and power socket. If the wifi adaptor shares a side with another device – either my external drive array or the wireless keyboard transceiver – it won’t work. If it is on a side by itself, it works fine. The main irritation is that when it doesn’t work it still appears to the system and does everything except light up and send/receive wifi signals. It turns out that this makes it very hard to figure out what is going wrong!

Moral of the story is: when you have hardware problems, even in 2014 it helps to go back to first principles of hardware diagnosis. Isolate the new hardware, try to replicate known working conditions with no additional unknowns, then gradually add your own peculiarities back into the equation until you have identified the problem.

The next issue to tackle will be how to run all of those additional services in the roughly 300MB of RAM this little computer has left over.


Sunday, November 30th, 2014 Tech

The learning gap in deploying Javascript apps

by Oliver on Sunday, August 17th, 2014.

I’ve recently been building a website for my wife using relatively modern tools – lots of client-side Javascript and relatively minimal CSS and HTML. A recent-ish email alerted me to the existence of a Berlin-based Famous framework meetup group, which initially made no sense to me, but after I checked out the actual framework website my interest was piqued. I’ve got next to no website-building experience (at least from the front-end point of view), and what I would only describe as barely competent Javascript skills. Nevertheless, it appealed to me more to learn about this world than to simply build a generic WordPress-based site and customise some stylesheets.

There are some great tools out there these days for Javascript learners, for example Codecademy. I used this myself to brush up on the weird scoping rules, function hoisting, manipulating the DOM etc. That’s enough to generally get you started with Javascript as a language and also in the context of execution in the browser which comes with additional constraints and possibilities. On top of that you have tutorials on using jQuery which are usually quite understandable and make a lot of sense if you have already learned about the DOM and manipulating content on a page. So far so good.
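
To give a concrete flavour of those “weird scoping rules”, here is a tiny illustrative snippet (not code from the site I was building) showing what hoisting and function scoping do to an unsuspecting reader:

    // `var` declarations are function-scoped and hoisted to the top of the
    // enclosing function, so the inner declaration shadows the outer variable
    // for the entire function body, not just after the `if`.
    var x = "outer";
    function demo() {
        console.log(x);       // undefined -- the declaration below is hoisted, its value is not
        if (true) {
            var x = "inner";  // no block scope with `var`
        }
        console.log(x);       // "inner"
    }
    demo();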

The Famous framework introduces a new paradigm for creating responsive, dynamic content on a webpage for both desktop and mobile devices. Fortunately, their website also provides a bunch of tutorials which give you a pretty good overview of the functionality and how to use the framework. Their starter pack, containing further examples for most of the library functions, also helps considerably. It took me a few passes to get through all of it, and I still find some aspects confusing, but ultimately I was able to build a website that actually worked.
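
To give a taste of that paradigm, this is roughly the “hello world” shape from the Famous 0.x tutorials as I remember it – module paths and option names are from memory and may not match later releases:

    // Roughly the "hello world" of Famous 0.x (from memory; treat as a sketch).
    var Engine  = require('famous/core/Engine');
    var Surface = require('famous/core/Surface');

    // Create a root rendering context attached to the page, then add a
    // single styled surface to its render tree.
    var mainContext = Engine.createContext();

    var surface = new Surface({
        size: [200, 200],
        content: 'Hello Famous',
        properties: {
            color: 'white',
            backgroundColor: '#fa5c4f',
            textAlign: 'center'
        }
    });

    mainContext.add(surface);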

Great – now I need to run this stuff on a real webserver. If you have had even tangential contact with real front-end developers you have probably heard terms like “minification”, “Javascript obfuscation” and perhaps “RequireJS” or “CommonJS modules” and so on. I was already somewhat aware of CommonJS modules, having encountered them in front-end code for my company’s website, and could see that they provide a nice solution to the Javascript scoping and code-reuse problems. Fortunately, the scaffolding provided in the Famous starter kit gives you code that already has the CommonJS module pattern built in, so the website code followed this pattern from the start. If this hadn’t been the case, I’m not sure how I would have found good examples for getting started with it.
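
The pattern itself is simple enough to show in a few lines – the file names here are made up, but this is the general shape:

    // greeting.js -- anything not explicitly exported stays private to this
    // file, which is what solves the global-scope pollution problem.
    var prefix = 'Hello';                    // private to this module

    module.exports = {
        greet: function(name) {
            return prefix + ', ' + name;
        }
    };

    // main.js -- pull the module in where it's needed instead of relying on globals
    var greeting = require('./greeting');
    console.log(greeting.greet('world'));    // "Hello, world"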

With the website built, I was only vaguely aware that RequireJS had to be part of the solution, but left that part of the puzzle aside while I pondered how to minify the code. I could have just downloaded the famous.min.js file from their website and copied all the files to my webserver, but I felt like this wasn’t how it was meant to be done (while having no real way of knowing the right way to do it). This lack of knowledge resulted in a frustrated mailing list post but ultimately no better solution. There was a suggestion to use RequireJS, and a lot more Googling on my part, but I still didn’t make much progress. Eventually I decided I’d create a “vendor” directory, cloned the Famous repo into it and… got stuck again. I just didn’t know how to join the dots between the original Javascript code and a tidy single-file representation of the library dependencies and my own app code.
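
With hindsight, “joining the dots” boils down to giving the RequireJS optimiser (r.js) a small build profile that names the entry module and where the library lives, and letting it trace the dependency graph into a single minified file. A sketch of such a profile – the paths and module names are illustrative, not taken from my actual repository:

    // build.js -- RequireJS optimiser profile; run with: node r.js -o build.js
    ({
        baseUrl: "src",                      // where the app's own modules live
        name: "main",                        // entry module that require()s everything else
        paths: {
            famous: "../vendor/famous/src"   // the cloned library, aliased as "famous"
        },
        out: "dist/main.min.js",             // single minified file: app code + dependencies
        optimize: "uglify2"                  // minify the combined output
    })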

After a conversation with a coworker I was armed with links to tools such as Bower (which I played with a bit, before realising I still didn’t know what I was doing), Grunt, Gulp and Yeoman, and also found an interesting Hacker News thread that seemed to contain useful information but was still a little out of my reach. All these tools, and yet I still had no good idea of what I needed to accomplish or how to do it. In the end I decided to just use the Yeoman Famous generator module and generate a Famous site scaffolding to see what such a thing looked like. This contained just a tiny example of Famous code but, more importantly, had all of the minification tooling, Javascript linting, and build and deployment tasks already baked in without my having to build them from scratch. I copied the relevant parts over to my existing repository, fixed all the linter errors and ended up with the minified, deployable Javascript files I’d been hoping for.
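
To give an idea of what “already baked in” means, the generated build setup amounts to a Gruntfile wiring linting and minification together. A heavily stripped-down sketch – the task configuration and paths here are illustrative rather than copied from the real generator:

    // Gruntfile.js -- stripped-down sketch of lint + minify tasks
    module.exports = function(grunt) {
        grunt.initConfig({
            jshint: {
                all: ['src/**/*.js']                               // lint the application code
            },
            uglify: {
                build: {
                    files: { 'dist/app.min.js': ['src/**/*.js'] }  // minify everything into one file
                }
            }
        });

        grunt.loadNpmTasks('grunt-contrib-jshint');
        grunt.loadNpmTasks('grunt-contrib-uglify');
        grunt.registerTask('default', ['jshint', 'uglify']);       // run with: grunt
    };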

I’m happy with the end result, but saddened by the apparent gap in learning resources for this part of the Javascript world. There are plenty of opportunities to learn basic Javascript, figure out how to make a static website slightly more dynamic, and perhaps use some fairly standard convenience libraries like jQuery or Underscore, but not much more than that. Building reusable modules with RequireJS, or minifying, linting and building your code for deployment, feels like an area whose problems were felt acutely by long-time Javascript developers and which now, fortunately, has a variety of good solutions – there has been a lot of rapid development here in the last few years. Sadly, material for getting learners up to speed with these problems and their solutions doesn’t seem to have kept pace with the technologies themselves.

My recommendation: if you’re in a similar position to me, check out Yeoman and the generators available for it. Use it, and build up your knowledge of how the individual components and libraries fit together by inspecting the scaffolding code it provides for you. I can only hope that we start to see more tutorials on Codecademy and similar sites covering Javascript deployment and the problems that the tools mentioned above solve for modern websites.


Sunday, August 17th, 2014 Tech