Adding Meaning to Code

by Oliver on Wednesday, August 24th, 2016.

This is the product of only about 5 minutes’ worth of thought, so take it with a grain of salt. When it comes to writing maintainable, understandable code, there are as many opinions out there as there are developers. Personally I favour simple, understandable, even “boring” method bodies that don’t try to be flashy or use fancy language features. Method and class names should clearly signal intent and what the thing is or does. And code should (IMHO) include good comments.

This last part is probably the area where I’ve seen the most dissent. For some reason people hate writing comments, and think that the code should be “self-documenting”. I’ve rarely, perhaps never, seen this in practice. The intent may have been for the code to be self-documenting, but it never quite turns out that way.

Recently (and this is related, I promise), I watched a number of talks about the Zalando engineering principles (one of them in person) and read a lot more about them. They base their engineering organisation around the three pillars of How, What and Why. I think the same can be said for how you should write code and document it:

class Widget
  def initialize
    @expires_at = Time.now + 86400
  end

  # Customer X was asking for the ability to expire     #  <--- Why
  # widgets, but some may not have an expiry date or
  # do not expire at all. This method handles these
  # edge cases safely.
  def is_expired?                                       #  <--- What
    !!@expires_at && Time.now > @expires_at             #  <--- How
  end
end

This very simple example shows what I mean (in Ruby, since it's flexible and lends itself well to artificial examples like this). The method body should convey the How of the equation. The method name should convey the intent of the method - What does this do? Ultimately, the How and What can probably never fully explain the history and reasoning for their own existence. Therefore I find it helpful to accompany them with the Why in a method comment (the comment could equally sit within the method, or be distributed across it - the exact placement is not really important).

You could argue that the history of and reasoning for having the method can be determined from version control history. But this turns reading code from what should be a straightforward exercise into some bizarre trip through the Wheel of Time novels: cross-referencing back to earlier volumes in order to try to find some obscure fact that may or may not actually exist, so that you can figure out the reference you are currently reading. Why make the future maintainer of your code go through that? Once again, it relies entirely on the original committer having left a comprehensive and thoughtful message that is also easy to find.

The other counter-argument is that no comments are better than out-of-date or incorrect comments. Again, I personally haven't run into this (or at least, not nearly as frequently as comments missing completely). Usually it will be pretty obvious where a comment does not match up with the code, and in this (hopefully outlier) case you can then go version-control diving to find out when they diverged. Assessing the contents of the code itself is usually far easier than searching for the original comment on the first commit of that method, so this should be the easier exercise.

Writing understandable code and comments (and let's face it, most of the code written in the world is probably doing menial things like checking if statements, manipulating strings or adding/removing items from arrays) is less fun than hacking out stuff that just works when you are feeling inspired, so it's no wonder we've invented an assortment of excuses to avoid doing it. So if you are one of the few actually doing this, thank you.


Thoughts on creating an engineering Tech Radar

by Oliver on Friday, August 12th, 2016.

Perhaps you are familiar with the ThoughtWorks Tech Radar – I really like it as a summary of global technology trends and of what I should be familiarising myself with, even the stuff on the “hold” list (such as the Scaled Agile Framework – sometimes anti-patterns are equally useful to understand and appreciate). There’s a degree of satisfaction in seeing your favourite technology rise through the ranks to become something recommended to everyone, but in my current (new) role the radar also serves a different purpose.

Since I started a new job just over a month ago, I’ve come into an organisation with a far simpler tech stack and, in some regards, a less well-defined technology strategy. I like to put measures in place that help engineers be as autonomous in their decision-making as possible, and a Tech Radar can help frame which technologies they can or should consider when going about their jobs. This ranges from techniques they should strongly consider adopting (which can be much more of a tactical decision) to databases they could select from when building a new service that doesn’t fit the existing databases already in use. The Tech Radar forms something like a “garden fence” – you don’t necessarily need to implement everything within it, but it shows you where the limits are in case you need something new.

So basically, I wanted to use the Tech Radar as a way to avoid needing to continually make top-down decisions when stepping into unknown territory, and help the organisation and decision-making scale as we add more engineers. The process I followed to generate it was very open and democratic – each development team was gathered together for an hour, and I drew the radar format on the whiteboard. Then engineers contributed post-it notes with names of technologies and placed them on the board. After about 10 minutes of this, I read through all of the notes and got everyone to describe for the room the “what” and the “why” of their note. Duplicates were removed and misplaced notes moved to their correct place.

Afterwards, I transcribed everything into a Google Doc and asked everyone to again add the “what” and “why” of each contributed note to the document. What resulted was an 11-page gargantuan collection of technologies and techniques that seemed to cover everything that everyone could think of in the moment, and didn’t quite match up with my expectations. I’ll describe my observations about the process and outcomes.

Strategy vs Tactics, and Quadrants

The purpose of the overall radar is to be somewhat strategic. ThoughtWorks prepares their radar twice a year, so it is expected to cover at least the next 6 months. Smaller companies might only prepare it once a year. However, amongst the different quadrants there is a reasonable amount of room for tactics as well. In particular I would say that the Techniques and Tools quadrants are much more tactical, whereas the Platforms and Languages & Frameworks quadrants are much more strategic.

For example, let’s say you have Pair Programming in the Techniques quadrant. Of course, you might strategically adopt this across the whole company, but a single team (in fact, just two developers) can try instituting it this very day, with no impact on other teams and probably little impact even on others in the same team. It comes with virtually no cost to try out, and you start gaining benefit from it immediately, even if nobody else is using it. Similarly, on the Tools side, you might decide to add a code test coverage reporting tool to your build pipeline. It’s purely informational, you benefit from it immediately and it doesn’t require anyone else’s help or participation, nor does it impact anyone else. For that reason it’s arguable whether such things are urgent enough to place on the radar – developers can largely decide for themselves to adopt these techniques or tools.

On the other hand, the adoption of a new Language or Framework, or building on top of a new Platform (let’s say you want to start deploying your containers to Kubernetes) will come with a large time investment both immediately and ongoing, as well as needing wide-scale adoption across teams to benefit from that investment. Of course there is room for disagreement here – e.g. is a service like New Relic a tool or a platform? Adoption of a new monitoring tool definitely comes with a large cost (you don’t want every team using a different SaaS monitoring suite). But the Tech Radar is just a tool itself and shouldn’t be considered the final definition of anything – just a guide for making better decisions.

Strategic Impact

As touched on above, adopting a Platform or a new Language/Framework has significant costs. When putting together a radar like this with input from everyone – who may have quite different levels of experience – you might find that not all of the strategic impacts have been considered when an item was added to the list. An incomplete list of things I believe need to be examined when selecting a Language or Framework:

  • What are the hiring opportunities around this technology? Is it easier or harder to hire people with this skillset?
  • Is it a growing community, and are we likely to find engineers at all maturity levels (junior/intermediate/senior) with experience in the technology?
  • For people already in the company, is it easy and desirable to learn? How long does it take to become proficient?
  • Similarly, how many people at the company already know the technology well enough to be considered proficient for daily work?
  • Does the technology actually solve a problem we have? Are there any things our current technologies do very well that would suffer from the new technology’s introduction?
  • What other parts of our tech stack would need to change as a result of adopting it? Testing? Build tooling? Deployments? Libraries and Dependencies?
  • Do we understand not only the language but also the runtime?
  • Would it help us deliver more value to the customer, or deliver value faster?
  • By taking on the adoption costs, would we be sacrificing time spent on maximising some other current opportunity?
  • Is there a strong ecosystem of libraries and code around the technology? Is there a reliable, well-supported, stable equivalent to all of the libraries we use with our current technologies? If not, is it easy and fast to write our own replacements?
  • How well does adoption of the technology align with our current product and technology roadmaps?

By no means is this list exhaustive, but I think all points need some thought, rather than just “is it nicer to program in than my current language”.

Filtering the list and assembling the radar

As mentioned, I ended up with a fairly huge list of items which now needs to be filtered. This is a task for a CTO or VP of Engineering depending on your organisation size. Ultimately people accountable for the technology strategy need to set the bounds of the radar. For my list, I will attempt to pre-filter the items that have little strategic importance – like tools or techniques (unless we determine it’s something that could/should have widespread adoption and benefit).

Ultimately we’ll have to see what the output looks like and whether engineers feel it answers questions for them – that will determine whether we try to build a follow-up radar in the next quarter or year. If I end up running the process again, I suspect I’ll make use of a smaller group of people to add inputs – who have already collected and moderated inputs from their respective teams. The other benefit of the moderation/filtering process is that the document that is later produced is a way of expressing to engineers (perhaps with less experience) the inherent strategic importance of the original suggestions. There are no wrong suggestions, but we should aim to help people learn and think more about the application of strategy and business importance in their day to day work.


Easing back into fitness

by Oliver on Wednesday, June 1st, 2016.

It has been about 7 months since the birth of my daughter and I think that’s about long enough to let myself sit idle due to child rearing. Certainly as the father, I don’t have that many excuses as to why I can’t become physically active again, and I actually miss running and taking part in the various crazy obstacle races. So, I’ve resolved to get myself back into shape (without being too obsessive about it at least).

In previous years I certainly at times pushed myself too hard and ended up with some minor injuries – I guess that’s what you get in your mid-thirties. I pushed myself to get my running distances up too quickly, and ended up straining some leg muscles and needed physiotherapy for a couple of months. So far, I’ve only run about 5-6km once a week for the last few weeks and that is about all I can manage. I can feel the fitness level slowly returning (although it is also hard to tell due to the heat I’ve been running in) but am resisting running for any longer distances yet.

Since my biggest focus is getting into a state of fitness where I can again tackle an obstacle race that is not “insanely difficult” (to be defined further down), I know that one of my biggest weaknesses (literally) is upper body and core strength. To that end, and to assist with my running recovery, I’ve started a regime of stretches and small upper/core exercises which I repeat twice daily – once in the morning after the kids have woken me up, and again at night before bed. Here’s the general routine:

  • Lie on my back and stretch the entire body out
  • Pull each knee up to my chest individually and stretch the leg
  • Stretch the “glutes”
  • Hamstring stretches
  • Abductor / groin stretch while sitting
  • Front plank for as long as I can hold it
  • Side plank for as long as I can hold it, on each side
  • Push ups while kneeling (so I can do more repetitions with smaller load)
  • Prisoner squats until my legs start burning
  • Lie on my front and stretch the front thigh area of each leg

Sorry for the lack of accurate terminology! Some of these stretches I learned while I had a personal trainer leading up to my Tough Mudder race in 2014, some I got from my yoga teacher wife and some I just make up myself. Generally each stretch I hold for 30 seconds. I can say that my legs feel better after stretching them twice a day, and the other light exercises are having a very small but noticeable effect. It’s enough to keep those muscles a little bit active but not so much that I dread it and skip exercising them at all.

The intention now is to keep this up, continue raising the limits slowly until I feel like I can take on some of the smaller and less difficult obstacle races (and perhaps shorter regular running races). I figured out last year that a marathon is just not my cup of tea, after attempting to run 30km in one training session and finding it incredibly boring. I can manage a half marathon but I think that’s about the limit.

What do I define as “insanely difficult”? Tough Mudder definitely had at least two aspects which for me are pretty undesirable. I don’t particularly like being electrocuted, and the 12ft walls were almost impossible for me without a lot of assistance – this again comes back to the lack of upper and core strength which I hope to work on. Getting Tough – The Race was probably the hardest event I’ve undertaken so far due to the distance (24km) and extreme cold (being completely submerged for a long period of time in icy water) and sheer number of obstacles. I don’t relish the thought of that icy water again any time soon. No Guts No Glory, despite also being in very icy conditions (well, actual snow for most of it) was very enjoyable although I unfortunately did some injury to my finger which still hasn’t recovered. Bremen Lake Run would again have been more fun if it weren’t for the big walls, and it also had some cold water thrown in for fun.

So I guess my main complaint would be with the walls, which I know I need to work on a lot. I don’t know if the electric shock therapy obstacles will always be in Tough Mudder but if the walls were less of a challenge for me I guess I can work on my psychological toughening to get through being electrocuted. Meanwhile, there are actually a lot of very enjoyable (like, actually enjoyable for normal people) obstacle races coming up in Germany over the next few months which don’t have this level of insane difficulty that I’d like to attempt. Perhaps this year or next I’ll even try one or two in the UK as they tend to have more variety.


Catching Up

by Oliver on Saturday, May 21st, 2016.

I haven’t posted anything for quite some time (which I feel a little bad about), so this is something of a randomly-themed catch-up post. According to my LinkedIn profile I’ve been doing this engineering management thing for about two years, which at least provides some explanation for a relative lack of technical-oriented blog posts. Of course in that time I have certainly not revoked my Github access, deleted all compilers/runtimes/IDEs/etc and entirely halted technical work, but the nature of the work of course has changed. In short, I don’t find myself doing so much “interesting” technical work that leads to blog-worthy posts and discoveries.

So what does the work look like at the moment? I’ll spare you the deep philosophical analysis – there are many, many (MANY) posts and indeed books on making the transition from a technical contributor to a team manager or lead of some sort. Right back at the beginning I struggled with the temptation to keep coding alongside my management tasks – it is difficult to do both adequately at the same time. More recently (and perhaps in my moments of less self-control) I do allow myself to make some technical contributions. These usually look like the following:

  • Cleaning up some long-standing technical debt that is getting in the way of the rest of the team being productive, but is not necessarily vital to their learning/growth or knowledge of our technology landscape.
  • Data analysis – usually ElasticSearch, Pig/Hive/Redshift/MapReduce jobs to find the answer to a non-critical but still important question.
  • Occasionally something far down the backlog that is a personal irritation for me, but is not in the critical path.
  • Something that enables the rest of the team in some way, or removes a piece of our technology stack that was previously only known by myself (i.e. removes the need for knowledge transfer).
  • Troubleshooting infrastructure (usually also coupled with data analysis).

I’d like to say I’ve been faithful to that list, but I haven’t always. The most recent case was probably around a year ago, I guess, when I decided I’d implement a minimum-speed data transfer monitor for our HLS server. This ended up taking several weeks and was a far gnarlier problem than I realised. The resulting code was also not of the highest standard – when you are not coding day-in and day-out, I find that overall code quality and the ability to perceive abstractions and the right model for a solution are impaired.

Otherwise, the tasks that I perhaps should be filling my day with (and this is not an exhaustive list, nor ordered, just whatever comes to mind right now) looks more like this:

  • Assessing the capacity and skills make up of the team on a regular basis, against our backlog and potential features we’d like to deliver. Do we have the right skills and are we managing the “bus factor”? If the answer is “no” (and it almost always is), I should be hiring.
  • Is the team able to deliver? Are there any blockers?
  • Is the team happy? Why or why not? What can I do to improve things for them?
  • How are the team-members going on their career paths? How can I facilitate their personal growth and help them become better engineers?
  • What is the overall health of our services and client applications? Did we have any downtime last night? What do I need to jump on immediately to get these problems resolved? I would usually consider this the first item to check in my daily routine – if something has been down we need to get it back up and fix the problems as a matter of urgency.
  • What is the current state of our technical debt; are there any tactical or strategic processes we need to start in order to address it?
  • How are we matching up in terms of fitting in with technology standards in the rest of the organisation? Are we falling behind or leading the way in some areas? Are there any new approaches that have worked well for us that could be socialised amongst the rest of the organisation?
  • Are there any organisational pain-points that I can identify and attempt to gather consensus from my peer engineering managers? What could we change on a wider scale that would help the overall organisation deliver user value faster, or with higher quality?
  • Could we improve our testing processes?
  • How are we measuring up against our KPIs? Have we delivered something new recently that needs to be assessed for impact, and if so has it been a success or not matched up to expectations? Do we need to rethink our approach or iterate on that feature?
  • Somewhat related: have there been any OS or platform updates on any of our client platforms that might have introduced bugs that we need to address? Ideally we would be ahead of the curve and anticipate problems before they happen, but if you have a product that targets web browsers or Android phones, there are simply too many to adequately test ahead of general public releases before potential problems are discovered by the public.
  • Is there any free-range experimentation the team could be doing? Let’s have a one-day offsite to explore something new! (I usually schedule at least one offsite a month for this kind of thing, with a very loose agenda.)
  • How am I progressing on my career path? What aspects of engineering management am I perhaps not focussing enough on? What is the next thing I need to be learning?

I could probably go on and on about this for a whole day. After almost two years (and at several points before that) it is natural to question whether the engineering management track is the one I should be on. Much earlier (perhaps 6 months in) I was still quite unsure – if you are still contributing a lot of code as part of your day to day work, the answer to the question is that much harder to arrive at since you have blurred the lines of what your job description should look like. It is much easier to escape the reality of settling permanently on one side or the other.

Recently I had some conversations with people which involved talking in depth about either software development or engineering management. On the one hand, exploring the software development topics with someone, I definitely got the feeling that there was a lot I am getting progressively more and more rusty on. To get up to speed again I feel would take some reasonable effort on my part. In fact, one of the small technical debt “itches” I scratched at the end of last year was implementing a small application to consume from AWS Kinesis, do some minor massaging of the events and then inject them into ElasticSearch. I initially thought I’d write it in Scala, but the cognitive burden of learning the language at that point was too daunting. I ended up writing it in Java 8 (which I have to say is actually quite nice to use, compared to much older versions of Java) but this is not a struggle a competent engineer coding on a daily basis would typically have.

On the other hand, the conversations around engineering management felt like they could stretch on for ever. I could literally spend an entire day talking about some particular aspect of growing an organisation, or a team, or on technical decision-making (and frequently do). Some of this has been learned through trial and error, some by blind luck and I would say a decent amount through reading good books and the wonderful leadership/management training course at SoundCloud (otherwise known as LUMAS). I and many other first-time managers took this course (in several phases) starting not long after I started managing the team, and I think I gained a lot from it. Unfortunately it’s not something anyone can simply take, but at least I’d like to recommend some of the books we were given during the course – I felt I got a lot out of them as well.

  • Conscious Business by Fred Kofman. It might start out a bit hand-wavy, and feel like it is the zen master approach to leadership but if you persist you’ll find a very honest, ethical approach to business and leadership. I found it very compelling.
  • Five Dysfunctions of a Team by Patrick Lencioni. A great book, and very easy read with many compelling stories as examples – for building healthy teams. Applying the lessons is a whole different story, and I would not say it is easy by any measure. But avoiding it is also a path to failure.
  • Leadership Presence by Kathy Lubar and Belle Linda Halpern. Being honest and genuine, knowing yourself, establishing a genuine connection and empathy to others around you and many other gems within this book are essential to being a competent leader. I think this is a book I’ll keep coming back to for quite some time.

In addition I read a couple of books on Toyota and their lean approach to business (they are continually referenced in software development best practices). I have to admit that drawing a solid connection between all of the Toyota Production System and software development can be a challenge, and I hope to learn more about it in future and figure out which parts are actually relevant and which are not. There were a few other books around negotiation and other aspects of leadership which coloured my thinking but were not significant enough to list. That said, I still have something like 63 books on my wish list waiting to be ordered and read!

In order to remain “relevant” and in touch with technical topics I don’t want to stop programming, of course, but this will have to remain in the personal domain. To that end I’m currently taking a game programming course in Unity (so, C#) and another around 3D modelling using Blender. Eventually I’ll get back to the machine learning courses I was taking a long time ago but still need to re-take some beginner linear algebra in order to understand the ML concepts properly. Then there are a tonne of other personal projects in various languages and to various ends. I’ll just keep fooling myself that I’ll have free time for all of these things 🙂


Pre-warming Memcache for fun and profit

by Oliver on Wednesday, August 12th, 2015.

One of the services my team runs in AWS makes good use of Memcached (via the ElastiCache product). I say “good” use as we manage to achieve a hit rate of something like 98% most of the time, although I now realise that this comes at a cost – when the cache is removed, it takes a significant toll on the application. Unlike other applications that traditionally cache the results of MySQL queries, this particular application stores GOB-encoded binary metadata, but what the application does is outside the scope of this post. When the cached entries aren’t there, the application has to do a reasonable amount of work to regenerate them and store them back.

Recently I observed that when one of our ElastiCache nodes is restarted (which can happen for maintenance, or due to system failure) we see a fairly undesirable hit to the application. We could minimise this impact by having more instances in the cluster with less capacity each – for the same overall cluster capacity. Thus, going from say 3 nodes, where we lose 33% of our cache capacity on a restart, to 8 nodes, where we would lose only 12.5%, is a far better situation. I also realised we could upgrade to the latest generation of cache nodes, which sweetens the deal.

The problem that arises is: how can I cycle out the ElastiCache cluster with minimal impact to the application and user experience? To cut a long story short: there’s no way to change individual nodes in a cluster to a different type, and if you maintain your configuration in CloudFormation and change the instance type there, you’ll destroy the entire cluster and recreate it again – losing your cache in the process (in fact you’ll be without any cache for a short period of time). So I decided to create a new CloudFormation stack altogether, pre-warm the cache and bring it into operation gently.

How can you pre-warm the cache? Ideally, you could dump the entire contents and simply insert them into the new cluster (much like MySQL dumps or backups), but with Memcached this is impossible. There is the stats cachedump command in Memcached, which is capable of dumping out the first 2MB of keys of a given slab. If you’re not aware of how Memcached stores its data: it breaks its memory allocation into various “slabs” of increasing sizes and stores each value in the closest-sized slab that will fit it (always rounding up). Thus, internally the data is segmented. You can list stats for all of the current slabs with stats slabs, then perform a dump of the keys with stats cachedump {slab} {limit}.
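
To make that concrete, here’s a minimal Go sketch of the enumerate-and-dump dance over the raw text protocol – it assumes a memcached reachable on localhost:11211, caps each slab at 100 keys, and skips almost all error handling:

package main

import (
    "bufio"
    "fmt"
    "net"
    "strings"
)

// command sends one stats command and collects the response lines
// up to the terminating "END".
func command(conn net.Conn, r *bufio.Reader, cmd string) []string {
    fmt.Fprintf(conn, "%s\r\n", cmd)
    var lines []string
    for {
        line, err := r.ReadString('\n')
        if err != nil {
            return lines
        }
        line = strings.TrimRight(line, "\r\n")
        if line == "END" || strings.HasPrefix(line, "ERROR") {
            return lines
        }
        lines = append(lines, line)
    }
}

func main() {
    conn, err := net.Dial("tcp", "localhost:11211")
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    r := bufio.NewReader(conn)

    // "stats items" lines look like "STAT items:3:number 1234";
    // the middle part of the colon-separated field is the slab ID.
    slabs := map[string]bool{}
    for _, line := range command(conn, r, "stats items") {
        parts := strings.Split(strings.Fields(line)[1], ":")
        if len(parts) == 3 {
            slabs[parts[1]] = true
        }
    }

    // Each cachedump line looks like: ITEM mykey [23 b; 1439310700 s]
    for slab := range slabs {
        for _, item := range command(conn, r, fmt.Sprintf("stats cachedump %s 100", slab)) {
            fmt.Println(strings.Fields(item)[1])
        }
    }
}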

There are a couple of problems with this. One is the aforementioned 2MB limit on the returned data, which in my case did in fact limit how useful this approach was. Some slabs had several hundred thousand objects and I was not able to retrieve anywhere near the whole keyspace. Secondly, the developer community around Memcached is opposed to the continued existence of this command, and it may be removed in future (perhaps it already has been, I’m not sure, but at least it still exists in 1.4.14 which I’m using) – I’m sure they have good reasons for that. I was also concerned that using the command would lock internal data structures and cause operational issues for the application accessing the server.

You can see the not-so-reassuring function comment here describing the locking characteristics of this operation. Sure enough, the critical section is properly locked with pthread_mutex_lock on the LRU lock for the slab, which I assumed meant that only cache evictions would be affected by taking this lock. Based on some tests (and common sense) I suspect that it is an LRU lock in name only, and more generally locks the data structure in the case of writes (although it does record cache access stats somewhere as well, perhaps in another structure). In any case as mentioned before, I was able to retrieve only a small amount of the total keyspace from my cluster, so as well as being a dangerous exercise, using the stats cachedump command was not useful for my original purpose.

Later in the day I decided instead to retrieve the Elastic Load Balancer logs from the last few days, run awk over them to extract the request paths (for those requests that would trigger a cache fill) and simply make the same requests against the new cluster. This is more effort up-front since the ELB logs can be quite large, and unfortunately are not compressed, but fortunately awk is very fast. The second part of this approach (or any, for that matter) is using Vegeta to “attack” your new cluster of machines, replaying the previous requests that you’ve pulled from the ELB logs.
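
In case it’s useful, here’s roughly what that extraction step looks like in Go rather than awk – reading ELB logs on stdin and emitting Vegeta’s target format (one “METHOD URL” per line). The new cluster’s hostname is a made-up placeholder, and I’m relying on the classic ELB log format quoting the request like "GET http://host:80/path HTTP/1.1":

package main

import (
    "bufio"
    "fmt"
    "net/url"
    "os"
    "strings"
)

// newCluster is a hypothetical endpoint - substitute your own ELB.
const newCluster = "http://new-cluster.example.com"

func main() {
    scanner := bufio.NewScanner(os.Stdin)
    for scanner.Scan() {
        line := scanner.Text()
        // The request is the first double-quoted field in the log line.
        start := strings.Index(line, "\"")
        if start < 0 {
            continue
        }
        end := strings.Index(line[start+1:], "\"")
        if end < 0 {
            continue
        }
        request := line[start+1 : start+1+end]
        // e.g. ["GET", "http://host:80/foo/123/bar", "HTTP/1.1"]
        parts := strings.Fields(request)
        if len(parts) != 3 || parts[0] != "GET" {
            continue
        }
        u, err := url.Parse(parts[1])
        if err != nil {
            continue
        }
        fmt.Printf("GET %s%s\n", newCluster, u.RequestURI())
    }
}

The resulting file can then be handed to something like vegeta attack -targets=targets.txt -rate=50, with the rate tuned so that you warm the cache without hammering the new backend.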

A more adventurous approach might be to use Elastic MapReduce to parse the logs, pull out the request paths and using the streaming API to call an external script that will make the HTTP request to the ELB. That way you could quite nicely farm out work of making a large number of parallel requests from a much larger time period in order to more thoroughly pre-warm that cache with historical requests. Or poll your log store frequently and replay ELB requests to the new cluster with just a short delay after they happen on your primary cluster. If you attempt either of these and enjoy some success, let me know!


Running and energy gels

by Oliver on Monday, August 10th, 2015.

I’ve been taking part in longer-distance events of various kinds (running, obstacle racing, cycling and most distantly kayaking) for quite some time now and have had various nutritional techniques over the years and for the different events. Something that has been more of a staple for me personally in cycling and running is energy gels, and if you’ve used them yourself you’ll know that they can be a hit-and-miss affair; often the choice is very personal. What works for one person may not work for another.

That being said, at the last couple of events in which I participated, I think I did a particularly poor job at choosing my gels. At the point of consumption it was a matter of purely getting the calories into my body, but it wasn’t particularly pleasant. I’m not a picky person when it comes to food and drink, but I am obsessed with details and quite thorough so I decided to try as many different gels as I could get my hands on and record my thoughts on them. Again, choice can be a personal thing but perhaps it will be interesting to others, and also handy for my own memory’s sake.

A couple of notes before the first review: I’m consuming these after at least ~45 minutes of reasonable exercise so they are somewhat realistic impressions as I’d expect “in the field”. Of course, if you are low on energy and haven’t had anything in a while, generally you’ll be ravenous and slurp down anything, but assuming you time everything correctly you’ll want something that is reasonably palatable.

Clif Shot Energy Gel – Razz flavour

96 calories from a compact, easy to carry and open 34 gram packet. It also has a “litter leash” on the side of the packet which I haven’t worked out how to use – I just roll it back up and put it in my pocket. I think I ate one of these at home late at night a week or so ago, in one of my more desperate fits of midnight-snacking, and remember the taste wasn’t that great, but after running for 10km it was actually not bad. Not sure if it was the fact that it was around 0°C outside, but it seemed very thick – probably too thick for my liking.

Ideally I would not have to follow this up immediately with water to get it down, but it was just so thick I felt compelled to drink afterwards. The flavour was ok – not great initially but by the time I’d finished it, I wasn’t too unimpressed. Would use again if there was nothing better. Raspberry, in that kind of synthetic-tasting raspberry you get in lollies, mixed with a bit of saltiness. But I guess that’s quite predictable – most gels are going to taste synthetic, sickly sweet and salty all at the same time.

Somewhat interesting note: the packaging has a small paper sticker with the word “shot” on it, that covers “90% organic” on the plastic packaging. I wonder if the formula is different in Germany, or if it is simply bullshit that wasn’t allowed to be advertised here (unlike perhaps in other markets).

XenoFit Carbohydrate – Redberry Gel

68 calories from one 25g “stick” (in reality a tear-to-open soft packet like all of the others). Noticeably smaller than the others, and when I opened it I didn’t taste anything initially – although I suspect this was due to my taste buds being slightly frozen (I was running in about 3°C). The flavour, when it came through, was pretty similar to the Clif Shot but perhaps not quite as sweet – which is good.

Quite thick consistency again, similar to the Clif Shot. Again, this could be due to the temperature. Overall ok, would use again but it didn’t blow me away. The relatively small amount of energy in one packet might be an issue for some – I would probably want more in one packet to avoid having to continually open them to stay energised.

Nutrixxion Energy Gel “Force” flavour/variety

40g easy-open sachet with 80mg caffeine, 500mg taurine, 118 calories.
This one really tasted quite foul. It doesn’t list the flavour on the packet and I couldn’t identify it from the taste – generically sweet, with a goopy texture. I guess it is not trying to be user friendly, but it has the advantage of a shot of caffeine to redeem itself with. Not sure that is enough for me. Wouldn’t use this one again.

PowerBar L-Carnitine liquid drinking ampoule (before) and Amino Mega Liquid drinking ampoule (after)

25mL each.
Technically not energy gels, these were in the same area of the shop and caught my eye. They are intended as some kind of vaguely snake-oily type of muscle enhancer through combining their consumption with exercise. The L-Carnitine ampoule was pleasant enough tasting, the liquid being quite liquidy and having a nice flavour (although somewhat unidentifiable) but outside of the main ingredient it has practically no energy content at all. The Amino Mega Liquid does contain 52 calories and a host of strange chemicals, but also a rather horrid flavour. For those reasons I couldn’t consider these worthwhile for energy during exercise, and probably wouldn’t use them for their supposed muscle building or regeneration effects either.

MultiPower MultiCarbo Gel, Lemon flavour

40g easy-open sachet with 104 calories.
Very similar to many of the others I’ve previously written about, thickish consistency and sickly-sweet lemon flavour that wasn’t terribly unpleasant or pleasant. Again, wouldn’t avoid it if there were no other options, but also wouldn’t pick it necessarily over other available gels.

Sponser Sport Food Liquid Energy BCAA, strawberry-banana flavour

70g “toothpaste tube” style container with 173 calories.
Contains 500mg of “BCAA” (branched-chain amino acids), and also calls out (on the front of the tube) the presence of additional sodium, potassium and taurine. Whether those elements of the liquid energy actually helped me or not, I found two things about this one detrimental – the container and the flavour. The very artificial-tasting strawberry-banana combination is not exactly revolting, but it’s not great. Why would you combine these flavours?!? Secondly, the tube format with the cap was far more difficult to consume from than other gels, and this was just when I was running. They recommend that you consume half the tube every 20 minutes, combined with 200mL of water on each “dose”, which doubles the difficulty. I couldn’t possibly recommend this, and would avoid it unless I literally had no other options.

Xenofit Carbohydrate for sport, citrus-mix gel

25g sachet with 68 calories.
This one surprised me twice. Firstly the flavour, citrus (although basically lemon) was actually quite pleasant although the texture is more pasty/floury than most gels, which was not expected. When I first started swallowing the gel I was thinking “I’m not sure I like this texture”, but after just a few seconds this changed to “actually I quite like this”. It is definitely a bit on the thicker side and takes a little effort getting it out of the small sachet, but not thick like some of the other gels are that just makes them unpalatable. Incidentally, Xenofit is also the brand of sports drink powder I have. The main downside to these gels is their smaller size, and so you don’t get as much out of each sachet as some of the others. Would definitely buy again.

Dextro Energy Liquid Gel, Apple flavour

60mL packet with screw-off lid, 114 calories.
I was saving this one for last but couldn’t help myself and took it on my run today. I’m fairly certain this is my favourite of the whole lot of gels (or possibly one of the other flavours of the same product). The consistency is much more fluid than other gels, in fact it’s almost like drinking a very sweet shot of juice or cordial. They are quite sweet but not sickly sweet, and the flavour, while somewhat artificial (like most other gels) is not too bad. This combination makes these my favourite energy shot. The only thing against them is the fairly impractical screw-off top of the packet. It’s ok if you are running, but quite fiddly if you are on a bike.

BSc BodyScience Athlete Series Energy Gel, Super Berry flavour

35g easy-tear sachet, 97 calories.
Unfortunately I left my big bag of gels at home, forgetting to take them with me on vacation, but this gave the opportunity to pick up something new in a different country. I haven’t seen this particular brand before but was interested to try it. The consistency was quite thick, even compared to others I’ve already tried, and the temperature is mid-20s Celsius at the moment so in colder temperatures it might be difficult to consume! That being said, it was not quite as sickly sweet as other gels that are on the thicker side, although the “super berry” flavour is not as great as it would have you believe. I didn’t have any water immediately available to me, so it was good that there wasn’t a terrible aftertaste. Quite convenient size and easily tearable on a bike without slowing down much – would probably buy again over most of the other gels I’ve tried.

PowerGel Fruit Dual Source Carb Mix Gel, Mango Passionfruit flavour

41g easy-tear sachet, 108 calories.
I’ll admit up front that PowerGels are one of my least favourite tasting gels, and I’ve used them in several races before when nothing else was available. That being said, the “fruit” variety wasn’t nearly as bad as the non-fruit flavours I’ve had before. Almost pleasant, if I dare go so far. Still, the easy-tear sachet was anything but – unless I made a terrible mistake, the consistency of the gel made it practically impossible to get out of the sachet. If I’d been on a bike it would have been a disaster, but even running it made things tricky. Luckily I had to stop at some traffic lights and wrestled with the packet for a minute. Would possibly buy again if I could be sure the sachet would open properly.

MultiPower MultiCarbo Gel, Orange flavour

40g easy-open sachet with 104 calories.
Proving once again that enjoyment heavily depends on how tired and in need of a psychological energy boost you are, I didn’t mind consuming this gel – despite having had one before in lemon flavour and not really enjoying it. Given that the lemon flavoured gel of this kind was consumed quite a while ago I think it must have been in one of the colder months, and I can definitely confirm that on a slightly warmer day the consistency was not as thick. The flavour was reasonable, not quite as sickly sweet as the lemon (somehow, despite the orange still being quite artificial tasting).

I’d be tempted to have this one again – but only in orange flavour. It’s definitely one of the more handy sizes and packagings to carry around for use on a run or ride without much effort in consuming.

Nutrixxion Energy Gel, Cola-Lemon flavour

40g easy-open sachet with 40mg caffeine, 118 calories.
Contrary to the “Force” flavour of the same brand and type, I have a soft spot for Cola-Lemon flavour in general and so I kinda enjoyed this gel. The consistency was still quite goopy, even on a hot evening after sitting in my pocket. I’d already run out of water so it was not the best experience, but still not too bad. On flavour alone I’d probably use this again and consider it on par with everything but my most favourite gel (currently the Dextro Energy Liquid).

PowerGel “Original” Dual Source Carb Mix Gel, Vanilla flavour

41g easy-tear sachet, 107 calories.
I had low expectations for this one as I tend to prefer the fruity or cola flavours of gels, but actually was surprised that I preferred this one over the other flavours of PowerGels that I’ve tried in the past and generally disliked. I could actually stomach it, and the consistency was not bad. However, unless I’m making a terrible mistake every single time I open one of these sachets, there is something about the consistency of the gel and the opening at the top of the packet that makes it very difficult to get anything out of it easily. Smaller sachets from other brands don’t seem to be as problematic.

In any case, if I remembered to be more careful opening it, I would probably try this one again.

Xenofit Carbohydrate, Maracuja (Passionfruit) flavour

60mL sachet, 103 calories.
Unlike the smaller gels in the same brand, this one was quite a bit bigger. The reason became evident when I opened it (and was also hinted at by the calorie content) – it’s much less dense and “gel-like”. Almost (but not quite) liquid. At the other end of the scale is the Dextro Energy Liquid which is very liquid, but this one was about half-way in-between. Perhaps as a result, the gel itself was not as sweet (and I was anticipating some fairly awful synthetic-tasting passionfruit flavour) and not overpowering. I wouldn’t say it particularly tasted like actual passionfruit, but that wasn’t a problem.

Might try again, although like many of the other rip-open sachets it proved tricky to actually get it open without squirting the contents all over my hand.

Sponser Liquid Energy Long, Salty flavour

40g sachet, 94 calories.
Knowing that the flavour was described as “salty”, I was dreading this one somewhat. This gel’s advantage is that it has 180mg of sodium to counteract the sodium lost through perspiration during exercise – hence most exercise drinks also contain a reasonable amount of sodium. In practice though, it was the consistency that was the killer for me. Despite it being a quite hot day (perhaps 30°C in the shade when I was running) the gel did not soften up much, and felt like a slug sitting in my mouth, it was so thick and unyielding. It was quite a battle to swallow the first mouthful, and continue through the rest of the meagre 40g.

To summarise: the flavour is not actually noticeably saltier than other gels or exercise drinks but the consistency is way too thick to be pleasant. I would not try this one again, even if it were the only option.

Xenofit Carbohydrate, Orange flavour

60mL sachet, 103 calories.
This was a sweet (literally) relief after the salty, thick gel I had consumed about an hour before. Not much more to say about the orange flavour that I didn’t already say about the passionfruit variety. The warm weather made this gel seem even more liquid-like compared to other gels, and it was easier to consume since I managed to tear the sachet properly this time. It was a bit more synthetic tasting than the passionfruit.

The only thing not in its favour is the size, and at 60mL (rather than perhaps 40mL/40g) it doesn’t fit so well in my pockets, but that’s probably more to do with my running kit than this gel. Would try again if my more favoured options are not available.

Dextro Energy Liquid Gel, Cola flavour

60mL packet with screw-off lid, 114 calories.
Just another flavour of the same one I’ve tried before (and wrote about above). I only just realised it was cola flavour just now, when I started to write about it – it wasn’t obvious to me while I was out for my run, and I assumed it was some kind of very synthetic fruit flavour. In any case, as before, this is definitely my preferred energy source while exercising. The only thing not in its favour for me personally is the additional size due to it being a liquid and not a more-dense gel, and the screw-off top (although if you feel like having just half of it now and half later, it is useful).

So there you have it.
It took something like 8 months to actually get through all of these gels since I wasn’t doing a lot of long distances to warrant it until recently. To sum up, my preferred gel is the Dextro Energy Liquid Gel (if that wasn’t already obvious), but since the amount of energy you get from them isn’t as much as others, I’d probably supplement that with a banana or energy bars. However, before you base your own selection on my reviews here, I’d recommend trying them for yourself – it’s a very personal choice and everyone will have their own preferences.


It’s 2015, and online shopping sites still suck at taking credit card details

by Oliver on Monday, June 15th, 2015.

This is a small rant, and the title should already feel very familiar to you if you have paid for anything online in the last 15 or so years. Remarkably, nothing (or very little) seems to have changed in that time. We are still afflicted by the same range of ridiculously trivial problems – all easily solvable with a tiny amount of Javascript, mind you – with no further progress in the state of the art.

Since I live in Germany, practically my only use for my credit card is online shopping (almost no physical shops or even taxis accept credit cards). Like many others, I suspect, I have my credit card details saved in a secure password store – not saved in my browser for convenient automatic purchases by accident or small child – so I tend to copy and paste the numbers into the text field(s) provided on the site in question. The interfaces usually exhibit one of these attributes:

  • Four separate text fields which have to be entered individually, preventing you from pasting a single number in.
  • Single text field with a hard-limit of 16 characters, preventing you from pasting in a credit card number with spaces between each group of four digits.
  • Single text field with no character limit, but it will warn you and prevent further form submission until you remove any spaces or hyphens between groupings of digits.

I think I might have seen one website, ever, that actually managed to take the input you provided and massage it into the correct format. For the benefit of anyone reading who might have to implement a website like this, here are some hints:

  • Provide a single text field, and don’t limit the width of the field.
  • If there are any white-space or hyphens in the input, just throw them away silently.
  • If you are left with 16 digits as you expect, continue. If not, you can legitimately warn the user about a problem.

These are genuinely problems I would give to a programming intern on their first day and expect a solution to within the hour. It’s really not acceptable for widely-used websites to do such a terrible job of input handling and validation as the majority do today.
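
For that hypothetical intern, the whole exercise is a few lines in any language – here’s a sketch in Go (the function name and the 16-digit assumption are mine, and a real implementation would presumably also want a Luhn checksum):

package main

import (
    "fmt"
    "strings"
)

// normalizeCardNumber silently drops spaces and hyphens, then checks
// that exactly 16 digits remain.
func normalizeCardNumber(input string) (string, error) {
    cleaned := strings.Map(func(r rune) rune {
        if r == ' ' || r == '-' {
            return -1 // drop separator characters
        }
        return r
    }, input)
    if len(cleaned) != 16 {
        return "", fmt.Errorf("expected 16 digits, got %d characters", len(cleaned))
    }
    for _, r := range cleaned {
        if r < '0' || r > '9' {
            return "", fmt.Errorf("unexpected character %q", r)
        }
    }
    return cleaned, nil
}

func main() {
    // Both inputs normalise to the same 16 digits.
    fmt.Println(normalizeCardNumber("4111 1111-1111 1111"))
    fmt.Println(normalizeCardNumber("4111111111111111"))
}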


It’s 2015, and I’m still writing config management commands wrapped in for loops.

by Oliver on Friday, April 10th, 2015.

Warning: this is a bit of a rant. Today my team had to migrate our ElasticSearch cluster from one set of instances in our EC2 VPC to a smaller set of physical machines in our regular datacenter (yes, it’s actually cheaper). Both sets of machines/instances are Chef-controlled, which I generally don’t have to worry about, but in this case it was important due to the ordering of steps in the migration.

The general process was something like this:

  • Remove the Logstash role from the nodes we were collecting data from to avoid immediate reconfiguration when we started moving ElasticSearch nodes around.
  • Remove the ElasticSearch role from the old ES cluster, and alter the disk and memory layout settings to match the new hardware.
  • Add the ElasticSearch role to the new ES machines and let them bring up the new cluster.
  • Add the Logstash role back to the data collection nodes, which would then point to the new cluster.

For simplicity’s sake, I’ll just say we weren’t too interested in the data as we rotate it fairly regularly anyway, and didn’t bother migrating anything.

Last week I was already bitten by a classic Chefism when we realised that the way I’d set up the Logstash recipe was incorrect. To avoid weird and wonderful automatic reconfigurations of the ElasticSearch nodes that Logstash points to, I use a hard-coded array of node addresses in the role attributes, but in the recipe leave the default setting (in case you forgot to set up your array of ElasticSearch addresses) as a single-element array – [“127.0.0.1”]. Chef professionals will already know what is coming.

Of course, Chef has an attribute precedence ruleset that still fails to sink into my brain even now, and what compounds the problem is that mergeable structures (arrays, hashes) are merged rather than replaced. So I ended up with an array containing all my ElasticSearch node addresses, and 127.0.0.1. So for some indeterminate time we’ve been only receiving about 80% of the data we had been expecting, as the Logstashes configured with 127.0.0.1 have been simply dropping the messages. Good thing there wasn’t ElasticSearch running on those machines as well!

On attempting to fix this, I removed the default value that had been merged in, but decided it would be prudent to check that an array had been given for the list of ElasticSearch servers, to avoid Logstash starting up with no server to send messages to and potentially crash-looping. The first attempt was something like this:


unless elasticsearch_servers.class == Array
  fail 'You need to set node["logstash"]["elasticsearch_servers"] with an array of hostnames.'
end

Then I found that this fail block was being triggered on every Logstash server. Prior experiences with Hashes and Mashes in Chef-land made me suspect that some bizarro internal Chef type was being used instead of a plain Array, and a quick dip into the Chef shell confirmed it – indeed I was dealing with an instance of Chef::Node::ImmutableArray (the eventual fix: check is_a?(Array), which that type satisfies, rather than comparing classes directly). That was surprise number two.

The third thing that got us today was Chef’s eventual consistency model with respect to node data. If you are running Chef rather regularly (e.g. every 15 minutes) or have simply been unlucky, you may have attempted to add or remove a role from a machine while it was in the middle of a Chef run, only for it to write its node attributes back to the Chef server at the end of the run and wipe out your change. I’m sure there’s a better way of doing this (and I’m surprised there isn’t locking involved) but I managed to run into this not only at the start of our migration but also at the end. So we started out life for our new ElasticSearch nodes by accidentally joining them to the old cluster for a few minutes (thankfully not killing ourselves with the shard relocation traffic) since the old roles had not been completely removed. Then we managed to continue sending Logstash messages to the old cluster for a period of time at the end of the migration, until we figured out the Logstash nodes still thought the old cluster was in use.

The for loop referenced in the title was of course me, repeatedly adding or removing roles from machines with the knife command, then restarting Logstash until it was clear the only connections to our old cluster were from the health checkers. (Of course I could have used PDSH, but I needed to space out the Logstash restarts due to the usual JVM startup cycle.)

All of this makes me really glad that I don’t deal with configuration management on a daily (or even weekly, perhaps even monthly) basis. Writing and running 12 Factor apps and things easily deployable via CloudFormation (yes, even with its mountain of JSON config) feels like actually getting work done; this just feels like pushing a boulder up a hill. Sorry for the Friday evening downer – happy weekend everyone!


MBTI and pair programming

by Oliver on Friday, March 27th, 2015.

I had a meeting this week where, among other things, we talked about our teams and team members and how things were going, generally, in the sense of team health. Oh yeah – since I haven’t explicitly called it out on this blog: for the last 9 months I’ve been an engineering manager, and since the beginning of the year I’ve taken on a second team. So I’ve currently got two teams of developers to manage.

Within the discussion we touched on the personalities of team members and how some people are more likely to engage in pair programming, but others generally not. This reminded me of my own habits. I aspire to pair program, but when the opportunity is there I usually avoid it. That doesn’t set a great example to my teams, so I feel guilty about it, but in the moment we were discussing the topic I started to reflect on this tendency a little.

Part of the new management training programme being explored at SoundCloud involves a reasonable amount (ok, a LOT) of self-discovery and self-awareness. One form this takes is doing an MBTI test and getting familiar with your tendencies, preferences and communication styles. I’ve done this test at least twice in the past and nothing had changed this time around, but I’m now more familiar with the implications. I tend to live in the factual, data-based world and, without delving too much into my own MBTI type, prefer to plan things out and think them through in advance rather than acting spontaneously in the moment, talking out problems and making on-the-spot decisions.

This goes a long way towards explaining my hesitation when it comes to pair programming. Innate in the process is talking out problems when the situation is not yet understood, and making on-the-spot decisions without much time to reflect internally or rely on your own thought processes (since there’s another person there waiting for you). This directly conflicts with my personal predispositions. No wonder I am not a willing pair programmer! It’s quite likely some members of my teams are the same, and hence a universal “everyone pair programs” approach may at best lead to poor results and at worst to a dysfunctional team full of unhappy members.

I’m sure this is not an original thought in general, but it was a useful realisation for me. Taking into account the different personality types and personal motivators of your team members, and using this when planning how to work and what to work on, can be a powerful tool in building a strong and happy team.



Golang, testing and HTTP router package internals

by Oliver on Sunday, January 18th, 2015.

We have an internal service that takes requests of the form /foo/{ID}/bar, which is basically expected to generate data of the form “bar” about the entity {ID} within the collection “foo”. Real clear! Because this service has a bunch of similar routes that include a resource identifier as part of the request path, we use the Gorilla Mux package. There are many different approaches to HTTP request routing and many different packages (and some approaches recommend using no external packages at all) – but that is out of scope for this article.
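
For concreteness, here is a minimal sketch of how a route like that gets set up with mux – the handler body and port are my own invention, not from the real service:

package main

import (
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	// {id} is captured as a named path variable, retrievable via mux.Vars().
	r.HandleFunc("/foo/{id}/bar", func(w http.ResponseWriter, req *http.Request) {
		w.Write([]byte(mux.Vars(req)["id"]))
	})
	http.ListenAndServe(":8080", r)
}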

This week we realised that for a given endpoint, one other service that calls ours was URL-encoding the resource ID. There are two parts to this that need to be stated:

  • IDs are meant to be opaque identifiers like id:12345 – so they have at least one colon in them.
  • A colleague later pointed out that the encoding of path parts is actually up to the application to define (although I haven’t recently read the part of the spec that says this). URL-encoding is thus not strictly necessary except for query parameters, but in this case we have something being encoded and need to deal with it.

The fix is relatively simple. We already had some code pulling out the ID:

id := mux.Vars(request)["id"]


However, this was just passing the ID to another part of the code that validated that the ID is properly formed, with a regular expression. It was expecting, again, something like id:12345 and not id%3A12345. So we introduce a very simple change:

    encodedID := mux.Vars(request)["id"]
    id, err := url.QueryUnescape(encodedID)
    if err != nil {
        return err
    }


OK, this will work, but we should test it. We’ve introduced two new “happy paths” (a request where the ID is not encoded, and decoding leaves it exactly as-is; and a request where the ID is encoded, and decoding yields the form we expect for later validation) and one new “unhappy path”, where the ID is malformed and doesn’t decode properly. The problem is that we need to test this function and pass in a request that has an appropriate ID for the path we are trying to test.
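
To make those three paths concrete, here is roughly how url.QueryUnescape behaves on each case (a standalone sketch; the IDs are the same illustrative ones as above):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	for _, raw := range []string{"id:12345", "id%3A12345", "id%XX12345"} {
		id, err := url.QueryUnescape(raw)
		fmt.Printf("%-10s -> %q, %v\n", raw, id, err)
	}
	// id:12345   -> "id:12345", <nil>  (already decoded, unchanged)
	// id%3A12345 -> "id:12345", <nil>  (decodes to the expected form)
	// id%XX12345 -> "", invalid URL escape "%XX"
}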

Already, this sets us down the road to the yak shaving shop. We can assemble a request ourselves, and attempt to set up all of the right internal variables necessary to be able to pull them out again inside our own function in the way it expects. A slightly harder way would be to set up a test server with httptest, set up the routing with mux and make a real request through them, which ensures the request that reaches our function under test is as real as can be. However, we would then also effectively be testing the request handling and routing – not the minimum code surface area.

As it turns out, neither of these options is particularly good. I’ll start with the latter. You start up a test server and assemble a request like so (some parts have been deliberately simplified):

testID := "id%XX12345" // deliberately malform the encoded part
ts, _ := httptest.NewServer(...)
url := fmt.Sprintf("%s/foo/%s/bar", ts.URL, testID)
resp, err := http.Get(url)


Uh-oh: the malformed encoded characters will be caught by http.Get() before the request is even sent by the client. The same goes for http.NewRequest(). You can see this without any server at all – a quick sketch (the host is just a placeholder):
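
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Parsing the URL fails on "%XX", so no connection is ever attempted.
	_, err := http.Get("http://localhost/foo/id%XX12345/bar")
	fmt.Println(err) // error mentioning: invalid URL escape "%XX"
}

We can’t test like this, but what if we assemble the request ourselves?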

serverAddr := strings.Split(ts.URL, "http://")[1]
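// URL.Opaque is written to the wire verbatim as the request-URI,
// bypassing the escape validation that parsing a URL string would apply.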
req := &http.Request{
	Method: "GET",
	URL: &url.URL{
		Host:   serverAddr,
		Scheme: "http",
		Opaque: "/foo/id%XX12345/bar",
	},
}
resp, err := http.DefaultClient.Do(req)


We can send this request and it will make it to the server, but now the request handling on the server side will parse the path and catch the error there – it still won’t make it to the function under test. We could write our own test server with its own request handling wrapped around a TCP connection, but that’s far more work. We would also have to determine whether the request succeeded or failed via the test server’s response codes (and possibly response body text), which is really not ideal.

So, on to testing with a “fake” request. Looking back at our code, we notice that we are pulling the ID out of the request via mux.Vars(request)["id"]. When you don’t use a request routing package like this, request path and query parameter variables are accessible directly on the request object, but my suspicion was that mux.Vars didn’t simply wrap data already in the request object – it stores it elsewhere, in a different way. Looking at the mux code, it actually uses a data structure defined in the context package: a very gnarly nesting of maps. The outer level is keyed off the request pointer, and each unique request pointer has a map containing different “context keys” – either varsKey or routeKey depending on where the parameters come from (but I’ll not dive into that in this article).
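
Roughly speaking, the storage has this shape (a simplified sketch, not the actual context source, which also has locking and other bookkeeping):

package context

import "net/http"

// An outer map keyed by request pointer, and an inner, effectively
// untyped map keyed by values such as varsKey or routeKey.
var data = map[*http.Request]map[interface{}]interface{}{}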

The part of the request we are interested in is grouped under varsKey, which is the first iota constant, so its value is 0. We can use context.Set() to set the data we want to fake out in our request, with a relatively bizarre invocation:

type contextKey int
const (
  varsKey contextKey = iota
  routeKey
)
context.Set(request, varsKey, map[string]string{"id": "id%XX12345"})


This appeared to be enough to work, but the test would invariably fail, with the result of mux.Vars(request)["id"] being an empty string. I added some debugging Printfs to the mux and context packages to print out what was being set and what was being accessed, and it consistently looked like what was created should have been correct:

map[0x10428000:map[0:map[id:id%XX12345]]]


The request pointer keying into the top-level map was the same in both cases, but the map of parameter names to values was only there when set from the test – the map that mux.Vars() was accessing simply didn’t contain them.

The problem is of course simple. The mux package keys into the second-level map with a variable of value 0 but of type mux.contextKey. I was attempting to fool it by keying into the map with a main.contextKey of the same value. The only reason this “worked” at all (the Set call succeeded silently) is that the inner data map is map[interface{}]interface{} – effectively untyped – so the two zero-valued keys of different types (and even then, only different by virtue of being declared in different packages) did not collide, and hence there was no way to get back the value I had previously set.
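
A tiny standalone sketch (with made-up type names) shows the effect:

package main

import "fmt"

type muxKey int  // stands in for the unexported mux.contextKey
type mainKey int // stands in for the contextKey I declared in my test

func main() {
	m := map[interface{}]interface{}{}
	m[mainKey(0)] = map[string]string{"id": "id%XX12345"}
	// Same underlying value (0), different type: an entirely different map key.
	fmt.Println(m[muxKey(0)])  // <nil> – what mux.Vars() effectively saw
	fmt.Println(m[mainKey(0)]) // map[id:id%XX12345] – what the test had set
}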

Since mux.contextKey is not exported, there is actually no way to fake out that request data (well, I’m sure it can be done with some reflect package magic, but that’s definitely indicative of a code smell). The end result is that this small code change is untestable on the unhappy path. I’m still relatively sure nothing unexpected will happen at runtime, since the request handling above the function will catch malformed encodings, and some alternatives do exist, such as doing this kind of decoding in its own handler wrapped around what we already have set up (sketched below), or not using the mux package in the first place and simplifying our request routes.
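
The wrapping-handler option might look roughly like this – the names are mine, not from the real service:

package main

import (
	"net/http"
	"net/url"

	"github.com/gorilla/mux"
)

// withDecodedID is an illustrative wrapper: it pulls the raw {id} out of the
// mux vars, decodes it, and hands the clean value to the inner function.
// Since the inner function no longer touches mux.Vars at all, it can be
// unit-tested directly – no request faking required.
func withDecodedID(inner func(w http.ResponseWriter, r *http.Request, id string)) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		id, err := url.QueryUnescape(mux.Vars(r)["id"])
		if err != nil {
			http.Error(w, "malformed id encoding", http.StatusBadRequest)
			return
		}
		inner(w, r, id)
	}
}

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/foo/{id}/bar", withDecodedID(func(w http.ResponseWriter, r *http.Request, id string) {
		w.Write([]byte(id)) // the business logic only ever sees the decoded ID
	}))
	http.ListenAndServe(":8080", r)
}

The inner function can then be exercised with an httptest.NewRecorder() and a hand-built request, with no context tricks needed.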

It is, yet again, a great example of why sometimes the simplest changes can take the most time, discussion, agonising over testing methodologies and the greatest personal sacrifice of software development values. The only reason I didn’t spend any longer on it (and I definitely could have) was that it was blocking other teams from progressing on their own work (quite apart from the obvious waste of my own productive time).

