Archive for December, 2013

Unexpected benefits of IPv6 tunnelling

by Oliver on Thursday, December 26th, 2013.

Recently I wrote about getting my IPv6 tunnel setup working properly again after a while of it not working very well (or not at all). Since my ISP doesn’t yet (to my knowledge) provide native IPv6 connectivity to regular consumers, I tunnel IPv6 via Hurricane Electric, which on the whole works pretty well.

Another fantastic thing that my ISP does is throttle YouTube (and presumably other) traffic, which can make it unusable at the best of times, even at the lowest resolutions. I’m being sarcastic, obviously – it’s REALLY irritating. YouTube, GMail and presumably many other Google services as well as other mainstream sites such as Facebook have supported IPv6 for some time by default and the range of sites supporting it is fortunately increasing (although not nearly fast enough). After getting my IPv6 running properly again, I noticed that YouTube videos were actually starting quite fast and playing back without interruption.

Presumably Deutsche Telekom is doing some fairly basic packet inspection, or identifying YouTube flows based on Autonomous System numbers or known IP subnets, since the tunnelled IPv6 traffic does not appear to be throttled at all. Despite being re-routed via Frankfurt and paying a small additional latency penalty, I still get vastly superior YouTube performance over the IPv6 tunnel compared with regular IPv4 transit. That is pretty impressive, especially given that the much smaller IPv6 routing table is often not nearly as well optimised as the vast number of IPv4 routes.

So in actual fact I’m currently better off with my tunnelled IPv6 connectivity than I would be with native IPv6 connectivity through Deutsche Telekom. Odd, but currently very satisfying.

UPnP, IPv6 and updater scripts

by Oliver on Saturday, December 21st, 2013.

I’ve written a couple of times in the past about my use of IPv6 at home. Sadly, the state of native IPv6 for consumers has not improved much, so I’m still left without native connectivity to my router (to be fair, I have assumed nothing has changed, but it is possible Deutsche Telekom is now offering it without making any fanfare about its arrival).

So I still have the humble Hurricane Electric tunnel running as I’ve previously written about. Unfortunately the tunnel config at the HE end needs to be updated whenever your public IP changes (which it does almost every day), and I’ve only ever managed this by hacking up the DynDNS support in my router. This also meant that I couldn’t use any actual DynDNS updating in conjunction with the hack. For a time I had some kind of DynDNS client running on my HTPC but that also seemed somewhat unreliable.

Irritated with the poor state of this system, and looking to do a little bit of programming in Go, I set about building a small program that does the following:

  • Retrieves the current WAN IP from the router using Universal Plug ‘n’ Play.
  • Calls the Tunnelbroker API to update the configuration with the current WAN IP.
  • Profit!

Prior to this I only had an extremely cursory knowledge of UPnP, i.e., some technology in the router vaguely associated with Microsoft that introduces security holes into your network and is best left disabled! It is actually a very rich system of protocols that facilitates automation, integration of a wide variety of different devices and evented reactions to system changes. You can read through this guide to understanding UPnP which explains the intentions behind it, and a little of a (now slightly dated) vision of the future electronic home – despite it only being written in 2000.

My purpose is much simpler – grab the WAN interface IP address. Sure, I could do this with curl hitting one of the many “what is my IP”-style websites, but that requires actually going out onto the internet and making a request to some random site when my router already knows the address. It seems far more logical to retrieve it from there directly! Fortunately this is dead simple with UPnP, once you understand the general command flow and the protocols/requests to use. Briefly, the exchange looks like this (there is a rough code sketch just after the list):

  • Send multicast UDP discovery message to the network. The message is basically an HTTP request.
  • Router responds with search results for each service it provides, via unicast UDP to the host that sent the discovery message. The responses again are basically HTTP.
  • Send an HTTP request over TCP this time to the router’s control endpoint with a SOAP/XML request asking for the IP address.
  • Router sends the HTTP response back with XML containing the IP address.
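
To make that flow concrete, here is a rough, self-contained sketch of the exchange in Go. This is not the actual updater code (that is linked in the next paragraph): it only reads the first SSDP reply it receives, and it cheats by using a made-up control URL instead of fetching the device description from the LOCATION URL and parsing the real control endpoint out of it, which is what a proper client has to do.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
	"regexp"
	"strings"
	"time"
)

// The SSDP search: an HTTP-formatted request sent as multicast UDP, asking
// for devices offering the WANIPConnection service.
const ssdpSearch = "M-SEARCH * HTTP/1.1\r\n" +
	"HOST: 239.255.255.250:1900\r\n" +
	"MAN: \"ssdp:discover\"\r\n" +
	"MX: 2\r\n" +
	"ST: urn:schemas-upnp-org:service:WANIPConnection:1\r\n\r\n"

// discover sends the multicast search and returns the LOCATION header of the
// first (unicast) reply, i.e. the URL of the router's device description.
func discover() (string, error) {
	conn, err := net.ListenPacket("udp4", ":0")
	if err != nil {
		return "", err
	}
	defer conn.Close()

	dst, _ := net.ResolveUDPAddr("udp4", "239.255.255.250:1900")
	if _, err := conn.WriteTo([]byte(ssdpSearch), dst); err != nil {
		return "", err
	}

	conn.SetReadDeadline(time.Now().Add(3 * time.Second))
	buf := make([]byte, 2048)
	n, _, err := conn.ReadFrom(buf)
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(buf[:n]), "\r\n") {
		if strings.HasPrefix(strings.ToUpper(line), "LOCATION:") {
			return strings.TrimSpace(line[len("LOCATION:"):]), nil
		}
	}
	return "", fmt.Errorf("no LOCATION header in SSDP response")
}

// externalIP POSTs a SOAP GetExternalIPAddress request to the router's
// control endpoint and pulls the address out of the XML response.
func externalIP(controlURL string) (string, error) {
	body := `<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
 s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"><s:Body>
<u:GetExternalIPAddress xmlns:u="urn:schemas-upnp-org:service:WANIPConnection:1"/>
</s:Body></s:Envelope>`

	req, err := http.NewRequest("POST", controlURL, bytes.NewBufferString(body))
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", `text/xml; charset="utf-8"`)
	req.Header.Set("SOAPAction",
		`"urn:schemas-upnp-org:service:WANIPConnection:1#GetExternalIPAddress"`)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	raw, _ := io.ReadAll(resp.Body)

	m := regexp.MustCompile(`<NewExternalIPAddress>([^<]+)</NewExternalIPAddress>`).FindSubmatch(raw)
	if m == nil {
		return "", fmt.Errorf("no NewExternalIPAddress in response")
	}
	return string(m[1]), nil
}

func main() {
	loc, err := discover()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("device description at %s", loc)

	// The control URL normally comes from the description XML at loc; this
	// one is a made-up placeholder in the style of a typical home router.
	ip, err := externalIP("http://192.168.1.1:49000/igdupnp/control/WANIPConn1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("WAN IP:", ip)
}
```

The ST header narrows the search to devices offering the WANIPConnection service, and the SOAPAction header tells the router which action the XML body is invoking.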

I can happily say that this works, and you can browse the code here. Pull requests and issues welcome. Making the subsequent HTTP request to the Tunnelbroker API is relatively straightforward, after the UPnP gymnastics.
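
For completeness, the Tunnelbroker side might look something like the sketch below. I’m assuming HE’s DynDNS-compatible update endpoint here (https://ipv4.tunnelbroker.net/nic/update, taking hostname and myip parameters over HTTP basic auth); the username, update key, tunnel ID and IP address in the example are all placeholders.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
)

// updateTunnel tells Tunnelbroker about a new client IPv4 endpoint via the
// DynDNS-style update endpoint. Username, update key and tunnel ID are
// placeholders here.
func updateTunnel(username, updateKey, tunnelID, wanIP string) error {
	u := url.URL{
		Scheme: "https",
		Host:   "ipv4.tunnelbroker.net",
		Path:   "/nic/update",
		RawQuery: url.Values{
			"hostname": {tunnelID},
			"myip":     {wanIP},
		}.Encode(),
	}

	req, err := http.NewRequest("GET", u.String(), nil)
	if err != nil {
		return err
	}
	req.SetBasicAuth(username, updateKey)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("update failed: %s: %s", resp.Status, body)
	}
	log.Printf("tunnelbroker response: %s", body)
	return nil
}

func main() {
	// All of these values are placeholders.
	if err := updateTunnel("he-username", "update-key", "123456", "203.0.113.7"); err != nil {
		log.Fatal(err)
	}
}
```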

In this implementation I just make a single control request to the router, and get a single response back, but I mentioned earlier that a core feature of UPnP is evented responses to system changes. The overview document I linked to above mentions such things as a program running on your computer that responds to events from the printer advising it is out of paper, or that its physical location has changed, but the possibilities here are really as limitless as the devices that can support UPnP. In this case, it is possible for the router to update subscribers about a new IP address once it has changed (which sadly I haven’t yet implemented).
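
Just to illustrate what that could look like (again, I haven’t implemented this), a GENA event subscription is yet another HTTP-style exchange: you send a SUBSCRIBE request to the service’s event URL with a callback address, and the router then delivers NOTIFY requests containing an XML property set whenever an evented variable changes. Every URL in this sketch is a placeholder, and the event URL, like the control URL, would really come from the device description.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

// subscribe sends a GENA SUBSCRIBE request to a service's event subscription
// URL, asking the router to deliver NOTIFY callbacks to the given address.
func subscribe(eventURL, callback string) (string, error) {
	req, err := http.NewRequest("SUBSCRIBE", eventURL, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("CALLBACK", fmt.Sprintf("<%s>", callback))
	req.Header.Set("NT", "upnp:event")
	req.Header.Set("TIMEOUT", "Second-1800")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("subscription refused: %s", resp.Status)
	}
	// The SID header identifies this subscription for later renewal/cancellation.
	return resp.Header.Get("SID"), nil
}

func main() {
	// Placeholder addresses: the router's event URL and our callback listener.
	sid, err := subscribe("http://192.168.1.1:49000/upnp/event/WANIPConn1",
		"http://192.168.1.10:8089/events")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("subscribed, SID=%s", sid)

	// NOTIFY requests arrive as plain HTTP with an XML property set in the
	// body; a real client would parse out the changed variables here.
	http.HandleFunc("/events", func(w http.ResponseWriter, r *http.Request) {
		body, _ := io.ReadAll(r.Body)
		log.Printf("event from router: %s", body)
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8089", nil))
}
```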

So in summary, UPnP is a surprisingly useful technology that deserves looking into. If you have a use for my tunnel updater program, I’d love to hear any feedback on it.

Chef’s attribute system is broken

by Oliver on Thursday, December 19th, 2013.

Disclaimer: I’m not what I would call a “hard-core” Chef user, but I’ll explain in a moment why that is irrelevant.

I’ve been working on and off with Chef for almost a year and a half now; it’s certainly not something I use every day or even every week, but perhaps a handful of times per month something comes up that requires me to jump into Chef land and get some real work done. There are definitely people in the organisation with a lot more Chef knowledge than I have, and I lean on them to make up for my own gaps.

If you don’t know me, I already have a fairly good background in configuration management. I can safely say I fit into the “power-user” category of Puppet users at my previous employer, having worked quite a lot on custom Ruby bits and pieces for our installation and even given a main-stage talk at Puppet Camp in Amsterdam in 2011. So I know plenty about the general landscape.

One of the main features of the system I helped build around that time was a very flexible data store and layered configuration engine – an External Node Classifier – which, mostly due to user demand, grew from a handful of layers at which you could define data to more than twelve. These layers allowed you to express variables tied to a given app, a role within that app’s deployment, a datacenter, region, environment, and the machine itself. Originally we had just a few of these, and there were just a few combinations in which you could use them. Demand grew until we had covered a reasonable number of combinations, at which point the system became a little unwieldy.

We tried documenting the permissible order of overrides, which turned into a fairly awful document that confused the hell out of new users with much simpler requirements. But despite the number of combinations we already supported, there was always someone coming along and asking for some new combination we hadn’t yet catered for. When I left, we were just starting to gear up for a rewrite of the engine that would support a completely different configuration mechanism, one not limited to facts about the app or its deployment environment, but I’m not sure how well that went.

My point, if you haven’t already guessed, is that Chef replicates all of that flexibility and all of the pitfalls that go along with it. Initially I was comforted by its familiarity, but now I find it painfully shackled to the same inescapable problems I was trying to forget, and it even introduces problems of its own. Take a look at the documentation for deep merging attributes.

A feature in this area (the “knockout” prefix for deep merging) was even removed in the last major release. Deep merging makes so little sense in Chef that it pains me. Chef has all the flexibility of raw Ruby at its fingertips, and yet deep merging was deemed necessary anyway – even though you can accomplish more or less any data transformation you want with native code. And not all of it even works correctly.

I’ve seen several examples in our own codebase of deep merging being used as intended – with hashes – but also several examples where arrays have been used “to work around deep merging”. If you read the documentation, you’ll see that both hashes and arrays are candidates for deep merging, and yet apparently even on the latest version (11.8) this is not really the case. Another point that pains me: working around the deep merging behaviour encourages all of the bad habits that go along with Ruby’s duck typing – if a cookbook default attribute is already an array and you define a hash (or any other type) at the role default level, the value is simply overwritten wholesale. This potentially puts a large onus on cookbook authors to do excessive type checking, and stamps a large “WAT” all over the system.

Finally, the complexity of the attribute definition layers is just excessive. I’ve previously done a lot of this kind of thing in Puppet and it still makes very little sense to me. In one instance I was able to completely override a hash defined in the cookbook default attributes with another hash defined in the role override attributes, despite the two apparently being candidates for merging, and yet when the same attributes were moved down from role override to role default level they WERE merged. The system is just far too complex, even for relatively savvy users. If a configuration management system requires this much juggling of config data just to get it working properly, it’s costing you time rather than saving it.

Fortunately there are now other configuration management systems in the marketplace, such as Ansible, Salt and Juju (I’d still call them “current generation”, since they don’t offer any huge, noticeable improvements over the existing players). In my brief examinations, none of them appears to do a much better job, but at least it is good to have alternatives other than just Chef and Puppet (and I suppose CFEngine3, if you’re a real masochist). Will there be another generation? Based on my experiences with containerised application hosting in the style of Docker, Heroku and others, perhaps today’s configuration management systems are already trying to do too much, and we don’t need more – we need less.

I moved my blog to HTTPS

by Oliver on Monday, December 2nd, 2013.

If you are reading this via my site or some RSS feed, it means my blog migration was successful. I’m unfortunately a typical internet technologist in some regards – if the internet machines are working, I don’t tend to spend much time maintaining them or planning any upgrades. The upcoming expiration of my domain name and the shutdown of a trial VPS I had running for some time kicked me into action, and I finally got around to moving the blog. It’s now running on an OpenVZ container (hence the firewall conundrum I recently posted about) and I have a basic free StartSSL certificate in place to enable HTTPS.

Since basically all links currently point to the unencrypted version of the site, I have the webserver rewriting all incoming requests to the HTTPS version. Chris Siebenmann recently wrote at length on migrations to HTTPS, the virtues of automatic rewrites from every page on your site to the corresponding HTTPS version and various other concerns surrounding ciphers, protocols and SSL/TLS stacks. I have a fairly low care factor at the moment for most of these, and consider it to be at least a reasonable step forward to have HTTPS support at all.

Perhaps the most irritating aspect was having to do a text replacement on the database, since many pages had explicit http:// links to images and other pages on the blog. In future I hope to use relative links, or scheme-relative links that reuse the scheme of the page they appear on (now that I finally know about them – e.g. //www.foo.com/index.html). Somewhat more interesting will be IPv6, which is also now a possibility. Huzzah, progress!
