Mobile shell and high latency connections

by Oliver on Monday, February 3rd, 2014.

I seem to recall first being introduced to MOSH after I saw a link to it in my RSS feed, and almost simultaneously in my work’s developer mailing list (where people tend to post about new and interesting technologies). People were literally losing their shit over it (well, ok, not literally) and declaring they would never use anything but MOSH ever again. I may be dramatising somewhat.

It’s certainly an interesting piece of software, and I can see it being useful if you really do work “mobile” a lot of the time or have really shoddy connectivity, but I’m not under any illusion that most of the people I know actually have it that rough. First-world problems and all of that. In any case, I now actually have a chance to use it, since I’m working remotely and with much greater latency to the servers than I’m used to. I realise that up until now I’ve had it pretty good – most of our servers are easily within 100ms of us (continental Europe), and those that aren’t so close are usually on the east coast of the U.S. I have an account on a server back in Sydney, which is almost intolerable to use, but I barely ever use it.

So I installed MOSH and gave it a run. It is undoubtedly good for typing actual letters and deleting them – I will give it that much. There is nothing quite as horrible as making several typos and thundering on with the rest of the line only to realise that you must awkwardly backspace (usually just a few characters at a time to ensure you don’t go too far by accident) or move the cursor back to the scene of the crime. Once you get there, it’s another laborious process of fixing the error and moving back to where you left off. I won’t go into more detail – you understand the problems MOSH is trying to solve.

The underlining of predicted-but-unconfirmed keystrokes is great, and I truly enjoyed the performance improvement as already stated. Yes, you do have to wait a moment at the end for the remote end to confirm it has received your keystrokes, but by that point you generally already know everything is fine. The few times my connection or VPN has dropped, MOSH has kept the session around and even printed a little message at the top of the screen to tell me about it. That was pretty awesome.

But the terminal experience could be even tighter. Moving the cursor by itself is for some reason not predicted locally (or at least feels that way) – it is just as slow as over plain SSH. I don’t see why this has to be the case: cursor movement doesn’t actually change anything, so presumably the same prediction optimisations could be applied to cursor positioning. Delete (as opposed to backspace) also seemed, as far as I could tell, to drop back to regular round-trip speeds. Given that I can rarely remember more shell shortcuts than Meta-Del, Ctrl-e, Ctrl-a and so on, I’m often using the cursor keys for navigation, so this frequently kills me.
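If the unpredicted cursor movement bothers you too, the prediction behaviour can at least be nudged with MOSH’s --predict flag. A usage sketch (the user and host names below are placeholders):

```shell
# force local echo/underlining of predictions at all times
mosh --predict=always user@remotehost

# the default: predict only when the link is judged slow
mosh --predict=adaptive user@remotehost

# an even more aggressive prediction mode
mosh --predict=experimental user@remotehost
```

Whether any of these modes extends prediction to cursor movement is another matter, but they are the main tuning knobs the client exposes.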

I was also not entirely sure how it would go combined with a screen session, but there didn’t seem to be any problems there. Overall I would definitely recommend it, if you have an unreliable or high-latency connection – it is far better than “raw” SSH. But I really hope the developers have further ideas to tweak and tune the experience – it goes perhaps half of the way to being truly awesome but is not quite there yet.


Cool, interesting, useful, unique and innovative Shell Prompts

by Oliver on Wednesday, September 19th, 2012.

At $employer today, we had our bi-weekly tech talk session and one of the lightning talks given was on tmux. Tmux is an excellent piece of software (although I gave up on it and started using iTerm2) but that’s not what I wanted to talk about.

One of the other participants in the session noticed the presenter’s shell prompt had a little smiley/frowney face which changed both expression and colour depending on the exit code of the last command – how cool is that? How many times have we all typed echo $? just to find out if our last command was really successful? It really makes sense to have this information displayed at all times.
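The mechanism is simple enough to sketch as a shell function; this is a minimal, hedged version (the faces and prompt layout are invented here, not the presenter’s actual prompt):

```shell
# print a face reflecting the exit code of the previous command;
# $? must be read as the very first thing the function does
smiley() {
    if [ $? -eq 0 ]; then
        printf ':)'
    else
        printf ':('
    fi
}

# command substitution in PS1 is re-evaluated before every prompt
PS1='$(smiley) \u \w \$ '
```

Colour can be layered on top with \[\033[...m\] sequences, as in the prompt below.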

So in that spirit I’m sharing my PS1 prompt variable with you. It’s not the most advanced, doesn’t use all of the bells and whistles and I’m still not entirely sure the information it presents is essential but it’s a work in progress. I’d love for you to share your own in the comments in the hope of spreading know-how and ideas!

export PS1="\`if [ \$? = 0 ]; then echo \e[33\;40m\\\^\\\_\\\^\e[0m; else echo \e[36\;40m\\\-\e[0m\\\_\e[36\;40m\\\-\e[0m; fi\` \[\033[38m\]\u \[\033[0;36m\]\j \[\033[1;32m\]\!\[\033[01;34m\] \w \[\033[31m\]\`ruby -e \"print (%x{git branch 2> /dev/null}.split(%q{\n}).grep(/^\*/).first || '').gsub(/^\* (.+)$/, '(\1) ')\"\`\[\033[37m\]$\[\033[00m\] "

Roughly in order, this equates to:

  1. Smiley/frowney face based on the exit code of the last command.
  2. Username.
  3. Number of backgrounded jobs.
  4. Shell history number.
  5. CWD, leaving the home directory as an unexpanded ~.
  6. Git repository branch, using Ruby 1.8/1.9-compatible code.

On my machine it looks like this:

What does yours look like?
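Incidentally, the Ruby snippet in item 6 is the heaviest part of the prompt above; the same extraction can be done with sed alone. A hedged sketch, not the prompt I actually use:

```shell
# print the current git branch as "(branch) ", or nothing outside a repo
git_branch() {
    git branch 2>/dev/null | sed -n 's/^\* \(.*\)/(\1) /p'
}

# drop it into PS1 in place of the ruby call
PS1='\u \w \[\033[31m\]$(git_branch)\[\033[37m\]$\[\033[00m\] '
```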


I’m done with shell scripting

by Oliver on Saturday, February 12th, 2011.

I think I will call this week the last in which I use shell script as my primary go-to language. Yes, by trade I am a systems administrator, but I do have a Bachelor of Computer Science degree and there is a dormant programmer inside me desperately trying to get out. I feel like shell script has become the familiar crutch I go back to whenever I’m faced with a problem, and that is becoming frustrating to me.

Don’t get me wrong – there is a wealth of things that shell script (and I’m primarily referring to BASH (or Bourne Again SHell) here rather than C SHell, Korn SHell or the legendary Z SHell) can do, even more so with the old-school UNIX tools like grep, sed, awk, cut, tr and pals. In fact, if you have the displeasure of being interviewed by me, a good deal of familiarity with these tools will be expected of you. They have their place, that is what I am trying to say, but the reflex of reaching for these tools needs to be quietened somewhat.
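For anyone who hasn’t had that displeasure yet, here is a tiny, self-contained taste of the toolbox (the sample data is invented):

```shell
# select bob's line, take the third colon-delimited field,
# strip the trailing newline; prints: 1001
printf 'alice:x:1000\nbob:x:1001\n' \
    | grep '^bob' \
    | cut -d: -f3 \
    | tr -d '\n'
```

Each tool does one small job and the pipe glues them together – exactly the style those interview questions are probing for.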

The straw that broke my camel’s back in this instance was this sorry piece of scripting by yours truly. It’s not an exemplary piece of “code” and I think that demonstrates how little I cared about it at this point. I was briefly entertained by the idea of implementing a simple uploader for Flickr in shell script, and I did actually manage to write it up in a fairly short amount of time, and it did then successfully upload around 4GB of images. The problem was that while the initial idea was simple enough, the script took on a life of its own (especially once the intricacies of Flickr’s authentication API were fully realised) and became much more complex than initially envisaged.

Despite this, I had started out with the goal of making a reasonably “pure” shell uploader, and stuck to my guns. What I should have done was call it quits when I started parsing the REST interface’s XML output with grep – that was a major warning sign. Now I have a reasonably inflexible program that barely handles errors at all and only just gets the job done. I had a feature request from some poor soul who decided to use it, and I was actually depressed at the prospect of having to implement it – that’s not how a programmer should react to the chance to extend the use of his/her work!

From a technical standpoint, shell is a terrible excuse for a “language”. It has a poor typing system, excruciating handling of anything that should be an “object” when all you generally have to work with are string manipulation tools, and a “library” that is basically limited to whatever commands happen to be available on the system. I know that I have probably barely plumbed the depths of what BASH is capable of, but when the basics are so hard to use for frequently needed programming patterns, I don’t really see the point.
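To make that concrete, here is a tiny, hedged illustration of the everything-is-a-string problem (the file name is invented):

```shell
# work in a scratch directory
cd "$(mktemp -d)"

f="my file.txt"
touch "$f"    # quoted: one file, with a space in its name

# unquoted, $f word-splits into "my" and "file.txt" - neither exists
ls $f 2>/dev/null || echo 'word splitting strikes again'
```

A real language would treat the file name as one value everywhere; in shell, a single pair of forgotten quotes changes the meaning of the program.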

So from next week, I’ve decided to reach for Python or Ruby when I have to code something up that is more than a few lines’ worth, or of reasonable complexity. Not that I don’t already use Python and Ruby when the occasion calls for it, but I think that those occasions are too few and far between. Shell scripting is an old-school sysadmin crutch and it is time to fully embrace the DevOps mentality and get into serious programming mode.


I/O redirection “optimizations”

by Oliver on Sunday, September 12th, 2010.

Quite a while back, I had to migrate a few terabytes of data from one machine to another. Not that special a task, and certainly a few terabytes is not that much, but at the time it was a reasonable amount, and even over a 1Gbps network it can take some time. Fortunately it was not time-critical and I could take the server in question down for a while to facilitate the migration. The data in question was a number of discrete filesystems on a bunch of LVM logical volumes, so I was able to basically just recreate the LVs on the destination and do a straight bit copy.

That all being said, I still wanted it to complete quickly! After fixing the usual problem of readahead settings set too low for sequential reads from the source, the copy proceeded more or less as expected, and I kept a watchful eye on iostat. This is where things got a bit strange: I noticed identical read and write values coming back from the destination LV. The basic shape of the copy was as follows:

# source
for i in /dev/VolumeGroup/*; do
    LE=`lvdisplay $i | grep "Current LE" | awk '{print $NF}'`
    NAME=`basename $i`
    echo "${NAME}:${LE}" | nc newmachine 30001
    sleep 10
    dd if=$i bs=4M | nc newmachine 30000
    sleep 10
done
echo "DONE:0" | nc newmachine 30001

# destination
while true; do
    INFO=`nc -l 30001`
    NAME=`echo $INFO | cut -f1 -d:`
    LE=`echo $INFO | cut -f2 -d:`
    if [ "$NAME" == "DONE" ]; then
        break
    fi
    lvcreate -l $LE -n $NAME /dev/VolumeGroup
    nc -l 30000 > /dev/VolumeGroup/$NAME
done

Unfortunately I don’t have the actual code around, so the above is only an off-the-top-of-my-head approximation, but you should get the idea:

  • Loop over the logical volumes we want to migrate to the new machine, determining the name and number of logical extents for each (yes, we would have to do some extra work if the logical extent size differed between source and destination).
  • Pipe the number of LEs and the name of the LV to the destination over a “control” channel so that the new LV can be created, and wait a few seconds for this to take place.
  • Read out the source LV with a reasonable block size, and pipe it over to the destination where it is piped directly into the new LV. I may have added an intermediate stage of dd to ensure an output block size of 4MB as well, but my memory fails me.
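That intermediate dd stage is easy to reproduce with plain files to see what the block size arguments do; this is a sketch using throwaway temp files rather than the original LVs and netcat:

```shell
# stand-ins for the source and destination LVs
SRC=$(mktemp); DST=$(mktemp)
dd if=/dev/urandom of="$SRC" bs=1M count=8 2>/dev/null

# the reader emits 4M blocks; iflag=fullblock (GNU dd) makes the writer
# re-assemble full 4M blocks from whatever chunk sizes the pipe delivers
dd if="$SRC" bs=4M 2>/dev/null | dd of="$DST" bs=4M iflag=fullblock 2>/dev/null

cmp -s "$SRC" "$DST" && echo 'copies match'
```

Without iflag=fullblock, the writing dd issues one write per (possibly short) read from the pipe, which is exactly the kind of small-write pattern you don’t want against a block device.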

So, as I mentioned, at this point I noticed that not only was data being written to the destination LV (as you would expect) but a corresponding amount was being simultaneously read from it. I was not able to resolve this discrepancy at the time, although I suspected perhaps some intelligence in part of the redirection on the output side was trying to determine which blocks actually needed overwriting.

A couple of months ago I spotted this post on Chris Siebenmann’s blog, which may explain it. He has certainly run into a similar confounding case of system “intelligence”.
