My Mobile (shell) Home
At work I ssh into a lot of machines. I recently came up with a script that ensures my dotfiles get deployed to any server I have access to, quickly and reliably.
I tweak my dotfiles constantly; I work hard and experiment regularly to make my day-to-day work as effortless as possible. For a long time I decided to just live with an unconfigured bash on the servers that I connect to, but eventually I got fed up and resolved to get my dotfiles onto every server I connect to. I came up with a three-part solution.
🔗 fressh
fressh, short for Fresh Secure Shell (or fREW’s Secure Shell), is a simple tool that I run as a replacement for ssh. Currently the code is:
```perl
#!/usr/bin/env perl

use strict;
use warnings;

# TODO: make this work for ssh options
my $host = shift;

if (!@ARGV) {
   system('ssh', $host, <<'SH');
set -e
mkdir -p $HOME/code
if [ ! -d $HOME/code/dotfiles ]; then
   timeout 20s git clone --quiet git://github.com/frioux/dotfiles $HOME/code/dotfiles
   cd $HOME/code/dotfiles
   ./install.sh
fi
exit 0
SH

   if ($? >> 8) {
      warn "git clone timed out; falling back to rsync\n";
      system('rsync', '-lr', "$ENV{DOTFILES}/", "$host:code/dotfiles/");
      system('ssh', $host, 'cd $HOME/code/dotfiles; ./install.sh')
         unless $? >> 8;
   }
}

exec 'ssh', $host, @ARGV;
```
It first tries to connect to the server, pull down my dotfiles from GitHub, and then run the install script. I found that every now and then a server will have the git:// protocol blocked, so the initial clone would take forever just to time out. As you can see in the script above, I use timeout from the GNU coreutils to limit it to a total of 20 seconds.
If the timeout fires I’ll get a non-zero exit code from ssh and fall back to rsyncing my dotfiles from my laptop to the remote server; then, assuming that works, I run the installer again.
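That non-zero exit code is easy to detect because GNU timeout exits with a distinctive status, 124, when it has to kill the command it wraps:

```shell
#!/bin/sh
# GNU timeout exits with status 124 when the wrapped command
# runs past the limit and has to be killed
timeout 1s sleep 5
echo "exit status: $?"   # → exit status: 124
```

Thanks to the set -e in the remote snippet, that 124 propagates back through ssh, which is exactly what the `$? >> 8` check in the Perl sees.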
This doesn’t work if I pass arguments to ssh, but that is so rare that I haven’t run into it yet.
🔗 install.sh
Lots of people have installers for their dotfiles. Mine isn’t very special except that it is fairly simple and predictable. The one bit that I think is worth showing off is this:
```bash
printf '[submodule]\n\tfetchJobs = %s\n\n' "$(grep -c '^processor' /proc/cpuinfo)" > ~/.git-multicore
git submodule update --init
```
The above ensures that when I check out submodules I’ll use as many parallel fetch jobs as there are cores on the machine. I put the setting in a separate config file (which I have git configured to source), so if I am on a server whose git doesn’t support parallel submodule fetching it gracefully falls back to a single thread.
Of course immediately after setting up that file, I load the (many) submodules I use for my dotfiles.
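The “configured to source” part is git’s include mechanism; roughly, the wiring looks like this (sketched here against a throwaway HOME so the demo doesn’t touch a real config):

```shell
#!/bin/sh
# Sketch of the include wiring: git reads extra settings from
# ~/.git-multicore, and old gits simply ignore keys they don't know.
export HOME="$(mktemp -d)"               # throwaway HOME for the demo
git config --global include.path '~/.git-multicore'
printf '[submodule]\n\tfetchJobs = 4\n' > "$HOME/.git-multicore"
git config submodule.fetchJobs           # → 4
```

The single quotes keep the shell from expanding the tilde; git expands `~` itself when it reads include.path.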
🔗 Auto Update
Finally, I hate to check to see if a shell needs to be updated every time it starts, but I also hate to have to check and update it by hand. I devised an interesting workaround.
When my installer runs, it sets itself (the installer) as a git hook that basically runs after I do a pull:
```bash
link-file install.sh .git/hooks/post-checkout
link-file install.sh .git/hooks/post-merge
```
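(link-file is a little helper from my dotfiles; as a rough sketch, assuming all it does is force a symlink into place, it’s something like:)

```shell
#!/bin/sh
# Rough sketch of link-file: force a symlink at TARGET ($2) pointing at
# SOURCE ($1, resolved relative to the current directory). The real
# helper in my dotfiles does a bit more bookkeeping than this.
link_file() {
   mkdir -p "$(dirname "$2")"
   ln -sf "$(pwd)/$1" "$2"
}
```

So `link_file install.sh .git/hooks/post-merge` leaves a symlink in the hooks directory pointing back at the installer.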
Next, when my shell starts, I check a special file to see if I need to update the shell. All I’m doing is comparing the contents of the file to the current epoch, so it’s fairly efficient. I could probably tweak it to use file metadata instead, but I haven’t gotten around to it and doubt I ever will:
```zsh
if [[ $EPOCHSECONDS -gt $(cat ~/.dotfilecheck) ]]; then
   echo "Updating .zshrc automatically ($EPOCHSECONDS -gt $(cat ~/.dotfilecheck))"
   echo $(($EPOCHSECONDS+60*60*24*7)) > ~/.dotfilecheck
   git --work-tree=$DOTFILES --git-dir=$DOTFILES/.git pull --ff-only
fi
```
So the script simply does a git pull if I haven’t done one in about a week. Because I linked the installer to a git hook, after the pull succeeds in pulling new refs the installer will automatically be triggered and will set up any new files, update submodules, and so on.
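The “about a week” is nothing more than epoch arithmetic; the deadline written to ~/.dotfilecheck is the current epoch plus:

```shell
#!/bin/sh
# one week, in seconds: the amount added to $EPOCHSECONDS above
echo $((60 * 60 * 24 * 7))   # → 604800
```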
I’ve considered making my dotfiles be driven by some other framework like … or omz, but I keep running into places where having a simple, reliable installer is completely sufficient and often superior.
(The following includes affiliate links.)
If you enjoyed this and would like to learn more, check out From Bash to Z Shell: Conquering the Command Line. That’s the book I used to learn shell and I continue to find it an excellent reference.
Posted Wed, Mar 8, 2017