
Static URL shortening with nginx maps

In 2012, when it was hip and cool to do so, I also had my own URL shortener. It was based on what I called “katana”, a convenience ruby wrapper around “guillotine” that made it easy to run on Heroku backed by Redis. Back then Heroku still had a free tier and RedisToGo was available as a free add-on for databases up to 5MB or so. It was really fun to run: it had its own endpoint to support Tweetbot’s custom URL shortening integration, and the free tier was more than good enough for the occasional shortening.

Over the years, however, I used it less and less, mostly because Twitter had started forcing its own URL shortener on everything, with auto-expansion when viewing a tweet. The experience of using a custom shortener was not on par with that. I’d also almost lost the shortener’s database a couple of times, because free tiers don’t usually come with backups. So I rigged up a quick GitHub Action that ran once a day, redis-dump-ed all the contents to plain text, and committed them to a git repo as a low-budget backup job. At this point I wasn’t really shortening anything anymore but wanted to keep the existing URLs functional. I had moved to Heroku’s own Redis service by then, and there was no real work involved in keeping it running.

Fast forward to 2022, when Heroku announced the end of the free tiers. While I’m generally happy to pay for things, I wasn’t convinced that maintaining what was by now essentially a barely used URL lookup app was worth $7/month to me. So I shut it down and thought about alternatives. I could run the app on the server I use for a couple of things, but I really don’t want to run a ruby app plus redis in my free time. I thought about implementing the shortener logic in Go and backing it with something like sqlite or even just a yaml file. But again, that felt like a lot of effort for not actually shortening anything.

And then I thought: “this is just hosting 301 redirects, surely that’s something nginx is good at”. And sure enough, after a quick internet search I found a Stack Overflow post with a good example of managing a lookup map in a handful of lines of config. The core of it is basically:

# head -n 5 /usr/local/etc/nginx/mrtz_cc_redirect_map.conf
/-KmaJA '';
/-vvREg '';
/-yW3mQ '';
/09nQKA '';
/0YK2gg '';

# wc -l /usr/local/etc/nginx/mrtz_cc_redirect_map.conf
424 /usr/local/etc/nginx/mrtz_cc_redirect_map.conf

# cat /usr/local/etc/nginx/sites/redirect
map_hash_bucket_size 256; # the default bucket size is too small for long map entries

map $request_uri $new_uri {
    include /usr/local/etc/nginx/mrtz_cc_redirect_map.conf;
}

server {
  listen 443 ssl;

  if ($new_uri) {
    return 301 $new_uri;
  }
}

So all I had to do was convert the plain text backup of my redis instance into the nginx map format, which was easy enough with this awk one-liner:

% head -n 5 backups/
SET     guillotine:hash:-KmaJA ''
SET     guillotine:hash:-vvREg ''
SET     guillotine:hash:-yW3mQ ''
SET     guillotine:hash:09nQKA ''
SET     guillotine:hash:0YK2gg ''

% awk '/guillotine:hash/ { split($2,a,/:/); print "/"a[3]" "$3";"}' < backups/ | head -n 5
/-KmaJA '';
/-vvREg '';
/-yW3mQ '';
/09nQKA '';
/0YK2gg '';
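For anyone who’d rather not decode the awk, the same transformation sketched in Python (the function name and the sample URLs are illustrative, not from my actual backup):

```python
# Convert redis-dump output ("SET\tguillotine:hash:<code>\t'<url>'")
# into nginx map entries ("/<code> '<url>';").
# redis_to_map and the sample data below are hypothetical illustrations.

def redis_to_map(lines):
    entries = []
    for line in lines:
        if "guillotine:hash" not in line:
            continue
        _, key, url = line.split()       # SET, guillotine:hash:<code>, '<url>'
        code = key.split(":")[2]         # the short code after the second colon
        entries.append(f"/{code} {url};")
    return entries

sample = [
    "SET\tguillotine:hash:-KmaJA\t'https://example.com/one'",
    "SET\tguillotine:hash:09nQKA\t'https://example.com/two'",
]
print("\n".join(redis_to_map(sample)))
# → /-KmaJA 'https://example.com/one';
#   /09nQKA 'https://example.com/two';
```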

Then all that was left was to chef out the nginx config and Let’s Encrypt setup for the domain to my server and change the DNS records to point at the server instead of Heroku1. And voilà:

% curl -sv 2>&1 | grep Location
< Location:

I really like this setup because running nginx is pretty straightforward at the small scale I use it at. And I care about keeping URLs working, so this makes me happy. I might at some point want to start using it and adding URLs again, at which point I’ll have to figure something out. But I don’t expect that to be any time soon (if at all).

  1. There were actually some hiccups in the middle where I still had the DNS configured in dnsimple but had apparently let the domain lapse 😅. But re-registering it with dnsimple was super fast and I just had to wait a bit for the registration to propagate. ↩︎