A small island in the vast internet ocean



Running a script with large input on Heroku

04 August 2020

Running a script on Heroku is easy:

heroku run -a <your app> python

This runs your script in an environment with access to your production database, Redis and all the other stuff your production app has access to.

This can be great for manual data migrations – say adding a demo course for all of your users.

However, in some cases you may want to run your script only for say 5,000 out of 20,000 users. Let’s say you’ve run some data analytics and you now have the IDs of the 5,000 users that the script should run on.

You don’t want to dirty up the repository with a custom file with the IDs of exactly those 5,000 users. What if you want to run the script again with 200 more users in a week?

The first thing you need to do is to allow your script to accept an input file: python <input file>.
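The script itself can then read the IDs from that file. A minimal sketch of what that could look like (the `load_user_ids` helper and the one-ID-per-line file layout are my assumptions, not from the original post):

```python
import sys


def load_user_ids(path):
    # One user ID per line; skip blank lines
    with open(path) as f:
        return [int(line) for line in f if line.strip()]


if __name__ == "__main__":
    user_ids = load_user_ids(sys.argv[1])
    print(f"Running migration for {len(user_ids)} users")
    # ... run the actual migration for each ID here ...
```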

Now how do you get those 5,000 IDs into Heroku without adding them directly to the repository?

On Heroku you don’t have access to convenient tools like curl or wget. But if your app includes the requests library (assuming you have a Python app) you can do

python -c "import requests; print(requests.get('').text)" > test.txt

and then

python test.txt

The full session would look something like:

heroku run -a eduflow bash
Running bash on ⬢ <your app>... up, run.9942 (Standard-1X)
~ $ python -c "import requests; print(requests.get('').text)" > test.txt
~ $ python test.txt

If you don’t have access to requests, you can use the built-in urllib library:

python -c 'from urllib.request import urlopen; print(urlopen("").read())'

Here, you’ll need to strip the leading b' and trailing ' since it returns a bytes object.
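Alternatively, you can avoid the stripping step by decoding the bytes. A small sketch (the sample bytes stand in for whatever .read() actually returns):

```python
# urlopen(...).read() returns a bytes object, e.g.:
raw = b"1001\n1002\n1003\n"    # stand-in for response.read()
text = raw.decode("utf-8")     # now a str, with no b'...' wrapper to strip
print(text, end="")
```

So appending .decode() to the .read() call in the one-liner gives you clean text that can be redirected to a file as before.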

For a Node app you can do something similar with fetch or axios.

Fixing your SSL

16 October 2015

Today I discovered a new tool: SSL Labs’ SSL test.

It lets you input your domain URL and then checks the security of your SSL configuration for known vulnerabilities. After the test it gives you an overall security rating (on the US school A–F scale) along with a list of recommended changes to fix any security holes and follow best practice.

Testing on I got a horrendous F. The SSL proxy is running on an old Ubuntu machine which hadn’t been updated in a while, making the SSL setup vulnerable to the Heartbleed1 vulnerability. Furthermore, as the SSL setup was an unconfigured/vanilla one, POODLE2 and other downgrade attacks were also possible.

Updating the packages and revising the SSL configuration, I got the security rating up to an acceptable A grade.

I can highly recommend using SSL test to check your SSL setup. Not only does it point out vulnerabilities, it also links to resources on what the vulnerabilities entail and how to fix them.

1 Heartbleed:
2 POODLE: and

Flask exception as push notification

28 September 2015

I’m currently working on a small project: a web service written in Flask.

The service is currently at an early stage, and still so small that I want to be notified of every exception that happens in production.

I can easily get these as mails1 – which I am – but during peak hours I want to be able to respond immediately to an error, and not have it drown in the rest of whatever mail I’m getting.

So I thought, why not receive a push notification?

I was sure I had heard of at least one app made with just that in mind – receiving custom push notifications that can be created automatically via an API. (I don’t want to create my own iOS app just for that with provisioning and all that cruft and timesuck)

Sure enough: Pushover and Boxcar fit the bill. I ended up going with Pushover for now as Boxcar requires iOS 8 which I’m not on yet.

I’m already sending the exceptions as mail to my Gmail, with a “plus-alias”: my-email+flask-exceptions@gmail.com2 so the Pushover email API3 was perfect for quickly getting up and running.

I set up a filter in Gmail, forwarding those mails (filtered on the To: field) to – and now I’m receiving exceptions from Flask as push notifications on my phone. Pretty neat!
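For reference, the exception mails themselves can be produced with the standard library’s logging module. This is only a sketch of that approach, not the post’s actual configuration – the mail host and addresses below are placeholders:

```python
import logging
from logging.handlers import SMTPHandler

# Mail every ERROR-level record (e.g. unhandled Flask exceptions)
# to the plus-aliased Gmail address; host and addresses are placeholders.
mail_handler = SMTPHandler(
    mailhost=("smtp.example.com", 587),
    fromaddr="server-error@example.com",
    toaddrs=["my-email+flask-exceptions@gmail.com"],
    subject="Flask exception",
)
mail_handler.setLevel(logging.ERROR)
# app.logger.addHandler(mail_handler)  # attach to the Flask app's logger
```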


Getting SSH to choose the right key

22 January 2015

On Bitbucket I have a work user and a personal user. This morning, trying to push a project where my work user is a member failed:

> git push
conq: repository access denied.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

The git remote looks like this:

> git remote -v
origin<WORK_PROJECT>/<WORK_PROJECT>.git (fetch)
origin<WORK_PROJECT>/<WORK_PROJECT>.git (push)

I even set up ~/.ssh/config to make sure my work SSH key (associated with my work SSH user) is being used on

> cat ~/.ssh/config
    IdentityFile %d/.ssh/<KEY_WORK>

Bitbucket’s troubleshooting suggests ssh -T to check if authentication is working

> ssh -v -T

OpenSSH_5.6p1, OpenSSL 0.9.8za 5 Jun 2014
debug1: Reading configuration data ~/.ssh/config
debug1: Applying options for
debug1: Reading configuration data /etc/ssh_config
debug1: Applying options for *
debug1: Connecting to [] port 22.
debug1: Connection established.
debug1: identity file ~/.ssh/<KEY_WORK> type 1
debug1: identity file ~/.ssh/<KEY_WORK>-cert type -1        <----   Yup, this is what I want, my work key being used

... protocol negotiation and fingerprint ...

debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: ~/.ssh/<KEY_PERSONAL>      <----   Wait, what?
debug1: Remote: Forced command: conq username:malthejorgensen

... ssh extensions ...

debug1: Authentication succeeded (publickey).
Authenticated to ([]:22).

... entering interactive session, setting environment...

logged in as malthejorgensen.                               <----   My personal user :(

You can use git or hg to connect to Bitbucket. Shell access is disabled.

... bla bla bla, many bytes were sent ...

debug1: Exit status 0

As we can see, SSH sees in my configuration that KEY_WORK should be used – and yet it ends up authenticating with KEY_PERSONAL, logging me in as my personal user.

The problem is ssh-agent: KEY_WORK has not been added to it. We can check that as follows:

> ssh-add -l
2048 09:c1:32:4a:e7:c2:05:6b:4e:52:71:aa:b8:ee:3e:53 ~/.ssh/KEY_PERSONAL (RSA)

Which shows that only KEY_PERSONAL is known to ssh-agent. We can add KEY_WORK to ssh-agent like this:

> ssh-add ~/.ssh/KEY_WORK
Identity added: ~/.ssh/KEY_WORK (~/.ssh/KEY_WORK)
> ssh-add -l
2048 09:c1:32:4a:e7:c2:05:6b:4e:52:71:aa:b8:ee:3e:53 ~/.ssh/KEY_PERSONAL (RSA)
2048 08:5b:1e:8e:a8:7d:06:61:dc:07:42:7b:ca:b7:49:bd ~/.ssh/KEY_WORK (RSA)

We can see that both keys are now known to ssh-agent. And now – it works:

> ssh -T
logged in as malthe-at-socialsquare-dk.

You can use git or hg to connect to Bitbucket. Shell access is disabled.

In some cases you have to remove your personal key from ssh-agent (you can always re-add it later). You can remove it with ssh-add -d ~/.ssh/KEY_PERSONAL.
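A more permanent fix (so you don’t depend on what happens to be loaded in ssh-agent) is the IdentitiesOnly option in ~/.ssh/config, which tells SSH to offer only the key named for that host. This is a sketch – the host and key name are placeholders matching the ones above:

```
Host bitbucket.org
    IdentityFile ~/.ssh/<KEY_WORK>
    IdentitiesOnly yes
```

With IdentitiesOnly yes, SSH ignores the keys held by ssh-agent and offers only the IdentityFile listed for that host.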


UNIX tools, spaces in filenames and \0

01 December 2014

A normal way to delete all files containing _old in the current folder (on a UNIX system) is a command like the following:

find . -iname '*_old*' | xargs rm

However this doesn’t work for filenames containing spaces, as xargs treats whitespace – including the spaces inside filenames – as argument separators.

The usual solution to this is to add -print0 to the find command and -0 to xargs:

find . -iname '*_old*' -print0 | xargs -0 rm

Now my problem is that I only want to delete the first 5 files. The “UNIX way” is to do something like:

find . -iname '*_old*' | head -n 5 | xargs rm

– and now we’re back to something that doesn’t work with filenames containing spaces. Our -print0-trick from earlier won’t work because head doesn’t have an equivalent of the xargs -0 flag.

awk to the rescue?

Note: The following assumes that awk refers to BSD-awk – specifically the awk provided on OS X 10.7.5.

awk operates on a line-by-line basis by default, but this can be changed with the record separator variable RS. A simple head -n 5 implementation in awk would be awk 'NR <= 5 {print}'.

Let’s say we’re in a folder with these files:

> ls -1
Los Angeles vacation_001_old.jpg
Los Angeles vacation_002_old.jpg
Los Angeles vacation_003_old.jpg
Los Angeles vacation_004_old.jpg
Los Angeles vacation_005_old.jpg
Los Angeles vacation_006_old.jpg
Los Angeles vacation_007_old.jpg
Los Angeles vacation_008_old.jpg
Los Angeles vacation_009_old.jpg

With awk we should be able to do something like this:

> find . -iname '*_old*' -print0 | awk 'BEGIN {RS="\0"}; NR <= 5 {print}'
./Los Angeles vacation_001_old.jpg

… but it doesn’t work: awk never finds records beyond the first one because of the NUL character \0. The NUL character is the C-string terminator, so awk stops reading input after seeing it, thinking the input string has ended.

However the more modern awks, gawk and mawk, don’t suffer from the same shortcoming:

> find . -iname '*_old*' -print0 | mawk 'BEGIN {RS="\0"}; NR<=5 {print}'
./Los Angeles vacation_001_old.jpg
./Los Angeles vacation_002_old.jpg
./Los Angeles vacation_003_old.jpg
./Los Angeles vacation_004_old.jpg
./Los Angeles vacation_005_old.jpg

But the print function in awk (and in mawk and gawk) appends a newline by default. We can change this by setting the output record separator ORS:

> find . -iname '*_old*' -print0 | mawk 'BEGIN {RS=ORS="\0"}; NR<=5 {print}'
./Los Angeles vacation_001_old.jpg./Los Angeles vacation_002_old.jpg./Los Angeles vacation_003_old.jpg./Los Angeles vacation_004_old.jpg./Los Angeles vacation_005_old.jpg

The NUL characters don’t print – but they are there – you can check by piping into cat -v.

Finally we reach the command:

> find . -iname '*_old*' -print0 | mawk 'BEGIN {RS=ORS="\0"}; NR<=5 {print}' | xargs -0 rm

In sed you can do something like sed -n '1,5p', but sed expects newlines and has no setting to change the “separator” to something other than newline (sed is Turing-complete, however, so nothing is impossible1).


I think the UNIX tools are great. No doubt about it. I use them every day, and the “do one thing, and do it well” philosophy plus the composability of tools through pipes are some of the things that make these tools invaluable to many programmers, including me. BUT the tools are from the 80’s (and sometimes even older) and have fallen out of touch with modern computing. Today the users’ needs and behaviour shape the systems rather than the other way around.

We need to build tools that can handle filenames containing spaces and unicode characters. I have an older blog post that touches the same topic: The Unix shebang (#!).

For now I’ve made a head “implementation” for input separated by NULs instead of newlines:

head0() {
  mawk 'BEGIN {RS="\0"}; {print}' | head "$@" | mawk 'BEGIN {ORS="\0"}; {print}'
}

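The same idea works in Python, for when shell quoting gets hairy. A sketch of a NUL-aware head (the script name and the example pipeline are hypothetical):

```python
import sys


def head0(data: bytes, n: int) -> bytes:
    """Keep the first n NUL-terminated records, NUL terminators included."""
    records = [r for r in data.split(b"\0") if r][:n]
    return (b"\0".join(records) + b"\0") if records else b""


if __name__ == "__main__":
    # e.g.: find . -iname '*_old*' -print0 | python head0.py 5 | xargs -0 rm
    n = int(sys.argv[1])
    sys.stdout.buffer.write(head0(sys.stdin.buffer.read(), n))
```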
Should we replace the UNIX tools with more modern, but equivalent tools? Is a command-line like PowerShell with built-in types the way to go?

One thing is clear to me: a solution to these problems needs to be found. And preferably one that fixes them once and for all.

Fastest way to get your NTFS-drive writeable on OS X

01 December 2014

Create the file /etc/fstab (it doesn’t exist by default) and add the line

LABEL=DRIVENAME none ntfs rw,auto,nobrowse

where DRIVENAME is the name of your NTFS drive. Unmount and mount the drive by issuing the following commands:

diskutil eject DRIVENAME
diskutil mount DRIVENAME

Note: Do not use the command umount or diskutil unmount instead of eject – they force the unmount and stop any file transfers currently running to and from the disk (which means you are likely to lose data).

You can now browse the drive by issuing the command open /Volumes/DRIVENAME.

Unfortunately the nobrowse flag in fstab means that the drive will not appear on the Desktop nor in the Finder sidebar. Sadly, if you omit it – the drive will appear, but will be read-only.

Tested on OS X 10.7.5.


League of Legends on Linux

23 February 2014

From time to time I play League of Legends with a couple of friends. After having switched to Linux from OS X, I needed to get it working on Linux as there’s no native client for Linux.

What worked for me was to install it via PlayOnLinux. This worked perfectly until I needed to start an actual game. (Updater, Login, Lobby etc. worked fine)

After the Character Select countdown, the splash screen showed and then just got stuck. Nothing happened. It never showed the loading screen or the actual game.

The fix for me was to change the hosts file, /etc/hosts. In this file there should be the following lines (it is the second one that is important):

#<ip-address>	<>	<hostname>	localhost.localdomain	localhost

Look up your computer’s hostname

> hostname

And then change the line in /etc/hosts to this:

#<ip-address>	<>	<hostname>	localhost.localdomain	macbook-malthe

(replacing macbook-malthe with your own hostname from before)

Hopefully it will be as pain-free for you as it was for me ;)

Note: When starting the game, the load screen (in fullscreen with the loading bars and your ping showing) will be stuck for about 1 minute, but don’t worry: the game will start. It did for me at least.


Thunderbird opening links in Chromium

18 February 2014

Thunderbird on my Arch Linux installation was opening links in Chromium, even though everything else on the system pointed towards Firefox being the default browser.

Some people on the internet said to go into Preferences > Preferences > Advanced > Config Editor (yes, that’s preferences twice) and add the keys


and set their values to /usr/bin/firefox. This, however, did not work.

What did work was to set the keys

  • network.protocol-handler.warn-external.http
  • network.protocol-handler.warn-external.https

to true. This makes Thunderbird ask you which program to open every time you click a link. Here you can choose Firefox, and also check “remember my choice…”.


Macbook Pro backlight on Arch Linux

19 January 2014

I recently installed Arch Linux on my Macbook Pro. In order to control the screen brightness I installed nvidia-bl from the AUR (I use yaourt so: yaourt -S nvidia-bl). This sets up a folder in /sys/class/backlight/nvidia_backlight and in this folder you can get and set the screen brightness via the file brightness:

> cat brightness
> sudo su
# echo 500 > brightness

But after a system update (pacman -Syu) the /sys/class/backlight/nvidia_backlight folder was gone and I could no longer change the brightness.

By running dmesg I could see that the nvidia_bl kernel module was not being loaded during boot:

[    7.183486] nvidia_bl: disagrees about version of symbol module_layout

Trying to load it manually also failed:

> sudo modprobe -v nvidia-bl
insmod /lib/modules/3.12.7-2-ARCH/extramodules/nvidia_bl.ko
modprobe: ERROR: could not insert 'nvidia_bl': Exec format error

These errors mean that the kernel module is not compatible with the kernel. This can be solved by recompiling the kernel module. Since I use yaourt I can simply reinstall nvidia-bl: yaourt -S nvidia-bl.

After that you can manually load the kernel module again:

> sudo modprobe -v nvidia-bl
insmod /lib/modules/3.12.7-2-ARCH/extramodules/nvidia_bl.ko

And you can now control the brightness again :)

Note: I use an Aluminium Unibody MacBook Pro 5,5 from 2008

Editing videos with Blender

09 November 2013

Blender is an Open Source 3D modelling and animation tool.
– Today I found out it also does video editing!

This post is based on Blender version 2.69

It is well known that the Open Source community has yet to produce decent video editing software. But being stubborn, and this being a small project, I didn’t want to buy one of those large, expensive video editing suites.

Blender to the rescue

In order to use Blender as a video editor the first thing you need to do, is to switch the window layout to “Video Editing”. (There’s a dropdown in the top menu bar)

Screenshot of Blender’s window layout menu

In this view you should be able to see the timeline at the bottom of the window. Here you can drag video files and images into your movie project’s timeline. The timeline is called the Video Sequence editor in “Blender speak”. The imported clips and images will be placed at the time slider (the green vertical line on the timeline)

Left clicking on the timeline sets the time slider and updates the scene view accordingly. The scene view is a preview of a specific frame in your movie project. This also means that you cannot move clips in the timeline by clicking and dragging them.

Each clip is called a Strip.

You select strips in the timeline by Right clicking. Moving strips is done by pressing G [Grab]. Having selected one or multiple strips by right clicking, and then pressing G, the selected strips will follow your mouse as if you were dragging. You should not hold down any mouse or keyboard button while doing this.
Left-click to end the move. Right-click to cancel the move.

Pressing G while moving strips will change the “move” mode. The default state is that if you move a strip on top of another the other strip will be cut (shortened). The other mode will not shorten other strips, but simply move them to make space for the moved strip.

To shorten or lengthen a strip you can select one of the “handles” (the small triangles) at each end of the strip and move it the same way you select and move the whole strip.

When you have selected a strip you should see a panel to the right of the timeline called “Edit Strip”. In this panel you can edit the properties of the strip. For example you can choose whether it should be played forwards or backwards, or that only every second frame should be shown (this is called Strobe).

When you add a strip to the timeline it is placed in a specific layer. These layers are called Channels by Blender and are shown in the timeline as 0, 1, 2… on the left-hand side. Strips in channel 1 will be on top of strips in channel 0, strips in channel 2 will be on top of strips in channel 1, and so on. The strips are drawn on top of each other according to the strip’s blend property. I recommend the Over Drop setting, which simply draws the strip on top of the other without any blending.

You can watch your current movie project by pressing Alt-A. I recommend choosing the Sync Mode called AV Sync. Otherwise you risk the video and audio going out of sync while you watch your project, and the time slider might not show the correct position. In principle the Frame dropping setting for the Sync Mode should be better, but it didn’t work for me.
To find the Sync Mode menu look for the text “No Sync” at the bottom of the Blender window.

Picture-in-picture (PIP)

To scale a strip (make it smaller or larger), you can add a transform by selecting the strip and then, in the menu below the timeline, choosing Add > Effect Strip… > Transform. You can then select the Transform strip and choose the settings for the effect.

The values of the effect can be changed by clicking the value and dragging left and right.

Saving the movie

As Blender is mainly a 3D modelling and 3D animation tool, saving your movie is called “rendering”. In the lower left corner of every window “tile” (the scene view and the timeline, for example), you can choose what kind of editor should be shown in that tile. Rendering can be done from an editor called Properties. The first pane of the Properties editor is a small camera icon: this is the rendering pane. Here you can choose the output format, frames per second and so on.

Rendering can be stopped by pressing ESC. The frames rendered so far will be saved to the file (i.e. they will not be lost).

Audio sync problem

When you import a movie strip it is split into a video and an audio part. When I did this, I got a video part with 13151 frames but the audio part only had 12625 frames. This was because my render was set to be 24 frames per second but the imported movie had 25 frames per second. Blender matches the frames of the imported video 1:1 to the frames in the render. This stretched the imported video to be longer than its original length and thus longer than the imported audio. I fixed this by setting the render to have 25 frames per second.
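The numbers are consistent with that explanation – matching a 25 fps clip frame-for-frame into a 24 fps render stretches it by a factor of 25/24:

```python
audio_frames = 12625  # audio part, as reported by Blender
video_frames = 13151  # video part, as reported by Blender

# 25 fps footage matched 1:1 into a 24 fps render is stretched by 25/24
assert round(audio_frames * 25 / 24) == video_frames
```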

Thoughts about Blender

Blender is an awesomely powerful tool – 3D modelling, 3D animation, compositing, video editing, and even a game engine are all included in Blender.

But Blender is not for the faint of heart. The interface (in my opinion) is not optimized for newcomers or intuitive use. Rather, it is optimized for effective use by experts. This means you have to accept things such as not being able to drag strips in the timeline by holding down the mouse button, and that saving your movie is done via the “Properties” editor.

Because of this, I see Blender as a tool similar to Vim – powerful, but with a tough learning curve.

Thank you Blender!

Last note

For small strips I found it hard to select (click) the left and right handles (the beginning and end of the strip). Luckily there’s a menu below the timeline called Select which has a Right Handle and a Left Handle item. By right-clicking on either of those you can assign a keyboard shortcut to that menu item.

Buying things online (scams)

23 June 2013

Long story short: is a great, legitimate seller of Bosto drawing tablets. I bought a Kingtee 19-mb from the site, it was delivered within days of my purchase, and I am very happy with the product. However, before I bought something off the site I was worried that it was a scam. This is a story of worry, research and a leap of faith.

So I was planning to buy a drawing tablet, like the ones from Wacom. However, after researching a little I found out that today (as opposed to a couple of years ago when I last checked) cheaper alternatives are available – possibly because some of Wacom’s patents have expired. These alternatives are made by two companies called Yiynova and Bosto. After reading some reviews and viewing a lot of videos demonstrating the tablets I decided to go for the Bosto Kingtee 19-mb.

But I needed it shipped to Denmark at a reasonable price – so where was I going to get that? I found which looks quite nice. But the site isn’t really mentioned anywhere on the net – so how could I know that it was run by the manufacturer or a legitimate reseller? That it wasn’t a scam?

The site is very plain, and doesn’t really give a lot of information, about who runs the shop and where it’s located. The site also has a list of recent customers with date of purchase and model number. (At the time the table didn’t show the DHL tracking as it does now)
If you’re already suspicious of the site this list comes off as a baiting device, as you don’t normally see such lists on larger webshops. The list is basically yelling: “Hey look, we’re legit – just go ahead and buy something already!”, which is coincidentally also what you would want to do if you were luring people into a scam.

So I decided to probe a little further. The site presents the mail to which I wrote:

Hi Bosto

Do you ship to Denmark?

– Malthe

Perhaps I would get a badly written response? (as per the usual spam email, which would indicate a scam)

I got the reply:

Malthe, we ship to Denmark. It’s free. Have a good buy

Sincerely yours, Andrey Belkov, Bosto

Now, Bosto is a Chinese company, so it was weird that someone with a Russian-sounding name responded. The response itself is well written and short – basically what I would expect from a modern web company, not from a Chinese tablet manufacturer. This well-written, short response could just be luring me further into a possible scam…

Well, in the end I decided to go for it anyway, and I was able to track my package via DHL on its 3-day journey from Hong Kong to Copenhagen (Denmark). I received the package 6 days after my PayPal payment.

Bosto PayPal vendor info

The PayPal vendor information was kinda funny (you can see it on the left). Obviously Chinese characters aren’t allowed in email addresses, and Chinese people often choose an English name to ease communication with English-speaking countries – hence the “roger”. At least that’s my guess as to why the email is what it is. “Forhandleroplysninger” means “Seller info” in Danish. (Displayed as “Merchant” in the English PayPal interface)

Now, how do you know this blog post isn’t part of the scam? ^^

Possible solutions

Of course there should be and are sites and services that keep track of whether vendors are real and can be trusted. None of the ones mentioned here had a review of at the time of writing.


  • ResellerRatings: even though they earn money by making vendors pay to contact users on the site (to respond to any critique), the site is still riddled with ads.
  • TrustPilot: has the same business model as ResellerRatings but without ads. Being a newer, originally Danish site it doesn’t have the volume ResellerRatings has (yet), but it is better. They do have some problems with fake reviews, though I guess any site of this sort will. It’s a very important caveat, and they really should work hard to come up with a good solution.

Browser plugins

Having such a service available as a browser plugin is a very good idea. It saves you the time of looking up the vendor – it’s available right in the browser. Furthermore, the instant availability of the plugin makes it easy to review sites and could mean that more sites get rated than on sites such as TrustPilot.

  • WOT tried it, and it worked very nicely.
  • websherpa another browser plugin, didn’t test it.

The Unix shebang (#!)

20 June 2013

The Unix shebang is the first line in a typical executable Unix script. For example, an executable Python script called ‘hello’ can be run by writing ./hello on the command line, instead of writing out the full command python hello, given that the first line in the script is

#!/usr/bin/python

This line says that the script should be run using the Python interpreter located at /usr/bin/python. E.g.

> cat hello 
print 'Hello World!'
> ./hello
Hello World!

One would think that the shell, e.g. bash or zsh, interprets this line and executes the proper script interpreter. But it’s not the shell – it’s actually the kernel that interprets this line.
(For further reference check out

I found this out when trying to use Python virtualenv, which creates .venv/bin/pip. The pip file is a Python script with a shebang at the beginning – in this case something along the lines of

#!"/Users/Malthe/python project/.venv/bin/python"

Here virtualenv has quoted the path because it has a space in it. But this doesn’t work on OS X 10.7 (I haven’t tested on other systems).

I haven’t found a solution yet (working on it). The XNU (Mac OS X) kernel code parsing the shebang can be found here beginning at line 432.

So I wrote a file test to check out the behaviour:

#!/bin/exec-space arg1 arg2 arg3 arg4 arg5 arg6 arg7 arg8 arg9 arg10 arg11 arg12

With exec-space being:

echo $0
echo $1
echo $2
echo $3
echo $4

Running ./test in fish gives the following error

> ./test
Failed to execute process './test'. Reason:
exec: Exec format error
The file './test' is marked as an executable but could not be run by the operating system.

bash does nothing

> ./test

zsh does something

> ./test
arg1 arg2 arg3 arg4 arg5 arg6 arg7 arg8 arg9 a

The result from zsh shows that all arguments are concatenated into $1 but truncated after some number of characters. The moral of the story is not to use virtualenv in paths containing spaces, nor in very long paths. I guess computer software still lives in 1300 BC…

A guy called Steve Smith found out the same thing a couple of years ago:

Battle of the PaaSs

15 May 2013

So what is a PaaS?

PaaS stands for Platform as a Service and means that you pay for a service that provides a platform on which your code can run. That could be a Python platform for running webcrawlers, a node.js stack for running web applications etc.

In this context PaaS is an alternative to the usual web hosting setups:

  1. Regular hosting You have a user with an FTP directory where you upload your PHP files or the like. Very easy but not very customizable or scalable.
  2. Dedicated/virtual server You have a single server over which you have full control (OS, packages). Fully customizable, somewhat scalable (larger server)
  3. Roll your own cloud solution Set up instances and HTTP routing on Amazon Web Services. 100% scalable, very customizable

For the project I’m currently working on I need scalability (multiple instances) and customizability (custom Python packages), which rules out regular hosting (#1) and using a dedicated/virtual server (#2).

Amazon Web Services (#3) of course makes server management pretty easy, but I still don’t want to worry about choosing a Linux distribution, updating packages, and setting up HTTP routing. Therefore using a PaaS is the way to go.

Using a PaaS basically means that you can simply push your code to the service and it will handle deployment, setting up multiple instances and so forth (even installing needed Python packages automatically).

The contenders


The grandfather of PaaS: the first widely known and widely used service of this type. It’s high quality, very popular, and is still the trend-setting service in this field. All other services mentioned here are more or less watered-down copies of Heroku. Hopefully we will soon see new ideas coming from the competition but right now Heroku is still the driving force of innovation in the field.

My experience: Very easy to set up and great documentation. It took some time deploying the first time (~4 mins), but the service itself is very responsive both through the command line and the web interface.

Bottom line: Heroku is great (the best), but too expensive.


A relatively new, but also quite popular PaaS.

My experience: Most of the services mentioned here use Amazon EC2 as a backend, including dotCloud. When you need a server on EC2 you “provision” it: the selected OS is transferred to its local storage, and keys and DNS are set up. This takes about 8–10 minutes. The other services here keep a reserve of instances already booted up and ready, so that when you create a new app the service only needs to set up the code and HTTP routing, which takes less than five minutes.

BUT dotCloud provisions Amazon instances on demand. And it takes a looooooong time: more than 30 minutes.

30 minute wait time is just something that I don’t want to – and don’t have to – deal with.

This wait problem would also affect zero-downtime deployment where you typically have two different deployments of your site running at the same time (one old, one new) and then change the routing from the old one to the new. This provisioning lag would make zero-downtime deployment very time consuming.

Bottom line: dotCloud provisions instances on demand, which creates long wait times that other services don’t have. No, thank you dotCloud.

Smaller service focused on Python/Django which seems to fit just my needs.

My experience: For some reason it was very difficult to set up. Whereas the other services only needed a few changes to the Django settings, Gondor needed a lot of tweaking and I never actually succeeded in getting it running.

Bottom line: Never worked.

AppFog vs. cloudControl

That leaves two services: AppFog and cloudControl.

Setting up with cloudControl was extremely easy, thanks to concise and well-written documentation. The AppFog documentation on Django was a bit more sparse, but setup was still pain-free.

The cloudControl command-line tool is based on Heroku’s and is pretty good (Heroku’s own is still better), whereas AppFog’s is based on cloudFoundry’s command-line tool and has some minor problems.

Category        AppFog               cloudControl
Setup           Okay                 Easy
Command-line    Half-baked1          Better
Web interface   Good                 Half-baked
Logs2           Slow and difficult   Okay
Documentation   Worse                Better
Database3       Built-in             N/A – outsourced
  1. af logs and af update are slower than on most other services, but not irritatingly so. Also, you cannot run Python/Django commands on the server, e.g. python syncdb or python migrate, which is a pain (you have to do these things manually).

  2. AppFog has no online interface for viewing logs, and the command-line logs are slow. cloudControl has an online log-viewing interface and their command-line logs are much faster to access.

  3. You cannot set up triggers on the AppFog MySQL database. cloudControl doesn’t supply any database themselves but outsources it, for example to ElephantSQL, which I found pretty great. Of course you can outsource with AppFog as well, but if they can’t supply a great database solution they shouldn’t supply one at all – they should default to outsourcing like cloudControl does.

Bottom line

cloudControl is the better service, but their pricing is about double that of AppFog. The flaws of AppFog have so far not been deal-breakers for me.

Therefore my choice of PaaS is – for now – AppFog.


  • PaaS on Wikipedia
  • Zero-downtime deployment means updating your website with zero downtime for the end users (i.e. without “Website currently undergoing maintenance”). This is usually achieved by setting the updated version up on separate instances while having the old version still running. The HTTP routing is then shifted from pointing to the old version to the new, and the instances with the old version can then be shut down.
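The routing switch described in that footnote can be sketched as a toy model (my own illustration in Python, not any provider’s actual router): both versions run side by side, and deploying is just an atomic repointing of the router.

```python
# Toy model of zero-downtime ("blue-green") deployment:
# both app versions run at once; deploying is a pointer swap.

def old_version(request):
    return f"v1 handled {request}"

def new_version(request):
    return f"v2 handled {request}"

class Router:
    def __init__(self, backend):
        self.backend = backend  # the version currently receiving traffic

    def handle(self, request):
        return self.backend(request)

    def switch(self, backend):
        # The flip is instantaneous: no request ever hits a
        # "website currently undergoing maintenance" page.
        self.backend = backend

router = Router(old_version)
print(router.handle("GET /"))  # old version serves traffic
router.switch(new_version)     # deploy
print(router.handle("GET /"))  # new version serves traffic
```

The provisioning lag matters here because the new version’s instances must already be up and serving before the switch; if spinning them up takes 30 minutes, every deploy does too.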

Adventures in DNS

09 April 2013

When you start a site on a specific domain you often wanna have an email account associated with that domain, so that users of your site can send mail to an address that transparently belongs to the site. E.g. rather than

An easy and, in my opinion, very nice solution is setting up Google Apps for the domain. This way you get the familiar Gmail interface for your mail, and you don’t have to set up and manage an email server like Postfix or exim4 (which can be a HUGE hassle).

To get this email service you need to set up Google Apps MX records, which is the real reason I’m writing this post: I’m getting tired of setting up these records. It’s a tedious and too click-hungry process on my current DNS service GratisDNS.

To receive mail sent to your domain through Google Apps, what you gotta do is have the following MX records set up in the DNS for your domain:

3600 IN MX 1 ASPMX.L.GOOGLE.COM.
3600 IN MX 5 ALT1.ASPMX.L.GOOGLE.COM.
3600 IN MX 5 ALT2.ASPMX.L.GOOGLE.COM.
3600 IN MX 10 ALT3.ASPMX.L.GOOGLE.COM.
3600 IN MX 10 ALT4.ASPMX.L.GOOGLE.COM.

At GratisDNS you submit a form (no-AJAX) for each MX entry and if you also want to have aliases (.com, .net, .org, .dk) you get to Copy/Paste or type these records in quite a few times. So I wanted to find some alternative DNS service where setting up DNS records would be faster.

So I looked for some Danish alternatives and found:

The interface on these sites was weird and I didn’t really understand what was going on so I continued my search internationally.

Note: actually has a pretty good interface, and makes it a bit faster to set up the DNS records, but still not really as easy as I wanted it to be.

Route 53 on AWS (Amazon Web Services) definitely has the features I want but the interface could be better, which is why I ended up choosing which has a very slick interface.

Enter the Zone File

All along what I was looking for was the zone file. The zone file is the file the DNS server reads in order to know what to do when a DNS request for your domain is received. It is simply a text file wherein you specify your DNS records. It has a somewhat funny syntax, and different DNS services allow different parts of the syntax in their zone files.

This is the zone file I needed:

@ 3600 IN MX 1 ASPMX.L.GOOGLE.COM.
@ 3600 IN MX 5 ALT1.ASPMX.L.GOOGLE.COM.
@ 3600 IN MX 5 ALT2.ASPMX.L.GOOGLE.COM.
@ 3600 IN MX 10 ALT3.ASPMX.L.GOOGLE.COM.
@ 3600 IN MX 10 ALT4.ASPMX.L.GOOGLE.COM.

@ is zone file syntax for “current domain”, and in a DNS service this will most likely already be specified when the zone file is loaded. (This is true for the service I ended up choosing.)
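To see how a sending mail server reads these records, here is a small Python sketch (my own illustration, using Google’s published MX hosts as sample data) that parses MX lines from a zone file and orders them by priority, lowest first:

```python
# Parse zone-file MX lines and sort them the way a sending mail
# server would: the lowest priority value is tried first.

ZONE = """\
@ 3600 IN MX 10 ALT3.ASPMX.L.GOOGLE.COM.
@ 3600 IN MX 1 ASPMX.L.GOOGLE.COM.
@ 3600 IN MX 5 ALT1.ASPMX.L.GOOGLE.COM.
"""

def mx_records(zone_text):
    records = []
    for line in zone_text.splitlines():
        fields = line.split()
        # Expected shape: name  ttl  class  type  priority  mailserver
        if len(fields) == 6 and fields[3] == "MX":
            records.append((int(fields[4]), fields[5]))
    return sorted(records)

for priority, server in mx_records(ZONE):
    print(priority, server)
```

The record with priority 1 is tried first; the priority-10 hosts are only contacted if the lower-priority ones are unreachable.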

What you might wanna do is read up on:

, which I found to be nice resources.

Expert mode

On, you can set Edit mode to “Expert”, which lets you edit the zone file directly. This is actually the same as what you can do on and, though I didn’t know about zone files at that time. Furthermore, on you can make a zone file template that you can use on multiple sites. This is ideal for the Google Apps MX record setup because it is the same for all sites.

In a template you have to use @ as a placeholder for the domain name (otherwise it wouldn’t be a template that could be used for other domains), and thus the zone file template for Google Apps MX records is exactly the one listed earlier. actually allows exactly this kind of templating, but the interface is very confusing. allows for editing the zone file directly but not templating.

Gotchas for zone file templates:

  • You cannot refer to a specific domain name in the zone file – use @ to refer to the domain
  • You cannot specify the SOA directive – this is set by the service
  • You cannot use parentheses (for multi-line statements)
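The template idea is easy to emulate yourself. Here is a minimal Python sketch (my own illustration, not the DNS service’s implementation) that expands an @-based template into concrete records for a given domain:

```python
# Expand an @-based zone file template for a concrete domain.
# "@" stands for the current origin, so the same template
# works for every domain you point it at.

TEMPLATE = """\
@ 3600 IN MX 1 ASPMX.L.GOOGLE.COM.
@ 3600 IN MX 5 ALT1.ASPMX.L.GOOGLE.COM.
"""

def expand(template, domain):
    lines = []
    for line in template.splitlines():
        if line.startswith("@ "):
            # Substitute the fully qualified domain for the placeholder.
            line = domain.rstrip(".") + ". " + line[2:]
        lines.append(line)
    return "\n".join(lines)

print(expand(TEMPLATE, "example.com"))
```

Running it for example.com emits the same MX records with the placeholder replaced by the fully qualified domain name, which is essentially what the service does when it loads a template for a site.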