fanf: (dotat)
I am in the process of moving this blog to - follow me there!
fanf: (dotat)

Ansible is the configuration management tool we use at work. It has built-in support for encrypted secrets, called ansible-vault, so you can safely store secrets in version control.

I thought I should review the ansible-vault code.


It's a bit shoddy but probably OK, provided you have a really strong vault password.


The code starts off with a bad sign:

    from cryptography.hazmat.primitives.hashes import SHA256 as c_SHA256
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
    from cryptography.hazmat.backends import default_backend

I like the way the Python cryptography library calls this stuff HAZMAT but I don't like the fact that Ansible is getting its hands dirty with HAZMAT. It's likely to lead to embarrassing cockups, and in fact Ansible had an embarrassing cockup - there are two vault ciphers, "AES" (the cockup, now disabled for encryption, though you can still decrypt it for compatibility) and "AES256" (the fixed replacement).

As a consequence of basing ansible-vault on relatively low-level primitives, it has its own Python implementations of constant-time comparison and PKCS#7 padding. Ugh.
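At least the constant-time comparison has a standard-library replacement these days - hmac.compare_digest(), available since Python 3.3 - so there's no need to hand-roll that part. A minimal illustration:

```python
import hmac

# hmac.compare_digest() takes time dependent only on the length of
# the inputs, not their contents, which avoids the timing leak that
# a naive byte-by-byte comparison of MACs would have.
mac1 = bytes.fromhex("aabbccdd")
mac2 = bytes.fromhex("aabbccdd")
ok = hmac.compare_digest(mac1, mac2)                       # matching digests
bad = hmac.compare_digest(mac1, bytes.fromhex("aabbccde"))  # mismatch
```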


Proper random numbers:

    b_salt = os.urandom(32)


Iteration count:

    b_derivedkey = PBKDF2(b_password, b_salt,
                          dkLen=(2 * keylength) + ivlength,
                          count=10000, prf=pbkdf2_prf)

PBKDF2 HMAC SHA256 takes about 24ms for 10k iterations on my machine, which is not bad but also not great - e.g. 1Password uses 100k iterations of the same algorithm, and gpg tunes its non-PBKDF2 password hash to take (by default) at least 100ms.

The deeper problem here is that Ansible has hard-coded the PBKDF2 iteration count, so it can't be changed without breaking compatibility. In gpg an encrypted blob includes the variable iteration count as a parameter.
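By way of contrast, here's a sketch of how a variable iteration count can ride along with the salt, so it can be raised later without breaking old blobs. The header format here is my own invention for illustration - it is not gpg's encoding and not Ansible's:

```python
import hashlib, os
from binascii import hexlify, unhexlify

def derive(password, salt, count):
    # PBKDF2-HMAC-SHA256, as in ansible-vault, but with the
    # iteration count as a parameter rather than hard-coded
    return hashlib.pbkdf2_hmac('sha256', password, salt, count, dklen=32)

def make_header(salt, count):
    # the header carries everything needed to re-derive the key,
    # so the count can be increased for newly-encrypted blobs
    return b';'.join([hexlify(salt), str(count).encode()])

def read_header(header):
    b_salt, b_count = header.split(b';')
    return unhexlify(b_salt), int(b_count)

salt = os.urandom(32)
header = make_header(salt, 100000)
salt2, count2 = read_header(header)
key = derive(b'hunter2', salt2, count2)
```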


ASCII armoring:

    b_vaulttext = b'\n'.join([hexlify(b_salt),
                              to_bytes(hmac.hexdigest()),
                              hexlify(b_ciphertext)])
    b_vaulttext = hexlify(b_vaulttext)

The ASCII-armoring of the ciphertext is as dumb as a brick, with hex-encoding inside hex-encoding.
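To see the cost: each hexlify() pass doubles the size, so the nested encoding ends up storing every byte of ciphertext as four bytes on disk.

```python
from binascii import hexlify

ciphertext = b'\x00' * 100   # stand-in for the AES output
inner = hexlify(ciphertext)  # 2x expansion
outer = hexlify(inner)       # hex of hex: 4x total
```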

File handling

I also (more briefly) looked through ansible-vault's higher-level code for managing vault files.

It is based on handing decrypted YAML files to $EDITOR, so it's a bit awkward if you don't want to wrap secrets in YAML or if you don't want to manipulate them in your editor.

It uses mkstemp(), so the decrypted file can be placed on a ram disk, though you might have to set TMPDIR to make sure.

It shred(1)s the file after finishing with it.

fanf: (dotat)

Because I just needed to remind myself of this, here are a couple of links with suggestions for less accidentally-racist computing terminology:

Firstly, Kerri Miller suggests blocklist / safelist for naming deny / allow data sources.

Less briefly, Bryan Liles got lots of suggestions for better distributed systems terminology. If you use a wider vocabulary to describe your systems, you can more quickly give your reader a more precise idea of how your systems work, and the relative roles of their parts.

fanf: (dotat)

Today I rolled out a significant improvement to the automatic recovery system on Cambridge University's recursive DNS servers. This change was because of three bugs.

BIND RPZ catatonia

The first bug is that sometimes BIND will lock up for a few seconds doing RPZ maintenance work. This can happen with very large and frequently updated response policy zones such as the Spamhaus Domain Block List.

When this happens on my servers, keepalived starts a failover process after a couple of seconds - it is deliberately configured to respond quickly. However, BIND soon recovers, so a few seconds later keepalived fails back.

BIND lost listening socket

This brief keepalived flap has an unfortunate effect on BIND. It sees the service addresses disappear, so it closes its listening sockets, then the service addresses reappear, so it tries to reopen its listening sockets.

Now, because the server is fairly busy, it doesn't have time to clean up all the state from the old listening socket before BIND tries to open the new one, so BIND gets an "address already in use" error.

Sadly, BIND gives up at this point - it does not keep trying periodically to reopen the socket, as you might hope.

Holy health check script, Batman!

At this point BIND is still listening on most of the interface addresses, except for a TCP socket on the public service IP address. Ideally this should have been spotted by my health check script, which should have told keepalived to fail over again.

But there's a gaping hole in the health checker's coverage: it only tests the loopback interfaces!

In a fix

Ideally all three of these bugs should be fixed. I'm not expert enough to fix the BIND bugs myself, since they are in some of the gnarliest bits of the code, so I'll leave them to the good folks at Even if they are fixed, I still need to fix my health check script so that it actually checks the user-facing service addresses, and there's no-one else I can leave that to.


I wrote about my setup for recursive DNS server failover with keepalived when I set it up a couple of years ago. My recent work leaves the keepalived configuration basically unchanged, and concentrates on the health check script.

For the purpose of this article, the key feature of my keepalived configuration is that it runs the health checker script many times per second, in order to fake up dynamically reconfigurable server priorities. The old script did DNS queries inline, which was OK when it was only checking loopback addresses, but the new script needs to make typically 16 queries which is getting a bit much.

Daemonic decoupling

The new health checker is split in two.

The script called by keepalived now just examines the contents of a status file, so it runs predictably fast regardless of the speed of DNS responses.

There is a separate daemon which performs the actual health checks, and writes the results to the status file.

The speed thing is nice, but what is really important is that the daemon is naturally stateful in a way the old health checker could not be. When I started I knew statefulness was necessary because I clearly needed some kind of hysteresis or flap damping or hold-down or something.

This is much more complex

There is this theory of the Möbius: a twist in the fabric of space where time becomes a loop

  • BIND observes the list of network interfaces, and opens and closes listening sockets as addresses come and go.

  • The health check daemon verifies that BIND is responding properly on all the network interface addresses.

  • keepalived polls the health checker and brings interfaces up and down depending on the results.

Without care it is inevitable that unexpected interactions between these components will destroy the Enterprise!

Winning the race

The health checker gets into races with the other daemons when interfaces are deleted or added.

The deletion case is simpler. The health checker gets the list of addresses, then checks them all in turn. If keepalived deletes an address during this process then the checker can detect a failure - but actually, it's OK if we don't get a response from a missing address! Fortunately there is a distinctive error message in this case which the health checker can treat as an alternative successful response.

New interfaces are more tricky, because the health checker needs to give BIND a little time to open its sockets. It would be really bad if the server appeared to be healthy, keepalived brought up the addresses, and then the health checker tested them before BIND was ready, causing an immediate failure - a huge flap.

Back off

The main technique that the new health checker uses to suppress flapping is exponential backoff.

Normally, when everything is working, the health checker queries every network interface address, writes an OK to the status file, then sleeps for 1 second before looping.

When a query fails, it immediately writes BAD to the status file, and sleeps for a while before looping. The sleep time increases exponentially as more failures occur, so repeated failures cause longer and longer intervals before the server tries to recover.

Exponential backoff handles my original problem somewhat indirectly: if there's a flap that causes BIND to lose a listening socket, there will then be a (hopefully short) series of slower and slower flaps until eventually a flap is slow enough that BIND is able to re-open the socket and the server recovers. I will probably have to tune the backoff parameters to minimize the disruption in this kind of event.
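In outline the backoff schedule is just this - the base, factor, and cap here are placeholder values, not my production tuning:

```python
BASE, FACTOR, CAP = 1.0, 2.0, 64.0   # placeholder tuning parameters

def sleep_after(failures):
    """Seconds to sleep after a run of consecutive failed checks."""
    if failures == 0:
        return BASE                   # healthy: poll every second
    # each further failure doubles the sleep, up to the cap
    return min(CAP, BASE * FACTOR ** failures)
```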

Hold down

Another way to suppress flapping is to avoid false recoveries.

When all the test queries succeed, the new health checker decreases the failure sleep time, rather than zeroing it, so if more failures occur the exponential backoff can continue. It still reports the success immediately to keepalived, because I want true recoveries to be fast, for instance if the server accidentally crashes and is restarted.

The hold-down mechanism is linked to the way the health checker keeps track of network interface addresses.

After an interface goes away the checker does not decrease the sleep time for several seconds even if the queries are now working OK. This hold-down is supposed to cover a flap where the interface immediately returns, in which case we want exponential backoff to continue.

Similarly, to avoid those tricky races, we also record the time when each interface is brought up, so we can ignore failures that occur in the first few seconds.
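Taken together, the recovery rules amount to something like this sketch (again with made-up parameters):

```python
GRACE = 5.0   # seconds to ignore failures after an address appears

def on_success(sleep_time, base=1.0):
    # halve the failure sleep rather than zeroing it, so that a
    # follow-on failure continues the exponential backoff
    return max(base, sleep_time / 2)

def ignore_failure(iface_up_at, now):
    # a failure soon after an interface comes up is probably just
    # BIND still opening its sockets, so don't count it
    return (now - iface_up_at) < GRACE
```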


It took quite a lot of headscratching and trial and error, but in the end I think I came up with something reasonably simple. Rather than targeting it specifically at failures I have observed in production, I have tried to use general purpose robustness techniques, and I hope this means it will behave OK if some new weird problem crops up.

Actually, I hope NO new weird problems crop up!

PS. the ST:TNG quote above is because I have recently been listening to my old Orbital albums again -

fanf: (dotat)

Following on from my recent item about the leap seconds list I have come up with a better binary encoding which is half the size of my previous attempt. (Compressing it with deflate no longer helps.)

Here's the new binary version of the leap second list, 15 bytes displayed as hexadecimal in network byte order. I have not updated it to reflect the latest Bulletin C which was promulgated this week, because the official leap second lists have not yet been updated.

    001111111211343 12112229D5652F4

This is a string of 4-bit nybbles, upper nybble before lower nybble of each byte.

If the value of a nybble V is less than 0x8, it is treated as a pair of nybbles 0x9V. This abbreviates the common case.

Otherwise, we consider this nybble Q and the following nybble V as a pair 0xQV.

Q contains four flags,

    |  W  |  M  |  N  |  P  |

W is for width:

  • W == 1 indicates a QV pair in the string.
  • W == 0 indicates this is a bare V nybble.

M is the month multiplier:

  • M == 1 indicates the nybble V is multiplied by 1
  • M == 0 indicates the nybble V is multiplied by 6

NP together are NTP-compatible leap indicator bits:

  • NP == 00 indicates no leap second
  • NP == 01 == 1 indicates a positive leap second
  • NP == 10 == 2 indicates a negative leap second
  • NP == 11 == 3 indicates an unknown leap second

The latter is equivalent to the ? terminating the text version.

The event described by NP occurs a number of months after the previous event, given by

    (M ? 1 : 6) * (V + 1).

That is, a 6 month gap can be encoded as M=1, V=5 or as M=0, V=0.

The "no leap second" option comes into play when the gap between leap seconds is too large to fit in 4 bits. In this situation you encode a number of "no leap second" gaps until the remaining gap fits.

The recommended way to break up long gaps is as follows. Gaps up to 16 months can be encoded in one QV pair. Gaps that are a multiple of 6 months long should be encoded as a number of 16*6 month gaps, followed by the remainder. Other gaps should be rounded down to a whole number of years and encoded as a X*6 month gap, which is followed by a gap for the remaining few months.

To align the list to a whole number of bytes, add a redundant 9 nybble to turn a bare V nybble into a QV pair.

In the current leap second list, every gap is encoded as a single V nybble, except for the 84 month gap which is encoded as QV = 0x9D, and the last 5 months encoded as QV = 0xF4.
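The decoding rules fit in a dozen lines of Python. Feeding it the hex string above reproduces the gap sequence, including the 84-month 0x9D pair and the final 0xF4:

```python
def decode_leaps(hexstr):
    """Decode the nybble-packed leap second list into a list of
    (months since previous event, NTP leap indicator) pairs."""
    nybbles = [int(c, 16) for c in hexstr]
    events, i = [], 0
    while i < len(nybbles):
        q = nybbles[i]
        if q < 0x8:                     # bare V nybble: implied Q = 0x9
            q, v, i = 0x9, q, i + 1
        else:                           # full QV pair
            v, i = nybbles[i + 1], i + 2
        mult = 1 if q & 0x4 else 6      # M flag: months x1 or x6
        events.append((mult * (v + 1), q & 0x3))   # NP indicator bits
    return events

leaps = decode_leaps("00111111121134312112229D5652F4")
```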

Once the leap second lists have been updated, the latest Bulletin C will change the final nybble from 4 to A.

The code now includes decoding as well as encoding functions, and a little fuzz tester to ensure they are consistent with each other.

fanf: (dotat)

Here's an amusing trick that I was just discussing with Mark Wooding on IRC.

Did you know you can define functions with optional and/or named arguments in C99? It's not even completely horrible!

The main limitation is there must be at least one non-optional argument, and you need to compile with -std=c99 -Wno-override-init.

(I originally wrote that this code needs C11, but Miod Vallat pointed out it works fine in C99)

The pattern works like this, using a function called repeat() as an example.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    // Define the argument list as a structure. The dummy argument
    // at the start allows you to call the function with either
    // positional or named arguments.

    struct repeat_args {
        void *dummy;
        char *str;
        int n;
        char *sep;
    };

    // Define a wrapper macro that sets the default values and
    // hides away the rubric.

    #define repeat(...) \
            repeat((struct repeat_args){ \
                    .n = 1, .sep = " ", \
                    .dummy = NULL, __VA_ARGS__ })

    // Finally, define the function,
    // but remember to suppress macro expansion!

    char *(repeat)(struct repeat_args a) {
        if(a.n < 1)
            return NULL;
        char *r = malloc((a.n - 1) * strlen(a.sep) +
                         a.n * strlen(a.str) + 1);
        if(r == NULL)
            return NULL;
        strcpy(r, a.str);
        while(a.n-- > 1) { // accidentally quadratic
            strcat(r, a.sep);
            strcat(r, a.str);
        }
        return r;
    }
    int main(void) {

        // Invoke it like this
        printf("%s\n", repeat(.str = "ho", .n = 3));

        // Or equivalently
        printf("%s\n", repeat("ho", 3, " "));

        return 0;
    }
fanf: (dotat)

Firstly, I have to say that it's totally awesome that I am writing this at all, and it's entirely due to the cool stuff done by people other than me. Yes! News about other people doing cool stuff with my half-baked ideas, how cool is that?


OK, DNS is approximately the ideal application for tries. It needs a data structure with key/value lookup and lexically ordered traversal.

When qp tries were new, I got some very positive feedback from Marek Vavrusa who I think was at CZ.NIC at the time. As well as being the Czech DNS registry, they also develop their own very competitive DNS server software. Clearly there was the potential for a win, but I didn't have time to push a side project to production quality, nor any expectation that anyone else would do the work.

But, in November I got email from Vladimír Čunát telling me he had reimplemented qp tries to fix the portability bugs and missing features (such as prefix searches) in my qp trie code, and added it to Knot DNS. Knot was previously using a HAT trie.

Vladimír said qp tries could reduce total server RSS by more than 50% in a mass hosting test case. The disadvantage is that they are slightly slower than HAT tries, e.g. for the .com zone they do about twice as many memory indirections per lookup due to checking a nybble per node rather than a byte per node.

On balance, qp tries were a pretty good improvement. Thanks, Vladimír, for making such effective use of my ideas!

(I've written some notes on more memory-efficient DNS name lookups in qp tries in case anyone wants to help close the speed gap...)


Shortly before Christmas I spotted that Frank Denis has a qp trie implementation in Rust!

Sadly I'm still only appreciating Rust from a distance, but when I find some time to try it out properly, this will be top of my list of things to hack around with!

I think qp tries are an interesting test case for Rust, because at the core of the data structure is a tightly packed two word union with type tags tucked into the low order bits of a pointer. It is dirty low-level C, but in principle it ought to work nicely as a Rust enum, provided Rust can be persuaded to make the same layout optimizations. In my head a qp trie is a parametric recursive algebraic data type, and I wish there were a programming language with which I could express that clearly.

So, thanks, Frank, for giving me an extra incentive to try out Rust! Also, Frank's Twitter feed is ace, you should totally follow him.

Time vs space

Today I had a conversation on Twitter with @tef who has some really interesting ideas about possible improvements to qp tries.

One of the weaknesses of qp-tries, at least in my proof-of-concept implementation, is the allocator is called for every insert or delete. C's allocator is relatively heavyweight (compared to languages with tightly-coupled GCs) so it's not great to call it so frequently.

(Bagwell's HAMT paper was a major inspiration for qp tries, and he goes into some detail describing his custom allocator. It makes me feel like I'm slacking!)

There's an important trade-off between small memory size and keeping some spare space to avoid realloc() calls. I have erred on the side of optimizing for simple allocator calls and small data structure size at the cost of greater allocator stress.

@tef suggested adding extra space to each node for use as a write buffer, in a similar way to "fractal tree" indexes. As well as avoiding calls to realloc(), a write buffer could avoid malloc() calls for inserting new nodes. I was totally nerd sniped by his cool ideas!

After some intensive thinking I worked out a sketch of how write buffers might amortize allocation in qp tries. I don't think it quite matches what tef had in mind, but it's definitely intriguing. It's very tempting to steal some time to turn the sketch into code, but I fear I need to focus more on things that are directly helpful to my colleagues...

Anyway, thanks, tef, for the inspiring conversation! It also, tangentially, led me to write this item for my blog.

fanf: (dotat)

The list of leap seconds is published in a number of places. Most authoritative is the IERS list published from the Paris Observatory and most useful is the version published by NIST. However neither of them are geared up for distributing leap seconds to (say) every NTP server.

For a couple of years, I have published the leap second list in the DNS as a set of fairly human-readable AAAA records. There's an example in my message to the LEAPSECS list though I have changed the format to avoid making any of them look like real IPv6 addresses.

Poul-Henning Kamp also publishes leap second information in the DNS though he only publishes an encoding of the most recent Bulletin C rather than the whole history.

I have recently added a set of PHK-style A records at listing every leap second, plus an "illegal" record representing the point after which the difference between TAI and UTC is unknown. I have also added a which should be the same as PHK's

There are also some HINFO records that briefly explain the other records at

One advantage of using the DNS is that the response can be signed and validated using DNSSEC. One disadvantage is that big lists of addresses rapidly get quite unwieldy. There can also be problems with DNS messages larger than 512 bytes. For the answers can get a bit chunky:



I've thought up a simple and brief way to list the leap seconds, which looks like this:


ABNF syntax:

    leaps  =  *leap end
    leap   =  gap delta
    end    =  gap "?"
    delta  =  "-" / "+"
    gap    =  1*DIGIT

Each leap second is represented as a decimal number, which counts the months since the previous leap second, followed by a "+" or a "-" to indicate a positive or negative leap second. The sequence starts at the beginning of 1972, when TAI-UTC was 10s.

So the first leap is "6+", meaning that 6 months after the start of 1972, a leap second increases TAI-UTC to 11s. The "84+" leap represents the gap between the leap seconds at the end of 1998 and end of 2005 (7 * 12 months).

The list is terminated by a "?" to indicate that the IERS have not announced what TAI-UTC will be after that time.
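With the ABNF in hand, parsing the text form is nearly a one-liner. The sample string here is abbreviated for illustration, not the full published list:

```python
import re

def parse_leaps(text):
    # each event is a decimal month count followed by "+", "-", or "?"
    events = [(int(gap), d) for gap, d in re.findall(r'(\d+)([-+?])', text)]
    assert events and events[-1][1] == '?', "list must end with ?"
    return events

sample = parse_leaps("6+6+12+84+5?")   # abbreviated example
```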


ITU recommendation TF.460-6 specifies in paragraph 2.1 that leap seconds can happen at the end of any month, though in practice the preference for the end of December or June has never been overridden. Negative leap seconds are also so far only a theoretical possibility.

So, counting months between leaps is the largest radix permitted, giving the smallest (shortest) numbers.

I decided to use decimal and mnemonic separators to keep it simple.


If you want something super compact, then bit-banging is the way to do it. Here's a binary version of the leap second list, 29 bytes displayed as hexadecimal in network byte order.

46464c4c 4c4c4c4c 4c524c4c 585e584c 524c4c52 52523c58 646a6452 85

Each byte has a 2-bit two's complement signed delta in the most significant bits, and a 6-bit unsigned month count in the least significant bits.

The meaning of the delta field is as follows (including the hex, binary, and decimal values):

  • 0x40 = 01 = +1 = positive leap second
  • 0x00 = 00 = 0 = no leap second
  • 0xC0 = 11 = -1 = negative leap second
  • 0x80 = 10 = -2 = end of list, like "?"

So, for example, a 6 month gap followed by a positive leap second is 0x40 + 6 == 0x46. A 12 month gap is 0x40 + 12 == 0x4c.

The "no leap second" option comes into play when the gap between leap seconds is too large to fit in 6 bits, i.e. more than 63 months, e.g. the 84 month gap between the ends of 1998 and 2005. In this situation you encode a number of "no leap second" gaps until the remaining gap fits in 6 bits.

I have encoded the 84 month gap as 0x3c58, i.e. 0x00 + 0x3c is 60 months followed by no leap second, then 0x40 + 0x18 is 24 months followed by a positive leap second.
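A decoder for this format is pleasingly short; running it over the 29 bytes above reconstructs the gap sequence:

```python
def decode_leaps(blob):
    """Decode one byte per event: 2-bit signed delta in the top
    bits, 6-bit unsigned month count in the bottom bits."""
    events, gap = [], 0
    for b in blob:
        delta, months = b >> 6, b & 0x3f
        gap += months
        if delta == 0:                  # "no leap second": keep counting
            continue
        events.append((gap, {1: '+', 3: '-', 2: '?'}[delta]))
        gap = 0
    return events

blob = bytes.fromhex("46464c4c4c4c4c4c4c524c4c585e584c"
                     "524c4c5252523c58646a645285")
leaps = decode_leaps(blob)
```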


There's still quite a lot of redundancy in the binary encoding. It can be reduced to 24 bytes using RFC 1951 DEFLATE compression.


There is now a TXT record at containing the human-readable terse form of the leap seconds list. This gives you a 131 byte plain DNS response, or a 338 byte DNSSEC signed response.

I've published the deflated binary version using a private-use TYPE65432 record which saves 58 bytes.

There is code to download and check the consistency of the leapseconds files from the IERS, NIST, and USNO, generate the DNS records, and update the DNS if necessary, at

fanf: (dotat)

I have a bit of a bee in my bonnet about using domain names consistently as part of an organization's branding and communications. I don't much like the proliferation of special-purpose or short-term vanity domains.

They are particularly vexing when I am doing something security-sensitive. For example, domain name transfers. I'd like to be sure that someone is not trying to race with my transfer and steal the domain name, say.

Let's have a look at a practical example: transferring a domain from Gandi to Mythic Beasts.

(I like Gandi, but getting the University to pay their domain fees is a massive chore. So I'm moving to Mythic Beasts, who are local, friendly, accommodating, and able to invoice us.)

Edited to add: The following is more ranty and critical than is entirely fair. I should make it clear that both Mythic Beasts and Gandi are right at the top of my list of companies that it is good to work with.

This just happens to be an example where I get to see both ends of the transfer. In most cases I am transferring to or from someone else, so I don't get to see the whole process, and the technicalities are trivial compared to the human co-ordination!

First communication

    Return-Path: <>
    Message-Id: <>
    From: "Transfer" <>
    Subject: Transfer Request for EXAMPLE.ORG

A classic! Four different domain names, none of which identify either of our suppliers! But I know Mythic Beasts are an OpenSRS reseller, and OpenSRS is a Tucows service.

Let's see what whois has to say about the others...

    Registrant Name: Domain Admin
    Registrant Organization:
    Registrant Street: 96 Mowat Avenue
    Registrant City: Toronto
    Registrant Email:

"Yummynames". Oh kaaaay.

    Domain Name: YUMMYNAMES.COM
    Registrant Name: Domain Admin
    Registrant Organization: Co.
    Registrant Street: 96 Mowat Ave.
    Registrant City: Toronto
    Registrant Email:

Well I suppose that's OK, but it's a bit of a rabbit hole.


    $ dig +short mx

Even more generic than Fastmail's infrastructure domain :-)

    Domain Name: HOSTEDEMAIL.COM
    Registrant Name: Domain Admin
    Registrant Organization: Tucows Inc
    Registrant Street: 96 Mowat Ave.
    Registrant City: Toronto
    Registrant Email:

The domain in the From: address is an odd one. I have seen it in whois records before, in an obscure context. When a domain needs to be cancelled, there can sometimes be glue records inside the domain which also need to be cancelled. But they can't be cancelled if other domains depend on those glue records. So, the registrar renames the glue records into a place-holder domain, allowing the original domain to be cancelled.

So it's weird to see one of these cancellation workaround placeholder domains used for customer communications.

    Domain Name: NS-NOT-IN-SERVICE.COM
    Registrant Name: Tucows Inc.
    Registrant Organization: Tucows Inc.
    Registrant Street: 96 Mowat Ave
    Registrant City: Toronto
    Registrant Email:

Tucows could do better at keeping their whois records consistent!


    Domain Name: DOMAINADMIN.COM
    Registrant Name: Co. Co.
    Registrant Organization: Co.
    Registrant Street: 96 Mowat Ave
    Registrant City: Toronto
    Registrant Email:

So good they named it twice!

Second communication

    Return-Path: <>
    Message-ID: <>
    From: "<noreply"
    Subject: [GANDI] IMPORTANT: Outbound transfer of EXAMPLE.ORG to another provider

The syntactic anomaly in the From: line is a nice touch.

Both and belong to Gandi.

    Registrant Name: NOC GANDI
    Registrant Organization: GANDI SAS
    Registrant Street: 63-65 Boulevard MASSENA
    Registrant City: Paris
    Registrant Email:

Impressively consistent whois :-)

Third communication

    Return-Path: <>
    Message-Id: <>
    From: "Transfers" <>
    Subject: Domain EXAMPLE.ORG successfully transferred

OK, so this message has the reseller's branding, but the first one didn't?!

The web sites

To confirm a transfer, you have to paste an EPP authorization code into the old and new registrars' confirmation web sites.

The first site has very bare-bones OpenSRS branding. It's a bit of a pity they don't allow resellers to add their own branding.

The second site is unbranded; it isn't clear to me why it isn't part of Gandi's normal web site and user interface. Also, it is plain HTTP without TLS!


What I would like from this kind of process is an impression that it is reassuringly simple - not involving loads of unexpected organizations and web sites, difficult to screw up by being inattentive. The actual experience is shambolic.

And remember that basically all Internet security rests on domain name ownership, and this is part of the process of maintaining that ownership.

Here endeth the rant.

fanf: (dotat)

So I have a toy DNS server which runs bleeding edge BIND 9 with a bunch of patches which I have submitted upstream, or which are work in progress, or just stupid. It gets upgraded a lot.

My build and install script puts the version number, git revision, and an install counter into the name of the install directory. So, when I deploy a custom hack which segfaults, I can just flip a symlink and revert to the previous less incompetently modified version.

More recently I have added an auto-upgrade feature to my BIND rc script. This was really simple, stupid: it just used the last install directory from the ls lexical sort. (You can tell it is newish because it could not possibly have coped with the BIND 9.9 -> 9.10 transition.)

It broke today.

BIND 9.11 has been released! Yay! I have lots of patches in it!

Unfortunately 9.11.0 sorts lexically before 9.11.0rc3. So my dumb auto update script refused to update. This can only happen in the transition period between a major release and the version bump on the master branch for the next pre-alpha version. This is a rare edge case for code only I will ever use.
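The failure mode in miniature (any language's plain string sort will do):

```python
# a lexical sort puts the release before its own release candidate,
# because "9.11.0" is a prefix of "9.11.0rc3" - so "take the last
# entry" picks the rc, which is exactly my bug
installs = ["9.10.4", "9.11.0rc3", "9.11.0"]
lexical = sorted(installs)
newest = lexical[-1]   # the rc, not the release
```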

But, I fixed it!

And I learned several cool things in the process!

I started off by wondering if I could plug dpkg --compare-versions into a sorting algorithm. But then I found that GNU sort has a -V version comparison mode. Yay! ls | sort -V!

But what is its version comparison algorithm? Is it as good as dpkg's? The coreutils documentation refers to the gnulib filevercmp() function. This appears not to exist, but gnulib does have a strverscmp() function cloned from glibc.

So look at the glibc man page for strverscmp(). It is a much simpler algorithm than dpkg, but adequate for my purposes. And, I see, built in to GNU ls! (I should have spotted this when reading the coreutils docs, because that uses ls -v as an example!)

Problem solved!

Change ls to ls -v in the ls | tail -1 pipeline, and my script upgrades from 9.11.0rc3 to 9.11.0 without manual jiggery pokery!

And I learned about some GNU carnival organ bells and whistles!

But ... have I seen something like this before?

If you look at the BIND distribution server, its listing is sorted by version number, not lexically, i.e. 10 sorts after 9 not after 1.

They are using the Apache mod_autoindex IndexOptions VersionSort directive, which works the same as GNU version number sorting.


It's pleasing to see widespread support for version number comparisons, even if they aren't properly thought-through elaborate battle hardened dpkg version number comparisons.

fanf: (dotat)

We have a periodic table shower curtain. It mostly uses Arial for its lettering, as you can see from the "R" and "C" in the heading below, though some of the lettering is Helvetica, like the "t" and "r" in the smaller caption.


The lettering is very inconsistent. For instance, Chromium is set in a lighter weight than other elements - compare it with Copper


Palladium is particularly special


Platinum is Helvetica but Meitnerium is Arial - note the tops of the "t"s.


Roentgenium is all Arial; Rhenium has a Helvetica "R" but an Arial "e"!


It is a very distracting shower curtain.

fanf: (dotat)

Recently there was a thread on bind-users about "minimal responses" and speeding up queries. The discussion was not very well informed about the difference between the theory and practice of additional data in DNS responses, so I wrote the following.

Background: DNS replies can contain "additional" data which (according to RFC 1034) "may be helpful in using the RRs in the other sections." Typically this means addresses of servers identified in NS, MX, or SRV records. In BIND you can turn off most additional data by setting the minimal-responses option.

Reindl Harald <h.reindl at> wrote:

additional responses are part of the inital question and may save asking for that information - in case the additional info is not needed by the client it saves traffic

Matus UHLAR - fantomas <uhlar at> wrote:

If you turn mimimal-responses on, the required data may not be in the answer. That will result into another query send, which means number of queries increases.

There are a few situations in which additional data is useful in theory, but it's surprisingly poorly used in practice.

End-user clients are generally looking up address records, and the additional and authority records aren't of any use to them.

For MX and SRV records, additional data can reduce the need for extra A and AAAA records - but only if both A and AAAA are present in the response. If either RRset is missing the client still has to make another query to find out if it doesn't exist or wouldn't fit. Some code I am familiar with (Exim) ignores additional sections in MX responses and always does separate A and AAAA lookups, because it's simpler.

The other important case is for queries from recursive servers to authoritative servers, where you might hope that the recursive server would cache the additional data to avoid queries to the authoritative servers.

However, in practice BIND is not very good at this. For example, let's query for an MX record, then the address of one of the MX target hosts. We expect to get the address in the response to the first query, so the second query doesn't need another round trip to the authority.

Here's some log, heavily pruned for relevance.

2016-09-23.10:55:13.316 queries: info:
        view rec: query: IN MX +E(0)K (::1)
2016-09-23.10:55:13.318 resolver: debug 11:
        sending packet to 2001:500:60::30#53
;                       IN      MX
2016-09-23.10:55:13.330 resolver: debug 10:
        received packet from 2001:500:60::30#53
;               7200    IN      MX      10
;               7200    IN      MX      20
;       3600    IN      A
;       3600    IN      AAAA    2001:4f8:0:2::2b
2016-09-23.10:56:13.150 queries: info:
        view rec: query: IN A +E(0)K (::1)
2016-09-23.10:56:13.151 resolver: debug 11:
        sending packet to 2001:500:60::30#53
;               IN      A

Hmf, well that's disappointing.

Now, there's a rule in RFC 2181 about ranking the trustworthiness of data:

5.4.1. Ranking data

[ snip ]
Unauthenticated RRs received and cached from the least trustworthy of those groupings, that is data from the additional data section, and data from the authority section of a non-authoritative answer, should not be cached in such a way that they would ever be returned as answers to a received query. They may be returned as additional information where appropriate. Ignoring this would allow the trustworthiness of relatively untrustworthy data to be increased without cause or excuse.

Since my recursive server is validating, and is signed, it should be able to authenticate the MX target address from the MX response, and promote its trustworthiness, instead of making another query. But BIND doesn't do that.
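That ranking rule can be modelled as a small cache policy - an illustrative toy, not BIND's implementation: cached data is only overwritten by equally or more trustworthy data, and additional-section data is never returned as an answer.

```python
# Simplified trust levels from RFC 2181 section 5.4.1, highest first.
ANSWER_AUTH = 3     # answer section of an authoritative response
ANSWER_NONAUTH = 2  # answer section of a non-authoritative response
ADDITIONAL = 1      # additional section, or authority of a non-auth answer

cache = {}  # (name, rrtype) -> (trust, rdata)

def cache_put(name, rrtype, rdata, trust):
    key = (name, rrtype)
    # never replace cached data with less trustworthy data
    if key not in cache or cache[key][0] <= trust:
        cache[key] = (trust, rdata)

def cache_answer(name, rrtype):
    # additional-section data may be returned as additional information,
    # but must not be served as an answer
    entry = cache.get((name, rrtype))
    if entry and entry[0] > ADDITIONAL:
        return entry[1]
    return None
```

DNSSEC validation would, in effect, let the resolver promote the additional-section record to a higher trust level instead of re-querying.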

There are other situations where BIND fails to make good use of all the records in a response, e.g. when you get a referral for a signed zone, the response includes the DS records as well as the NS records. But BIND doesn't cache the DS records properly, so when it comes to validate the answer, it re-fetches them.

In a follow-up message, Mark Andrews says these problems are on his to-do list, so I'm looking forward to DNSSEC helping to make DNS faster :-)

fanf: (dotat)

The jq tutorial demonstrates simple filtering and rearranging of JSON data, but lacks any examples of how you might combine jq's more interesting features. So I thought it would be worth writing this up.

I want a list of which zones are in which views in my DNS server. I can get this information from BIND's statistics channel, with a bit of processing.

It's quite easy to get a list of zones, but the zone objects returned from the statistics channel do not include the zone's view.

	$ curl -Ssf http://[::1]:8053/json |
	  jq '.views[].zones[].name' |
	  head -2

The view names are keys of an object further up the hierarchy.

	$ curl -Ssf http://[::1]:8053/json |
	  jq '.views | keys'

I need to get hold of the view names so I can use them when processing the zone objects.

The first trick is to_entries, which turns the "views" object into an array of name/contents pairs.

	$ curl -Ssf http://[::1]:8053/json |
	  jq -C '.views ' | head -6
	  "_bind": {
	    "zones": [
	        "name": "authors.bind",
	        "class": "CH",
	$ curl -Ssf http://[::1]:8053/json |
	  jq -C '.views | to_entries ' | head -8
	    "key": "_bind",
	    "value": {
	      "zones": [
	          "name": "authors.bind",
	          "class": "CH",

The second trick is to save the view name in a variable before descending into the zone objects.

	$ curl -Ssf http://[::1]:8053/json |
	  jq -C '.views | to_entries |
		.[] | .key as $view |
		.value' | head -5
	  "zones": [
	      "name": "authors.bind",
	      "class": "CH",

I can then use string interpolation to print the information in the format I want. (And use array indexes to prune the output so it isn't ridiculously long!)

	$ curl -Ssf http://[::1]:8053/json |
	  jq -r '.views | to_entries |
		.[] | .key as $view |
		.value.zones[0,1] |
		"\(.name) \(.class) \($view)"'
	authors.bind CH _bind
	hostname.bind CH _bind
	 IN auth
	 IN auth

And that's it!
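For comparison, here is roughly the same transformation in Python - iterating over the object's keys is the analogue of to_entries plus the $view variable. (The JSON here is a hand-made sample with the same shape as the statistics channel's output.)

```python
import json

# A hand-made sample shaped like BIND's statistics-channel JSON.
stats = json.loads("""
{ "views": { "_bind": { "zones": [
    { "name": "authors.bind",  "class": "CH" },
    { "name": "hostname.bind", "class": "CH" } ] } } }
""")

# .views | to_entries | .[] | .key as $view | .value.zones[] | ...
for view, contents in stats["views"].items():
    for zone in contents["zones"]:
        print("%s %s %s" % (zone["name"], zone["class"], view))
```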

fanf: (silly)
Oh good grief, reading this interview with Nick Clegg.

I am cross about the coalition. A lot of my friends are MUCH MORE cross than me, and will never vote Lib Dem again. And this interview illustrates why.

The discussion towards the end about the university tuition fee debacle really crystallises it. The framing is about the policy, in isolation. Even, even! that the Lib Dems might have got away with it by cutting university budgets, and making the universities scream for a fee increase!

(Note that this is exactly the tactic the Tories are using to privatise the NHS.)

The point is not the betrayal over this particular policy.

The point is the political symbolism.

Earlier in the interview, Clegg is very pleased about the stunning success of the coalition agreement. It was indeed wonderful, from the policy point of view. But only wonks care about policy.

From the non-wonk point of view of many Lib Dem voters, their upstart radical lefty party suddenly switched to become part of the machine.

Many people who voted for the Lib Dems in 2010 did so because they were not New Labour and - even more - not the Tories. The coalition was a huge slap in the face.

Tuition fees were just the most blatant simple example of this fact.

Free university education is a symbol of social mobility, that you can improve yourself by work regardless of your parentage. Educating Rita, a bellwether of the social safety net.

And I am hugely disappointed that Clegg still seems to think that getting lost in the weeds of policy is more important than understanding the views of his party's supporters.

He says near the start of the interview that getting things done was his main motivation. And in May 2010 that sounded pretty awesome.

But the consequence was the destruction of much of his party's support, and the loss of any media acknowledgment that the Lib Dems have a distinct political position. Both were tenuous in 2010 and both are now laughable.

The interviewer asks him about tuition fees, and still, he fails to talk about the wider implications.
fanf: (dotat)

Towards the end of this story, a dear friend of ours pointed out that when I tweeted about having plenty of "doctors" I actually meant "medics".

Usually this kind of pedantry is not considered to be very polite, but I live in Cambridge; pedantry is our thing, and of course she was completely correct that amongst our friends, "Dr" usually means PhD, and even veterinarians outnumber medics.

And, of course, any story where it is important to point out that the doctor is actually someone who works in a hospital, is not an entirely happy story.

At least it did not involve eye surgeons!

Happy birthday

My brother-in-law's birthday is near the end of August, so last weekend we went to Sheffield to celebrate his 30th. Rachel wrote about our logistical cockups, but by the time of the party we had it back under control.

My sister's house is between our AirB&B and the shops, so we walked round to the party, then I went on to Sainsburys to get some food and drinks.

A trip to the supermarket SHOULD be boring.


While I was looking for some wet wipes, one of the shop staff nearby was shelving booze. He was trying to carry too many bottles of prosecco in his hands at once.

He dropped one.

It hit the floor about a metre from me, and smashed - exploded - something hit me in the face!

"Are you ok?" he said, though he probably swore first, and I was still working out what just happened.

"Er, yes?"

"You're bleeding!"

"What?" It didn't feel painful. I touched my face.

Blood on my hand.

Dripping off my nose. A few drops per second.

He started guiding me to the first aid kit. "Leave your shopping there."

We had to go past some other staff stacking shelves. "I just glassed a customer!" he said, probably with some other explanation I didn't hear so clearly.

I was more surprised than anything else!

What's the damage?

In the back of the shop I was given some tissue to hold against the wound. I could see in the mirror in the staff loo it was just a cut on the bridge of my nose.

I wear specs.

It could have been a LOT worse.

I rinsed off the blood and Signor Caduto Prosecco found some sterile wipes and a chair for me to sit on.

Did I need to go to hospital, I wondered? I wasn't in pain, the bleeding was mostly under control. Will I need stitches? Good grief, what a faff.

I phoned my sister, the junior doctor.

"Are you OK?" she asked. (Was she amazingly perceptive or just expecting a minor shopping quandary?)

"Er, no." (Usual bromides not helpful right now!) I summarized what had happened.

"I'll come and get you."


I can't remember the exact order of events, but before I was properly under control (maybe before I made the phone call?) the staff call bell rang several times in quick succession and they all ran for the front of house.

It was a shoplifter alert!

I gathered from the staff discussions that this guy had failed to steal a chicken and had escaped in a car. Another customer was scared by the would-be-thief following her around the shop, and took refuge in the back of the shop.


In due course my sister arrived, and we went with Signor Rompere Bottiglie past large areas of freshly-cleaned floor to get my shopping. He gave us a £10 discount (I wasn't in the mood to fight for more) and I pointedly told him not to try carrying so many bottles in the future.

At the party there were at least half a dozen medics :-)

By this point my nose had stopped bleeding so I was able to inspect the damage and tweet about what happened.

My sister asked around to find someone who had plenty of A&E experience and a first aid kit. One of her friends cleaned the wound and applied steri-strips to minimize the scarring.

It's now healing nicely, though still rather sore if I touch it without care.

I just hope I don't have another trip to the supermarket which is quite so eventful...

fanf: (dotat)

Thinking of "all the best names are already taken", I wondered if the early adopters grabbed all the single character Twitter usernames. It turns out not, and there's a surprisingly large spread - one single-character username was grabbed by a user created only last year. (Missing from the list below are i 1 2 7 8 9 which give error responses to my API requests.)

I guess there has been some churn even amongst these lucky people...

         2039 : 2006-Jul-17 : w : Walter
         5511 : 2006-Sep-08 : f : Fred Oliveira
         5583 : 2006-Sep-08 : e : S VW
        11046 : 2006-Oct-30 : p : paolo i.
        11222 : 2006-Nov-01 : k : Kevin Cheng
        11628 : 2006-Nov-07 : t : ⚡️
       146733 : 2006-Dec-22 : R : Rex Hammock
       628833 : 2007-Jan-12 : y : reY
       632173 : 2007-Jan-14 : c : Coley Pauline Forest
       662693 : 2007-Jan-19 : 6 : Adrián Lamo
       863391 : 2007-Mar-10 : N : Naoki  Hiroshima
       940631 : 2007-Mar-11 : a : Andrei Zmievski
      1014821 : 2007-Mar-12 : fanf : Tony Finch
      1318181 : 2007-Mar-16 : _ : Dave Rutledge
      2404341 : 2007-Mar-27 : z : Zach Brock
      2890591 : 2007-Mar-29 : x : gene x
      7998822 : 2007-Aug-06 : m : Mark Douglass
      9697732 : 2007-Oct-25 : j : Juliette Melton
     11266532 : 2007-Dec-17 : b : jake h
     11924252 : 2008-Jan-07 : h : Helgi Þorbjörnsson
     14733555 : 2008-May-11 : v : V
     17853751 : 2008-Dec-04 : g : Greg Leding
     22005817 : 2009-Feb-26 : u : u
     50465434 : 2009-Jun-24 : L : L. That is all.
    132655296 : 2010-Apr-13 : Q : Ariel Raunstien
    287253599 : 2011-Apr-24 : 3 : Blair
    347002675 : 2011-Aug-02 : s : Science!
    400126373 : 2011-Oct-28 : 0 : j0
    898384208 : 2012-Oct-22 : 4 : 4oto
    966635792 : 2012-Nov-23 : 5 : n
   1141414200 : 2013-Feb-02 : O : O
   3246146162 : 2015-Jun-15 : d : D
(Edited to add @_)

fanf: (dotat)

On Tuesday (2016-07-26) I gave a talk at work about domain registry APIs. It was mainly humorous complaining about needlessly complicated and broken stuff, but I slipped in a few cool ideas and recommendations for software I found useful.

I have published the slides and notes (both PDF).

The talk was based on what I learned last year when writing some software that I called superglue, but the talk wasn't about the software. That's what this article is about.

What is superglue?

Firstly, it isn't finished. To be honest, it's barely started! Which is why I haven't written about it before.

The goal is automated management of DNS delegations in parent zones - hence "super" (Latin for over/above/beyond, like a parent domain) "glue" (address records sometimes required as part of a delegation).

The basic idea is that superglue should work roughly like nsdiff | nsupdate.

Recall, nsdiff takes a DNS master file, and it produces an nsupdate script which makes the live version of the DNS zone match what the master file says it should be.

Similarly, superglue takes a collection of DNS records that describe a delegation, and updates the parent zone to match what the delegation records say it should be.

A uniform interface to several domain registries

Actually superglue is a suite of programs.

When I wrote it I needed to be able to update parent delegations managed by RIPE, JISC, Nominet, and Gandi; this year I have started moving our domains to Mythic Beasts. They all have different APIs, and only Nominet supports the open standard EPP.

So superglue has a framework which helps each program conform to a consistent command line interface.

Eventually there should be a superglue wrapper which knows which provider-specific script to invoke for each domain.

Refining the user interface

DNS delegation is tricky even if you only have the DNS to deal with. However, superglue also has to take into account the different rules that various domain registries require.

For example, in my initial design, the input to superglue was going to simply be the delegations records that should go in the parent zone, verbatim.

But some domain registries do not accept DS records; instead you have to give them your DNSKEY records, and the registry will generate their preferred form of DS records in the delegation. So superglue needs to take DNSKEY records in its input, and do its own DS generation for registries that require DS rather than DNSKEY.
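Generating a DS record from a DNSKEY is mechanical; here is a sketch of the SHA-256 case following RFC 4034 / RFC 4509 (this is not superglue's code, and the function names are my own; the public key is assumed to be supplied in base64, as it appears in a DNSKEY record):

```python
import base64
import hashlib
import struct

def name_to_wire(name):
    # Canonical (lowercase) wire format of the owner name.
    wire = b""
    for label in name.rstrip(".").lower().split("."):
        wire += bytes([len(label)]) + label.encode("ascii")
    return wire + b"\x00"

def key_tag(rdata):
    # RFC 4034 appendix B (applies to all except the obsolete RSA/MD5).
    acc = 0
    for i, byte in enumerate(rdata):
        acc += byte << 8 if i % 2 == 0 else byte
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

def dnskey_to_ds(owner, flags, protocol, algorithm, pubkey_b64):
    # The DS digest covers the owner name plus the whole DNSKEY rdata
    # (RFC 4034 section 5.1.4); digest type 2 is SHA-256 (RFC 4509).
    rdata = (struct.pack("!HBB", flags, protocol, algorithm)
             + base64.b64decode(pubkey_b64))
    digest = hashlib.sha256(name_to_wire(owner) + rdata).hexdigest().upper()
    return "%s IN DS %d %d 2 %s" % (owner, key_tag(rdata), algorithm, digest)
```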

Sticky problems

There is also a problem with glue records: my idea was that superglue would only need address records for nameservers that are within the domain whose delegation is being updated. This is the minimal set of glue records that the DNS requires, and in many cases it is easy to pull them out of the master file for the delegated zone.

However, sometimes a delegation includes name servers in a sibling domain. For example, and are siblings.  NS  NS  NS
    ; plus a few more

In our delegation, glue is required for nameservers in, like authdns0; glue is forbidden for nameservers outside, like Those two rules are fixed by the DNS.

But for nameservers in a sibling domain, like, the DNS says glue may be present or omitted. Some registries say this optional sibling glue must be present; some registries say it must be absent.

In registries which require optional sibling glue, there is a quagmire of problems. In many cases the glue is already present, because it is part of the sibling delegation and, therefore, required - this is the case for But when the glue is truly optional it becomes unclear who is responsible for maintaining it: the parent domain of the nameserver, or the domain that needs it for their delegation?

I basically decided to ignore that problem. I think you can work around it by doing a one-off manual set up of the optional glue, after which superglue will work. So it's similar to the other delegation management tasks that are out of scope for superglue, like registering or renewing domains.

Captain Adorable

The motivation for superglue was, in the short term, to do a bulk update of our 100+ delegations to add, and in the longer term, to support automatic DNSSEC key rollovers.

I wrote barely enough code to help with the short term goal, so what I have is undocumented, incomplete, and inconsistent. (Especially wrt DNSSEC support.)

And since then I haven't had time or motivation to work on it, so it remains a complete shambles.

fanf: (dotat)

Nearly two years ago I wrote about my project to convert the 25-year history of our DNS infrastructure from SCCS to git - "uplift from SCCS to git"

In May I got email from Akrem ANAYA asking for my SCCS-to-git scripts. I was pleased that they might be of use to someone else!

And now I have started work on moving our Managed Zone Service to a new server. The MZS is our vanity domain system, for research groups who want domain names not under

The MZS also uses SCCS for revision control, so I have resurrected my uplift scripts for another outing. This time the job only took a couple of days, instead of a couple of months :-)

The most amusing thing I found was the cron job which stores a daily dump of the MySQL database in SCCS...

fanf: (dotat)

I have found out the likely reason that tactile paving markers on dual-use paths are the wrong way round!

A page on the UX StackExchange discusses tactile paving on dual-use paths and the best explanation is that it can be uncomfortable to walk on longitudinal ridges, whereas transverse ones are OK.

(Apparently the usual terms are "tram" vs "ladder".)

I occasionally get a painful twinge in my ankle when I tread on the edge of a badly laid paving stone, so I can sympathise. But surely tactile paving is supposed to be felt by the feet, so it should not hurt people who tread on it even inadvertently!

fanf: (dotat)

In Britain there is a standard for tactile paving at the start and end of shared-use foot / cycling paths. It uses a short section of ridged paving slabs which can be laid with the ridges either along the direction of the path or across the direction of the path, to indicate which side is reserved for which mode of transport.

Transverse ridges

If you have small hard wheels, like many pushchairs or the front castors on many wheelchairs, transverse ridges are very bumpy and uncomfortable.

If you have large pneumatic wheels, like a bike, the wheel can ride over the ridges so it doesn't feel the bumps.

Transverse ridges are better for bikes

Longitudinal ridges

If you have two wheels and the ground is a bit slippery, longitudinal ridges can have a tramline effect which disrupts your steering and therefore your balance, so they are less safe.

If you have four wheels, the tramline effect can't disrupt your balance and can be nice and smooth.

Longitudinal ridges are better for pushchairs

The standard

So obviously the standard is transverse ridges for the footway, and longitudinal ridges for the cycleway.

(I have a followup item with a plausible explanation!)

Page generated Apr. 23rd, 2017 09:50 am
Powered by Dreamwidth Studios