Yay, I've had interesting comments on my previous article about using curl to test web servers before going live!
I think there are a couple of things that are worth unpacking: what kind of problems am I trying to spot? and why curl with a wrapper rather than some other tool?
What webmasters get wrong before deployment
Here are some of the problems that I have found with my pre-deployment checks:
Is there a machine on the target IP address that is running a web server listening on port 80 and/or port 443?
Does it give a sensible response on port 80? Either a web page, or a redirect to port 443? A common mistake is for the virtual host configuration to be set up for the development hostname but not the production hostname.
Does it return a verifiable certificate on port 443? With the intermediate certificate chain?
Is the TLS setup on the new server consistent with the old one? For instance, will old permanent redirects still work? If the old server has strict transport security, does the new one too? We have not had many security downgrades, but it's a footgun waiting to go off.
This is all really basic, but these problems happen often enough that when I am making the DNS change, I check the web server so I don't have to deal with follow-up panic rollbacks.
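The redirect and strict-transport-security checks boil down to comparing response headers between the old and new servers. Here's a minimal sketch in shell; the hostnames and headers are made-up sample data standing in for real `curl -sI` output, so the sketch is self-contained:

```shell
# Canned sample responses; in real use these would come from
# `curl -sI` against the old and new servers.
old_headers='HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/
Strict-Transport-Security: max-age=31536000'

new_headers='HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/'

# check: does the named header match between the two responses?
check() {
    o=$(printf '%s\n' "$old_headers" | grep -i "^$1:" || true)
    n=$(printf '%s\n' "$new_headers" | grep -i "^$1:" || true)
    [ "$o" = "$n" ]
}

check Location                   || echo "mismatch: Location"
check Strict-Transport-Security  || echo "mismatch: Strict-Transport-Security"
```

With this sample data the second check fails, flagging the new server's missing Strict-Transport-Security header before any browser sees it.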
Usually if it passes the smoke test, the content is good enough: e.g. when I get HTML I look for an h1 that makes sense. Anything content-related is clearly a web problem, not a DNS problem, even to non-technical people.
What other tools might I have chosen?
Maciej Sołtysiak said, "I usually just speak^H^H^H^H^Htype HTTP over telnet or openssl s_client for tls'd services." I'm all in favour of protocols that can be typed at a server by hand :-) but in practice typing the protocol soon becomes quite annoying.
HTTP is vexing because the URL gets split between the Host: header
and the request path, so you can't trivially copy and paste the
user-visible target into the protocol (in the way you can for SMTP,
say). [And by "trivially" I mean it's usual for terminal/email/chat
apps to make it extra easy to copy entire URLs as a unit, and
comparatively hard to copy just the hostname.]
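That split can be sketched in a few lines of shell. This is just an illustration of the transformation a tool does for you on a pasted URL (the URL here is a made-up example, and the naive parsing assumes the URL has a path):

```shell
# Splitting a URL into the two places it goes in a raw HTTP request:
# the hostname (Host: header, and SNI for TLS) and the request path.
url=https://dotat.at/prog/

hostpart=${url#*://}     # strip the scheme:   dotat.at/prog/
host=${hostpart%%/*}     # up to first slash:  dotat.at
path=/${hostpart#*/}     # the rest:           /prog/

printf 'GET %s HTTP/1.1\r\nHost: %s\r\n\r\n' "$path" "$host"
```

Typing HTTP by hand means doing that split in your head, every time.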
And when I'm testing a site, especially if it's a bit broken and I
need to explain what is wrong and how to fix it, I'm often repeating
variations of an HTTP(S) request. Command-line curl's flexibility
makes it super easy to switch between GET and HEAD (-I), or to
ignore or follow redirects (-L), and so on.
OK, so I don't want to type in HTTP, but often I don't even need to
speak HTTP to find a site is broken. But checking TLS is also a lot
more cumbersome by hand.
For example, using my script:

curlto chiark.greenend.org.uk https://dotat.at
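For the curious, a hypothetical minimal version of such a wrapper might look like this. This is my sketch of the idea, not the actual curlto script: it uses curl's --connect-to option to aim the request at a chosen server while keeping the URL's hostname for the Host header and TLS certificate checks.

```shell
# Hypothetical sketch of a curlto-style wrapper, not the real script.
# Usage: curlto <server> <url> [extra curl options...]
curlto() {
    server=$1; shift
    url=$1; shift
    host=${url#*://}; host=${host%%/*}   # hostname from the URL
    case $url in
        https://*) port=443 ;;
        *)         port=80  ;;
    esac
    # --connect-to makes curl contact $server, while the URL's hostname
    # is still used for the Host: header, SNI, and certificate checks.
    ${CURL:-curl} --connect-to "$host:$port:$server:$port" -sS "$@" "$url"
}
```

The `${CURL:-curl}` indirection is only there so the function can be dry-run with `CURL=echo` to inspect the generated command line.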
How do I do that with s_client? Something like:

openssl s_client -verify_return_error \
    -servername dotat.at \
    -connect chiark.greenend.org.uk:443
OK, that's pretty tedious to type, and it also has the chopped-up URL problem.
curl checks subject names in certificates, whereas s_client only
checks the certificate chain. It does print the certificate's DN, so
you can check that part, but it doesn't print the subjectAltName
fields, which are crucial for proper browser-style verification.
So if you're manually doing it properly, you need to copy the
certificate printed by s_client, then paste it into
openssl x509 -text and have a good eyeball at the output.
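Here's what that eyeballing looks like, using a throwaway self-signed certificate as a stand-in for one saved from s_client output. The names are made up, and both `-addext` and `-ext` need OpenSSL 1.1.1 or later:

```shell
# Generate a throwaway self-signed cert with subjectAltName entries,
# standing in for a certificate copied from s_client output.
CERT=$(mktemp)
KEY=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$KEY" -out "$CERT" \
    -days 1 -subj "/CN=dotat.at" \
    -addext "subjectAltName=DNS:dotat.at,DNS:www.dotat.at" 2>/dev/null

# The part s_client's default output doesn't show you:
openssl x509 -in "$CERT" -noout -ext subjectAltName
```

The last command prints just the X509v3 Subject Alternative Name section, which is the bit a browser actually matches against the hostname.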
I have done all these things in the past, but really, curl is
awesome and it makes this kind of smoke test much easier.