20151115 - Do HTTPS servers know the time?
I started a very crude script last week, and it has been chugging along while I did other things. Then I went down with a flu-like infection for a couple of days.
In total it tried 10611 random IP numbers over a week.
The vast majority, 7863, just didn’t reply with any packets, likely because of firewalls.
Another 2035 had no routing at all, which is interesting: 20% of IPv4 numbers are not in routing tables, months after we ran out of IPv4?
Another 478 refused connection to port 443.
One hundred and three of those were bogus, because I had misread the jot(1) manual page and IPv4 numbers cannot contain “256” as an octet.
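That fencepost is easy to trip over: each octet must be drawn from 0–255 inclusive, so a random range ending at 256 produces invalid addresses. A minimal sketch of correct generation, in Python rather than the original jot(1) one-liner:

```python
import random

def random_ipv4() -> str:
    """Return a random dotted-quad IPv4 address.

    Each octet is drawn from 0..255 inclusive; randint's bounds are
    inclusive, so 255 (not 256) is the correct upper bound here.
    """
    return ".".join(str(random.randint(0, 255)) for _ in range(4))

print(random_ipv4())
```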
That leaves us with 132 that actually did reply at the TCP level.
Of these, 28 did not talk SSL/TLS on port 443, which shows that it is quite common to put other services on this port, because it can generally get through firewalls, for instance in hotels etc.
Next, 72 did not speak HTTP on top of SSL/TLS.
That leaves us with 32 genuine responses.
19 of those provably did not return an up-to-date UTC timestamp.
Unfortunately I did not save enough debugging output, so I had to try to probe them again to diagnose why:
Two, which were barely outside UTC the first time, were inside on the second try. Both look like legitimate (semi-)serious servers.
Two were consumer-market NAS boxes, 10 and 17 seconds off UTC. (This is not disk-spinup related; they were consistently off.)
One was consumer-market NAS hardware with non-standard firmware, and was <1 second off UTC.
Five were <2 seconds off UTC and stayed there on the second try. (3 IIS, 1 Apache, 1 unknown but probably some UNIX; good certs.)
One Apache with a self-signed cert was 2.1 seconds off UTC.
One was Apache, had an expired cert and a clock 64 seconds off. (The cert looks like a bot-net cert for SEO activities.)
One IIS server was 5 seconds off UTC, with an expired cert.
Two IIS servers had clocks 14 and 2733 seconds off UTC, with unofficial but not self-signed certs.
Four hosts could not be reached on the second try (1, 55, 503 and a hundred million seconds off UTC).
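The reprobe boils down to asking a server for any HTTPS response and reading its “Date:” header. A rough Python sketch of such a probe, assuming nothing about the remote side beyond TLS on port 443 (certificate validation is deliberately disabled, since we want the clock, not the identity; the target IP in the usage comment is a hypothetical TEST-NET address):

```python
import socket
import ssl
from email.utils import parsedate_to_datetime

def parse_date_header(raw: bytes):
    """Extract the Date: header from a raw HTTP response as a datetime, or None."""
    for line in raw.split(b"\r\n"):
        if line.lower().startswith(b"date:"):
            return parsedate_to_datetime(line.split(b":", 1)[1].strip().decode())
    return None

def probe(ip: str, timeout: float = 5.0):
    """Fetch the Date: header from an HTTPS server at `ip`.

    Crude on purpose: one recv() is usually enough for the header block
    of a HEAD response, which is all we need.
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE   # clock, not identity, is what we are after
    with socket.create_connection((ip, 443), timeout=timeout) as sock:
        with ctx.wrap_socket(sock) as tls:
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: " + ip.encode() +
                        b"\r\nConnection: close\r\n\r\n")
            return parse_date_header(tls.recv(4096))

# Usage (requires network): probe("203.0.113.7")  # hypothetical target IP
```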
None of that should surprise anybody.
Here are the timekeeping results from 15 hosts from the reprobe:
The Y-axis is time difference from UTC.
The X-axis is the sequential probe number.
The plot shows the possible interval inside which the remote server's clock lies.
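Concretely, one probe bounds the remote clock like this: the Date: header was generated at some local moment between sending the request and receiving the reply, and HTTP truncates it to whole seconds. A sketch of that bound (my own formulation, not the original script):

```python
def clock_offset_interval(t_send: float, t_recv: float, date_hdr: float):
    """Bound the remote-minus-local clock offset from one probe.

    t_send, t_recv: local UTC timestamps when the request went out and
    the reply came back.  date_hdr: the Date: header as a UTC timestamp,
    truncated to whole seconds by HTTP.

    The header was generated at some local time in [t_send, t_recv], and
    the true remote time at that moment was in [date_hdr, date_hdr + 1),
    so the offset lies in the returned interval.
    """
    return (date_hdr - t_recv, date_hdr + 1.0 - t_send)

# Example: request out at local second 100.0, reply at 100.3,
# header claims remote second 100; the interval contains the true offset.
lo, hi = clock_offset_interval(100.0, 100.3, 100.0)
print(lo, hi)
```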
A number of things stand out:
We should not expect to get more than 3 transactions through before the other side drops the connection.
That’s perfectly fair IMO, and perfectly usable.
More exchanges generally do improve things, but not nearly as much as the first three, which generally manage to cut the interval in half. This is where the RTT comes into the picture.
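The narrowing the plot shows can be reproduced by intersecting per-probe bounds: each exchange yields its own offset interval, and the true offset must lie in all of them, so the overlap shrinks as probes accumulate. A hedged sketch:

```python
def intersect_intervals(intervals):
    """Intersect (lo, hi) offset intervals from successive probes.

    The remote offset must lie inside every per-probe interval, so the
    combined estimate is the overlap; an empty overlap (None) would
    suggest the remote clock stepped between probes.
    """
    lo = max(i[0] for i in intervals)
    hi = min(i[1] for i in intervals)
    return (lo, hi) if lo <= hi else None

# Three probes, each bounding the offset a bit differently:
print(intersect_intervals([(-0.3, 1.0), (-0.1, 0.9), (0.0, 0.6)]))  # → (0.0, 0.6)
```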
At least one server is precisely a second wrong.
I can’t help wondering if it has still not caught up with the most recent leap-second?
Four servers are some fraction of a second wrong.
The benign explanation is that they run Windows or synchronize from an uncalibrated NTP server (for instance, DCF77 S1 NTP servers seldom calibrate out the sizeable radio-propagation and receiver LP-filter delays).
But it might also sink the whole idea: If these servers use HTTP caching, the “Date:” header may (or may not) be cached and therefore useless for timekeeping.
One way to circumvent that is to trigger an error message rather than a “legitimate” HTTP response, but the downside to that is that a lot of HTTP(S) servers close after errors.
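One sketch of that circumvention: request a random URL no cache can already hold, with explicit no-cache headers, so that the (likely 404) response carries a freshly generated Date:. The path prefix and header choice here are my assumptions, not a guarantee against every cache:

```python
import secrets

def uncacheable_request(host: str) -> bytes:
    """Build an HTTP request that intermediate caches should not satisfy.

    A random path makes a cache hit essentially impossible, and the
    Cache-Control/Pragma headers ask caches to pass it through anyway.
    The downside noted above applies: many servers close after errors.
    """
    path = "/time-probe-" + secrets.token_hex(8)  # hypothetical path prefix
    return (f"HEAD {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Cache-Control: no-cache\r\n"
            "Pragma: no-cache\r\n"
            "Connection: close\r\n\r\n").encode()

print(uncacheable_request("example.com").decode())
```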
But so far I don’t think it invalidates my idea, at most it teaches us that if you pull time from random servers on the net, you probably get random time.
But we knew that already, didn’t we?
What is absolutely clear is that an organisation would be able to put up a trivial dedicated HTTPS server, focus on getting time on it right, and clients spread out over the net can use it to validate their NTP (or otherwise!) derived timekeeping to well within a second.
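The client-side validation is then trivial once an offset interval has been measured against such a trusted server: check that zero offset ("my clock agrees") lies inside the interval, within some tolerance. A sketch under that assumption:

```python
def clock_ok(interval, tolerance: float = 1.0) -> bool:
    """Accept local timekeeping if the measured remote-minus-local offset
    interval overlaps [-tolerance, +tolerance] seconds."""
    lo, hi = interval
    return lo <= tolerance and hi >= -tolerance

print(clock_ok((-0.3, 0.7)))   # → True: local clock within a second
print(clock_ok((5.0, 6.2)))    # → False: local clock off by several seconds
```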
That seems like an improvement to me.