Heartbleed bug

It started like a normal Tuesday morning. The first thing I did after waking up was to check my Twitter. After seeing a lot of tweets about OpenSSL, I had to dig deeper. At first glance, it seemed like the worst thing to hit the web in many years. And it was.

OpenSSL was leaking its memory contents. The programmer had missed a single size check when writing the heartbeat implementation, and the impact was bad: all traffic, usernames and passwords, and even server private keys pass through that very memory.
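To make the missing check concrete, here is a minimal, self-contained sketch of the vulnerable pattern. It is a simplified model, not the actual OpenSSL source: a heartbeat record carries a type byte, a two-byte payload length claimed by the sender, and then the payload, and the vulnerable code trusted the claimed length without comparing it to the number of bytes actually received.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simplified heartbeat handler: rec[0] is the type, rec[1..2] is the
 * big-endian payload length *claimed by the sender*, rec[3..] is the
 * payload. Real heartbeat records also carry padding; omitted here. */
static unsigned char *build_response(const unsigned char *rec,
                                     size_t rec_len, size_t *out_len)
{
    if (rec_len < 3)
        return NULL;

    unsigned int payload = (rec[1] << 8) | rec[2];   /* claimed length */

    /* This is the check the vulnerable code was missing: without it,
     * the memcpy below reads up to 64 kB past the record into
     * adjacent heap memory and echoes it back to the peer. */
    if ((size_t)payload + 3 > rec_len)
        return NULL;

    unsigned char *resp = malloc(3 + payload);
    if (resp == NULL)
        return NULL;
    resp[0] = 2;                        /* heartbeat response type */
    resp[1] = rec[1];
    resp[2] = rec[2];
    memcpy(resp + 3, rec + 3, payload); /* safe only thanks to the check */
    *out_len = 3 + payload;
    return resp;
}

int main(void)
{
    /* A malicious heartbeat: claims 65535 payload bytes, sends none. */
    unsigned char evil[3] = { 1, 0xff, 0xff };
    size_t n;
    unsigned char *resp = build_response(evil, sizeof(evil), &n);
    printf("%s\n", resp ? "leaked" : "rejected malformed heartbeat");
    free(resp);
    return 0;
}
```

In the real code the fix was exactly this kind of bounds check, added to tls1_process_heartbeat and its DTLS counterpart; one missing comparison let an attacker read up to 64 kB of heap memory per heartbeat request.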

Once I got to the office, we started replacing our customer systems’ OpenSSL libraries immediately. We started with our Exove Response team, but also notified all the current project tech leads to update all the servers their projects run on.

We started with our support customers’ servers. At the same time, our Support Services Manager Harri informed our customers, current and former, about the issue. We also asked our non-support customers whether they’d like us to update their servers.

Updating the servers was straightforward in the majority of cases. All recent Linux distributions released a fixed version of OpenSSL soon after the bug was made public.
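Because distributions often backport fixes without bumping the upstream version number, the package changelog is the authoritative way to confirm the fix. As an extra sanity check, a tiny C program can report which OpenSSL library a host actually loads at runtime (the vulnerable upstream releases were 1.0.1 through 1.0.1f, fixed in 1.0.1g); SSLeay_version() is the 1.0.x-era API for this:

```c
#include <stdio.h>
#include <openssl/crypto.h>

int main(void)
{
    /* Prints the version of the OpenSSL library linked at runtime,
     * e.g. "OpenSSL 1.0.1g 7 Apr 2014". Note that long-running
     * services keep the old library mapped until they are restarted,
     * so updating the package alone is not enough. */
    printf("%s\n", SSLeay_version(SSLEAY_VERSION));
    return 0;
}
```

Compile with cc -o sslver sslver.c -lcrypto and run it on the updated host.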

Some servers, especially the ones that were not on any kind of support agreement, were running an older distribution with OpenSSL installed from a third-party repository. For these cases we quickly fetched the source code for the same OpenSSL version, applied all the security patches including the Heartbleed fix, compiled a compatible binary package for the server, and then updated OpenSSL to a safe version.

This work continued from Tuesday to Thursday, pretty much every hour we had people awake. We don’t regularly do long hours here at Exove, but this time our customers needed it.

By Wednesday it was confirmed that certificate private keys could leak via the bug, so we moved on to re-keying all our customer certificates and revoking the old certificates to prevent any future man-in-the-middle attacks.
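Re-keying means generating a brand-new private key and a certificate signing request for it, since the old key has to be assumed compromised; re-issuing a certificate over the old key would not help. Below is a minimal sketch of that step using the OpenSSL 1.0.x C API; the subject name www.example.com is a hypothetical placeholder and error handling is omitted for brevity:

```c
#include <stdio.h>
#include <openssl/bn.h>
#include <openssl/evp.h>
#include <openssl/pem.h>
#include <openssl/rsa.h>
#include <openssl/x509.h>

int main(void)
{
    /* Generate a fresh 2048-bit RSA key; never reuse the old one. */
    BIGNUM *e = BN_new();
    BN_set_word(e, RSA_F4);
    RSA *rsa = RSA_new();
    RSA_generate_key_ex(rsa, 2048, e, NULL);

    EVP_PKEY *pkey = EVP_PKEY_new();
    EVP_PKEY_assign_RSA(pkey, rsa);   /* pkey now owns rsa */

    /* Build a CSR with the same subject as the old certificate.
     * "www.example.com" is a placeholder for the real hostname. */
    X509_REQ *req = X509_REQ_new();
    X509_NAME *name = X509_REQ_get_subject_name(req);
    X509_NAME_add_entry_by_txt(name, "CN", MBSTRING_ASC,
                               (const unsigned char *)"www.example.com",
                               -1, -1, 0);
    X509_REQ_set_pubkey(req, pkey);
    X509_REQ_sign(req, pkey, EVP_sha256());

    /* The new key stays on the server; the CSR goes to the CA,
     * which issues a replacement certificate and revokes the old one. */
    FILE *kf = fopen("new.key", "w");
    PEM_write_PrivateKey(kf, pkey, NULL, NULL, 0, NULL, NULL);
    fclose(kf);

    FILE *rf = fopen("new.csr", "w");
    PEM_write_X509_REQ(rf, req);
    fclose(rf);

    X509_REQ_free(req);
    EVP_PKEY_free(pkey);
    BN_free(e);
    return 0;
}
```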

The customer systems we had direct access to were safe within the first couple of days. The certificate replacement took a bit longer, due to re-keying delays at the CAs, as well as customers who had obtained their certificates through their own contacts.
