Current State of TLS Security: Interview with Prof. Kenny Paterson



[Embedded video: the full interview with Prof. Kenny Paterson]

The Current State of TLS Security


During the Q&A session of the November 2015 ISG Alumni talk, a question was raised about the various compliance requirements to move away from older versions of TLS, and about how worried we should be by the current issues in TLS. Off the back of that, Prof. Kenny Paterson (whose research includes the Lucky 13 attack on CBC-mode encryption in TLS and attacks on RC4) was invited by the ISG Alumni to do an interview discussing the current state of TLS security.

The interview video is available in full (above), as well as a transcript of the conversation (further below). Due to the length of the discussion, we have also added a short summary of the key takeaways.

In summary:

  • SSL 3.0 is dead, because of POODLE.
  • TLS 1.0 has problems because of BEAST, but you can mitigate the attack.
  • All versions of TLS using RC4 have problems because of weaknesses in that algorithm.
  • All versions of TLS using CBC mode have problems because of padding oracle attacks, and Lucky 13.
    • You can mitigate them, and the attack is hard to pull off, but on the other hand the mitigations are actually very hard to get right.
    • Even today, some implementations are getting it wrong.
    • This is mainly because you’ve got to completely seal off this very delicate timing channel, and it’s not such an easy thing to do.

All of this points towards the long term future, that we should be moving away from these old versions of SSL and TLS.

The attacks have different levels of severity, and should be treated with different levels of seriousness, but they all show that the crypto in old versions of SSL and TLS is getting a bit long in the tooth, it’s getting a bit old.

The future is really TLS 1.2. In TLS 1.2, there are new types of encryption, called authenticated encryption, that you can use. In particular, there’s something called AES-GCM, and as far as we know, at the current time, AES-GCM is not vulnerable to any of those kinds of attacks that we’ve talked about so far.

In fact, TLS 1.3 will say, “No more RC4, no more CBC mode.” Only authenticated encryption using modern encryption mechanisms will be supported in TLS 1.3.


Interview with Prof. Kenny Paterson


Sherif Mansour: Welcome to another ISG video. I am with Professor Kenny Paterson at Royal Holloway, and today we’ll talk briefly about the different versions of TLS and what the security issues with them are. It’s a bit of a hot topic right now, because there are some compliance requirements, and some recommendations, to move away from older versions of TLS, so it will be good to get some understanding of the technical issues, and how worried we should be about them.

Kenny Paterson: Okay, great. Thank you very much for coming today. Welcome to my office. It’s a bit of a mess, as your viewers can probably now see, but we’ll do our best. SSL/TLS. SSL and TLS mean more or less the same thing. They’ve been around for a long, long time. SSL was first introduced in the mid 1990s. You’ll still see SSL 3.0 around today. TLS 1.0 was released in 1999, so that’s now 16 years ago, by the IETF as an RFC, RFC 2246.

Subsequently, we’ve had TLS 1.1, and then TLS 1.2 finally was released in 2008. Actually, right now the IETF is working on TLS 1.3, motivated by a lot of the attacks that have been found against earlier versions of TLS, but also trying to get something that is more performant, that actually has lower latency in establishing the initial connection. That’s one of the things that plagues TLS, it’s relatively slow at that, it needs a lot of round trips in the network, in order to establish keys, and get communications secured. TLS 1.3 aims to do that better.

Let’s talk now about some of the different versions of SSL and TLS, and talk about what some of the problems are. In earlier versions of SSL and TLS, before version 1.2 of TLS, you only had available two different encryption options. One of them was based on the RC4 stream cipher. It’s a beautiful stream cipher designed by Ron Rivest in the 1980s. Very compact, very efficient, relatively fast. Good on low end devices, like early smartphones. Early smartphones, for example, would prefer to use RC4. The other option is to use something called CBC mode encryption, and CBC mode encryption is a way of using a block cipher to encrypt large amounts of data.
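
As a rough illustration of the mode itself, here is a minimal Python sketch of CBC-mode encryption with AES, using the third-party cryptography package (our choice for illustration; this is not the TLS record format, and the helper name is ours):

```python
# Minimal sketch of CBC-mode encryption with a block cipher (AES here).
# This illustrates the mode only, not the TLS record layer; the
# third-party `cryptography` package is assumed to be installed.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def cbc_encrypt(key: bytes, iv: bytes, padded_plaintext: bytes) -> bytes:
    """Encrypt data that is already padded to a multiple of 16 bytes."""
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return encryptor.update(padded_plaintext) + encryptor.finalize()

key = os.urandom(16)
iv = os.urandom(16)                                     # CBC needs an unpredictable IV
ciphertext = cbc_encrypt(key, iv, b"sixteen byte msg")  # exactly one block of data
print(ciphertext.hex())
```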

Let’s talk about RC4 first of all, very briefly, and get it out of the way. It turns out that RC4 is not a very good stream cipher. For a long time people knew there were weaknesses in RC4, but nobody really knew whether they applied to TLS or not, and what we showed in 2013 was the first realistic attack against the use of RC4 in TLS. Our attack was able to extract, for example, session cookies, which are used to provide state in web sessions, and basically if I can get hold of your cookies, I can log in and be you on a website. The attack was able to target those.

The attack was not entirely practical, it required something like 2^30 encryptions of the cookie in order to get enough ciphertext to do some statistics, and then to be able to extract the cookie from all of those ciphertexts. There is a way that you can make a web browser give you that many encryptions of a cookie, by using malicious JavaScript, so there are circumstances under which this kind of attack is plausible, but the very high ciphertext requirements meant that people were not convinced about the seriousness of the attack.

However, there is a truism in cryptography, that attacks do only get better, or worse with time, depending on your perspective, and indeed in 2015, two or three research papers were published which improved those initial attacks, made them stronger, and so you can see that over time, RC4 is looking weaker and weaker as an encryption option. It’s increasingly indefensible.

That leaves CBC mode as your encryption option in the early versions of TLS, up to TLS 1.1. It turns out that CBC mode has been plagued with problems in TLS, and the reason for that is really right down in the layer of bits and bytes, understanding how the bytes are put together to make plaintext, which then gets encrypted in CBC mode encryption. I don’t think it’s worth going into those fine details here, but it turns out that the particular way that CBC mode is used in TLS is problematic, and difficult to make secure.

This was already noticed, in 2002, in a research paper by Serge Vaudenay, who showed that there was an issue with the way that padding was used in CBC mode. Padding is needed because CBC mode operates on whole blocks of data at a time, and we need to add some padding to get the amount of data up to a multiple of the block size before we can apply the encryption. He pointed out that by analyzing the error messages that were produced when TLS decryption happens, you can leak information about whether the padding was good, or the padding was bad, whether it was correctly formatted, or incorrectly formatted. This is a so-called padding oracle attack.
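
To make the idea of a padding oracle concrete, here is a toy Python check of TLS-style CBC padding; the function name is ours, and the point is simply that any observable difference between the "valid" and "invalid" answers is a leak:

```python
# Toy "padding oracle": checks TLS-style CBC padding, where the final
# byte p must be preceded by p further bytes that all equal p.
# Returning a simple True/False (or distinct error messages, or a
# timing difference) is exactly the leak that Vaudenay's attack exploits.
def padding_is_valid(plaintext_block_data: bytes) -> bool:
    if not plaintext_block_data:
        return False
    p = plaintext_block_data[-1]
    if p + 1 > len(plaintext_block_data):
        return False
    return all(b == p for b in plaintext_block_data[-(p + 1):])

print(padding_is_valid(bytes.fromhex("deadbeef03030303")))  # True
print(padding_is_valid(bytes.fromhex("deadbeef03030302")))  # False
```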

What Vaudenay showed way back in 2002, was that you could somehow leverage a padding oracle to recover plaintext, which sounds mystical. How can it be that something about the padding can leak something about the actual message? But by being a clever attacker, you can cut and paste ciphertext blocks around: you can move the blocks that you’re interested in decrypting, that contain plaintext, to the end, where they actually get treated as though they contain padding, and so the leakage from the padding oracle can tell you something about the plaintext. That’s the basic idea.
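
A simplified sketch of how that leak is leveraged, assuming a hypothetical oracle(iv, block) callback that decrypts one block under a forged IV and reports only whether the padding was valid (real attacks, including against TLS, are more involved than this single step):

```python
# Simplified sketch of the last-byte recovery step in a CBC padding
# oracle attack. `oracle(iv, block)` is a hypothetical callback that
# decrypts `block` in CBC mode with the forged `iv` and reports only
# whether the resulting padding was valid: the leak described above.
import os

def recover_last_byte(oracle, prev_block: bytes, target_block: bytes) -> int:
    """Recover the final plaintext byte of `target_block`.

    `prev_block` is the ciphertext block (or IV) that originally
    preceded `target_block`; in CBC, plaintext = D(target) XOR prev.
    """
    for guess in range(256):
        forged = bytearray(os.urandom(16))
        forged[-1] = guess
        if oracle(bytes(forged), target_block):
            # Valid padding most often means D(target)[-1] XOR guess == 0x00,
            # so the real plaintext byte is guess XOR prev_block[-1].
            # (A real attack rules out rarer longer-padding false positives.)
            return guess ^ prev_block[-1]
    raise RuntimeError("no valid padding found")
```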

Old versions of SSL and TLS were vulnerable to this. If you look at the specifications of TLS 1.1, from 2004, it contains a protection against Vaudenay’s padding oracle attack. Basically what you try to do, is you try to make the error messages all appear to be the same, both in terms of the error message that you get, and also the timing of the error message, when you see the error message, because slight timing differences can tell you what actually happened inside a decryption, whether the padding was good or bad. Equalizing the timing is meant to get rid of this problem.

Unfortunately, they didn’t get it quite right, and what we showed in 2013, in the Lucky 13 attack, was that there was still a small timing difference that would leak information about whether the padding was good or bad, and so you can reboot the padding oracle attack. That attack is the Lucky 13 attack, and basically the only way to protect against Lucky 13 is to do very careful patching on the server, to really equalize the timing difference.
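
The general principle behind that patching is to remove data-dependent timing. Here is a small, hedged illustration in Python (nothing like the full OpenSSL fix, which has to make the whole MAC-and-padding check over variable-length records constant time):

```python
# The principle behind the Lucky 13 fix: make checks take the same time
# whether they succeed or fail. A naive byte-by-byte compare returns
# early on the first mismatch, which leaks timing information;
# hmac.compare_digest runs in time independent of where bytes differ.
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:          # early exit creates a timing side channel
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    return hmac.compare_digest(a, b)

print(constant_time_equal(b"\x01\x02\x03", b"\x01\x02\x03"))  # True
```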

Sherif Mansour: For TLS 1.1?

Kenny Paterson: No, you can apply the Lucky 13 patch to all versions of SSL. Sorry, let me start again. You can apply this patch to all versions of TLS: 1.0, 1.1, and 1.2. You can’t apply it to SSL 3, because SSL 3 has a different padding format, and different requirements on that padding, so the protections wouldn’t work there. Let’s come back to SSL 3 in just a moment. Let’s finish the discussion briefly of Lucky 13. You can patch against Lucky 13 in TLS 1.0, 1.1, and 1.2. However, the patch is actually really quite difficult to get right. In OpenSSL, for example, which is one of the leading open source implementations, you need about 500 new lines of code to make the decryption routine absolutely constant time, and the code is quite invasive, it doesn’t work very well with the HMAC algorithm, you have to patch HMAC in a particular way. Different implementations of the TLS protocol did slightly different things. Some of them patched it well, and some of them didn’t patch it so well.

The attack, the Lucky 13 attack, is not an easy attack to pull off. It’s again, a bit like the RC4 attack. It’s a threat, but it’s not clear exactly under what circumstances an attacker would be able to do it. In the research paper in 2013, we talked about targeting session cookies again, with the same kind of mechanism that we were using for the RC4 attacks. The status of different implementations, and their security against Lucky 13 is somewhat unclear. I do know that some vendors, and some implementors of TLS don’t regard Lucky 13 as a serious threat, because the timing differences are very small, they’re hard to measure over a network, and you can put in place adequate mitigation against the attack.

Let’s now talk about SSL 3. It turns out that because of the way that SSL 3 works, there is a sort of variation of these padding oracle attacks, and Lucky 13 attacks, that also applies to SSL 3. That attack is called the POODLE attack, it’s got a cool name. POODLE does stand for something. I think the PO is padding oracle, and the LE is legacy encryption, and then the other letters in the middle mean something else. The POODLE attack is a killer for SSL version 3.0. It’s not actually possible to mitigate it. What that means is that SSL 3 is effectively killed by the POODLE attack. I would also say that the POODLE attack is easier to carry out than the Lucky 13 attack. It doesn’t rely on these very, very, very small timing differences. It relies on whether you get an error message, or don’t get an error message from the TLS encryption process. SSL 3 is effectively killed by POODLE. POODLE is actually a very aggressive dog, that completely kills SSL version 3.

There’s a problem here, which is that modern web browsers are trying to find a way of using TLS. They might start by saying, “Hey, let’s use TLS 1.2” to a server. If that doesn’t work out, maybe the server doesn’t support TLS 1.2, it would then try again with TLS 1.1, and TLS 1.0, and eventually it would go all the way down to SSL 3.0. The problem with that is, most servers these days will support higher versions of TLS, and so they will avoid using SSL 3.0, but an active man in the middle attacker, can pretend to be the server, and can reply to the client’s first message, “Hey, I don’t know about TLS 1.2, sorry,” and then a modern client will then say, “I’d really like to give some version of TLS to my user, so I’ll then try TLS 1.1,” and then the active man in the middle attacker can again say, “Hey, I don’t know about TLS 1.1,” and the TLS handshake will probably fail, and you’ll end up using TLS 1.0, and so it goes on, so an active man in the middle attacker can force a modern client to downgrade all the way to SSL 3.0, if the client is willing to use SSL 3.0.

The real bite of the POODLE attack is not that SSL 3.0 is vulnerable, it’s that SSL 3.0 is vulnerable and clients can be forced down to SSL 3.0. The only solution to that is to switch off SSL 3.0 altogether on the client side, and on the server side, and just do not go that far down. POODLE kills SSL 3.0, and the POODLE attack is a variation of these padding oracle attacks, the Vaudenay attack from 2002, and the Lucky 13 reboot from 2013. It’s actually an easier attack to carry out than Lucky 13, and therefore is more of a threat if you’re willing to use SSL 3.0.
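
Switching SSL 3.0 off on the client side looks roughly like this with Python's standard ssl module; other TLS stacks have their own equivalent settings, so treat this as a sketch rather than a recipe:

```python
# Minimal sketch of refusing old protocol versions on the client side
# with Python's standard `ssl` module. A client configured like this
# cannot be downgraded to SSL 3.0, because it will not negotiate it at all.
import socket
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2   # nothing older than TLS 1.2

with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        print(tls.version())     # e.g. "TLSv1.2" or "TLSv1.3"
```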

There’s one attack that we still need to talk about that’s very important here, and that’s called the BEAST. The BEAST came out in 2011, and actually it was a trigger for lots and lots of research on SSL and TLS protocols by people like me, and other people too. What the BEAST does is that it targets specifically TLS 1.0, and SSL actually, but the real focus was on TLS 1.0. At the time that the BEAST was released, almost every website and web browser was using SSL 3, or TLS 1.0, and not the later versions. It was an attack that was relevant for the large majority of TLS usage out there on the internet in 2011.

The BEAST targets something else. It’s also specific to CBC mode. What it targets is the fact that earlier versions of SSL and TLS, SSL 3 and TLS 1.0 specifically, used something called chained initialization vectors, or chained IVs. The IV in CBC mode is the block of ciphertext that comes at the very beginning of the ciphertext, it’s like a zeroth block that’s used to bootstrap the encryption process. The theory of cryptography tells us that the IV should be a uniformly random block to get security. Early versions of SSL and TLS were using, as that random block, the last block from the previous ciphertext that was sent on the channel. That block looks random, because it’s a ciphertext block, but actually the attacker knows that block before the encryption process starts for the new ciphertext. That initialization vector is actually predictable for the attacker, he knows the value of that block.
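
A toy Python sketch of the difference between chained and random IVs, again using the cryptography package for AES-CBC; the record contents are made up for illustration:

```python
# Toy illustration of "chained" IVs (SSL 3.0 / TLS 1.0 behaviour) versus
# fresh random IVs (TLS 1.1+). With chaining, the IV for the next record
# is the last ciphertext block of the previous record, a value a network
# attacker has already seen, hence a predictable IV.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def cbc_encrypt(key: bytes, iv: bytes, padded: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(padded) + enc.finalize()

key = os.urandom(16)
iv = os.urandom(16)

record1 = cbc_encrypt(key, iv, b"first record....")
chained_iv = record1[-16:]        # the attacker observes this on the wire,
record2 = cbc_encrypt(key, chained_iv, b"second record...")
# so the attacker knows record2's IV before record2 is even encrypted.

fresh_iv = os.urandom(16)         # TLS 1.1+: unpredictable per-record IV
record2_safe = cbc_encrypt(key, fresh_iv, b"second record...")
```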

What the BEAST attack does is exploit that knowledge. The details are actually really quite difficult to understand, but it turns out that we knew already from the late 1990s, that we should avoid using predictable initialization vectors, and we already knew that there were types of attack called distinguishability attacks, if you had a chained initialization vector, for example. Those distinguishability attacks would enable an attacker to tell which one of two messages had been encrypted, so if you encrypted “yes” or “no”, the attacker could do the attack, and then find out whether “yes” or “no” had been encrypted. That’s not the same as recovering a cookie, or recovering a plaintext completely, but it’s a weakness, it’s like learning one bit of information about the message. Whether the message M_0 was encrypted, or message M_1 was encrypted.

What the originators of the BEAST attack showed, that was Duong and Rizzo, they showed that you could leverage that single bit of information, and turn it into a plaintext recovery attack. Again, it required something like malicious JavaScript running in the browser, so the user first of all had to visit a bad website, have some JavaScript persistent in a tab in the browser, running in the background, but if you could make that circumstance arise, then potentially the attack would be able to turn this one bit of information into an attack that completely recovered all plaintext.

The details are very complex. One of the things that Duong and Rizzo did, which was very impactful, was they made a video of it on YouTube, showing step by step how they could recover the session cookies for a Paypal session, and this got a lot of attention. It turned out, though, that in order to make the attack work, they actually needed a zero day vulnerability in the browser. Not a lot of people know that, and they never actually formally wrote up a research paper describing their attack. There’s a Blackhat presentation, or something of that ilk, and nothing more than that, and a video that looks very impressive. In some sense that attack is more of a proof of concept, or a demonstration, because would you waste a zero day vulnerability in a browser to do something like the BEAST attack? It’s not clear. You might use it for more nefarious purposes. To directly take control of the browser, for example.

Sherif Mansour: They needed two things. They needed malicious JavaScript, and also to log the ciphertext using a man in the middle, so they’re on the network, recording the ciphertext at the same time as they’re making the requests.

Kenny Paterson: It’s good that you mention that, Sherif, because all of the attacks that I’ve talked about need that double capability. JavaScript in the browser, plus man in the middle attack observing the ciphertext. They’re pretty tough attacks to pull off, but then TLS is meant to be completely bulletproof, so … There we are.

You can mitigate the BEAST attack on the client side, and a lot of organizations, a lot of browser vendors did that. There’s something called TLS record splitting, that allows you to put in place a particular countermeasure against that specific attack. I think pretty much all of the main browsers did put into place mitigations against BEAST. BEAST, remember, is specific again to CBC mode cipher suites. It would attack a CBC mode cipher suite in SSL 3.0 or TLS 1.0. TLS 1.1 and 1.2 were immune to the BEAST, because they didn’t use these chained initialization vectors. 1.1 and 1.2 were using already the fully random initialization vectors for each message sent.
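
The record-splitting countermeasure is commonly described as a "1/n-1 split": the first CBC record carries a single byte of application data, which makes the effective IV for the remainder unpredictable. A minimal sketch of the splitting idea (not a real TLS implementation):

```python
# Sketch of the idea behind the BEAST countermeasure often called the
# "1/n-1 split": instead of encrypting application data as one CBC
# record, the sender first emits a record carrying a single byte. The
# MAC and padding of that tiny record make the following record's
# effective IV unpredictable to the attacker.
from typing import List

def one_n_minus_one_split(app_data: bytes) -> List[bytes]:
    if len(app_data) <= 1:
        return [app_data]
    return [app_data[:1], app_data[1:]]

print(one_n_minus_one_split(b"GET / HTTP/1.1\r\n"))
# [b'G', b'ET / HTTP/1.1\r\n']
```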

You might say, “Why would this be such a big deal if we already have TLS 1.1 and 1.2?” In some sense, the IETF had already anticipated the BEAST attack, and put in place protections against it in 1.1 and 1.2. The point is that nobody was using 1.1 and 1.2 at the time the BEAST came out in 2011, and therefore the impact was considered to be quite large at that time.

Where do we stand today? SSL 3.0 is dead, because of POODLE. TLS 1.0 has problems because of BEAST, but you can mitigate the attack. All versions of TLS using CBC mode have problems because of padding oracle attacks, and Lucky 13. You can mitigate them, and the attack is hard to pull off. On the other hand, the mitigations are actually very hard to get right, because you’ve got to completely seal off this very delicate timing channel, and it’s not such an easy thing to do. Even today, some implementations are getting it wrong.

For example, just this week, we released a new research paper that showed that the initial release of Amazon’s own implementation of SSL/TLS, something called S2N, had a vulnerability with regard to Lucky 13. Amazon patched their implementation immediately after we notified them of it, so there’s no issue now, but it’s interesting to see that large organizations with pretty strong development resources are still finding it difficult to get all of the protections against Lucky 13 in place.

All of this points towards the long term future, that we should be moving away from these old versions of SSL and TLS. The attacks have different levels of severity, and should be treated with different levels of seriousness, but they all show that the crypto in old versions of SSL and TLS is getting a bit long in the tooth, it’s getting a bit old. The future is really TLS 1.2. In TLS 1.2, there are new types of encryption, called authenticated encryption, that you can use. In particular, there’s something called AES-GCM, and as far as we know, at the current time, AES-GCM is not vulnerable to any of those kinds of attacks that we’ve talked about so far.
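
For reference, here is what AES-GCM looks like as a raw primitive in Python, using the third-party cryptography package (an illustration of authenticated encryption, not of the TLS 1.2 record protocol):

```python
# Minimal sketch of AES-GCM (authenticated encryption), the kind of
# algorithm TLS 1.2 adds. Assumes the third-party `cryptography`
# package; this is the raw primitive, not the TLS record format.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
nonce = os.urandom(12)        # 96-bit nonce, never reused under the same key

ciphertext = aead.encrypt(nonce, b"hello TLS 1.2", b"associated data")
plaintext = aead.decrypt(nonce, ciphertext, b"associated data")
assert plaintext == b"hello TLS 1.2"
# Tampering with the ciphertext makes decrypt() raise InvalidTag, so
# integrity and confidentiality come as a single operation.
```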

It may have other issues that might emerge in due course, as it becomes more and more popular. Large websites have already switched to using TLS 1.2, particularly with modern browsers that support TLS 1.2. For example, if you go on Google now, or you go on Facebook, in fact I did this yesterday in my teaching class, with the undergraduate computer science students here at Royal Holloway, and we looked, and we checked, and indeed TLS 1.2 with AES-GCM is now being used by Facebook, and by Google, and many other large websites too. The world is gradually moving across to TLS 1.2. TLS 1.3 is coming. It will have better performance with the handshake, and it also allows the use of things like AES-GCM.
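
You can repeat that kind of check yourself; a quick sketch with Python's standard ssl module follows, where the printed output will of course depend on the server and on your TLS stack:

```python
# A quick way to repeat the classroom check described above: connect to
# a site and print the negotiated protocol version and cipher suite.
import socket
import ssl

def negotiated(host: str, port: int = 443):
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher()

print(negotiated("www.google.com"))
# e.g. ('TLSv1.2', ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128)),
# or a TLS 1.3 suite on a modern stack
```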

In fact, TLS 1.3 will say, “No more RC4, no more CBC mode.” Only authenticated encryption using modern encryption mechanisms will be supported in TLS 1.3.

Sherif Mansour: On that note, it’s actually some of the issues with the cipher suites, or the type of block cipher chaining used in the cipher suites, that’s the issue. For example, in the past we had a stream cipher like RC4, and then the block ciphers, and it’s independent of which one: a triple DES block cipher, an AES block cipher, Camellia, ChaCha, the other ones. It’s the fact that it’s using this type of … Because they’re using CBC-

Kenny Paterson: Exactly. There’s no new weakness in AES or triple DES that’s been discovered, it’s the particular way that it’s used in the TLS protocol. Specifically in the way that it’s combined with padding and HMAC, and also the way that the IV is selected.

Sherif Mansour: Even in TLS 1.2, those cipher suites are still supported. If, for example, we switch off TLS 1.0, and 1.1, but still have those cipher suites, do we still have that problem?

Kenny Paterson: If you use CBC mode in TLS 1.2, you don’t have to worry about the BEAST attack, because the IVs are properly random for each ciphertext that you send. You still have to worry about Lucky 13. Lucky 13 is a difficult attack to pull off, and I actually don’t know what the CVSS score is, for example, for Lucky 13. That’s something perhaps we can check, and we can plug it into the blog. It may not even have a CVSS score. I don’t know.

With TLS 1.2, and CBC mode cipher suites, you only now have to worry about these very sophisticated timing attacks like Lucky 13. BEAST is not a concern anymore. TLS 1.2 will also support RC4, and you really should be thinking seriously about switching off RC4. The attacks are coming down in complexity and cost. The latest version of the attack, released in July 2015 in a research paper by Vanhoef and Piessens, from the University of Leuven in Belgium, requires something like maybe 100 hours of traffic to mount the attack, whereas the initial versions of the attack that we had, back in 2013, needed something like 2,000 hours of traffic. You can see the direction that it’s going.
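
As a sketch of what switching off RC4 can look like in practice, here is a Python ssl configuration that uses an OpenSSL-style cipher string to exclude RC4 and other legacy options; real deployments would tune the string for their own clients:

```python
# Sketch of removing RC4 (and other legacy options) from the offered
# cipher suites with Python's `ssl` module. The string uses OpenSSL
# cipher-list syntax; the exact choice here is illustrative only.
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.set_ciphers("ECDHE+AESGCM:!RC4:!3DES:!aNULL:!MD5")

for suite in context.get_ciphers():
    print(suite["name"])          # no RC4 or 3DES suites remain
```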

Sherif Mansour: You mentioned also, with your attack, it went down from 2,000 hours to 75 hours, just by using a better performing browser.

Kenny Paterson: Part of the speedup was from a faster browser, but part of the speedup was from actually using more powerful biases in the RC4 keystreams, and combining them in the right way. It’s an impressive piece of work.

Anything else?

Sherif Mansour: The other thing now is deciding, slowly but surely, to remove cipher suites as browser support goes away, because, for example, the use of triple DES is for IE8, and it’s actually specific to XP. That’s an old version, and it still uses CBC. As the browser support goes away, those cipher suites can also be removed.

Kenny Paterson: Why don’t you … Let me ask you a question, and then you can talk about that?

One area that I don’t have a lot of expertise in is exactly how these cipher suites, and the different options interact with the different browser versions that are available, and what the impact would be, for customers, for example, if a website decided to stop supporting RC4. If you could give us a bit of a picture of what-

Sherif Mansour: It depends on the website, and whether it’s mixed mode or not. For example, there are some websites where certain parts of the site are in cleartext and the rest are over SSL, and then some sites just want to be completely over SSL. If, for example, you remove cipher suites, some browsers can only support RC4 (and triple DES), IE8 for example. If you get rid of RC4 and triple DES, then it can’t see anything. For a full-site SSL site, it will not even see the site. It will just throw up an error. For ones that are mixed mode, it will get through the landing page, and a few pages, and so on, and then nothing. It will die.

Picking the right cipher suite is important, and the order that they’re in.

Kenny Paterson: How long do you think we’ll see a substantial user base who are in IE8, on Windows XP, for example? Do you think that’s going to be with us for several years to come?

Sherif Mansour: It depends. There’s definitely a downward trend, and there’s a downward trend for several reasons. As newer generations of the platform come out, and versions of the operating system go end of life, we’ll see more of the change, because it’s not just over time, it’s based on events, and I’ll give you an example. The Internet Explorer browsers, and the libraries that they use for SSL and so forth, are tied to the operating system and the way that it updates.

So, the ones for XP don’t necessarily get the updates that the ones for Vista and later get. In the Windows example, IE8 on XP needs triple DES. We tried, for example, turning on AES on an Apache server, and it just didn’t want to see it. We turned off a couple of cipher suites, left only triple DES, and it liked it.

Kenny Paterson: I guess the issue here now is that XP is out of support, so IE is not getting any updates any longer.

Sherif Mansour: Yes, for that cipher suite, unless Microsoft goes and backports it, and so forth. At the moment … that’s the reason why. It’s not necessarily that the browser isn’t patchable; because it’s end of life, it isn’t patched. If you use Internet Explorer 8 on Vista and above, it will support AES. That’s because the library that it uses actually supports it. It’s also a bit event driven, as opposed to just over time. Over time, people buy new laptops, new machines, that’s fine, but an event will also be when Google, or another vendor, or Firefox just says, “As of today, we’re no longer supporting this,” and then over time, Google started moving away-

Kenny Paterson: That actually pushes customers to say, “Hey, my computer doesn’t work anymore, I need to install a new operating system.”

Sherif Mansour: My computer dies, and I cannot buy anything else except Windows 10, or a Mac, or a Chromebook, or something like that, and those will be more up to date. Over time, now that modern browsers also have auto updates, hopefully we’ll see faster uptake of newer versions of TLS, but also of HTTP/2 and other items as well, so that developers, as they build a site, feel that people are moving to the more up-to-date versions.

Kenny Paterson: I’ve definitely seen that, for example, with Google Chrome, its auto update feature means that on some websites, you can now use ChaCha, and Poly1305 as your encryption algorithm. There’s a brand new RFC for that. It might not even be out yet, but it must be out soon. On an experimental basis, when Google Chrome talks to some Google sites, you actually get that as your cipher. A large company like Google that controls both ends, at least a fraction of both ends, can experiment in that way, and try pushing out new software.

We’ve also seen it, of course, in things like certificate warnings, and now having the lock crossed through in the browser, because maybe you have a SHA1 certificate that has an end of life beyond 2018, or something. Companies like Google are being quite aggressive about the way they push security updates and browser warnings to their users. Aggressive in a good way, I think. I think it’s the right thing to do.

Sherif Mansour: The thing that we are also interested in seeing, and we haven’t had until this point, is monitoring of fake certificates, or signed certificates for our domains that the company hasn’t issued. At the moment, Google has certificate pinning, so that you can actually tell the browser … I think there’s an RFC now where you can actually put that in the DNS record, saying, “This is my public key,” and therefore if the browser sees another certificate for that domain that isn’t actually in the record, and so forth, it will then report back. That’s one of the reasons why Google every now and then reports they’ve seen a certificate out in the wild. It would also be good to have that visibility, because at the moment, we talk about man in the middle attacks, but we don’t necessarily have data on how prevalent it is, how often it happens.
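
As a rough sketch of what a client-side pin check involves, here is some hedged Python that hashes the server's SubjectPublicKeyInfo and compares it to a pin obtained out of band; the pin value below is a placeholder, and the cryptography package is assumed:

```python
# Sketch of a client-side pin check: hash the server's SubjectPublicKeyInfo
# and compare it against a pin obtained out of band. The pin value below
# is a placeholder, not a real one.
import base64
import hashlib
import socket
import ssl
from cryptography import x509
from cryptography.hazmat.primitives import serialization

EXPECTED_PIN = "placeholder-base64-spki-sha256="   # hypothetical pinned value

def spki_pin(host: str, port: int = 443) -> str:
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    spki = x509.load_der_x509_certificate(der_cert).public_key().public_bytes(
        serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo
    )
    return base64.b64encode(hashlib.sha256(spki).digest()).decode()

if spki_pin("www.example.com") != EXPECTED_PIN:
    print("pin mismatch: possible man in the middle (or a mis-set pin)")
```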

As we saw, for example, with Facebook and [inaudible 00:31:33], when they realized there were these strange error warnings coming up, and then they realized what was going on, and then they went over to full site SSL. The same thing would apply elsewhere, if they actually get data on how often that happens. Although, the problem with pinning is that if you get it wrong, then people won’t see the site.

Kenny Paterson: Your site disappears completely, absolutely.

Sherif Mansour: Probably, as with everything else, you can put it in monitoring mode, so that you just see what’s going on, rather than turning on enforcement, but if it’s in enforcement and it’s applied wrong, then people will not even be able to see the site.

Kenny Paterson: There’s some interesting research going on in the academic community right now, about the effectiveness of pinning, and how widely deployed it is. There are other things, like HSTS, which are also being deployed as part of TLS. The uptake of these new approaches is not that big yet, but I think that we can expect to see it grow over time.

As you say, when it goes wrong, it can go very badly wrong, and your site can disappear. There are also some scalability issues, as well, with some of the techniques that are being proposed. For example, there is certificate transparency, which Google is also promoting. This is a system which basically, thinking now, this is about certificates, not about the low level crypto any more, it’s a slightly different topic, but it basically involves your browser being provided with lots and lots of information about which certificates are valid for which sites, and it’s not clear how this would scale to the whole internet. Maybe it works for the top 1,000 sites, or something, but beyond that, it’s hard to see how it’s going to scale.

Sherif Mansour: There’s a protocol, actually, for emails, called DMARC. Nothing to do with SSL and so forth, but what they’ve done is that it’s actually a combination of DKIM and SPF. Basically this is for spam. What you do is look at the DNS records, and you check. There would be a DNS record for SPF, which is actually the Sender Policy Framework, and basically it’s a bunch of IP addresses that the emails can be sent from, and then there’s another one, DKIM, which says the email header will be signed by this public key, or a series of public keys. So I believe, for scalability, for the browser, now going back to SSL, you won’t necessarily have every certificate under the sun in the browser, but you can tell the browser to go to a DNS record, and look up the public key there. The danger is if it’s over cleartext, if the record isn’t signed, so you need DNSSEC, and then you need DANE on top of that, in order for it to work.

That actually is a little bit less processing intensive, and less memory intensive as well, so you don’t have every certificate under the sun in your browser, but at the same time, you can look it up. At least that’s the theory.
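
For the DANE/TLSA side of that idea, here is a small sketch of fetching the relevant DNS record with the third-party dnspython package; a real client would also need to validate the DNSSEC signatures and match the record against the certificate chain:

```python
# Sketch of looking up a DANE TLSA record for a host, assuming the
# third-party `dnspython` package. This only fetches and prints the
# records; DNSSEC validation and certificate matching are not shown.
import dns.resolver   # pip install dnspython

def fetch_tlsa(host: str, port: int = 443):
    name = f"_{port}._tcp.{host}"
    try:
        return [str(r) for r in dns.resolver.resolve(name, "TLSA")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

print(fetch_tlsa("www.example.com"))   # empty list if no TLSA record is published
```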

Kenny Paterson: What approach do you think is going to win out in the end? Do you think we’ll see DANE and DNSSEC widely deployed? Or do you think something like certificate transparency will be the solution in the end? Or maybe things will coexist?

Sherif Mansour: Exactly. Only time will tell, and there are several reasons for whether or not something is going to be taken up, and sometimes it isn’t necessarily always security that’s the reason. It might be performance, it might be that. The idea is also to embrace as much as they can, and then pick the winning variant. Obviously, also with academia, not just finding the solution, but also identifying the holes in it, and actually having more activity towards it. You remember, a while back, there was also Convergence. The idea that there would be a notary.

There were also some questions around … Google came out and said that there were also some scalability issues, they would have to support the notary, so they would have to be the notary on the site, and there were some privacy issues there as well, but the idea that somebody else checks the site for you was interesting.

There’s also Let’s Encrypt, so there’s basically now an NGO CA that you can get your own certificates from, that is free.

Kenny Paterson: They’re now up and running, I believe, and they’re shipping certificates, and they’ve been cross certified by one of the major CAs, too.

Sherif Mansour: Which means that your browser will trust these certificates. I was looking at it in the summer, it seems promising.

Kenny Paterson: I know the guys who are doing that, actually. It includes people like Eric Rescorla from Mozilla, for example, who’s on the advisory group, making sure that their CA works properly. They’ve got very, very good people involved in making it happen. I think it’s quite exciting, actually. It could … I mean, the CA marketplace is all about business, about making money. It’s about selling certificates, and selling trust. Along comes Let’s Encrypt, maybe it’s quite disruptive. Let’s see, it could be interesting.

Sherif Mansour: The fact that there’s about 600 CAs that can issue certificates, so that-

Kenny Paterson: Now there’s 601.

Sherif Mansour: Which is great. Hopefully now there’ll be one. But, it’s the fact that you might be using one CA for your certificates, or two, but out in the middle of nowhere you might find a CA that has issued a certificate on your behalf. Now, that doesn’t often happen, but we won’t even know unless we have some kind of way of actually monitoring it, and getting feedback that you’ve accepted a certificate, and so forth, so getting those kinds of logs from the browser is also helpful, to give people metrics on how often man in the middle happens.

Kenny Paterson: I guess this is another topic that we could discuss in another blog in another time. Maybe we should do that, actually. It could be the next one.

Sherif Mansour: Yeah, that would be … Getting more data off of browsers and so forth, on security issues, will actually help us also say, “This is how big the problem is at the moment.” Google did something with third party JavaScript, and they said that in something like 5% of the traffic that they see, there’s malicious code or ad injection, where they inject adverts in there. It’s also a massive economy. You can also imagine, from the types of attacks that we discussed, that actually injecting the JavaScript isn’t necessarily that hard, given that it’s so prevalent. It’s a matter of having that, and the man in the middle, and then combining that with enough money and power to be able to break the encryption.

Kenny Paterson: Good.

Sherif Mansour: Thank you very much for your time.

Kenny Paterson: Great talking to you.
