by Karla Burnett

SSL has been in the news a lot in the last few years—flaws like BEAST, Heartbleed, and POODLE have all had far-reaching consequences for the security of the Internet. But what does each of these vulnerabilities actually affect? And how does SSL work again?

To understand, we’ll first need a brief primer on the history of SSL. Initially developed by Netscape in the early 1990s, SSL was designed to provide security for HTTP connections, underlying the new HTTPS scheme. This meant Internet users could be sure they really were talking to, for example, their banking site, and that no one else could see their communications. While version 1.0 was circulated internally within Netscape, 2.0 was the first version publicly released, in 1995. SSL version 3.0 was released a year later, in 1996.

By 1999 the development of SSL had been taken over by the IETF. They made minor changes (effectively a version 3.1) that unfortunately broke backwards compatibility with SSL 3.0, so the result was released under a new name: TLS 1.0. TLS 1.1 and 1.2 were released in 2006 and 2008 respectively, and as of July 2015, TLS 1.3 is being drafted.

These days, when people describe SSL, what they typically mean is SSL/TLS, since the protocols are largely interchangeable, and both can be used as the underlying security protocol for HTTPS.

But how do they actually work? At a high level, SSL and TLS allow, though don’t require, a client and a server to authenticate one another, and to decide on a secure communication method.

Now, let’s talk about what we mean by “secure communication method”. We really care about two properties: confidentiality, that no one else listening can understand our messages, and integrity, that the messages we receive really come from the party we’re expecting. We also care about availability, that the messages we send can actually be read by the intended recipient, but that largely falls out of the protocol as designed.

We can achieve confidentiality in two ways, using either a stream or a block cipher. A stream cipher takes a secret key, and uses it to generate a stream of pseudorandom bytes. These bytes are then combined with the message, using a mathematical operation known as XORing, to produce a ciphertext. A block cipher works similarly, though on fixed size chunks of the message, known as blocks. In both cases, the ciphertext produced can only be used to recover the original message by someone who knows the secret key. To use either construction, both sides need to agree on a cipher, and on a secret key to use with it.
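
To make the stream cipher idea concrete, here’s a minimal sketch in Python. The keystream is derived by hashing a counter with SHA-256, which is purely an illustration of the XOR construction, not how any SSL/TLS cipher actually works, and not something to use for real encryption.

```python
import hashlib

def keystream(key, length):
    """Derive `length` pseudorandom bytes from `key` by hashing a counter.
    A toy construction for illustration only; real ciphers are designed
    far more carefully and should always be used instead."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key, message):
    """Encrypt (or decrypt) by XORing the message with the keystream."""
    stream = keystream(key, len(message))
    return bytes(m ^ s for m, s in zip(message, stream))

secret_key = b"a key both sides agreed on"
ciphertext = xor_cipher(secret_key, b"attack at dawn")
recovered = xor_cipher(secret_key, ciphertext)  # XORing again undoes the encryption
print(recovered)  # b'attack at dawn'
```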

Integrity we achieve using something called an HMAC, or Hashed Message Authentication Code. A hash is a function that maps an arbitrary amount of data into a value of a particular length. Cryptographic hashes are hashes designed to be collision resistant and hard to invert, meaning that given a hash value it is essentially impossible to find any piece of data that hashes to that value. An HMAC uses this property to provide integrity: we hash the message together with a secret key that only the two communicating parties know, to produce what’s called a tag, which we send along with the message. When the other side receives the message, they regenerate the tag themselves, and compare it to the tag we sent. If the two are the same, they can be sure that someone who knew the secret key sent the message, and that it hasn’t been tampered with. To do this, we need to agree on a hash to use, and on another key, this time for integrity.
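
Python’s standard library includes this exact construction, so a short sketch will do. The key and message below are placeholders; what matters is that both sides share the key, and that the comparison runs in constant time.

```python
import hashlib
import hmac

integrity_key = b"a second shared secret, used only for integrity"
message = b"example message to protect"

# Sender: compute the tag and send it along with the message.
tag = hmac.new(integrity_key, message, hashlib.sha256).digest()

# Receiver: regenerate the tag from the received message and compare.
# compare_digest runs in constant time, so the check itself leaks nothing.
expected = hmac.new(integrity_key, message, hashlib.sha256).digest()
print(hmac.compare_digest(tag, expected))  # True: the message is intact
```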

Both our confidentiality and integrity schemes are forms of symmetric cryptography, meaning they require both sides to know the same key. To prevent replay attacks, where an attacker resends messages they previously saw go by, we also need to decide on a new set of keys for each new connection we establish.

To choose these keys, we use a construct called asymmetric cryptography, in which two parties with no shared knowledge can securely decide on a secret without an eavesdropper discovering it. There are several ways to do this, so both sides need to agree on how they’re going to exchange these keys. As part of this, they also need to authenticate one another, as we talked about earlier.
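
One such method is Diffie-Hellman, which we’ll meet again below. Here’s a toy version with deliberately tiny numbers so the arithmetic is easy to follow; real deployments use primes of 2048 bits or more.

```python
import secrets

# Public parameters, agreed in the open: a prime p and a generator g.
# These are toy values chosen only so the numbers stay small.
p, g = 23, 5

# Each side picks a private exponent and sends over g raised to it, mod p.
a = secrets.randbelow(p - 2) + 1  # client's secret
b = secrets.randbelow(p - 2) + 1  # server's secret
A = pow(g, a, p)                  # sent by the client
B = pow(g, b, p)                  # sent by the server

# Both sides now compute the same shared secret. An eavesdropper who saw
# only p, g, A and B cannot feasibly recover it when p is large.
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
print(client_secret == server_secret)  # True
```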

So, all together we need: a key exchange algorithm, including an authentication method; a cipher, for confidentiality; and a hash to use for our HMAC, to guarantee integrity.

These three properties are combined into one long string, called the ciphersuite. For example, if a client negotiated the Diffie-Hellman protocol (DH) for key exchange, with RSA for authentication, AES_256_CBC as a cipher, and SHA-256 as a hash, the connection would have a ciphersuite of DH_RSA_WITH_AES_256_CBC_SHA256.
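
A small, hypothetical helper makes the naming convention easier to see. The field names are mine rather than anything from the specification, and real ciphersuite names have more variants than this handles.

```python
def parse_ciphersuite(name):
    """Split a name of the form KEYEXCHANGE_AUTH_WITH_CIPHER_HASH into its
    parts. Only a rough illustration of the naming convention."""
    key_exchange_auth, _, rest = name.partition("_WITH_")
    key_exchange, _, authentication = key_exchange_auth.partition("_")
    cipher, _, mac_hash = rest.rpartition("_")
    return {
        "key exchange": key_exchange,      # e.g. DH
        "authentication": authentication,  # e.g. RSA
        "cipher": cipher,                  # e.g. AES_256_CBC
        "hash": mac_hash,                  # e.g. SHA256
    }

print(parse_ciphersuite("DH_RSA_WITH_AES_256_CBC_SHA256"))
```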

Since clients and servers might each support a different set of ciphersuites, the first thing established in an SSL or TLS connection is the ciphersuite to be used.

When the client opens a connection to the server, it sends a ClientHello message, indicating the ciphersuites that it supports, and the version of SSL or TLS it is using, among other things.

The server then picks the “best” ciphersuite the client supports, based on its own preference list, and sends that back to the client in a ServerHello message. Before waiting for a response from the client, the server also sends its certificate, if server-side authentication is desired, its half of the key exchange, a request for a client certificate if one is desired, and then a ServerHelloDone message.

The client responds with a copy of its certificate, if requested; its half of the key exchange; a change cipher spec message, to indicate that all further messages will be encrypted with the chosen cipher; and a finished message, recording the client’s view of everything that has happened. This finished message is used to ensure that none of the unauthenticated communication between the client and the server was intercepted.

The server responds with its own change cipher spec and finished messages, and then the desired communication between the two sides takes place, using the agreed upon ciphersuite. This negotiation process is called the SSL/TLS handshake.
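
You can watch the outcome of this negotiation using Python’s standard ssl module, which performs the whole handshake for you. The hostname below is just an example; any HTTPS site will do.

```python
import socket
import ssl

hostname = "example.com"  # any HTTPS-enabled host will do
context = ssl.create_default_context()

# wrap_socket runs the whole handshake described above: the hellos,
# certificate verification, key exchange, change cipher spec and finished.
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())  # negotiated protocol, e.g. 'TLSv1.2'
        print(tls.cipher())   # negotiated ciphersuite and key size
```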

That’s all there really is to SSL and TLS, as they were designed. There are some subtleties around closing connections, and lots of things to watch out for in error cases, but those are the basic principles you need to understand for the attacks of the recent years.

Let’s start with the BEAST attack. Discovered in September 2011, it’s a vulnerability in TLS 1.0’s CBC mode block ciphers, which make up more than half of the ciphers provided by TLS 1.0. Although the vulnerability had been known about for years, and was preemptively patched in TLS 1.1, it took until 2011 for a practical attack to be demonstrated. The vulnerability allows a person in the middle to determine the contents of an encrypted block by injecting guesses into later ones, in what’s called a chosen plaintext attack. Client-side mitigations were possible, while the server-side mitigation consisted of promoting RC4 ciphers above CBC ones, so that most clients would negotiate the safer RC4.

In September 2012, CRIME was released—a compression attack against all current versions of TLS. In short, TLS supports compression, but did not correctly separate trusted and untrusted parts of a compressed message. An attacker who could control part of the plaintext, and observe the size of the packets sent, could use this to calculate unknown parts of the plaintext.
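
The core observation is easy to reproduce with nothing more than zlib. The secret and the guesses below are made up; the point is that a correct guess repeats the secret exactly, and so compresses smaller.

```python
import zlib

secret = b"session_token=8f2a1c"  # unknown to the attacker
guesses = [b"session_token=8f2a1c", b"session_token=000000"]

for guess in guesses:
    # The attacker controls the guess and sees only the compressed size of
    # the guess and the secret compressed together.
    print(guess, len(zlib.compress(guess + secret)))

# The correct guess repeats the secret exactly, so it compresses a few
# bytes smaller than the wrong one. Repeating this one character at a
# time is enough to recover the whole secret.
```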

Lucky Thirteen was released in February 2013, and was a padding oracle attack based on timing, affecting all SSL and TLS CBC mode block ciphers. It occurs because of the way decrypted messages are checked: those ending in two correct bytes of padding are processed slightly more quickly than those without. This acts as a padding oracle, allowing an attacker to determine the plaintext of an encrypted message.

Just a month later, in March of 2013, additional biases were found in RC4, a stream cipher used in both SSL and TLS. As a stream cipher, RC4 takes a small number of bytes, the key, and extends them into a much longer stream, which the original message is XORed with. These keystream bytes should be random, but as discovered in 2013, some of them are not as random as hoped. While not an attack per se, these biases allow an attacker who can observe around 2²⁴ requests to recover parts of an encrypted message. This meant the RC4 stream cipher was no longer considered secure.
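
The bias itself is easy to observe with a textbook RC4 implementation, sketched below. The best-known example is that the second keystream byte is zero far more often than a truly random stream would allow.

```python
import os

def rc4_keystream(key, n):
    """Generate n bytes of RC4 keystream (textbook key schedule plus PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):  # key scheduling
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):    # keystream generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Over many random keys, the second keystream byte is 0 about twice as
# often as a truly random stream would allow. (Pure Python, so this
# takes a little while to run.)
trials = 100_000
zeros = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(trials))
print(zeros / trials)  # roughly 2/256, not the expected 1/256
```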

A year after CRIME’s release, in August 2013, BREACH was released. This was another compression attack against all versions of SSL and TLS, this time at the application level. Rather than relying on TLS compression, an attacker instead uses HTTP level compression.

In February 2014, goto fail was discovered. This was not a flaw in SSL or TLS themselves, but a bug in Apple’s SecureTransport implementation. When using DHE or ECDHE cipher suites, a particular check would always be ignored—that the certificate provided by a server actually belonged to it. This meant that the authenticity of the server a client was talking to could not be guaranteed.

Two months later, in April 2014, Heartbleed was released. This too was an implementation bug, though this time in OpenSSL, inside the heartbeat TLS extension. An incorrect bounds check meant that the server would trust the client to specify the length of a message sent. A malicious client could ask the server to send a message much longer than the amount of data available, leaking other information stored in server-side memory. This would allow, for example, the server’s private key to be leaked, and its connections to be intercepted.
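
A deliberately simplified simulation, with entirely made-up memory contents, shows the shape of the bug; the real flaw was in OpenSSL’s C code, not Python.

```python
def heartbeat_response(payload, claimed_length, adjacent_memory):
    """Echo back `claimed_length` bytes the way the buggy code did: it
    trusted the length field in the request instead of checking it
    against the size of the payload actually received."""
    buffer = payload + adjacent_memory  # the payload sits next to other server data
    return buffer[:claimed_length]      # missing check: claimed_length <= len(payload)

# Stand-in for whatever happened to sit next to the request in memory.
adjacent_memory = b"...private key material, session cookies, passwords..."

print(heartbeat_response(b"ping", 4, adjacent_memory))   # honest request: b'ping'
print(heartbeat_response(b"ping", 40, adjacent_memory))  # malicious request leaks memory
```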

Another implementation bug in OpenSSL was discovered in June 2014, CVE-2014-0224, or more informally, CCS injection. In this case, CCS stands for ChangeCipherSpec, the message that’s sent just before the SSL or TLS handshake is finished, to indicate that communication from then on should be encrypted using the negotiated ciphersuite. Unfortunately, if an attacker sent a CCS message at a certain earlier point during the connection, they could coerce both sides into generating their key exchange secrets using only publicly available information. This would allow the attacker to read all further communication.

POODLE was released in October 2014 and, bucking the 2014 trend, was not an implementation bug. A padding vulnerability in SSL 3.0’s CBC block cipher allowed encrypted content to be leaked to an attacker, if they could persuade a client to visit a site that they controlled. This, in combination with the RC4 weaknesses previously discussed, meant that even with BEAST mitigations, no SSL 3.0 cipher could still be considered secure. This problem was exacerbated by the downgrade behavior of many clients—since TLS was not backwards compatible, in the event of a failed TLS handshake, many clients would automatically retry the connection with SSL 3.0. Unfortunately, an attacker could also trigger this behavior. Mitigations against this part of the attack were released, in the form of the TLS_FALLBACK_SCSV ciphersuite, but they required both client and server support to be effective.

FREAK, another implementation bug, affecting both OpenSSL and Apple’s SecureTransport, was discovered in March of 2015. To fully understand this bug, we need to take a brief trip back to the 1990s, when the US had strict laws around exporting weaponry, including strong cryptography. To support foreign clients who were unable to use this strong cryptography, a number of intentionally weak ciphersuites were added to SSL and TLS—the so-called export ciphersuites. In the late 90s the restrictions on cryptography were relaxed; however, support for the export ciphersuites lived on in server-side implementations, often for compatibility reasons. FREAK allows a person in the middle to change the ClientHello ciphersuites from standard RSA to export grade RSA, even if the client did not offer export grade RSA. Vulnerable clients would then accept the weak 512-bit key the server sent back, which an attacker can factor cheaply and use to intercept the connection.

Finally, in May 2015, an attack named Logjam was released, targeting the Diffie-Hellman key exchange method used in SSL and TLS. It provided a way to break 512-bit, or export grade, Diffie-Hellman parameters by computing the discrete logarithms involved. This meant that connections negotiated with this key exchange method could be intercepted. 1024-bit Diffie-Hellman parameters were also considered unsafe, as breaking them was thought to be within the reach of adversaries with significant power, such as nation states like China or the US. The attack was also made more feasible by the sharing of Diffie-Hellman parameters across servers, which was previously thought to be safe, but drastically cut the cost of performing an attack. Reminiscent of FREAK, a flaw in the design of SSL and TLS also meant that it was possible for an attacker to intercept the ClientHello message and downgrade the connection from standard Diffie-Hellman to export grade.

So where does all this leave us? SSL 2.0 has fundamental protocol flaws and is known to be broken. SSL 3.0 has no ciphers left that are considered totally secure, and support is currently being phased out of major browsers. TLS 1.0 ciphers are also in a poor place, requiring BEAST mitigations to be considered secure. TLS 1.1 and 1.2 are better off, with several of their ciphersuites still thought to be totally secure.

Unfortunately, adoption of TLS versions greater than 1.0 remains low, with only around 60% of sites supporting TLS 1.1 or 1.2. Additionally, keeping server-side ciphersuite preferences and mitigations up-to-date has also proved challenging—more than half of servers still support RC4 ciphers, and more than 80% of them are still vulnerable to BEAST.

What does this all mean for you? If you maintain servers, please keep their SSL/TLS libraries up-to-date. Take updates when you can, and review your ciphersuites periodically, using a tool like SSL Labs’ server test. As a client, your task is much easier—just use an up-to-date browser.

There’s still a lot of work to be done to make TLS easy to use, and to bring the protocol up to date with more modern cryptography. However, having more people understand the pitfalls we’ve fallen into in the past, and how to configure things in the present, will help keep everyone’s communications secure into the future.

Karla is a security engineer at Stripe, who enjoys breaking computers, fixing software, and assorted arts and crafts.
