TLS is Flawed: New TLS Vulnerability Found
TLS is flawed. The decades-old protocol is vulnerable to a man-in-the-middle attack, throwing into question the state of encryption on the world wide web. An Austrian IT firm says it prevented a security disaster threatening many popular Internet websites and applications by publishing research on a new vulnerability in the twenty-year-old cryptographic system.
Researchers from the Austrian firm Research Industrial Systems Engineering GmbH (RISE) unveiled the vulnerability in the TLS (Transport Layer Security) protocol last month at the USENIX WOOT conference. TLS, most familiar as the wrapper that turns HTTP into HTTPS, evolved from SSL (Secure Sockets Layer).
The new vulnerability, dubbed the “Key Compromise Impersonation (KCI) attack,” allows man-in-the-middle (MitM) attackers to control client-side code running in a victim’s browser. This means the information appearing on a website or application can be changed in transit.
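The attack depends on the client negotiating a non-ephemeral, fixed-(EC)DH handshake. As a rough illustration (not part of RISE's paper), a Python client can refuse those key-exchange modes outright by allowing only ephemeral, forward-secret cipher suites; the cipher string below is one illustrative choice, not the only safe one.

```python
# Minimal sketch using the standard-library ssl module: restrict a client
# context to ephemeral (EC)DHE key exchange, so no fixed-(EC)DH suite can
# ever be negotiated.
import ssl

ctx = ssl.create_default_context()
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")  # ephemeral key exchange only

# Inspect what the context will actually offer.
enabled = [c["name"] for c in ctx.get_ciphers()]
print(enabled)
```

Restricting the cipher list on the client side is a defense-in-depth measure; the definitive fixes described in the article happened in the affected client implementations and on the servers.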
This security protocol forms the basis of most Internet security, critical to functions like e-banking. As RISE reported, the protocol has been vulnerable to a man-in-the-middle attack that could allow hackers to read private communication between users. You can learn more about the vulnerability here. The group published a proof-of-concept video on YouTube demonstrating a hack against Facebook.
In its press release, “Security disaster prevented,” the Austrian IT firm outlines how hackers could attack major websites and apps. The vulnerability had already been disclosed to companies including Google, Microsoft, Apple, and Facebook before RISE went public.
Solutions are in the works. Thomas H. Ptacek, well known in InfoSec and co-chair of this year’s USENIX WOOT conference, recently stated on Twitter: “Suggestion: there should be a ‘Secure Transport Protocol Competition’, to design the alternative to TLS.”
— Thomas Ptacek (@tqbf) May 23, 2015
RISE security researcher Clemens Hlauschek answered some questions for Hacked.
Do you have any comments on the process of exposing the bug?
CH: Everything was responsibly disclosed before our firm went public. Since TLS implementations are so numerous, and we did not have access to scrutinize every possible TLS client implementation (some are proprietary, etc.), there are probably still other clients out there, apart from the ones we identified, that remain vulnerable. Of course, all the major players have been informed: Google, Microsoft, Apple, etc.
If this is not a Facebook-specific problem, does this mean that all social networks are vulnerable at this level? Does it stop there? Can information be changed in our online banking accounts, perhaps the most sensitive accounts people have?
CH: In principle, yes. In practice, servers must provide elliptic curve certificates, and these are not yet widespread: less than 10% of all certificates are EC certificates. Other security measures, such as X509 Key Usage extension settings, can further prevent the attack, but they are neither mandatory, nor always correctly honored by client implementations, and they are often poorly configured by certification authorities. This is a situation that will hopefully change, in part due to the disclosure of our attack.
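The Key Usage mitigation Hlauschek mentions can be checked programmatically. Below is a hedged sketch, assuming the third-party `cryptography` package is available: it builds a throwaway EC certificate whose Key Usage extension asserts digitalSignature but not keyAgreement, which is the setting that denies a certificate any use in fixed-(EC)DH handshakes.

```python
# Sketch (assumes the third-party `cryptography` package): build a
# self-signed EC certificate restricted to digitalSignature, then verify
# that the keyAgreement bit (needed for fixed-ECDH) is absent.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.test")])
now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=1))
    # digitalSignature only: fine for ECDHE_ECDSA, unusable for fixed ECDH
    .add_extension(
        x509.KeyUsage(
            digital_signature=True, content_commitment=False,
            key_encipherment=False, data_encipherment=False,
            key_agreement=False, key_cert_sign=False, crl_sign=False,
            encipher_only=False, decipher_only=False,
        ),
        critical=True,
    )
    .sign(key, hashes.SHA256())
)

usage = cert.extensions.get_extension_for_class(x509.KeyUsage).value
print("keyAgreement allowed:", usage.key_agreement)
```

A server certificate whose `key_agreement` flag is false cannot legitimately be used in the fixed-(EC)DH handshakes the KCI attack relies on, provided the client actually enforces the extension.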
For Internet laymen, it can appear that using the Internet is hopeless, that the entire thing needs to be redone. How does your team feel these sorts of issues should be approached?
CH: Keeping critical systems secure these days is a full-time job involving many experts who specialize in different areas and who keep up to date with the newest developments and state-of-the-art research with a sharp and alert mindset.
But it is not completely hopeless. Security needs to be built into critical systems and IT infrastructure from the beginning of the planning and development phase. Many systems in the past were designed without deep security considerations, and security was added only as an afterthought at the end of the development phase.
It is still common practice to design a system and only afterwards employ a penetration testing team to take a cursory look at it for a week. This kind of testing stops some basic and obvious attacks, so it is not completely useless and is better than nothing at all. But to build a secure system, security considerations have to be taken into account from the very beginning of development.
My company develops various security-critical IT solutions, such as airport software, payment infrastructure, and government and health-care systems. These systems deal with very sensitive, private data. The damage to the public, to the citizens of a country, would be immense if health data leaked and got into the hands of private actors. Therefore, at RISE, the development of critical IT systems is always accompanied and supervised by security engineers and security experts from the very beginning of the initial planning phase. You need experts who live and breathe security.
Your feeling that many things have to be redone is certainly right. Take TLS, which was designed more than twenty years ago. It has been attacked over and over by researchers and has evolved into a much more secure system over the years.
But still, its design is outdated. In the meantime, the science of cryptography has advanced. Cryptographers have established tools that allow us to build many useful things; we have tools available now that we did not have 20 years ago. We have formalized the notion of a secure channel, and we can prove and formally verify that a protocol is secure in these models. However, these tools cannot easily be applied to outdated systems such as TLS. Many research groups have tried to provide formal proofs of security for TLS, but the task turned out to be elusive. TLS was not designed with formal analysis or provable security in mind, because these tools did not really exist back then.
Now more is available, but we are still stuck with the old protocol, mainly due to compatibility issues. The main reason TLS is so widespread today is that it is supported across all kinds of devices. It is difficult to get rid of that and to provide an alternative. In a system where we develop both the clients and the backends, we do not need that kind of compatibility and can employ something more secure than TLS.
What’s the future of TLS?
CH: TLS evolved out of SSL, the first versions of which were really crappy and insecure. TLS got attacked, fixed, attacked, fixed, in a seemingly endless cycle.
Currently, the TLS WG at the IETF is working on the next version of TLS (1.3). This time, many more established cryptographers are involved in the process than in previous iterations, so it looks a bit more promising than before: they plan to redo quite a lot and to get rid of many insecure options. But still, I am not yet convinced about the end result of this process.
How did the companies respond to your warning?
CH: The affected vendors we worked with quickly fixed their systems. Facebook basically did everything, such as changing their server certificates and setting more restrictive X509 Key Usage extension settings. It really was a pleasure to work with Facebook’s security teams – they are polite, quick, and very responsive. But since certificate revocation is basically universally broken, attackers can still use the old Facebook server certificates to pull off the attack successfully, provided the client uses unpatched software.
The MitM attack against Facebook was nice for demonstration purposes, because everyone knows Facebook, and it creeps people out if someone tampers with their private communication and data. But I guess the main message of our research should not be that Facebook has been vulnerable, but that (1) TLS is old, overly complex, and has some serious issues, partly because it carries too much baggage, and that (2) system implementers should pay attention so that this protocol issue does not resurface. Only recently did OpenSSL implement non-ephemeral Diffie-Hellman client authentication – which we identified as a security problem – and they seemed to be on a trajectory to also implement the even more security-critical elliptic curve version (fixed ECDH).
What do you hope the results of your findings bring about?
CH: We hope that our disclosure of the attack will stop such developments from ever being integrated into large-scale, production-ready software. It could have been a real disaster if such changes had been pushed into systems such as Google Android, where client certificates can easily be inserted into the system’s certificate store by completely benign-looking apps in order to attack security-critical apps – a backdoor undetectable by malware researchers and automated malware analysis systems without knowledge of the attack vector described in our research paper.
You can view RISE’s findings in PDF format here.
Featured image from Shutterstock.