Fighting Cargo Cult – The Incomplete SSL/TLS Bookmark Collection

Throughout the recent months (and particularly: weeks), people have asked me how to properly secure their SSL/TLS communication, particularly on web servers. At the same time, I’ve started to look for good literature on SSL/TLS. I noticed that many of the “guides” on how to do a good SSL/TLS setup are actually cargo cult. Cargo cult is a really dangerous thing for two reasons: First of all, security is never a one-size-fits-all solution. Your setup needs to work in your environment, taking into account possible limitations imposed by hardware or software in your infrastructure. Secondly, some of those guides are outdated, e.g. they neglect the clear need for Perfect Forward Secrecy, or use now-insecure ciphers. In the worst case, they are simply wrong. So I won’t be providing yet another soon-outdated tutorial that leaves you none the wiser. Instead, I’ll share my collection of free and for-pay documents, books and resources on the topic which I found particularly useful, in the hope that they may help you gain some insight.

Introduction to SSL/TLS

If you’re unfamiliar with SSL/TLS, you definitely should take half an hour to read the Crypto primer, and bookmark SSL/TLS Strong Encryption: An Introduction for reference.

Deploying SSL/TLS

So you want to get your hands dirty? Check your server setup with Qualys SSL Labs’ server test. Make sure you fix the most important issues. You should at least be able to get an “A-” grading. If you find yourself in trouble (and are the administrator of an Apache or nginx setup), you should read the OpenSSL cookbook. Professional system administrators should have Bulletproof SSL/TLS and PKI on the shelf/eBook reader.1)

If you find yourself with too little time on your hands, you can skip ahead to Mozilla’s awesome config tool, which will help you set up your SSL vhost for Apache, nginx and HAProxy. However, some background may still be needed. You will find it on Mozilla’s cipher recommendation page and in the OpenSSL cookbook.

The SSL, the TLS and the Ugly

If you are a dedicated IT professional, you should not miss the next section. Although it’s not crucial for those wishing to “simply secure their server”, it provides those who are responsible for data security with a clear understanding of the numerous theoretical and practical limitations of SSL/TLS.

Tools and Utilities for Debugging SSL/TLS

Sometimes you need to debug errors during the SSL handshake. While a bit primitive, OpenSSL’s s_client tool is the weapon of choice. When it comes to monitoring SSL/TLS-encrypted communications, use mitmproxy or Charles. They need to be configured as proxies, but thanks to their active MITM position they can even intercept connections that use Perfect Forward Secrecy.
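As a quick illustration, a typical s_client invocation to inspect a server’s handshake might look like this (the host name is just a placeholder):

```shell
# Dump the certificate chain, negotiated protocol and cipher.
# -servername sends SNI, which most name-based vhosts require.
openssl s_client -connect example.com:443 -servername example.com < /dev/null
```

Reading from /dev/null makes the command exit after the handshake instead of waiting for interactive input.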

This list is not exhaustive, and if you have more suggestions, please go ahead and post them in the comments. I’ll be happy to add them. Just like with system administration in general, you’re never “done” with security. SSL/TLS is a swiftly moving target, and you need to be aware of what is going on. If you are an IT professional, subscribe to security mailing lists and the announcement lists of your vendors. Finally, while I’m aiming to keep this page updated, there’s no guarantee that this list is up to date either.

Update (22.04.2014): Don’t miss the discussion on this article over at Hacker News.

Article History

  • 21.04.2014 – Initial version
  • 21.04.2014 – Added “The Case for OCSP-Must-Staple”, Mozilla Cipher suite recommendation
  • 22.04.2014 – Updated to add sslyze and cipherscan, added HN link, fixed typos
  • 02.05.2014 – Add “Analyzing Forged SSL Certificate” paper
  • 19.12.2014 – Add Mozilla SSL Generator, updated text on book availability

1) I do realize that I am courting Ivan a lot in this section, and that relying on a single external web service that can go away any day is not a good thing. At the same time, I think that the handshake simulation and the simple rating process are invaluable, as such an assessment cannot be trivially done by people whose lives do not revolve around crypto and security 24/7. That said, I’m happy for any pointers towards other user-friendly tools.

2) While blindly following the rating can easily lead to the establishment of cargo cult, the test is continuously updated to only give a good grading to those who follow best practices. Again: avoid cargo cult, and make sure you have a good idea of what you are doing.

On Practical Qt Security

At 30C3, Ilja van Sprundel gave a talk on X security. In this talk, he also discussed Qt security matters, specifically how running a setuid binary that links against Qt is unsafe due to exploitable bugs in the Qt code base (citing the infamous setuid practice in KPPP). While his points are valid, he misses the bigger picture: Qt was not designed for use in setuid applications! Consequently, there are a lot of ways the security of a Qt application can be compromised when it runs as root. So I went on to discuss this issue with QtNetwork maintainer Richard Moore, and we both agree that, contrary to Ilja’s claim, we do need to dictate policy. So here it goes:

Do not ship Qt applications that require setuid. While the same is probably true for any other toolkit, we have only discussed this for Qt in more depth. Actually, Rich has prepared a patch for Qt 5.3 that makes an application abort when it is run setuid, unless you explicitly ask for that to be allowed. This should make it harder to shoot yourself in the foot.

While making QtCore and QtNetwork safe for setuid use is possible, they currently are not. If you absolutely have to (and you really shouldn’t), at least unset QT_PLUGIN_PATH and LD_LIBRARY_PATH in main(). The latter is required because even though LD_LIBRARY_PATH is ignored by the dynamic linker for setuid binaries, QtNetwork uses it unconditionally to look for OpenSSL. Of course, you also need to follow all the other best practices (note that even this list is incomplete; e.g. it doesn’t mention closing file descriptors).

However, there are also situations where a Qt application running as a regular user can be unsafe, so for those who ship their own Qt build to their customers, there are even more policies:

  • Never build Qt so its prefix is a publicly writable directory, such as /tmp: Suppose you make an in-source (developer) build in /tmp/qt; Qt will then go ahead and look for plugins in /tmp/qt/plugins. A malicious user could simply place a fake style there that, besides calling the style the user expects (e.g. via QProxyStyle), executes arbitrary malicious code. The same goes for image IO plugins, which are handled in QtCore.
  • Never build Qt so its prefix is a home directory: This one is trickier and a lot harder/more unlikely to exploit, but it’s a valid attack vector nonetheless: Suppose Joe Coder compiles Qt in-source in /home/joe/dev/qt. Now every customer needs to make sure that a local user by the same name is a really nice person.

So in conclusion, a better summary of the above would be:

Never distribute binaries built with a prefix that is a non-system directory!

If you already have such a setup but need a hotfix, there is hope: the Qt library contains strings starting with qt_plugpath= and qt_libspath=. Both are padded to 1024 bytes. Adding a binary null after the first / keeps Qt from looking for loadable code in user-accessible locations.
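A sketch of such a hotfix tool (the function name and structure are my own, not an official Qt utility; always work on a backup copy of the library) could look like this:

```cpp
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

// Find a padded qt_*path= string inside a Qt library binary and insert a
// NUL right after the first '/' of the path, so Qt only ever looks in "/"
// for loadable code. Returns false if the marker cannot be found.
bool truncateQtPath(const std::string &file, const std::string &marker)
{
    std::fstream f(file, std::ios::in | std::ios::out | std::ios::binary);
    if (!f)
        return false;
    std::string data((std::istreambuf_iterator<char>(f)),
                     std::istreambuf_iterator<char>());
    const std::size_t pos = data.find(marker);       // e.g. "qt_plugpath="
    if (pos == std::string::npos)
        return false;
    const std::size_t slash = data.find('/', pos + marker.size());
    if (slash == std::string::npos)
        return false;
    f.clear();                                       // reading hit EOF
    f.seekp(static_cast<std::streamoff>(slash) + 1);
    f.put('\0');                                     // path becomes "/"
    return f.good();
}
```

Usage would be to call truncateQtPath(lib, "qt_plugpath=") and truncateQtPath(lib, "qt_libspath=") on the installed library. Since the strings are padded, shortening the path in place does not shift any file offsets.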

TL;DR: The bugs Ilja points out are valid, but only affect applications that don’t follow good practice. We will attempt to make it harder for developers to make these mistakes, but writing suid applications isn’t something that will ever be recommended, or easy to do safely. Apart from the suid issue, however, there are more traps lurking if you provide your own Qt and build it in an unsafe way.

Further reading: Google+ discussion on the topic.
Acknowledgements: Richard Moore for contributing vital information to this document, Thiago Macieira for proof-reading.

Update: Clarified the wording to ensure it’s clear that a prefix is meant. Thanks, Ian.

Update 2: As Rich and David Faure pointed out, KPPP is dropping permissions before calling Qt code, and KApplication already has a setuid safeguard in place.

Update 3: Rich’s setuid check has been merged.

FrOSCon 2013: Call for Projects, Papers

The Free and Open Source Software Conference (FrOSCon) will take place on the 24th and 25th of August this year, kindly hosted by the Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin.

We are looking for projects that would like to present in the exhibition area, as well as interesting talks or workshops. Additionally, you can request a project room if you want to sit together to do some hacking, or have your own, topic specific course of lectures. Sign up now!

But even more so, we are looking for speakers. The focus this year is on Seamless Computing, How Free Software should deal with closed ecosystems and How to let the computer do your chores. Even if your topic is not in one of those domains: Submit your proposal.

Badge and Progress support in Qt Creator

For quite a long time now, Qt Creator has been using the native features of Mac OS X and Windows 7 to display build errors and progress as badges and progress bars on the icon in the Dock and Task Bar, respectively. Unfortunately, the X11 side has remained sorely empty. After realizing that a few applications, among them Chrome, could display certain information such as the number of ongoing downloads in Unity, I was wondering how this was implemented.

It turns out that libunity provides all the features required. Applications, identified by their .desktop name, can add progress and a badge. So I implemented it for Qt Creator, here is unity showing two compile errors in a demo application:


Now the gory details: libunity seems to be binary incompatible between releases. After researching for a while, I concluded that it is best to follow the Chromium implementation and only try to open known-to-work versions of libunity. This is not really optimal, but it works, at least until the next Unity release :(.

*Magnum narrator voice on*: Now I know what you’re thinking 1), and you are right: “X11” does not equal “Unity”, but I have no idea whether there are equivalents in GNOME 3 or KDE/Plasma. Probably the terminology is just different. If so, I’d be happy to learn about them and implement these as well. A plus if there were a freedesktop.org library (and I’m afraid there isn’t). If not, I’d like to toss in the question of whether KDE should have such a feature, or how else applications can usefully reflect certain statuses while in the background.

1) especially if you are reading this through Planet KDE!

Gran Canaria, here I come!

Wow! The last few days have been eventful. Only four days after LinuxTag and the KDE Wiki Meeting, I am sitting in the check-in area of Berlin-Tegel Airport, heading for Madrid. If everything works out as expected, I will then transfer to a flight to Las Palmas. I swore not to blog before I had checked in successfully, so the time for this entry is now, and to make it even more obvious:

The weather in Berlin is awesome already, so I am looking forward to seeing how Gran Canaria will beat this (probably fewer thunderstorms in the evening, although they are really refreshing).

At GCDS, I will present Qt Creator, the scalable C++ IDE from Qt Software (I even brought the leaflets I printed for LinuxTag; my bag was just short of excess baggage). I am looking forward to meeting everyone again tonight at the welcome party!

FrOSCon 2009: Call for Papers About to Close

The Call for Papers for this year’s Free and Open Source Conference (FrOSCon) will close in three days. Hot topics are Cloud Computing, Open Hardware, Free Software and SaaS (Software as a Service), as well as mobile gadgets (netbooks, phones, …).

Traditionally, FrOSCon has always hosted a sub-conference. After hosting the Python and PHP communities, this year’s programming language du jour is Java. Does anyone feel like giving a Jambi talk? 🙂

Btw: Qt Software supports FrOSCon as a Gold Sponsor, and both Qt Software and the KDE team will of course be present during the conference. Visit us on 22–23 August 2009 on the premises of the University of Applied Sciences in Sankt Augustin near Bonn!

KDE Dot News: Back To Where It Belongs

Following up on my last post, I wanted to give you a few more admin updates: For a few weeks now, KDE Dot News has been back on its old server. Just like before the move to Drupal, after a short visit to Immanuel in Munich, it is hosted at Oregon State University’s Open Source Lab (OSUOSL) along with some other Drupal-hosted sites.

I want to thank OSUOSL for their continuous and now even extended hosting of KDE sites. If you like the Dot, please consider a donation to those fine guys so they can keep us up and running. Thanks OSUOSL!

PS: I wanted to note that we moved away from Google Analytics to a private Piwik installation for the Dot due to understandable privacy concerns.

UserBase and TechBase: Achievements and Challenges

Finally, I took the time to do some long-standing maintenance work on UserBase, our home for KDE users and enthusiasts, and TechBase, our page for admins and developers, both based on MediaWiki technology and open for everyone to participate:

  • MediaWiki bumped to v1.14 (SVN)
  • True multihoming, which lowers the mean time between updates
  • Case-insensitive search for short words (e.g. “kde”) works
  • Search-as-you-type works
  • If a site search fails, you can now use other search engines to search the sites in a second pass.
  • TechBase and UserBase can now be added as search providers for e.g. Firefox and IE 8.
  • All wikis have been moved to a centralized unprivileged account on the server, so interested contributors can get shell access.
  • Finally: UserBase now allows normal logins in addition to OpenID logins

I moved away from exclusive OpenID logins mainly for two reasons: firstly, it seems there are just too many people who reject the idea of a “unified login provider” (with the chance of their password leaking here and there once in a while). Secondly, not all OpenID providers seem to work perfectly. Interoperability is an important factor, but we are not there yet. Still, OpenID will remain an option for now. KDE supports OpenID for a wide range of other sites, such as KDE Dot News or the KDE developers’ blogging platform.

But there are a lot of challenges ahead, on both the admin and the content side. That is why I renew my call for contributors and web developers to help UserBase and TechBase:

We need more solid i18n: Users should be able to dynamically switch the language of MediaWiki, or possibly be provided with the right language based on their browser settings or their IP (GeoIP). Also, the content should be delivered in the user’s native language. Work in the MediaWiki community is under way, but we need more dedicated people, as I am likely to have less and less time for these things due to my day job at Qt Software. If you are interested, please leave a comment.

Concerning the content, Lydia and the CWG are pushing for more content on UserBase, and TechBase needs more love from the content point of view. Although we do have a lot of information, it is not organized in a problem-oriented way. Say, for example, you are an admin who wants to know how to preset default settings: We do have details on Kiosk Mode and other facilities, but most people will not know what Kiosk Mode is. A FAQ-style page (“How do I …”) would be more helpful and provide more value to its users. If anyone is interested in solving this problem, please also leave a comment.

Qt Creator RC 1 Out For Your Testing Pleasures

With the awesome Qt Software guys in Oslo shipping a Release Candidate for Qt 4.5, we here at Qt Software Berlin couldn’t help but release an RC of our own. Presenting Qt Creator RC 1, a.k.a. 0.9.2 (don’t ask, we just like the number). This version has seen quite some polishing, e.g.:

  • Improved user interface, including a built-in option for your feedback
  • “Fake Vim” mode for VIM lovers
  • Improved Version Control Support (Perforce, Git and Subversion)

If this got you curious, you can get more details from our lovely team-member-in-Norwegian-exile Kavindra, and the binaries from the Qt Creator page.

Userbase I18n And You

KDE UserBase needs you! UserBase is the wiki-driven site for user-related content, in case you have been living under a rock for the last year or so. So far, I am the technical contact for this site: I upgrade the software (which is MediaWiki), add plugins, and write some of the templates in accordance with the team from the KDE Community Working Group, which has helped a lot to build up the contents of the site.

This worked as long as MediaWiki delivered all the features that we required. However, there is one thing that no wiki really handles well, and that MediaWiki, even though generally best suited for our tasks, is particularly bad at: internationalization, or i18n for short. Some people are really dying to get localized versions going, so I really want to pursue this.

Whatever we pick as a solution should enable the following goals:

  • Up-to-date translation for a given language
  • A warning if the content is not up to date

Assets we have (depending on the activity of the translation team):

  • a lot of man power all the time
  • almost no man power most of the time

and we need to be realistic in assuming that “almost no man power most of the time” is true for a lot of languages; at least, that is what other wikis suffer from.

So far, there are two approaches to go about translations with a wiki:

1. Tie international sites to the original English content.
2. Have individual content for each language.

So far, UserBase implements option 1, while lots of other wiki sites, such as the openSUSE site or Wikipedia itself, go for option 2. The big disadvantage I see in option 2 is that, as long as most of the people who drive the development communicate in English, the English wiki will always be more up to date. If you look at projects like Wikipedia and openSUSE, you will find that the most up-to-date version is in fact the primary language (i.e. English), and while the others may have individual concepts, they usually lag behind severely.

This is why I am proposing to solve this in a more centralized fashion: One master article, lots of translations. That gives us two choices again, this time of more technical nature:

1. Set up an individual wiki per language

This needs maintenance of even more MediaWiki installations, and I won’t be able to do this all on my own. The problem is that we are particularly short on voluntary admins. It also means complete content disjointness, although MediaWiki sometimes helps with interwiki links.

2. One Wiki for all languages.

This is probably the best approach to go about it, at least that’s my conclusion.

So far, we use URLs like

I would prefer an installation where we add one virtual host per language, all sharing a central wiki installation. Then
we could have something like this

This would link [[Adjusting plasma]], but without requiring a separate login or multiple setups for the contributors, and thus conveniently link all language versions. To make that easier, we could add a “start translation” button that does this automatically.

As for the problem of aging translations, we could have a metric that kicks in after every edit and indicates how outdated the translation is, compared to the master page. If the translation contains more recent content, we could have that wrapped in special flags so other people knowing both languages get notified to “backport” the changes in the original version.

Which means one thing: We need someone with PHP and possibly also MediaWiki experience to help us out with more functionality. Ideally, something that can be merged into MediaWiki proper.

I know that this is a fairly complex problem. I tried to outline it very briefly, but there are probably many open issues. Feel free to comment on them. Also, feel free to argue why one wiki is stupid and we should have multiple ones, while still solving the problems I pointed out above. Any other kind of idea, preferably combined with some code or active contribution, is very welcome.

If anyone from the Mediawiki community is reading this, I would appreciate a comment, too.