Open Letter to “üpo”: Barking up the Wrong Tree

(Disclaimer: I do not represent the CCC or its bodies. I have, however, been attending the Congress for years.)

Dear Richard Schneider,

You certainly stirred things up: the CCC as slave master of an army of amateurs, while the university-trained, certified interpreters are left out in the cold! And the result could have been so much better!

Instead of huffing back at you in finest Twitter manner, let me offer to explain the background of the Chaos Communication Congress to you.

The central point: the CCC and the Congress are a not-for-profit endeavor, and they run entirely on volunteers.

The Congress does not have anywhere near the budget of many other professional conferences, and it never did. That is mostly because it has about as much in common with a classic trade conference as a circle has with a rectangle: there may be similarities, but in many respects the comparison does not survive closer scrutiny. The same goes for the Congress (and for the open-air camp held every four years) when you compare it to classic conferences. Yes, both take place in a conference center; the big ones even in exhibition halls. But the CCC events grow out of several motivations that few conferences of this size pursue:

1. Affordability: the event should be accessible to as many people as possible. There are discounted tickets for CCC members and social hardship cases, as well as subsidized campaigns in which children get reduced admission as part of the Junghackertag (young hackers' day), with one accompanying parent admitted free of charge. At the same time, every participant pays regular admission, even those working the event as an angel or on the orga team.

2. Independence: this also means forgoing the big sponsor logos and booths that are a lucrative source of income at trade shows. It even extends to media.ccc.de, an alternative to YouTube, which ensures that the talks from CCC events do not depend on the whims of an internet giant. It is also a much better place to develop features such as multiple audio tracks and make them available to the general public; a feature that YouTube, incidentally, does not offer to this day.

3. Availability: the talks should be accessible to everyone. For people who cannot attend in person, there are streams, Congress Everywhere, and later the finished recordings. These, however, convey only a whiff of what the Congress actually is. You may get an inkling by clicking through the colorful picture gallery (which would not be half as colorful without the volunteer Arts&Beauty team), but you can only truly experience it on site.

But here is the last point, which is also the most important one:

4. The Congress is about coming together, learning, and building projects with one another. Every visitor can and should take part. That is where the many calls for participation in the blog you quoted in your article come from. This is how people get to know one another and how friendships form. C3lingo is one of those projects that, with a lot of dedication and entirely on their own initiative, accomplish more every year. The team often prepares for the high-profile talks by studying specialist vocabulary, or interprets the evening game shows into Swiss German.

I can speak first-hand only for the video team (Video Operation Center, VOC). The VOC alone put several hundred person-hours of preparatory work into the event, and the same again for the work on site. If we did not pay regular admission but instead charged professional wages, the video department alone would easily come to a high six-figure sum. Not only would that be financially impossible with a crew of pure “professionals”, it would also take away our freedom to approach things unconventionally. Along the way, a lot of software has been created that others can now use, and they do.

Because nerds love working their way into new topics and subject areas. In the best case they become professional amateurs in the process; if not, it was at least a great experience.

With that said, I believe I do understand the motivation with which you wrote your article, unfortunately in ignorance of the facts above. From interpreters among my acquaintances I know this: interpreters in Germany go through a very extensive degree program. That is offset, especially in the private sector, by an often poor market, because clients frequently lack the means to tell a good translation from a bad one. So they would rather quickly hire a student who, from the client's perspective, does the job just as well but is much cheaper.

This problem is well known in IT, too. Here as well, especially in mid-sized companies, “a student is quickly hired” (and with some luck, they actually know their stuff) instead of solving problems in a structured, sustainable way with experienced people (which, in IT, correlates in my experience only loosely with formal education and much more with, well, experience).

So you see, I can certainly sympathize with your position. But this is a classic case of “barking up the wrong tree”, as the anglophonically inclined would put it. Your article may win you the sympathy of people who know neither the CCC nor the Congress, but everyone else can only shake their heads. Hence my suggestion: come by the 35c3 and see for yourself.

Best regards,

Daniel Molkentin

LetsEncrypt Support for openSUSE

For my first hackweek, I joined forces with Klaas to work on a LetsEncrypt integration for openSUSE. So we set out to create yast-acme. Too many acronyms already? Alright, let’s start with…

The Elevator Pitch

“Imagine setting up encrypted websites and services was as simple as setting up your web server. Our aim is to provide that simplicity in openSUSE.”

The Motivation

This will take some dry background, so please bear with me. Until recently, encrypting websites, mail traffic, etc. through TLS certificates was a pretty painful process: you had to purchase a certificate from a Certificate Authority (CA) such as Comodo, GoDaddy, or Symantec. Their job is to verify that you are in possession of the hostname (e.g. www.mydomain.com) and to issue a certificate only if that is the case. In return, they demand a (sometimes pretty hefty) fee. That is because they underwent procedural and technical audits to be accepted by the major browsers, but also by e.g. Java. Members of this exclusive club can issue different kinds of certificates; we only care about Domain Validated (DV) ones. Other kinds include Extended Validation (EV), where CAs actually check company records and invest more effort. That is not important for most website owners, however.

On top of being expensive, some CAs have shown that the reputation of a “notary” is not always warranted, even though they are making good money. Recent incidents have covered the whole spectrum: bugs in the validation process, gross technical incompetence, and even deception. The latter caused all major browser vendors to distrust the root certificates of WoSign and StartCom, both of which had been issuing free certificates (although at least StartCom charged a fee for identity validation). And every year or two, certificates need to be swapped for new ones, which means spending more money and effort just to keep your communication channels secured.

Whoever refused to go down that path either had to create a custom CA and distribute its certificate to all their users/employees, or ship a so-called self-signed certificate. Both can (rightfully) lead to pretty scary browser warnings.

The web had been idling in this state until Edward Snowden’s revelations made it clear that the unencrypted web is dead. However, it was also clear that if ubiquitous encryption was to succeed, a new approach would be required. So Mozilla and the EFF, along with several commercial sponsors, created the non-profit Internet Security Research Group (ISRG). ISRG in turn runs LetsEncrypt, a new CA that provides proof-based certificates through the ACME protocol. Contrary to previous approaches, ACME requires proof of (administrative) ownership of the actual host (more specifically: of port 80), which is a much stronger proof than ownership of some email address associated with a domain name (e.g. retrieved through whois records). At the same time, this process is repeatable, allowing for automatic renewal.

Finally, the beauty of ISRG’s efforts is that both the client and server implementations are open source, so anyone could start an ACME-based CA (of course, they would still need to get their root certs accepted by the browser vendors).

Acquiring a Certificate through ACME

In essence, an ACME client first creates an account, i.e. registers with LetsEncrypt. Optionally, it can provide an email address, which the CA will use to warn about expiring certificates in case automatic renewal has failed. For both the initial issuance and every renewal, a challenge-response protocol is performed via HTTP on port 80: the LetsEncrypt CA verifies that an agreed-upon token is available under .well-known/acme-challenge. If this check succeeds, it issues the certificate for the requested domain names.
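
Before asking the CA to validate, you can check for yourself that the token is reachable the same way the CA will fetch it. Here is a minimal Java sketch of mine (not part of any ACME client; domain, token, and expected response body are made-up placeholders, since a real client receives these values from the CA during the challenge):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AcmeChallengeCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical values for illustration only.
        String domain = "www.mydomain.com";
        String token = "example-token";
        String expectedKeyAuth = "example-token.example-thumbprint";

        // The CA fetches this URL via plain HTTP on port 80.
        URI uri = URI.create("http://" + domain + "/.well-known/acme-challenge/" + token);

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // The challenge can only pass if the body matches the agreed-upon value.
        boolean ok = response.statusCode() == 200
                && response.body().trim().equals(expectedKeyAuth);
        System.out.println(ok ? "challenge file in place" : "challenge would fail");
    }
}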

The Path

Implementing the ACME protocol ourselves was out of scope for this project, mostly because there already are quite a few client implementations. So the first task was to pick one that was concise, suitable, and simple to package. Certbot, the official client currently developed by the EFF, is a dependency hell. Remember, we want encryption to become ubiquitous. Then there is acmetool, which has quite a few nice features. Unfortunately, it is written in Go, which is notoriously hard to package. So we went with dehydrated (formerly “letsencrypt.sh”), which only depends on bash, openssl, curl, and diff.

Even before hackweek 15 started, I had begun packaging dehydrated for openSUSE (and SLES, and other RPM-based distros). Thanks go to Roman Drahtmüller and Darix for improvements. The package provides default location handling for .well-known/acme-challenge for Apache (and, with limitations, for nginx/lighttpd).

Over the course of the hackweek, we added a JSON-based status output, which might go upstream after some cleanup.

The Challenge

The Yast-ACME module in action
Next was a Yast interface for requesting and managing certificates. The real challenge was that neither Klaas nor I had done any Yast hacking before, so we knew nothing about YCP widgets, the ecosystem, etc. Also, my Ruby knowledge was really rusty, and Klaas had never done any Ruby before. But nothing can stop a fearless Perl veteran! The Yast module tutorial also proved truly useful for getting started.

The Result

Can be found in this OBS repository. It contains a patched dehydrated, along with the yast(2)-acme module. The module can be used to request certificates.

The Work Ahead

The Yast ecosystem turned out to be a bit more complex than anticipated. Since we had to start from square one in a few places, there is much to be done to make this a really smooth experience:

  • Account Setup
  • Integration with e.g. the yast-http-server and yast-mail modules
  • Certificate revocation
  • Auto tests
  • Provide a stub responder on port 80 in case no web server is installed

Encouraged by the initial success, we plan to pursue this project after hackweek. I hope you will join us. Please get in touch with either me or Klaas.

The FAQ: Why Not Call It Yast-LetsEncrypt?

After Comodo tried to register a trademark for LetsEncrypt, ISRG had to start protecting its trademark. Hence they cannot allow any non-official project to use the name “LetsEncrypt”. This is why we resorted to “ACME”, the name of the protocol.

Introducing Improved Project Collaboration with ownCloud Central

The ownCloud community has long suffered from a gap between users and developers:

  • In the forum (and to a lesser extent on the user mailing list and IRC), a lot of regular volunteers have very successfully helped users get up to speed.
  • On the mailing lists and on GitHub, developers have been developing away, sometimes miles above the users and in a mostly disjoint community.

Between the two there has been a big, unsatisfying divide. Besides, neither forums nor mailing lists are a good fit for agile communities these days.

So let’s fix this! The forum moderators and sysadmins have long wanted to move off our old, trusted forum for this reason, and had decided on Discourse. Now we finally have a new host, courtesy of ownCloud GmbH, to carry the successor.

Please welcome: ownCloud Central!

I want to give a big shout-out to RealRancor and tflidd, who have done terrific work migrating the FAQs and vital articles to the new platform. Next, we’ll put the old forum into read-only mode and archive its contents; ownCloud Central will take over. We have also migrated all accounts from the old forum. Let’s continue to make ownCloud awesome together!

Some Education While Feeding a Code Troll

It doesn’t happen often that I feel inclined to comment on something that is not part of my daily work life. Today, I’ll make an exception, because a decent answer does not fit into 140 chars.

It all started out with this post on Twitter that someone in my timeline retweeted:

Note that I’m not a Java developer (but I write a lot of C++), and I’m not going to defend the language. However, I think the post deliberately deceives people instead of educating them. That is what this post tries to compensate for.

To make things a bit more interesting, I’d actually like to start with the last example and work upwards in reverse order:

int q = 022 - 2;
> 16 // because fuck math

This is simply a matter of notation. Any C-style language I am aware of (and Java clearly inherits C traits in many ways) uses the following conventions for integer literals:

aa // 10-based
0aa // 8-based (octal)
0xaa // 16-based (hexadecimal)

where ‘aa’ represents a digit sequence. So 022 is octal notation for 18, and 18 - 2 = 16. You can complain and moan about it, or simply accept that even today many people still use this kind of notation, at least in its hexadecimal form. In the case of Java, it would be simple enough to make the IDE or linter warn about octal notation, should studies show that it does more harm than good. In any case, this is a well-accepted idiom among developers of C-style languages, and “fuck math” is a clever deception to trick you into thinking this was a compiler bug.
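
If you want to convince yourself, here is a quick sketch of mine (not from the original tweet) that prints all three notations side by side:

public class Literals {
    public static void main(String[] args) {
        int dec = 22;   // decimal: 22
        int oct = 022;  // octal: 2*8 + 2 = 18
        int hex = 0x22; // hexadecimal: 2*16 + 2 = 34

        System.out.println(dec - 2); // 20
        System.out.println(oct - 2); // 16, the "because fuck math" result
        System.out.println(hex - 2); // 32
    }
}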

Next, we see this:

(byte) + (char) - (int) + (long) -1;
> 1 // I'm not sure what this even means

Squeezing language features into one line in an unusual way and then complaining about it is possible in any language. The real learning value lies in the answers to two questions:

1. What are we looking at (i.e. what does it mean)?

We are looking at a number of casts. In C-style languages, a cast is a way to tell the compiler that you want to convert a value. Some casts have to be explicit; for others, that is optional. In this case, it’s pure mind-fuckery.

Your mind needs to parse this the way the compiler does: from right to left.

2. Why is the result 1?

We start with -1, which is an integer in Java. We cast it to a long integer, but the value is still -1. Effectively, the compiler is now looking at this:

(byte) + (char) - (int) + (-1);

So we simply cast this back to a normal integer:

(byte) + (char) - (-1);

This is where first-grade math kicks in and we get

(byte) + (char) + 1;

The next two casts simply convert the value to char and then to byte, with the interleaved unary plus operators changing nothing. So after two more casts, we end up with this:

1;

For the simple value of 1, the data type does not matter; even a byte could hold it. If you are interested in what ranges of numbers int, long, char, and byte can hold, check Oracle’s data type reference.
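
The casts only get interesting once a value no longer fits the target type. A small sketch of my own to illustrate, alongside the expression we just decomposed:

public class CastDemo {
    public static void main(String[] args) {
        // The expression from the tweet, as derived step by step above:
        System.out.println((byte) + (char) - (int) + (long) -1); // 1

        // Narrowing casts simply drop the bits that do not fit:
        int n = 300;
        System.out.println((byte) n);        // 44: only the low 8 bits survive
        System.out.println((int) (char) -1); // 65535: char is an unsigned 16-bit type
    }
}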

Which brings us closer to the really interesting case, the second example:

Integer.valueOf(1000) == Integer.valueOf(1000)
> false // WTF?

This is caused by a decision that I cannot really comprehend: Sun decided to introduce an Integer class that wraps ints to support generics, instead of adding generics support for non-class data types (sometimes called Plain Old Data types, or PODs), like C++ does:

std::list<int> myIntList;

Instead, in Java you have to use

List<Integer> myIntList;

Usually, that’s not a problem, as ints get transparently wrapped into Integers (appropriately called ‘boxing’), so you shouldn’t have any dealings with the Integer class outside generics.

But even if you choose to be ignorant about that, or simply don’t know better, the == operator should sound every last siren in your brain, because in Java, there is no operator overloading.

This means that whenever you compare two objects (Integer.valueOf() returns an object of type Integer), it will always compare the addresses of the objects, rather than any values they might hold. Of course, the addresses of two distinct objects are never the same, even if their values are. The correct way would be to either use straight ints, or use the equals() method, which is the recommended way of comparing value-type objects in Java.
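
A minimal sketch of the difference; only the last two comparisons do what the tweet’s author pretended to expect:

public class BoxedCompare {
    public static void main(String[] args) {
        Integer a = Integer.valueOf(1000);
        Integer b = Integer.valueOf(1000);

        System.out.println(a == b);                        // false: compares object addresses
        System.out.println(a.equals(b));                   // true: compares the wrapped values
        System.out.println(a.intValue() == b.intValue());  // true: plain int comparison
    }
}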

So why on $deity‘s earth is this happening?

Integer.valueOf(6) == Integer.valueOf(6)
> true // Of course

If you’ve paid attention so far, you will realize that the comment is once again deceptive: this is the only actual WTF here, unless you have merely been reading this post for the occasional chuckle.

We just learned that Java’s == operator compares object addresses. I was briefly puzzled by this, but then theorized that the only logical explanation is that the implementation of valueOf(int) must have some kind of cache. And indeed, a look at the Java source code reveals that this is exactly what happens.

Java provides a cache for number objects with values from -128 to +127. This means that valueOf() returns the same cached object for every request within this range, so the address comparison results in true.
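
You can watch the cache boundary in action with a quick sketch:

public class CacheBoundary {
    public static void main(String[] args) {
        System.out.println(Integer.valueOf(127) == Integer.valueOf(127));   // true: both come from the cache
        System.out.println(Integer.valueOf(128) == Integer.valueOf(128));   // false: two fresh objects
        System.out.println(Integer.valueOf(-128) == Integer.valueOf(-128)); // true: lower cache bound
        System.out.println(Integer.valueOf(-129) == Integer.valueOf(-129)); // false: just outside the cache
    }
}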

Again, your linter or IDE should warn about both of the above.

Conclusion

I hope I was able to show that sometimes there is more to seemingly weird code snippets than meets the eye. It’s usually worthwhile to lean back, try to understand them, and then re-evaluate your assessment of whether or not you like a given language. So again, this is not a defense of Java. Also, a tip of the hat to the author of the code for identifying these interesting language quirks.

ownCloud at re:publica and LinuxTag 2014

Last week saw several events at Berlin’s Station event location that featured ownCloud in one way or another. The first, re:publica, is probably best known among bloggers and internet activists. Titled Into the Wild, an obvious tongue-in-cheek reference to the unsafe and well-surveilled place the internet has become, it was a great venue to talk about ownCloud. Frank seized the opportunity and delivered a talk about ownCloud to a packed room, despite The Hoff performing on Stage 1 at the same time.

Jos and Arthur at a freshly set up booth.
The last day of re:publica coincided with the first day of LinuxTag 2014, which had moved from the fairgrounds to Station Berlin. This brought a lot of new visitors to our booth, which ownCloud shared with openSUSE and KDE, courtesy of our new Community Manager Jos.

On Thursday, Frank and I were also interviewed about ownCloud by the Sondersendung podcast. If you understand German, you can listen to our 15-minute interview.

ownCloud is a proud sponsor of LinuxTag 2014.
At the presentation area of our booth, Arthur and Georg gave workshops on writing your first ownCloud app, while I covered the details of the synchronization process in depth. On every day of LinuxTag, quite a few people took the chance to listen and ask questions.

Others just walked up to our demo point for a quick demonstration of ownCloud’s capabilities and concepts. Some inquired about the improvements over earlier versions they had used, and most were impressed by the progress that ownCloud 6 and ownCloud Client 1.6 represent. Since LinuxTag joined forces with droidcon, we also got lots of questions about our mobile integration for Android (and iOS :), covering both the ownCloud app and calendar/address book sync.

In total, LinuxTag has been a really great show this year, owed mostly to the co-location with other events and the more central venue. We’re looking forward to LinuxTag 2015!

Arthur explaining how to write your own ownCloud app.
The workshop on file synchronization.

Fighting Cargo Cult – The Incomplete SSL/TLS Bookmark Collection

Throughout recent months (and particularly recent weeks), people have asked me how to properly secure their SSL/TLS communication, particularly on web servers. At the same time, I’ve started looking for good literature on SSL/TLS. I noticed that many of the “guides” on how to do a good SSL/TLS setup are actually cargo cult. Cargo cult is a really dangerous thing for two reasons: first of all, security is never a one-size-fits-all solution. Your setup needs to work in your environment, taking into account possible limitations imposed by hardware or software in your infrastructure. And secondly, some of those guides are outdated, e.g. they neglect the clear need for Perfect Forward Secrecy, or use now-insecure ciphers. In the worst case, they are simply wrong. So I won’t be providing yet another soon-outdated tutorial that leaves you none the wiser. Instead, I’ll share my collection of free and for-pay documents, books, and resources on the topic which I found particularly useful, in the hope that they may help you gain some insight.

Introduction to SSL/TLS

If you’re unfamiliar with SSL/TLS, you definitely should take half an hour to read the Crypto primer, and bookmark SSL/TLS Strong Encryption: An Introduction for reference.

Deploying SSL/TLS

So you want to get your hands dirty? Check your server setup with Qualys SSL Labs’ server test and make sure you fix the most important issues; you should at least be able to get an “A-” grade. If you find yourself in trouble (and are the administrator of an Apache or nginx setup), you should read the OpenSSL cookbook. Professional system administrators should have Bulletproof SSL/TLS and PKI on the shelf/eBook reader. 1)

If you find yourself with too little time on your hands, you can skip straight to Mozilla’s awesome config tool, which will help you set up your SSL vhost for Apache, nginx, and HAProxy. However, some background may still be needed; you will find it on Mozilla’s cipher recommendation page and in the OpenSSL cookbook.

The SSL, the TLS and the Ugly

If you are a dedicated IT professional, you should not miss this next section. Although it’s not crucial for those who wish to “simply secure their server”, it provides those responsible for data security with a clear understanding of the numerous theoretical and practical limitations of SSL/TLS.

Tools and Utilities for Debugging SSL/TLS

Sometimes you need to debug errors during the SSL handshake. While a bit primitive, OpenSSL’s s_client tool is the weapon of choice. When it comes to monitoring SSL/TLS-encrypted communications, use mitmproxy or Charles. They need to be configured as proxies, but thanks to their active MITM position they can intercept even PFS connections.
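
If you prefer scripting a quick handshake check over reading s_client output, the same basics can be probed from code. Here is a minimal Java sketch of mine (not from any of the linked resources); it performs a handshake against a host of your choosing and prints what was negotiated:

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;

public class HandshakeProbe {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "example.com";

        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
            socket.startHandshake(); // fails loudly on handshake or validation errors

            System.out.println("Protocol: " + socket.getSession().getProtocol());
            System.out.println("Cipher:   " + socket.getSession().getCipherSuite());
            for (Certificate cert : socket.getSession().getPeerCertificates()) {
                if (cert instanceof X509Certificate) {
                    System.out.println("Subject:  " + ((X509Certificate) cert).getSubjectX500Principal());
                }
            }
        }
    }
}

Note that this only shows what your local Java runtime negotiates; it is no substitute for a full assessment like the SSL Labs test.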

This list is not exhaustive; if you have more suggestions, please go ahead and post them in the comments, and I’ll be happy to add them. Finally, just like with system administration in general, you’re never “done” with security. SSL/TLS is a swiftly moving target, and you need to be aware of what is going on. If you are an IT professional, subscribe to security mailing lists and the announcement lists of your vendors. And while I aim to keep this page updated, there is no guarantee of up-to-dateness for this list either.

Update (22.04.2014): Don’t miss the discussion on this article over at Hacker News.

Article History

  • 21.04.2014 – Initial version
  • 21.04.2014 – Added “The Case for OCSP-Must-Staple”, Mozilla Cipher suite recommendation
  • 22.04.2014 – Updated to add sslyze and cipherscan, added HN link, fixed typos
  • 02.05.2014 – Add “Analyzing Forged SSL Certificate” paper
  • 19.12.2014 – Add Mozilla SSL Generator, updated text on book availability

1) I do realize that I am courting Ivan a lot in this section, and that relying on only a single external web service that can go away any day is not a good thing. At the same time, I think the handshake simulation and the simple rating process are priceless, as such an assessment cannot be trivially done by people whose lives do not revolve around crypto and security 24/7. That said, I’m happy about any pointers towards other user-friendly tools.

2) While blindly following the rating can easily lead to the establishment of cargo cult, ssllabs.com is continuously updated to give a good grade only to those who follow best practices. Again: avoid cargo cult, and make sure you have a good idea of what you are doing.

ownCloud Client 1.6: The Tour

Now that ownCloud Client 1.6.0 beta1 is out, it’s time to explain the story behind it:

This release was developed under the promise that it would improve performance 1), and we have made tremendous improvements: using a new Qt-based propagator implementation, we can now perform multiple simultaneous up- and downloads. We still provide the old propagator for certain situations where it is more suitable, such as when bandwidth limiting is needed.

Furthermore, the sync journal access code has been significantly optimized. It was responsible for most of the high CPU load during the mandatory interval checks. CPU usage should be much lower now, and the client should cope with more files at the same time.

Windows users should also find update times improved as the time spent in file stat operations has been reduced. Mac OS X users will enjoy the benefits of a much improved file watcher. To be able to use the more efficient API, 1.6 drops support for Mac OS Snow Leopard (10.6) and now requires Mac OS 10.7 or better.

At the same time, production releases now use Qt 5 rather than Qt 4 on Windows and Mac OS X 2). This fixes a lot of visual bugs on Mac OS X, especially for Mavericks users, and allows us to profit from improvements in SSL handling, notably on the Mac.

We also implemented an item that was on many people’s wish lists: a concise sync log. Next to the database, the sync folder now holds a hidden file called .owncloudsync.log, which records every sync run in a minimal CSV format. Contrary to previous logging facilities, it is always on, and it only collects information relevant to the actual sync algorithm decisions.

Because this tour was not as colorful as the previous one, let’s close this blog post with a feature contributed by Denis Dzyubenko: the settings dialog on Mac OS X now has a native look & feel:

Get ownCloud Client 1.6.0 beta1 now and provide feedback!

1) Note that while the client is multi-threaded, you may find that transfer times still don’t improve as much as you would expect. This is due to locking issues on the server which prevent efficient parallel transfers. This has been improved in 1.7, and could potentially be improved even further by implementing support for X-Sendfile/X-Accel-Redirect in SabreDAV, the DAV framework used by the ownCloud server.

2) We can’t make the switch even on modern Linux distributions, mostly because of the poor support for the modern and divergent systray/notification area implementations in Qt 5: even in Qt 4 we could only use it because Canonical had patched their Qt to make QSystemTrayIcon work with Unity, a patch they have not ported to Qt 5 yet. Gnome 3 also hides traditional systray icons away far too well, not to speak of Plasma. Any leads would be helpful.

PS: Martin’s blog on the subject indicates that Qt 5.3 might solve the problem.

On Practical Qt Security

At 30C3, Ilja van Sprundel gave a talk on X security. In this talk, he also discussed Qt security matters, specifically how running a setuid binary that links against Qt is unsafe due to exploitable bugs in the Qt code base (citing the infamous setuid practice in KPPP). While his points are valid, he misses the bigger picture: Qt was not designed for use in setuid applications! Consequently, there are a lot of ways the security of a Qt application can be compromised when it runs as root. So I went on to discuss this issue with QtNetwork maintainer Richard Moore, and we both agree that, contrary to Ilja’s claim, we do need to dictate policy. So here it goes:

Do not ship Qt applications that require setuid. While the same is probably true for any other toolkit, we have only discussed this in depth for Qt. In fact, Rich has prepared a patch for Qt 5.3 that makes Qt quit if you try to run an application setuid, unless you explicitly ask it to allow that. This should make it harder to shoot yourself in the foot.

While making QtCore and QtNetwork safe for setuid use is possible, they currently are not. If you absolutely have to (and you really shouldn’t), at least unset QT_PLUGIN_PATH and LD_LIBRARY_PATH in main(). The latter is required because even though LD_LIBRARY_PATH is ignored by the linker for setuid binaries, QtNetwork unconditionally uses it internally to look for OpenSSL. Of course, you also need to follow all the other best practices (note that even that list is incomplete, e.g. it doesn’t mention closing FDs).

However, there are also situations where a Qt application running as user can be unsafe, so to those who ship their own Qt build to their customers, there are even more policies:

  • Never build Qt so that its prefix is a publicly writable directory, such as /tmp: Suppose you build an in-source (developer) build in /tmp/qt; Qt will then go ahead and look for plugins in /tmp/qt/plugins. A malicious user could simply provide a fake style there that, in addition to calling the style the user expects (e.g. via QProxyStyle), executes arbitrary malicious code. The same goes for Image IO plugins, which are handled in QtCore.
  • Never build Qt so that its prefix is a home directory: This one is trickier and a lot harder (and less likely) to exploit, but it’s a valid attack vector nonetheless: Suppose Joe Coder compiles Qt in-source in /home/joe/dev/qt. Now every customer needs to make sure that a local user by the same name is a really nice person.

So in conclusion, a better summary of the above would be:

Never distribute binaries built with a prefix that is a non-system directory!

If you already have this setup but need a hotfix, there is hope: libQtCore.so contains strings starting with qt_plugpath= and qt_libspath=, both padded to 1024 bytes. Inserting a binary null after the first / keeps Qt from looking for loadable code in user-accessible locations.
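
To illustrate how such a hotfix could be scripted, here is a rough, hypothetical Java sketch of mine; it is not a supported tool, it assumes the library is writable, and you should absolutely back up the original first:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class QtPathPatcher {
    // Find the marker in the binary and write a null byte
    // right after the first '/' that follows it.
    private static void patch(byte[] so, String marker) {
        byte[] m = marker.getBytes();
        outer:
        for (int i = 0; i <= so.length - m.length; i++) {
            for (int j = 0; j < m.length; j++) {
                if (so[i + j] != m[j]) continue outer;
            }
            int p = i + m.length;
            while (p < so.length && so[p] != '/') p++; // locate the first '/'
            if (p + 1 < so.length) so[p + 1] = 0;      // null-terminate right after it
            return;
        }
    }

    public static void main(String[] args) throws IOException {
        Path lib = Paths.get(args.length > 0 ? args[0] : "libQtCore.so");
        byte[] so = Files.readAllBytes(lib);
        patch(so, "qt_plugpath=");
        patch(so, "qt_libspath=");
        Files.write(lib, so);
    }
}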

TL;DR: The bugs Ilja points out are valid, but they only affect applications that don’t follow good practice. We will attempt to make it harder for developers to make these mistakes, but writing suid applications isn’t something that will ever be recommended, or easy to do safely. Apart from the suid issue, however, there are more traps lurking if you ship your own Qt and build it in an unsafe way.

Further reading: Google+ discussion on the topic.
Acknowledgements: Richard Moore for contributing vital information to this document, Thiago Macieira for proof-reading.

Update: Clarified the wording to ensure it’s clear that a prefix is meant. Thanks, Ian.

Update 2: As Rich and David Faure pointed out, KPPP drops privileges before calling Qt code, and KApplication already has a setuid safeguard in place.

Update 3: Rich’s setuid check has been merged.

ownCloud 6 Release Party — Berlin Edition

A packed room listens to the talks at the ownCloud 5 release event.
(German version over at Arthur’s blog)

With the final release of ownCloud 6 imminent, it is time to celebrate!

This time, BeLUG, who also run an ownCloud installation for their members, were kind enough to host the Berlin release event. We’ll have short talks by both developers and admins, free pizza, and beverages at affordable prices.

Talks (~20 min each):

  • ownCloud 6 Tour — Arthur Schiwon, ownCloud
  • Improvements in ownCloud Client 1.5 — Daniel Molkentin, ownCloud
  • ownCloud @ BeLUG, an Admin Perspective — tba, BeLUG

Coordinates:

Please give a short shout in the comments if you want to join us.

See you there!

PS: There will be parties in other places as well.

ownCloud Client 1.5 Tour

It’s been quite a while since my last post about ownCloud Client 1.4. Now that ownCloud Client 1.5 beta1 has been released, it’s time to demonstrate what’s in it for you this time.

The New Propagator

First of all, we have completely redesigned The Propagator. It’s the component responsible for actually performing all the changes that the earlier phases of a sync run have determined to be required. It is vital that the propagator does things in a clever way, and the new design allows just that. The new propagator writes changes to the sync journal as they happen, instead of rewriting the journal after every run. This means that sync runs can be paused or even terminated, and on the next start, the client will pick up where it left off. This is especially important for the initial sync, which may take quite a while.

Next, we sped up sync runs significantly. If you are using an up-to-date server version, ownCloud Client 1.5 only requires one instead of three round trips to get a simple file uploaded, since the server can now accept the modification time as a header value. This will especially help with small files.

Another thing this release gets straight is support for remote moves: the old propagator handled them as a delete plus re-download, which is a bit silly to begin with. With the new propagator, we can finally handle moves for what they are, which turns pushing megabytes of files into a simple mv instruction. In order to detect moves reliably, we now use file IDs next to ETags and other metadata, which requires ownCloud 6.0 on the server side.

When you deleted folders, the old propagator worked strictly recursively, deleting entries one by one. This had several implications: the non-atomic nature of the old approach allowed unexpected errors to happen, and every file would be moved to the trash separately (assuming you had the trash app activated), making restores rather painful. The new propagator does away with all this: if you delete a directory, the directory is moved to trash as one unit, with all its structure intact. As a side effect, this makes the delete operation on the wire much faster.

Handling Error Conditions

Ignored and blacklisted files now get featured more prominently.
There are some situations where syncing a file cannot succeed, for instance when the shared folder on the server cannot be written to. Previously, we would try again and again and again, which caused system load.

Now, in cases like read-only shared folders, we actually know that we will never succeed — until someone changes the permissions on the server, that is. So the client will now put files it cannot write on a blacklist. Only when the file or one of its parent directories changes do we check again whether we are now allowed to write. This should reduce traffic and CPU load a lot.

Another issue we addressed is the handling of files that are on the local ignore list or that contain characters which cannot be replicated to other operating systems (an ongoing discussion). Most people were well aware that these would never sync, making the (i) indicator we were showing an annoyance. We also reported the failure in a log dialog, which turned out to be too well buried.

In the new release, the sync log has been renamed “Sync Activity” and was placed more prominently as a top-level item. It shows all files that have been synced, as well as items that could not be synced at all. The systray icon will not show the (i) icon anymore.

One Account to rule them all

Another major change won’t be visible until you look at the source: the Account. It has been introduced as a first step towards support for multiple accounts in forthcoming versions. For now, suffice it to say that this change has made the internals easier to understand and extend.

Password handling and Signing out

The client when signed in…
A direct implication is that the client now has a folder-spanning notion of being online or offline. In addition, you can now sign out, which means your password will be discarded locally. Should the password change, or should you have signed out and want to sign back in, you will be asked to enter your password.

The Statistics

… and when signed out.
This release addresses more than 50 issues (features, enhancements, bugs), so this tour is by no means complete.

We hope you like the new client, and we appreciate your feedback. Please head over to the issue tracker to tell us about bugs or regressions.