LetsEncrypt Support for openSUSE

For my first hackweek, I joined forces with Klaas to work on a LetsEncrypt integration for openSUSE. So we set out to create yast-acme. Too many acronyms already? Alright, let’s start with…

The Elevator Pitch

“Imagine setting up encrypted websites and services was as simple as setting up your web server. Our aim is to provide that simplicity in openSUSE.”

The Motivation

This will take some dry background, so please bear with me: Until recently, encrypting websites, mail traffic, etc. through TLS certificates was a pretty painful process. You had to purchase a certificate from a Certificate Authority (CA) such as Comodo, GoDaddy, Symantec, and others. Their job is to verify that you are in possession of the hostname (e.g. www.mydomain.com), and to only issue a certificate if that is the case. In return, they demand a (sometimes pretty hefty) fee. That’s because they underwent procedural and technical audits to be accepted by the major browsers, but also by e.g. Java. Members of this exclusive club can issue different kinds of certificates; we only care about Domain Validated (DV) ones. Other forms include Extended Validation (EV), where CAs actually check company records and take more effort. That is, however, not important for most website owners.

On top of being expensive, recent incidents have shown that the reputation of a “notary” is not always warranted, even though the CAs are making good money. We have seen everything: bugs in the validation process, gross technical incompetence and even deception. The latter caused all major browser vendors to distrust the root certificates of WoSign and StartCom, both of which had been issuing free certificates (although at least StartCom charged a fee for identity validation). And every year or two, certificates need to be swapped for new ones, which means spending more money and effort just to keep your communication channels secured.

Whoever refused to go that way either had to create a custom CA and distribute its certificates to all their users/employees, or ship a so-called self-signed certificate. Both can (rightfully) lead to pretty scary browser warnings.

The web had been idling in this state until Edward Snowden’s revelations made it clear that the unencrypted web is dead. However, it was also clear that if ubiquitous encryption was to succeed, a new approach would be required. So Mozilla and the EFF, along with several commercial sponsors, created the non-profit Internet Security Research Group (ISRG). ISRG in turn runs LetsEncrypt, a new CA that provides proof-based certificates through the ACME protocol. Contrary to previous approaches, ACME requires a proof of (administrative) ownership of the actual host (more specifically: port 80), which is a much stronger proof than just ownership of some email address associated with a domain name (e.g. retrieved through whois records). At the same time, this process is repeatable, allowing for automatic renewal.

Finally, the beauty of ISRG’s efforts is that both client and server implementations are open source, so anyone could start an ACME-based CA (of course, they would still need to get their root certs accepted by the browser vendors).

Acquiring a Certificate through ACME

In essence, an ACME client first creates an account, i.e. registers with LetsEncrypt. Optionally, it is possible to provide an email address, which the CA will use to warn about expiring certificates in case the automatic renewal has failed. For both the initial issuance and every renewal, a challenge-response protocol is performed via HTTP on port 80: the LetsEncrypt CA verifies that an agreed-upon token is available under .well-known/acme-challenge. If this succeeds, it issues the certificate for the requested domain names.
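To make the flow a bit more concrete, here is a minimal Python sketch of the idea behind the HTTP challenge. The token and thumbprint values are made up, and the real protocol (JSON messages, signatures, nonces) is omitted; this only illustrates the agree-on-a-token-and-check-it step:

```python
def key_authorization(token: str, account_thumbprint: str) -> str:
    # In ACME, the challenge response combines the token with a thumbprint
    # of the account key, so only the registered account can answer.
    return f"{token}.{account_thumbprint}"

def ca_validates(served_content: str, token: str, account_thumbprint: str) -> bool:
    # The CA fetches http://<domain>/.well-known/acme-challenge/<token>
    # over port 80 and compares what it receives with what it expects.
    return served_content == key_authorization(token, account_thumbprint)

# The client places the expected content under .well-known/acme-challenge/ ...
served = key_authorization("abc123", "thumbprint-xyz")

# ... and validation succeeds only if the served content matches.
assert ca_validates(served, "abc123", "thumbprint-xyz")
assert not ca_validates("something-else", "abc123", "thumbprint-xyz")
```

Because this check only proves control of the web server on port 80, it is exactly the “administrative ownership” proof mentioned above.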

The Path

Implementing the ACME protocol ourselves was out of scope for this project, mostly because there already are quite a few client implementations. So the first task was to pick one that was concise, suitable and simple to package. Certbot, the official client currently developed by the EFF, is a dependency hell. Remember, we want encryption to become ubiquitous. Then there is acmetool, which has quite a few nice features. Unfortunately, it is written in Go, which is notoriously hard to package. So we went with dehydrated (formerly “letsencrypt.sh”), which only depends on bash, openssl, curl and diff.

Even before hackweek 15 started, I had begun to package up dehydrated for openSUSE (and SLES, and other RPM-based distros). Thanks go to Roman Drahtmüller and Darix for improvements. This includes providing default location handling for .well-known/acme-challenge for Apache (and nginx/lighttpd, with limitations).

Through the course of the hackweek, we added a JSON-based status output, which might go upstream after some cleanup.

The Challenge

The Yast-ACME module in action
Next up was a Yast interface for requesting and managing certificates. The real challenge was that neither Klaas nor I had done any Yast hacking before, so we knew nothing about YCP widgets, the ecosystem, etc. Also, my Ruby knowledge was really rusty, and Klaas had never done any Ruby before. But nothing can stop a fearless Perl veteran! Also, the Yast module tutorial proved truly useful for getting started.

The Result

The result can be found in this OBS repository. It contains a patched dehydrated, along with the yast(2)-acme module. The module can be used to request certificates.

The Work Ahead

The Yast ecosystem turned out to be a bit more complex than anticipated. Since we had to start from square one in a few places, there is much to be done to make this a really smooth experience:

  • Account Setup
  • Integration with e.g. the yast-http-server and yast-mail modules
  • Certificate revocation
  • Auto tests
  • Provide a stub responder on port 80 in case no web server is installed

However, encouraged by the initial success, we plan to pursue this project after hackweek. I hope you will join us. Please get in touch with either me or Klaas.

The FAQ: Why not call it Yast-LetsEncrypt?

After Comodo tried to register a trademark for LetsEncrypt, ISRG had to start protecting its trademark. Hence they cannot allow any non-official project to use the name “LetsEncrypt”. This is why we resorted to “ACME”, the name of the protocol.

Introducing Improved Project Collaboration with ownCloud Central

New York Grand Central Terminal, June 2013
The ownCloud community has long suffered from a gap between users and developers:

  • In the forum (and to a lesser extent on the user mailing list and IRC), a lot of regular volunteers have very successfully helped users get up to speed.
  • On the mailing lists and on GitHub, developers have been developing away, sometimes miles above and mostly disjoint from that community.

Between the two, there has been a big, unsatisfying divide. Also, neither forums nor mailing lists are a good fit for agile communities these days.

So let’s fix this! The forum moderators and sysadmins have long wanted to move off our trusted old forum for this reason, and had decided to use Discourse. Now we finally have a new host, courtesy of ownCloud GmbH, to carry a successor.

Please welcome: ownCloud Central!

I want to give a big shout out to RealRancor and tflidd, who have done terrific work on migrating the FAQs and vital articles to the new platform. Next, we’ll put the forum into read-only mode and will archive its contents. ownCloud Central will take over. We also migrated all accounts from the old forum. Let’s continue to make ownCloud awesome together!

ownCloud at re:publica and LinuxTag 2014

Last week saw several events in Berlin’s Station event location that featured ownCloud in one way or the other. The first, re:publica, is probably best known to bloggers and internet activists. Titled Into the wild, an obvious tongue-in-cheek reference to the unsafe and well-surveilled place the internet has become, it was a great place to talk about ownCloud. Frank took that opportunity and delivered a talk about ownCloud to a packed room, despite The Hoff performing on Stage 1 at the same time.

Jos and Arthur at a freshly set up booth.
The last day of re:publica coincided with the first day of LinuxTag 2014, which moved from the fairgrounds to Station Berlin. This brought a lot of new people to our booth, which ownCloud shared with openSUSE and KDE, courtesy of our new Community Manager Jos.

On Thursday, Frank and I also got invited to and interviewed about ownCloud by the Sondersendung podcast. If you understand German, you can listen to our 15-minute interview.

ownCloud is a proud sponsor of LinuxTag 2014.
At the presentation area of our booth, Arthur and Georg gave workshops on writing your first ownCloud app, while I covered the details of the synchronization process in depth. Every day of LinuxTag, quite a few people took the chance to listen and ask questions.

Others just walked up to our demo point for a quick demonstration of ownCloud’s capabilities and concepts. Some inquired about the improvements since the earlier versions they had used, and most were impressed by the progress that ownCloud 6 and ownCloud Client 1.6 represent. Since LinuxTag joined forces with droidcon, we also got lots of questions about our mobile integration for Android (and iOS :), both for the ownCloud app and for calendar/address book sync.

In total, LinuxTag was a really great show this year, mostly owing to the co-location with other events and the more central location. We’re looking forward to LinuxTag 2015!

Arthur explaining how to write your own ownCloud app.
The workshop on file synchronization.

Fighting Cargo Cult – The Incomplete SSL/TLS Bookmark Collection

Engage padlock! Throughout the recent months (and particularly weeks), people have asked me how to properly secure their SSL/TLS communication, particularly on web servers. At the same time, I’ve started to look for good literature on SSL/TLS. I noticed that many of the “guides” on how to do a good SSL/TLS setup are actually cargo cult. Cargo cult is a really dangerous thing for two reasons: First of all, security is never a one-size-fits-all solution. Your setup needs to work in your environment, taking into account possible limitations imposed by hardware or software in your infrastructure. Secondly, some of those guides are outdated, e.g. they neglect the clear need for Perfect Forward Secrecy, or use now-insecure ciphers. In the worst case, they are simply wrong. So I won’t be providing yet another soon-outdated tutorial that leaves you none the wiser. Instead, I’ll share my collection of free and for-pay documents, books and resources on the topic which I found particularly useful, in the hope that they may help you gain some insight.

Introduction to SSL/TLS

If you’re unfamiliar with SSL/TLS, you definitely should take half an hour to read the Crypto primer, and bookmark SSL/TLS Strong Encryption: An Introduction for reference.

Deploying SSL/TLS

So you want to get your hands dirty? Check your server setup with Qualys SSL Labs’ server test and make sure you fix the most important issues. You should at least be able to get an “A-” grade. If you find yourself in trouble (and are the administrator of an Apache or nginx setup), you should read the OpenSSL Cookbook. Professional system administrators should have Bulletproof SSL and TLS on the shelf/eBook reader.1)

If you find yourself with too little time on your hands, you can skip straight to Mozilla’s awesome config tool, which will help you set up your SSL vhost for Apache, nginx and HAProxy. However, some background may still be needed; you will find it on Mozilla’s cipher recommendation page and in the OpenSSL Cookbook.

The SSL, the TLS and the Ugly

If you are a dedicated IT professional, you should not miss the next section. Although it’s not crucial for those wishing to “simply secure their server”, it provides those who are responsible for data security with a clear understanding of the numerous theoretical and practical limitations of SSL/TLS.

Tools and Utilities for Debugging SSL/TLS

Sometimes you need to debug errors during the SSL handshake. While a bit primitive, OpenSSL’s s_client tool is the weapon of choice. When it comes to monitoring SSL/TLS-encrypted communications, use mitmproxy or Charles. They need to be added as proxies, but thanks to their active MITM position they can also intercept PFS connections.

This list is not exhaustive; if you have more suggestions, please go ahead and post them in the comments. I’ll be happy to add them. Finally, just like with system administration in general, you’re never “done” with security. SSL/TLS is a swiftly moving target, and you need to stay aware of what is going on. If you are an IT professional, subscribe to security mailing lists and the announcement lists of your vendor. And while I aim to keep this page updated, there is no guarantee of up-to-dateness for this list either.

Update (22.04.2014): Don’t miss the discussion on this article over at Hacker News.

Article History

  • 21.04.2014 – Initial version
  • 21.04.2014 – Added “The Case for OCSP-Must-Staple”, Mozilla cipher suite recommendation
  • 22.04.2014 – Updated to add sslyze and cipherscan, added HN link, fixed typos
  • 02.05.2014 – Add “Analyzing Forged SSL Certificate” paper
  • 19.12.2014 – Add Mozilla SSL Generator, updated text on book availability

1) I do realize that I am courting Ivan a lot in this section, and that relying on a single external web service that can go away any day is not a good thing. At the same time, I think that the handshake simulation and the simple rating process are priceless, as such an assessment cannot be trivially done by people whose lives do not revolve around crypto and security 24/7. That said, I’m happy for any pointers towards other user-friendly tools.

2) While blindly following the rating can easily lead to the establishment of cargo cult, ssllabs.com is continuously updated to only give a good grade to those who follow best practices. Again: avoid cargo cult and make sure you have a good idea of what you are doing.

ownCloud Client 1.6: The Tour

Now that ownCloud Client 1.6.0 beta1 is out, it’s time to explain the story behind it:

This release was developed under the promise that it would improve performance 1), and we have made tremendous improvements: Using a new Qt-based propagator implementation, we can now perform multiple simultaneous up- and downloads. We still provide the old propagator for certain situations where it is more suitable, such as when bandwidth limiting is needed.

Furthermore, the sync journal access code has been significantly optimized. It was responsible for most of the high CPU load during the mandatory interval checks. CPU usage should be much lower now, and the client should remain usable with more files at the same time.

Windows users should also find update times improved as the time spent in file stat operations has been reduced. Mac OS X users will enjoy the benefits of a much improved file watcher. To be able to use the more efficient API, 1.6 drops support for Mac OS Snow Leopard (10.6) and now requires Mac OS 10.7 or better.

At the same time, production releases now use Qt 5 rather than Qt 4 on Windows and Mac OS X.2) This fixes a lot of visual bugs on Mac OS X, especially for Mavericks users, and allows us to benefit from improvements in the SSL handling, especially on the Mac.

We also implemented an item that was on many people’s wish lists: a concise sync log. Next to the database, the sync folder now holds a hidden file called .owncloudsync.log. It records every sync run in a minimal CSV format. Contrary to previous logging facilities, it is always on and only collects information relevant to the actual sync algorithm decisions.

Because this tour was not as colorful as the previous one, let’s close this blog post with a feature contributed by Denis Dzyubenko: The settings dialog on Mac OS X now has a native look & feel:

Get ownCloud Client 1.6.0 beta1 now and provide feedback!

1) Note that while the client is multi-threaded, you may find that the transfer time still doesn’t improve as much as you would expect. This is due to locking issues on the server which prevent efficient parallel transfers. This has been improved in 1.7, and could potentially be improved even further by implementing support for X-Sendfile/X-Accel-Redirect in SabreDAV, the DAV framework used by the ownCloud server.

2) We can’t do the switch on Linux, even on modern distributions, mostly because of the poor support for the modern and divergent systray/notification area implementations in Qt 5: even with Qt 4 we could only use it because Canonical had patched their Qt to make QSystemTrayIcon work with Unity, a patch they have not ported to Qt 5 yet. Gnome 3 also hides traditional systray icons away far too well, not to speak of Plasma. Any leads would be helpful.

PS: Martin’s blog on the subject indicates that Qt 5.3 might solve the problem.

ownCloud 6 Release Party — Berlin Edition

A packed room listens to the talks at the ownCloud 5 release event.
(German version over at Arthur’s)

With the final release of ownCloud 6 imminent, it is time to celebrate!

This time, BeLUG, who also run an ownCloud installation for their members, were kind enough to host the Berlin release event. We’ll have short talks by both developers and admins, free pizza, and beverages at affordable prices.

Talks (~20 min each):

  • ownCloud 6 Tour — Arthur Schiwon, ownCloud
  • Improvements in ownCloud Client 1.5 — Daniel Molkentin, ownCloud
  • ownCloud @ BeLUG, an Admin Perspective — tba, BeLUG


Please give a short shout in the comments if you want to join us.

See you there!

PS: There will be parties in other places as well:

ownCloud Client 1.5 Tour

It’s been quite a while since my last post about ownCloud Client 1.4. Now that ownCloud Client 1.5 beta1 has been released, it’s time to demonstrate what’s in it for you this time.

The New Propagator

First of all, we have completely redesigned the propagator. It’s the component that is responsible for actually performing all the changes that earlier phases of a sync run have determined to be required. It is vital that the propagator does things in a clever way, and the new design allows just that. The new propagator writes changes to the sync journal as they happen, and does not rewrite the journal after every run. This means that sync runs can be paused or even terminated, and on the next start, the client will pick up where it left off. This is especially important for the initial sync, which may take quite a while.
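The resume behaviour described above can be sketched in a few lines of Python. This is purely illustrative (the real journal is an SQLite database and the items are full sync records, not strings):

```python
def propagate(items, journal, apply_change):
    """Apply each pending change, committing to the journal as we go."""
    for item in items:
        if item in journal:
            # Already done in a previous (possibly interrupted) run: skip.
            continue
        apply_change(item)
        journal.add(item)  # commit immediately, not at the end of the run

journal = set()
applied = []
items = ["a.txt", "b.txt", "c.txt"]

# Simulate a run that is interrupted after two items...
propagate(items[:2], journal, applied.append)
# ...then a later run that picks up where the first left off.
propagate(items, journal, applied.append)
assert applied == ["a.txt", "b.txt", "c.txt"]  # c.txt applied exactly once
```

The key design point is that the journal is written incrementally, so a crash or pause never throws away completed work.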

Next, we sped up sync runs significantly. If you are using an up-to-date server version, ownCloud Client 1.5 only requires one round trip instead of three to get a simple file uploaded, since the server can now accept the modification time as a header value. This will especially help with small files.

Another thing this release gets straight is support for remote moves: the old propagator handled them as a delete plus re-download, which is a bit silly to begin with. With the new propagator, we can finally handle moves for what they are, which turns pushing megabytes of files into a simple mv instruction. In order to detect moves reliably, we now use file IDs in addition to ETags and other metadata, which requires ownCloud 6.0 on the server side.

When you deleted folders, the old propagator worked strictly recursively, deleting files one by one. This had several implications: the non-atomic nature of the old approach allowed unexpected errors to happen, and every file would be moved to the trash separately (assuming you had the trash app activated), making restores rather painful. The new propagator does away with all this: if you delete a directory, the directory with its entire structure is moved to the trash in one go. As a side effect, this makes the delete operation on the wire much faster.

Handling Error Conditions

Ignored and blacklisted files now get featured more prominently.
There are some situations where syncing a file cannot succeed, for instance when the shared folder on the server cannot be written to. Previously, we would try again and again and again, which caused unnecessary system load.

Now, in cases like read-only shared folders, we actually know that we will never succeed — until someone changes the permissions on the server, that is. So the client will now put files it cannot write on a blacklist. Only when the file or one of its parent directories changes do we check again whether we are now allowed to write. This should reduce traffic and CPU load a lot.
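The blacklist idea boils down to remembering the state a file had when it failed, and only retrying once that state changes. A minimal sketch (illustrative names, not the client’s actual data structures):

```python
blacklist = {}  # path -> ETag observed at the time of the failure

def record_failure(path, etag):
    """Remember that this file could not be propagated in its current state."""
    blacklist[path] = etag

def should_retry(path, current_etag):
    """Retry only if the file has changed since it was blacklisted."""
    return blacklist.get(path) != current_etag

record_failure("Shared/readonly.txt", "etag-A")
# Unchanged since the failure: skip it, saving traffic and CPU.
assert not should_retry("Shared/readonly.txt", "etag-A")
# The file (or its permissions) changed on the server: try again.
assert should_retry("Shared/readonly.txt", "etag-B")
```

The same check applied to a parent directory’s ETag covers the “permissions changed on the server” case, since that changes the directory’s state as well.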

Another issue we addressed was the handling of files that are on the local ignore list or which contain characters that cannot be replicated to other operating systems (which is an ongoing discussion). Most people were well aware that syncing these would never work, making the (i) indicator we were showing an annoyance. We also indicated the failure in a log dialog, which turned out to be too well-buried.

In the new release, the sync log has been renamed “Sync Activity” and is placed more prominently as a top-level item. It shows all files that have been synced, as well as items that could not be synced at all. The systray icon will not show the (i) icon anymore.

One Account to rule them all

Another major change won’t be visible to you until you look at the source: The Account. It has been introduced as a first step to implement support for multiple accounts in forthcoming versions. For now, it suffices to say that this change has made the internals easier to understand and extend.

Password handling and Signing out

The client when signed in…
A direct implication is that the client now has a folder-spanning notion of being on- or offline. In addition, you can now also sign out, which means your password will be discarded locally. Should the password change, or should you have signed out and want to sign back in, you will be asked for your password.

The Statistics

... and when signed out.
This release addresses more than 50 issues (features, enhancements, bugs), so this tour is by no means complete.

We hope you like the new client, and we appreciate your feedback. Please head over to the issue tracker to tell us about bugs or regressions.

ownCloud Client 1.4 – A Visual Guide

Only slightly more than a month has passed since the release of ownCloud Client 1.3.0, and it’s been quite a ride. But now we are at the first milestone of the 1.4.0 release, which we would like to share with you: beta 1. The focus of this release, apart from the usual fixes, was to provide improved user feedback and to extend both the user interface and the backend in a way that allows us to bring the client forward. For the UI, this meant the introduction of a settings dialog. For the backend, it meant a lot of refactoring.


All important information is now available in the context menu.

Previous versions of the ownCloud client were quite sparse in terms of feedback. You had to wait for a complete sync to finish in order to see the results, and to understand what the client actually did, you had to resort to running with the --logwindow option. No more! The new beta shows the current sync progress and already processed items in the context menu. If this is not enough, you can choose Details... from the Recent Changes menu. This will pop up a dialog giving you the complete details about all uploaded, downloaded, deleted and moved files and directories. On top of that, the information is now available as the sync happens, not only as a result afterwards as before. There is also feedback on the upload progress, both in the context menu and in the account details. When there are problems with the sync, like running out of quota, the system tray icon will now change to indicate the problem.

Finally, we introduced a Help... item, which points to the official online manual. We are currently updating it to make it more useful, but it already contains many important hints for troubleshooting. If you want to help out with the documentation, check issue #788.


Icons going native

It’s hardly believable, but so far the client went without a settings dialog. Under the hood, features had been piling up, such as switching to native/monochrome icons, but they were only available as command line switches. Now they are all pretty check boxes. On top of that, there are now options to disable pop-ups resulting from syncs, as well as auto-starting the client on all operating systems on a per-user basis. Auto-start is now only the default once an account has been successfully configured.

Refactoring for Features

Detailed quota and button leading to the Ignored Files Editor.

This release features the most changes in terms of lines of code since the beginning of the client’s development. Code has been refactored to enable new functionality, such as bandwidth control, the aforementioned quota visualization, and custom ignore patterns. Not only can you add new file patterns that will be ignored by the client, you can also mark them as discardable. For instance, locally created meta-files such as .DS_Store (Mac) or Thumbs.db (Windows) would previously not be deleted when the folder was removed remotely, rendering the client incapable of removing the directory. Ignored files marked as discardable will now be removed without warning, making the sync experience a lot smoother.
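The distinction between “ignored” and “discardable” can be sketched with a simple pattern match. The pattern syntax and the flag name here are illustrative; the real client has its own ignore-file format:

```python
import fnmatch

# (pattern, discardable) pairs: discardable files may be deleted so that
# a remotely removed folder can also be removed locally.
ignore_patterns = [
    (".DS_Store", True),   # Mac meta-file: safe to discard
    ("Thumbs.db", True),   # Windows meta-file: safe to discard
    ("*.tmp", False),      # ignored, but never deleted automatically
]

def classify(filename):
    for pattern, discardable in ignore_patterns:
        if fnmatch.fnmatch(filename, pattern):
            return "discardable" if discardable else "ignored"
    return "sync"

assert classify(".DS_Store") == "discardable"
assert classify("notes.tmp") == "ignored"
assert classify("report.odt") == "sync"
```

With this classification, deleting a remotely removed folder only has to refuse when it contains "ignored" (non-discardable) leftovers.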

New in 1.4: Setting up bandwidth limits.

Another, still ongoing effort is the introduction of a smarter scheduler that ensures that sync runs are only performed when there are changes on the server. Before, we could only detect local changes. To achieve this, we leverage the ETag, a unique ID provided by the ownCloud WebDAV server. This should result in significantly reduced CPU load and network traffic. No more sync runs every 30 seconds: instead, the root ETags on the server are checked, and a sync run is only started if they have changed. Also, we have lowered the thread priority of the actual sync run to provide a smoother experience.
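The polling logic described above can be sketched as follows. `fetch_root_etag` is a stand-in for the real WebDAV request the client performs; only the decision logic is shown:

```python
def should_sync(fetch_root_etag, last_etag):
    """Fetch the current root ETag; sync only if it differs from the last one."""
    current = fetch_root_etag()
    return (current != last_etag), current

# Simulate three polls: nothing changed twice, then a change on the server.
etags = iter(["etag-1", "etag-1", "etag-2"])
fetch = lambda: next(etags)

changed, last = should_sync(fetch, "etag-1")  # server still at etag-1
assert not changed
changed, last = should_sync(fetch, last)      # still etag-1: no sync run
assert not changed
changed, last = should_sync(fetch, last)      # server moved to etag-2
assert changed and last == "etag-2"
```

Each poll is a single cheap request, so the expensive full sync run only happens when the server-side state has actually moved.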

There are a lot more changes in this release, which are summarized in the ChangeLogs for Mirall and OCSync as well as the open and closed issues for the 1.4 milestone in GitHub.

FrOSCon 2013: Call for Projects, Papers

frosconThe Free and Open Source Software Conference (FrOSCon) will take place on the 24th and 25th of August this year, kindly hosted by the Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin.

We are looking for projects that would like to present in the exhibition area, as well as interesting talks or workshops. Additionally, you can request a project room if you want to sit together to do some hacking, or have your own, topic specific course of lectures. Sign up now!

But even more so, we are looking for speakers. The focus this year is on Seamless Computing, How Free Software should deal with closed ecosystems and How to let the computer do your chores. Even if your topic is not in one of those domains: Submit your proposal.

ownCloud Client 1.2.3

Today, along with the server updates, we have released ownCloud Client 1.2.3. This release will continue to work for users of the 4.5 series. Clearly the most important and single most annoying bug this release addresses is the creation of illegitimate conflict files. This can happen for several reasons; lately, it has been reported due to a regression in 5.0.0 (which was fixed in 5.0.3). We now compare the files for equality, which avoids illegitimate conflict files on the client side.
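The “compare before conflicting” idea is simple: before flagging a conflict, check whether the local and remote content are actually identical. A minimal sketch, with hashing standing in for whatever comparison the client really performs:

```python
import hashlib

def same_content(data_a: bytes, data_b: bytes) -> bool:
    # Comparing digests stands in for comparing the two file versions.
    return hashlib.sha256(data_a).digest() == hashlib.sha256(data_b).digest()

local = b"hello world\n"
remote = b"hello world\n"

# Identical bytes on both sides: no conflict file needs to be created.
assert same_content(local, remote)
# Genuinely different content still produces a (legitimate) conflict.
assert not same_content(local, b"hello world!\n")
```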

We also fixed the sync handling of Microsoft Office documents and possibly other locked files on Windows: before, when a file was locked (e.g. by an Office application), the client would think that the file was gone and delete it from the server (and hence from other clients) until the application released the file again. As of 1.2.2, when a file is locked, the client simply skips it. We also ignore the lock files.

People have already reported this behavior as a bug for OpenOffice files, because they think that we should transport the lock file. However, this is wrong: suppose you sync while someone else holds a lock on the file, and then you board a train: you would not be able to open the file due to the lock file. Also, by the time the lock file is synced, the original author might already have closed the application that was locking it. This is a matter of principle: unlike on a local file system, two people editing the same file is perfectly OK — it will result in a legitimate conflict during sync, and only on one of the sides.

Finally, a quick tour through the additional changes: we fixed some of the crashes seen especially on Mac OS when setting up a folder. Linux users using Nautilus will now have their sync folder automatically added to Nautilus during sync setup. And the client now works with screen readers on Windows. For those observing crashes after some time on Linux, please read this post for a description of workarounds.

The packages are available at the usual place and represent a significant improvement over 1.2.1 for many users, especially when using a 5.0 server (which you should also upgrade to). For the next release, we are working on chunked syncing and an overhaul of the user interface, which also brings a lot of code improvements along the way. Stay tuned.