In this open series, you can contribute shows that are on the topic of Privacy and Security
Two recent events have shed light on some fundamental issues in getting security right in Open Source projects. One of them is a serious bug referred to as "Heartbleed", and the other is the first part of a security audit of the TrueCrypt encryption program. By looking at both of these together and drawing out the lessons learned, we can reach some conclusions about what is needed to achieve security in Open Source projects.
Comment #1 posted on 2014-06-12 09:44:43 by Ken Fallon
Disagree with your comments on LibReSSL
As ever I enjoyed this show and broadly agree with many of your points. I would first like to strongly disagree with the section where you discuss LibReSSL before adding another suggestion.
Your ad hominem argument against Theo de Raadt (10:23:00) was unmerited; while people may criticise his tact, he has a long history of producing and managing secure projects.
From https://en.wikipedia.org/wiki/Theo_de_Raadt: "He is the founder and leader of the OpenBSD and OpenSSH projects".
To address your other points in that section.
1. "I would stick with OpenSSL and give LibReSSL a pass, at least until such a time as they show a long track record of success".
The developers of LibReSSL have a long track record of success, as they are the same people that bring you OpenSSH. They are noted for being able to produce secure software. So much so that Linus Torvalds said of the team, "I think the OpenBSD crowd is a bunch of masturbating monkeys, in that they make such a big deal about concentrating on security to the point where they pretty much admit that nothing else matters to them." From a practical point of view, everyone running any Linux distribution is more than likely already running and trusting OpenSSH.
From https://www.libressl.org/: "LibReSSL is primarily developed by the OpenBSD Project".
From https://www.openssh.com/: OpenSSH is developed by the OpenBSD Project.
2. "A good general rule in security is that new code is much more dangerous than code that has been around for a long time"
The whole Heartbleed issue proves this to be false. The rule itself is based on the premise that if code has been around a long time, many people have reviewed it and many bugs have been fixed. This is a rewording of "given enough eyeballs, all bugs are shallow", the Linus's Law argument which you yourself criticise in the episode.
Many people skip the caveats in the formal version: "Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone". The Heartbleed bug was caused by not having a large enough beta-tester and co-developer base, and yet this is exactly what the OpenBSD Project brings to the table.
Furthermore, LibReSSL is not "new code", but rather it is "forked code". They are fixing old existing bugs in OpenSSL, and nothing is preventing OpenSSL from implementing the fixes they discover.
3. You argue that removing code and "stuff they don't care about" doesn't look good.
I would point out that that is not a bad thing and was widely supported when LibreOffice forked OpenOffice. Michael Meeks celebrates this by publicising 38,714 known unused methods in LibreOffice. "One of the unfortunate things that LibreOffice inherited, as part of the several decades worth of unpaid technical debt, is unused code that has been left lying around indefinitely. This was particularly unhelpful when mingled with the weight and depth of the useful code we have around the place."
So finally I would suggest that "Mono culture is bad" should be added to your list. Having (cooperating) competing systems is not a bad thing, as it encourages development and innovation. Over-reliance on any one piece of software is a bad thing. We saw that in the fields of operating systems and web browsers, and now in security libraries.
Comment #2 posted on 2014-06-13 01:15:53 by Kevin O'Brien
Why I said that...
I still maintain that tested code is better than untested code, and nothing tests it more than time. The OpenSSL code worked well for a long time until a very subtle error crept in. So the question is whether we will more quickly get to a more secure state by developing a mature code base or by throwing it out and starting over. In general, I maintain that fixing the mature code is a better practice.
Second, announcing almost immediately that you have thrown out 90,000 lines of code (the number I recall) does not, to me, suggest careful thought so much as a hack-and-slash mentality, and I don't like that in security. Calm deliberation usually works better.
Third, from the reports I read one of the things that was discarded as an unnecessary feature was Windows compatibility. Even though I support free software, I recognize that we need to share the Internet with a shit-ton of Windows machines, and I would prefer that they be as secure as possible.
I totally agree on Mono culture, and I hope that LibreSSL provides good competition. I did not mean to imply that LibreSSL should not exist, only that I would be very cautious about adopting anything unproven.
Comment #3 posted on 2014-06-13 16:00:04 by Ken Fallon
You may not have researched this enough
To be clear, this is *not*, as you say, "starting over". They started with the existing OpenSSL code and worked from there. So they are using the exact same "mature code", and any issues they find are also issues in OpenSSL. Once found, the fixes are added to LibReSSL while the bugs _remain_ in OpenSSL. However, do not mistake "old code" for "tested code". It was not tested, which goes back to your own point: "Bugs are not shallow if the eyeballs are not there." Now that people *are* looking at the code, bugs are being found. Case in point: the new team brought on board by the Linux Foundation to help fix OpenSSL reported on 5th June 2014 CVE-2014-0224, "a Man-in-the-middle (MITM) attack where the attacker can decrypt and modify traffic from the attacked client and server."
So let's look at what was actually removed in the first week. Ars Technica has an interview with Theo de Raadt in which he describes exactly what they removed.
de Raadt said there were "Thousands of lines of VMS support. Thousands of lines of ancient WIN32 support. Nowadays, Windows has POSIX-like APIs and does not need something special for sockets. Thousands of lines of FIPS support, which downgrade ciphers almost automatically."
There were also "thousands of lines of APIs that the OpenSSL group intended to deprecate 12 years or so ago and [are] still left alone."
De Raadt told ZDNet that his team has removed 90,000 lines of C code. "Even after all those changes, the codebase is still API compatible," he said. "Our entire ports tree (8,700 applications) continue to compile and work after all these changes."
So to summarise the facts:
90,000 lines of *unused* or *obsolete* code is removed from the LibReSSL code base.
90,000 lines of *unused* or *obsolete* code remains in the OpenSSL code base.
If the industry average is 17.5 errors per 1000 lines of code*, and there are 90,000 lines of code, we can estimate that the OpenSSL code base has 1,575 MORE errors than LibReSSL:
(90,000 / 1,000) x ((50 - 15) / 2) = 1,575
* "Code Complete" by Steve McConnell.
Blame Charles in NJ if my math is wrong.
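For anyone who wants to check it, that back-of-the-envelope estimate can be reproduced in a few lines of Python (a sketch only; the 15-50 errors per 1,000 lines range is the Code Complete figure cited above, and the 17.5 rate is the one used in this comment):

```python
# Reproduce the defect estimate from the comment above:
# 90,000 removed lines at 17.5 errors per 1,000 lines of code.

removed_lines = 90_000            # lines stripped from the LibReSSL fork
errors_per_kloc = (50 - 15) / 2   # the 17.5 figure used above

extra_errors = (removed_lines / 1_000) * errors_per_kloc
print(f"Estimated extra errors in OpenSSL vs LibReSSL: {extra_errors:,.0f}")
# prints: Estimated extra errors in OpenSSL vs LibReSSL: 1,575
```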
Comment #4 posted on 2014-06-14 20:49:15 by Kevin O'Brien
It looks like I may have been a bit hasty in my comments about the code being removed. Your points certainly seem reasonable. Correction accepted.
Note to Verbose Commenters
If you can't fit everything you want to say in the comment below then you really should record a response show instead.
Note to Spammers
All comments are moderated. All links are checked by humans. We strip out all html. Feel free to record a show about yourself, or your industry, or any other topic we may find interesting. We also check shows for spam :).