Chapter 4. Open Source and Security

Ben Laurie

More than two years ago, in a fit of frustration over the state of open source security, I wrote my first and only blog entry[1] (for O’Reilly’s Developer Weblogs):

June and July were bad months for free software. First the Apache chunked encoding vulnerability,[2] and just when we’d finished patching that, we get the OpenSSH hole.[3] Both of these are pretty scary—the first making every single web server potentially exploitable, and the second making every remotely managed machine vulnerable.

But we survived that, only to be hit just days later with the BIND resolver problems.[4] Would it ever end? Well, there was a brief respite, but then, at the end of July, we had the OpenSSL buffer overflows.[5]

All of these were pretty agonising, but it seems we got through it mostly unscathed, by releasing patches widely as soon as possible. Of course, this is painful for users and vendors alike, having to scramble to patch systems before exploits become available. I know that pain only too well: at The Bunker,[6] we had to use every available sysadmin for days on end to fix the problems, which seemed to be arriving before we’d had time to catch our breath from the previous one.

But I also know the pain suffered by the discoverer of such problems, so I thought I’d tell you a bit about that. First, I was involved in the Apache chunked encoding problem. That was pretty straightforward, because the vulnerability was released without any consultation with the Apache Software Foundation, a move I consider most ill advised, but it did at least simplify our options: we had to get a patch out as fast as possible. Even so, we thought we could take a little bit of time to produce a fix, since all we were looking at was a denial-of-service attack, and let’s face it, Apache doesn’t need bugs to suffer denial of service—all this did was make it a little cheaper for the attacker to consume your resources.

That is, until Gobbles[7] came out with the exploit for the problem. Now, this really is the worst possible position to be in. Not only is there an exploitable problem, but the first you know of it is when you see the exploit code. Then we really had to scramble. First we had to figure out how the exploit worked. I figured that out by attacking myself and running Apache under gdb. I have to say that the attack was rather marvelously cunning, and for a while I forgot the urgency of the problem while I unravelled its inner workings. Having worked that out, we were in a position to finally fix the problem, and also, perhaps more importantly, more generically prevent the problem from occurring again through a different route. Once we had done that, it was just a matter of writing the advisory, releasing the patches, and posting the advisory to the usual places.

The OpenSSL problems were a rather different story. I found these whilst working on a security review of OpenSSL commissioned by DARPA[8] and the USAF.[9] OpenSSL is a rather large and messy piece of code that I had, until DARPA funded it, hesitated to do a security review of, partly because it was a big job, but also partly because I was sure I was going to find stuff. And sure enough, I found problems (yes, I know this flies in the face of conventional wisdom—many eyes may be a good thing, but most of those eyes are not trained observers, and the ones that are do not necessarily have the time or energy to check the code in the detail that is required). Not as many as I expected, but then, I haven’t finished yet (and perhaps I never will; it does seem to be a never-ending process). Having found some problems, which were definitely exploitable, I was then faced with an agonising decision: release them and run the risk that I would find more, and force the world to go through the process of upgrading again, or sit on them until I’d finished, and run the risk that someone else would discover them and exploit them.

In fact, I dithered on this question for at least a month—then one of the problems I’d found was fixed in the development version without even being noted as a security fix, and another was reported as a bug. I decided life was getting too dangerous and released the advisory, complete or not. Now, you might think that not being under huge time pressure is a good thing, but in some ways it is not. The first problem came because various other members of the team thought I should involve various other security alerting mechanisms—for example, CERT[10] or a mailing list operated by most of the free OS vendors.[11] But there’s a problem with this: CERT’s process is slow and cumbersome and I was already nervous about delay. Vendor security lists are also dangerous because you can’t really be sure who is reading them and what their real interests are. And, more deeply, I have to wonder why vendors should have the benefit of early notification, when it is my view that they should arrange things so that their users could use my patches as easily as I can. I build almost everything from original source, so patches tend to be very easy to use. RPMs[12] and ports[13] make this harder, and vendors who release no source at all clearly completely screw up their customers. Why should I help people who are getting in the way of the people who matter (i.e., the users of the software)?

Then, to make matters worse, one of the more serious problems was reported independently to the OpenSSL team by CERT, who had been alerted by Defcon.[14] I was going, and there was no way I was delaying release of the patches until after Defcon. So, the day before I got on a plane, I finally released the advisory. And the rest is history.

So, what’s the point of all this? Well, the point is this: it was a complete waste of time. I needn’t have agonised over CERT or delay or any of the rest of it. Because half the world didn’t do a damn thing about the fact they were vulnerable, and because of that, as of yesterday, a worm is spreading through the Net like wildfire.

Why do I bother?

Two years later, I am still bothering, so I suppose that I do think there’s some point. But there are interesting questions to ask about open source security—is it really true that “many eyes” doesn’t work? How do we evaluate claims about the respective virtues of open and closed source security? Has anything changed in those two years? What is the future of open source security?

Many Eyes

Although it’s still often used as an argument, it seems quite clear to me that the “many eyes” argument,[15] when applied to security, is not true. It is worth remembering what was originally said: “Given enough eyeballs, all bugs are shallow” (Eric S. Raymond). I believe this is actually true, if read in the right context. Once you have found a bug, many eyes will, and indeed do, make fixing it quick and easy.

Security vulnerabilities are no different in this respect—once they are found, they are generally easy to track down and fix (the Apache chunked encoding vulnerability was the hardest I’ve ever had to track, and even that took only one long day’s work). But vulnerabilities aren’t like bugs in that sense—until they are discovered. Once you find them, you have a recipe for making the software behave unexpectedly. Until that time, what do you have? A piece of software that does what you expect.

The idea that bugs and security vulnerabilities are really the same thing is quite wrong—and it’s an idea that I suspect has been perpetrated by the reliability community,[16] sensing a new source of funding. Software is reliable if it does what is expected when operated as expected. It is secure if it does what is expected under all circumstances. This is a critical difference indeed. Nonsecurity bugs have a significant qualitative difference from security bugs—people don’t go out of their way to find bizarre things to do to make the software go wrong just for the fun of it. And if they do, and it’s not a security hole...well, yes, that’s interesting, and we’ll fix it one day but, in the meantime, you didn’t need that functionality, so just stop poking yourself in the eye and it will stop hurting.

What has happened is that advocates of open source have taken the “many eyes” argument to mean that because the source is available, many people will examine it for weaknesses. This simply isn’t true: most people never look at the source at all (until it doesn’t work), and even if they do, most do not have the experience to find the problems. The argument simply does not hold water, and it’s time we, as a community, abandon it.

However, there is an important sense in which the “many eyes” theory holds a grain of truth: those who want to look at the source to check for vulnerabilities, can. The interesting question is whether those who want to look at the code are generally the good guys or the bad guys. But this is a question I will come to later, when I compare open and closed source.

Open Versus Closed Source

Since I wrote my rant, Microsoft has decided that security is important (at least for sales), and as a result, there’s been a sudden increased interest in the truth of the claim that open source is “more secure” than closed source—and, of course, the counterclaim of the opposite.

But this claim is not easy to examine, for all sorts of reasons. First, what do we mean by “more secure”? We could mean that there are fewer security bugs, but surely we have to take severity of the bugs into account, and then we’re being subjective. We could mean that when bugs are found, they get fixed faster, or they damage fewer people. Or we might not be talking about bugs at all. We might mean that the security properties of the system are better in some way, or that we can more easily evaluate our exposure to security problems.

I expect that, at some point, almost everyone with a serious interest in this question will choose one of these definitions, and at some other point a completely different one.

Who Is the Audience?

It is also important to recognize that there are at least two completely different reasons to ask the question “is A more secure than B?” One is that you are trying to sell A to an audience that just wants to tick the “secure” box on their checklist, and the other is that you actually care about whether your product/web site/company/whatever is secure, and are in a position to have an informed opinion.

It is, perhaps, unkind to split the audience in this way, but sadly, it appears to be a very real split. Most people, if asked whether they think the software they use should be secure, will say, “Oh yeah, security, that’s definitely a good thing, we want that.” But this does not stop them from clicking Yes to the dialog box that says “Would you like me to install this Trojan now?” or running products with a widely known and truly dismal security record.

However, it is a useful distinction to make. If you are trying to sell to an audience that wants to tick the security box, you will use quite different tactics than if the audience truly cares about security. This gives rise to the kind of analysis I see more and more. For example, http://dotnetjunkies.com/WebLog/stefandemetz/archive/2004/10/11/28280.aspx has an article titled “Myth debunking: SQL Server vs. MySQL security 2003-2004 (SQL Server has less bugs!!).” The first sentence of the article gives the game away: “Seems that yet again a MS product has less bugs that (sic) the corresponding LAMP[17] product.” What is this telling us? Someone found an example of a closed source product that is “better” at security than the corresponding open source one. Therefore, all closed source products are “better” at security than open source products. If we keep on saying it, it must be true, right?

Even if I ignore the obviously selective nature of this style of analysis, I still have to question the value of simply counting vulnerabilities. I know that if you do that, Apache appears to have a worse record than IIS recently (though not over longer periods).

But I also know that the last few supposed vulnerabilities in Apache have been either simple denial-of-service (DoS) attacks[18] or vulnerabilities in obscure modules that very few people use. Certainly I didn’t even bother to upgrade my servers for any of the last half-dozen or so; they simply weren’t affected.

So, for this kind of analysis to be meaningful, you have to get into classifying vulnerabilities for severity. Unfortunately, there’s not really any correct way to do this. Severity is in the eye of the beholder. For example, my standard threat model (i.e., the one I use for my own servers, and generally advise my clients to use, at least as a basis) is that all local users[19] have root,[20] whether you gave it to them or not. So, local vulnerabilities[21] are not vulnerabilities at all in my threat model. But, of course, not everyone sees it that way. Some think they can control local users, so to them, these holes matter.

Incidentally, you might wonder why I dismiss DoS attacks; that is because it is essentially impossible to prevent DoS attacks, even on perfectly functioning servers, since their function is to provide a service available to all, and simply using that service enough will cause a DoS. They are unavoidable, as people subject to sustained DoS attacks know to their pain.

Time to Fix

Another measure that I consider quite revealing is “time to fix”—that is, the time between a vulnerability becoming known and a fix for it becoming available. There are really two distinct measures here, because we must differentiate between private and public disclosure. If a problem is disclosed only to the “vendor,”[22] the vendor has the leisure to take time fixing it, bearing in mind that if one person found it, so will others—meaning “leisure” is not the same as “forever,” as some vendors tend to think. The time to fix then becomes a matter of negotiation between vendor and discloser (an example of a reasonably widely accepted set of guidelines for disclosure can be found at http://www.wiretrip.net/rfp/policy.html, though the guidelines are not, by any means, universally accepted) and really isn’t of huge significance in any case, because the fix and the bug will be revealed simultaneously.

What is interesting to measure is the time between public disclosures (also known as zero-days) and the corresponding fixes. What we find here is quite interesting. Some groups care about security a lot more than others! Apache, for example, has never, to my knowledge, taken more than a day to fix such a problem, but Gaim[23] recently left a widely known security hole open for more than a month. Perhaps the most interesting thing is that whenever time to fix is studied, we see commercial vendors—Sun and Microsoft, for example—pitted against open source packagers—Red Hat and Debian and the like—but this very much distorts the picture. Packagers will almost always be slower than the authors of the software, for the obvious reason that they can’t make their packages until the authors have released the fix.

This leads to another area of debate. A key difference between open and closed source is the number of “vendors” a package has. Generally, closed source has but a single vendor, but because of the current trend towards packagers of open source, any particular piece of software appears, to the public anyway, to have many different vendors. This leads to an unfortunate situation: open source packagers would like to be able to release their packages at the same time as the authors of the packages. I’ve never been happy with this idea, for a variety of reasons. First, there are so many packagers that it is very difficult to convince myself that they will keep the details of the problem secret, which is critical if the users are not to be exposed to the Bad Guys. Second, how do you define what a packager is? It appears that the critical test I am supposed to apply is whether they make money from packaging or not![24] This is not only blatantly unfair, but it also flies in the face of what open source is all about. Why should the person who participates fully in the open source process by building from source be penalized in favor of mere middlemen who encourage people not to participate?[25]

Of course, the argument, then, is that I should care more about packagers because if they are vulnerable, it affects more people. I should choose whom I involve in the release process on the basis of how many actual users will be affected, positively or negatively, by whether or not I include the packager. I should also take into account the importance of these users. A recent argument has been that I should involve organizations such as the National Infrastructure Security Co-ordination Centre (NISCC), a UK body that does pretty much what it says on the tin, and runs the UK CERT (see http://www.niscc.gov.uk for more information), because they represent users of more critical importance than mere mortals. This is an argument I actually have some sympathy with. After all, I also depend on our infrastructure. But in practice, we soon become mired in vested interests and commercial considerations because, guess what? Our infrastructure uses software from packagers of various kinds, so obviously I must protect the bottom line by making sure they don’t look to be lagging behind these strange people who give away security fixes to just anyone.

If these people really cared about users, they would be working to find ways that enable the users to get the fixes directly from the authors, without needing the packager to get its act together before the user can have a fix. But they don’t, of course. They care about their bank balance, which is the saddest thing about security today: it is seen as a source of revenue, not an obligation.

Incidentally, a recent Forrester Research report claims that packagers are actually quite slow—as slow as or slower than closed source companies—at getting out fixes. This doesn’t surprise me, because a packager generally has to wait for the (fast!) response of the authors before doing its own thing.

Visibility of Bugs and Changes

There is an argument that lack of source is actually a virtue for security. Potential attackers can’t examine it for bugs, and when vulnerabilities are found, they can’t see what, exactly, was changed.

The idea that vulnerabilities are found by looking at the source is an attractive one, but is not really borne out by what we see in the real world. For a start, reading the source to anything substantial is really hard work. I know—I did it for OpenSSL, as I said earlier. In fact, vulnerabilities are usually found when software misbehaves, given unusual input or environment. The attacker follows up, investigating why that misbehavior occurred and using the bug thus revealed for their own evil ends. The “chunked encoding” bug I mentioned earlier is a great example of this. This was found by the common practice of feeding programs large numbers of the same character repeatedly. When Apache was fed the character A many times over, it ended up treating the run of A’s as a count of characters in hex, and that count came out negative, which turns out to be a Bad Thing. In this case, all that was needed was eight characters, but the problem was found by feeding Apache several thousand.[26]
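
To see why the count goes negative, here is a small illustration in Perl (my own, purely for demonstration; it is not Apache’s actual parsing code, and the variable names are invented). A chunk-size field of eight A’s parses as the hex value 0xAAAAAAAA, which does not fit in a signed 32-bit integer and so wraps to a negative number when treated as one, much as the server’s C code effectively did:

    #!/usr/bin/perl
    # Illustration only -- not Apache's actual code. A chunk-size field made
    # of A's parses as a large hex number; reinterpreted as a signed 32-bit
    # integer, it comes out negative.
    use strict;
    use warnings;

    my $chunk_size_field = 'AAAAAAAA';                 # eight A's is already enough
    my $unsigned = hex($chunk_size_field);             # 0xAAAAAAAA == 2863311530
    my $signed32 = unpack('l', pack('L', $unsigned));  # same bits, read as signed 32-bit
    print "hex value: $unsigned, as a signed 32-bit count: $signed32\n";
    # Prints -1431655766 for the signed count -- the "Bad Thing" described above.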

So, not having the source might slow down an attacker slightly, but given the availability of excellent tools like IDA (a very capable disassembler) and OllyDbg (a powerful [and free] debugger), not by very much.

What about updates? The argument is that when source is available, the attacker can compare the old and new versions of the source to see what has changed, and then use that to craft software that can exploit unfixed versions of the package. In fact, because most open source uses version control software, and often has an ethos of checking in changes that are as small as possible, usually the attacker can find just the exact changes that fixed the problem without any clutter arising from unrelated changes.

But does this argument hold water? Not really, as, for example, Halvar Flake has demonstrated very clearly with his Binary Difference Analysis tool. What this does is take two versions of a program, before and after a fix, disassemble them, and then use graph isomorphisms to work out what has changed. I’ve seen this tool in action, and it is very impressive. Halvar claims (and I believe him) that he can have an exploit out for a patched binary in one to eight hours from seeing the new version.

Review

Another important aspect to security is the ability to assess the risks. With closed source, this can be done only on the basis of history and reputation, but with open source, it is possible to go and look for yourself. Although you are not likely to find bugs this way, as I stated earlier, you can get a good idea about the quality of the code, the way it has been written, and how careful the author is about security. And, of course, you still have history and reputation to aid you.

Who’s the Boss?

Finally, probably the most important thing about open source is the issue of who is in control. When a security problem is found, what happens if the author doesn’t fix it? If the product is a closed source one, that generally is that. The user is doomed. He must either stop using it, find a way around the problem, or remain vulnerable. In contrast, with open source, users are never at the mercy of the maintainer. They can always fix the problem themselves.

It is often argued that this isn’t a real choice for end users; usually end users are not programmers, so they cannot fix these problems themselves. This is true, but it completely misses the point. Just as the average driver isn’t a car mechanic but still has a reasonably free choice of who fixes his car,[27] he can also choose a software maintainer to fix his software for him. In practice, this is rarely needed because (at least for any widely used software) there’s almost always someone willing to take on the task.

Digression: Threat Models

I mentioned threat models earlier. Because not all my readers will be security experts, it is worth spending a moment to explain what I mean. When you evaluate a threat to your systems, you have to have a context in which to do it. Simply saying “I have a security hole” tells you almost nothing useful about it. What you want to know is how bad it is, how fast you have to fix it, what it will cost if you don’t fix it, and what it will cost if you do fix it.

To make that assessment, there are various things you need to know. The obvious ones are what systems you are running; what the value of each component is; what impact the vulnerability will have on each component; how likely you are to be attacked; and so forth. But less obvious is the question of whether you actually care about the attack at all—and this is where threat models come in. They characterize what you have already assumed yourself to be vulnerable to, and how vulnerable you are to it.

So, as I mentioned, my threat model is that local users have root. Because root can do, essentially, anything she wants, this means that any vulnerability that can only be exploited by a local user, no matter what it is, and no matter how bad, is irrelevant to me. They could do that already.
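
To make the idea concrete, here is a minimal sketch in Perl (entirely my own illustration, not taken from any real tool; the field names and values are invented) of how a threat model can be applied mechanically to decide whether a newly reported vulnerability even matters to you:

    #!/usr/bin/perl
    # Illustrative sketch: encode a threat model as a set of assumptions and
    # use it to filter advisories. Field names and values are hypothetical.
    use strict;
    use warnings;

    # My assumptions: local users are already treated as having root, and
    # plain denial of service is considered unavoidable anyway.
    my %threat_model = (
        local_users_have_root => 1,
        dos_is_unavoidable    => 1,
    );

    # A reported vulnerability, reduced to the attributes the model cares about.
    my %advisory = ( access => 'local', impact => 'code execution' );

    sub matters {
        my ($model, $adv) = @_;
        return 0 if $model->{local_users_have_root} && $adv->{access} eq 'local';
        return 0 if $model->{dos_is_unavoidable}    && $adv->{impact} eq 'dos';
        return 1;    # anything else deserves a proper assessment
    }

    print matters(\%threat_model, \%advisory)
        ? "Worth assessing further\n"
        : "Irrelevant under this threat model\n";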

Threat models can get quite complicated, and you may well find that when a new vulnerability comes along, you have to consider what your model actually is, because you don’t already know. For example, suppose there’s an attack on the domain name service that allows it to be faked. Do you care? Was that something you assumed had to be correct when you built your system, or is incorrectness merely a nuisance?

Anyway, I don’t want to turn this chapter into a textbook on security, so suffice it to say that threat models are important, everyone’s is different, and you can’t evaluate the impact of vulnerabilities without one—which means, really, that the whole question of which is better is one only you can answer.

The Future

Prediction is difficult, especially about the future.

Niels Bohr/Mark Twain[28]

There are two futures: the one we should have, and the one we’re going to get. I’ll talk about the one we should have first, because it’s more fun, more interesting, and definitely more secure.

Today’s operating systems and software are based on decades of experience with developing software that was run by nice guys on machines to which access (whether by users or by nonusers interacting with the machine or software in some way) was relatively easily controlled. This was a world where your biggest security threat was a student playing a prank. We learned a great deal about how to write software that did clever things, was easy to use, and had pretty interfaces.

Unfortunately, we learned almost nothing about how to write secure software. And in the meantime, we built up a huge amount of insecure software. Worse, we used insecure languages to write the insecure software in. And worse even than that, we used languages that there’s no real prospect of securing. And we continue to use them, and the same insecure operating systems we wrote, with ever-increasing teetering towers of software piled on top of them.

So, in my Brave New World, we get smart enough to scrap all this and use an idea invented in the 1960s: capabilities. Unfortunately, academics decided very early on that capabilities had all sorts of problems, and this has prevented their widespread adoption. Mark Miller and Jon Shapiro, in “Paradigm Regained: Abstraction Mechanisms for Access Control” (http://www.erights.org/talks/asian03/paradigm-revised.pdf), have very effectively debunked these criticisms, though I have to admit to being bemused by how anyone could believe them in the first place, since they are so easily solved.

In any case, there are still some of us around who believe in capabilities, and I entertain the fond hope that we may start using them on a larger scale. The foremost project using capabilities at the moment is the E language (http://www.erights.org), which, as well as being a capability language from the ground up, has some very nice features for distributed computing, and is well worth a look. Unfortunately, I do not believe a language with such esoteric (and ever-changing) syntax will ever be widely used. It seems that privilege belongs to a very few. Perhaps more promising from the point of view of likelihood of adoption is my own nascent CAPerl (think “Kapow!”) project, which adds capabilities to Perl. Although this is far less elegant and satisfying, it has the virtue of looking almost exactly like Perl to the experienced programmer, and so I do have some hope that it might actually get used. I don’t have a web site for it yet, so I invite you to Google for it.

No discussion of capabilities in the 21st century would be complete without mentioning EROS (http://www.eros-os.org). Funnily enough, EROS is short for Extremely Reliable Operating System, since its author, Jon Shapiro, thought that was what was important about it when he started writing it. Now, though, we are far more interested in its security properties than in its reliability. EROS, like E, implements capabilities from the ground up. More importantly, it runs on PCs. Unfortunately, it seems it is a project that won’t be finished. Work is, however, starting soon on the second attempt.

Of course, if I really think this will happen, I’m on crack. Not enough people care enough about security to contemplate throwing everything away and starting again (make no mistake, that’s what it takes). But I can (and do) hope that people will start writing new things using capabilities. And I hope that drawing them to your attention will assist that.

Now I’ll move on to what I think will really happen. Certainly people have become more aware of security as an issue, and the increasing use of open source in corporate environments also increases the pressure on security. It seems likely that this will drive open source toward better ways to deliver updates faster. I don’t think it is actually possible to drastically improve open source’s record on fixing security issues. I believe that, by any measure, open source is ahead of closed source. But the flow from author to end user is not yet a smooth one.

Interestingly, the fix for that is strongly related to the fix for another widely acknowledged problem with open source: package management systems. We do not yet have the ultra-smooth systems needed to handle installation and update of software in a way that makes it a no-brainer for end users. Open source and closed source present interestingly different problems. Open source packages of any complexity tend to depend on other open source packages, usually with a completely different set of authors and release cycles. Managing installation in this environment is much harder than in the closed source situation, where one vendor—even one that buys components from others—is in control of the whole package. I think the open source world is moving toward better package management, and this will automatically improve the end user’s management of security.

However, for corporate environments, this probably makes little difference. In such situations, there are almost always elaborate procedures for rolling out new versions, and these procedures are almost unchanged when open source is used. Even so, clearer visibility of dependencies and, therefore, of what needs to be upgraded when a fix comes out, would be useful.

I also hope that better package management will reduce the dependency of users (at least, if they choose to have their dependency reduced) on packagers. Although packagers, in theory, add value, they also add latency. Perhaps worse, they damage the open source model by introducing dozens of slightly different versions of each package, through the widespread practice of applying patches to the packages instead of contributing them back to the original authors, which reduces the effectiveness of community development by splitting the community into many smaller subcommunities.

As always, there is a price to be paid for better package management. Automated updates are a fantastic vector to mount automated attacks. We know well how to prevent such attacks using public key cryptography, but once more, the complexity of multiple authors introduces problems of key management to which there aren’t really good answers, at least, so far.[29]
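
As a sketch of the basic mechanism (the file names here are hypothetical, and it assumes the author’s public key is already in your keyring and trusted), an update tool can simply refuse to install anything whose detached GnuPG signature does not verify:

    #!/usr/bin/perl
    # Sketch of signature checking before installing an update. File names are
    # hypothetical; gpg must already hold and trust the author's public key.
    use strict;
    use warnings;

    my $tarball   = 'somepackage-1.2.3.tar.gz';
    my $signature = "$tarball.asc";                 # detached OpenPGP signature

    # gpg --verify exits non-zero if the signature is bad or the key is unknown.
    system('gpg', '--verify', $signature, $tarball) == 0
        or die "Bad or missing signature on $tarball -- not installing\n";

    print "Signature verified; safe to unpack and install $tarball\n";

The hard part, as noted above, is not the check itself but deciding whose keys to trust once a package has many authors and many packagers.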

One thing that does seem certain is that the increasing trend of concern about security by end users will continue. The seemingly never-ending rise of spam, adware, and Trojans, if nothing else, has put it on everyone’s agenda, and that doesn’t seem likely to change.

Interesting Projects

I’ve already mentioned some projects in passing, but no chapter on open source security would be complete without mentioning some of the more interesting projects out there. I’ll start with the obvious ones and move on to the more esoteric. This list probably reflects my current obsession with privacy and anonymity:

OpenSSL

Well known, but still essential. This library implements most known cryptographic algorithms, as well as the SSL and TLS protocols. It is very widely used in both free and non-free software, and at the time of this writing was in the final stages of obtaining FIPS-140 certification. http://www.openssl.org.

Apache 2

Of course, we’ve all known and loved Apache for years. Finally, Apache 2 has HTTPS support out of the box. http://www.apache.org.

Mozilla

A suite of web browser, mail, and news reading software, and related utilities. You probably don’t think of this as security software, but it is probably second only to Apache in the number of financial transactions it protects. And it does it with a minimum of fuss. What’s more, it isn’t plagued with its closed-source rivals’ fondness for installing evil software you never intended to install! http://www.mozilla.org.

GnuPG

Implementing the OpenPGP standard under the GPL. Primarily used for email, but also the mainstay for validation of open source packages (using, of course, public key cryptography). http://www.gnupg.org.

Enigmail

Small, but (almost) perfectly formed. This is a plug-in for the increasingly popular (and, of course, open source) email client, Thunderbird, providing a nicely streamlined interface for GnuPG. http://enigmail.mozdev.org.

CVE

Common Vulnerabilities and Exposures. This is a database of security problems, both commercial and open source. The idea is to provide a uniform reference for each problem, so it’s easy to tell if two different people are talking about the same bug. http://cve.mitre.org.

TOR

The onion router. Onion routing has been a theoretical possibility for a long time, providing a way to make arbitrary connections anonymously. Zero Knowledge Systems spectacularly failed to exploit it commercially, but now a working implementation has come from a most unlikely source: the U.S. Navy. The Navy’s funding recently ran out, but the Electronic Frontier Foundation stepped up to take over. Well worth a look. http://tor.eff.org.

Conclusion

In the end, it seems to me there’s little to be sensibly said that, from the viewpoint of security, truly differentiates between open and closed source. The points I believe are critical are my ability to review the code for myself and my ability to fix it myself when it is broken. By “myself” I do, of course, include “or anyone of my choice.” What I don’t believe in—at all—is the often-quoted but never-proven “many eyes” theory.

In the digression on threat models, I mentioned that the only person who can really answer the question of whether open source is better for security is you. Leave the camp of people who think security is a good thing that we should all have more of, and join the camp of people who have thought about what it means to them, what they value, and so, what they choose.



[6] Back in those days, The Bunker belonged to A.L. Digital Ltd., and it wasn’t called The Bunker Secure Hosting.

[7] A hacker (or group of hackers, it is not known which).

[8] The United States Defense Advanced Research Projects Agency, responsible for spending a great deal of money on national security—in this case, for a thing known as CHATS, or Composable High Assurance Trusted Systems.

[9] Yes, I do mean the United States Air Force.

[10] CERT is an organization funded to characterize security issues and alert the appropriate parties—a job they do not do very well, in my opinion.

[11] Apparently, I’m not one, so I’m not on this list.

[12] One of those recursive definitions programmers love: RPM Package Manager, a widely used system for distributing packaged open source software, particularly for various flavors of Linux.

[13] FreeBSD’s package management system. Also used by other BSDs.

[14] DefCon is a popular hacker’s convention, held annually in Las Vegas.

[15] The argument is that if enough people look at the code, bugs (and hence security issues) will be found before they bite you.

[16] Academics who study the reliability, as opposed to the security, of computer systems.

[17] LAMP stands for Linux, Apache, MySQL, Perl (or PHP) and is common shorthand for the cluster of open source software typically used to develop web sites.

[18] In a DoS attack, the attacker prevents access by legitimate users of a service by loading the service so heavily that it cannot handle the demand. This is often achieved by a distributed denial of service (DDoS) attack, in which the attacker uses a network of “owned” (i.e., under the control of the attacker and not the legitimate owner) machines to simultaneously attack the victim’s server.

[19] That is, people with user accounts on the machine, rather than visitors to web pages or people with mail accounts, for example.

[20] Root is the all-powerful administrative account on a Unix machine.

[21] A local vulnerability is one that only a local user can exploit.

[22] A term I am not at all fond of, since, although I am described as a “vendor” of Apache, OpenSSL, and so forth, I’ve never sold any of them.

[23] A popular open source instant messaging client.

[24] Of course, not all packagers make money, but I’ve only experienced this kind of pressure from those that do.

[25] This is because vendors tend to encourage users to treat them as traditional closed source businesses—with their own support, their own versions of software, and so forth—instead of engaging the users with the actual authors of the software they are using.

[26] This particular method is popular because it is so easy: perl -e "print 'A'x10000" | target.

[27] This is a metaphor that is rapidly going out-of-date, as car manufacturers make cars more and more computerized and harder and harder for anyone not sanctioned by the manufacturer to work on. Who knows—perhaps this will lead to an open source culture in the car world.

[28] Apparently it’s difficult about the past too—we don’t know which of these people said this!

[29] I should perhaps at this point plug KeyMan, a package I designed to solve this problem, but since it has singularly failed to take off, that might be inappropriate.
