Chapter 1. Legal and Ethics Issues

In the summer of 2005, systems administrators and security researchers from all over the world gathered in Las Vegas, Nevada, for Black Hat, one of the largest computer security conferences in the world. On the morning of the first day, Michael Lynn, one of the authors of this book, was scheduled to speak about vulnerabilities in Cisco routers. These vulnerabilities were serious: an attacker could take over the machines and force them to run whatever program the attacker wanted.

Cisco did not want Lynn to give the presentation. After last-minute negotiations with Lynn’s employer, ISS, the companies agreed that Lynn would have to change his talk. A small battalion of legal interns converged on the convention floor the night before the speech, seized the CDs that contained Lynn’s presentation slides, and removed the printed materials from the conference program.

Lynn, however, still wanted to give the original speech. He thought it was critical that system administrators know about the router flaw. A simple software upgrade could fix the problem, but few, if any, knew about the vulnerability. Lynn thought disclosure would make the Internet more secure. So, he quit his job at ISS and gave the talk he originally planned.

That evening, Cisco and ISS slapped Lynn, and the Black Hat conference, with a lawsuit.

We live in the Information Age, which means that information is money. There are more laws protecting information now than there were 25 years ago, and more information than ever before is protected by law. Cisco and ISS alleged that Lynn had violated several of these laws, infringing copyrights, disclosing trade secrets, and breaching his employment contract with ISS.

Lynn came to me because I’ve spent the last 10 years studying the law as it relates to computer security. I’ve advised coders, hackers, and researchers about staying out of trouble, and I’ve represented clients when trouble found them anyway. I’ve given speeches on computer trespass laws, vulnerability disclosure, and intellectual property protection at Black Hat, to the National Security Agency, at the Naval Postgraduate School, to the International Security Forum, and at Australia’s Computer Emergency Response Team conference. I’ve been a criminal defense attorney for nine years and have taught full time at Stanford Law School for the last six years.

I believe in the free flow of information and generally disapprove of rules that stop people from telling the truth, for whatever reason. But I understand that exploit code can also put a dangerous tool in the hands of a malicious, but otherwise inept, attacker. I believe companies need to protect their trade secrets, but also that the public has a right to know when products or services put them at risk.

Lynn told me that Cisco employees who had vetted the information were themselves unable to create a usable exploit from the information he gave them. But Lynn wanted to show people that he knew what he was talking about and that he could do what he said could be done. He included just enough information to make those points.

I know a lot about computer security for a lawyer, but not as much as a real security engineer, so I asked a couple of Black Hat attendees about the substance of Lynn’s presentation. They confirmed that Lynn’s presentation did not give away exploit code, or even enough information for listeners to readily create any exploit code. After a marathon weekend of negotiating, we were able to settle the case in a manner that protected my client from the stress and expense of being sued by a huge company.

Core Issues

I began this exploration of security ethics and issues with Michael Lynn and the Black Hat affair, not because of its notoriety in security circles, and certainly not to embarrass or promote him or the companies that filed suit, but because the case really does raise fascinating legal issues that the security marketplace is going to see again and again. You can substitute one company’s name for another, or one defendant for another, and the issues remain just as current. This chapter is going to review these legal issues in an open-minded way. Let’s begin with a few simple items from the Lynn case.

One of the allegations was the misappropriation of trade secrets. A trade secret is information that:

(1) Derives independent economic value, actual or potential, from not being generally known to the public or to other persons who can obtain economic value from its disclosure or use; and (2) Is the subject of efforts that are reasonable under the circumstances to maintain its secrecy.

What was the secret? Lynn did not have access to Cisco source code. He had the binary code, which he decompiled. Decompiling publicly distributed code doesn’t violate trade secret law.

Could the product flaw itself be a protected trade secret? In the past, attorneys for vendors with flawed products have argued that researchers would be violating trade secret law by disclosing the problems. For example, in 2003, the educational software company Blackboard, whose products include campus door access controls, claimed a trade secret violation and obtained a temporary restraining order preventing two researchers from disclosing security flaws in the company’s locks at the Interz0ne II conference in Atlanta, Georgia. What if we had the same rule with cars? Imagine arguing that the fact that a car blows up if someone rear-ends you is a protected secret, because the market value drops when the public knows the vehicle is dangerous. No thoughtful judge would accept this argument (but judges don’t always think more clearly than zealous attorneys do).

Even if there is some kind of trade secret, did Lynn misappropriate it? Misappropriation means acquisition by improper means, or disclosure without consent by a person who used improper means to acquire the knowledge.

As used in this title, unless the context requires otherwise: (a) Improper means includes theft, bribery, misrepresentation, breach or inducement of a breach of a duty to maintain secrecy, or espionage through electronic or other means. Reverse engineering or independent derivation alone shall not be considered improper means.

The law specifically says that reverse engineering “alone,” which includes decompiling, is a proper, not improper, means of obtaining a trade secret.

What does it mean to use reverse engineering or independent derivation alone? Lynn reverse-engineered, but the complaint suggested that Cisco thought decompiling was improper because the company distributes the router binary with an End User License Agreement (EULA) that prohibits reverse engineering.

What legal effect does such a EULA term have? Probably 99.9 percent of people in the world who purchase software do not care to reverse engineer it. But I maintain that society is better off because of the .1 percent of people who do. Reverse engineering improves customer information about how a product really works, promotes security, allows the creation of interoperable products and services, and enables market competition that drives down prices while providing, in theory, better products. Lawmakers recognize the importance of reverse engineering, which is why the practice is a fair use under the copyright law, and why statutes go out of their way to state that reverse engineering does not violate trade secret law. Yet, despite these market forces, the trade secret owner has little or no incentive to allow reverse engineering. Indeed, customers generally do not demand the right. Increasingly, EULAs prohibit reverse engineering. Should vendors be allowed to bypass the public interest with a EULA? It’s a serious issue.

The Lynn case illustrates that a simple decision by a researcher to tell what he knows can be very complicated both legally and ethically. The applicable legal rules are complicated, there isn’t necessarily any precedent, and what rules there are may be in flux. One answer might be simply to do what you think is right and hope that the law agrees. This, obviously, is easier said than done. I was persuaded that Lynn did the right thing because a patch was available, the company was dragging its feet, the flaw was important, and he took pains to minimize the risk that another person would misuse what he had found. But making ethical choices about security testing and disclosure can be subtle and context-specific. Reasonable people will sometimes disagree about what is right.

In this chapter, I talk about a few of the major legal doctrines that regulate security research and disclosure. I will give you some practical tips for protecting yourself from claims of illegal activity. Many of these tips may be overcautious. My fervent hope is not to scare you but to show you how to steer a clean, legal path. Inevitably, you will be confronted by a situation that you cannot be sure is 100 percent legal. The uncertainty of the legal doctrines and the complexity of computer technology, especially for judges and juries, mean that there will be times when the legal choice is not clear, or the clear choice is simply impractical. In these situations, consult a lawyer. This chapter is meant to help you spot those instances, not to give you legal advice.

Furthermore, this chapter discusses ethical issues that will arise for security practitioners. Ethics is related to, but is not the same as, the law. Ideally, the law imposes rules that society generally agrees are ethical. In this field, rules that were meant to stop computer attacks also affect active defense choices, shopping bots, the use of open wireless networks, and other common or commonly accepted practices. Where the laws are fuzzy and untested, as they are in the area of computer security, prosecutors, judges, and juries will be influenced by their perceptions of whether the defendant acted ethically.

That having been said, ethics is frequently a matter of personal choice, a desire to act for the betterment of security, as opposed to the private interests of oneself or one’s employer. Some readers may disagree with me about what is ethical, just as some lawyers might disagree with me about what is legal. My hope is that by reasoning through and highlighting legal and ethical considerations, readers will be better equipped to make a decision for themselves when the time arises, regardless of whether they arrive at the same conclusions I do. Now, I must give you one last disclaimer. This chapter is a general overview. It does not constitute legal advice, and it could never serve as a replacement for informed legal assistance about your specific situation.

Be Able to Identify These Legal Topics

You should be better able to identify when your security practices may implicate the following legal topics:

  • Computer trespass and unauthorized access

  • Reverse engineering, copyright law, EULAs and NDAs, and trade secret law

  • Anti-circumvention under the Digital Millennium Copyright Act (DMCA)

  • Vulnerability reporting and regulation of code publication

Because these concepts are complicated and the law is untested and ill-formed, readers will not find all the answers they need for how to be responsible security practitioners within the law. Sometimes the law over-regulates, sometimes it permits practices that are ill-advised. There will almost certainly be times when you do not know whether what you are about to do is legal. If you aren’t sure, you should ask a lawyer. (If you are sure, perhaps you haven’t been paying attention.)

Let’s investigate these four areas, beginning with trespass.

Computer Trespass Laws: No “Hacking” Allowed

Perhaps the most important rule for penetration testers and security researchers to understand is the prohibition against computer trespass.

There are both common law rules and statutes that prohibit computer trespass under certain circumstances. (Common law rules are laws that have developed over time and are made by judges, while statutes are written rules enacted by legislatures—both types of laws are equally powerful.) There are also Federal (U.S.) statutes and statutes in all 50 U.S. states that prohibit gaining access to computers or computer networks without authorization or without permission.

Many people informally call this kind of trespass hacking into a computer. While hacking has come to mean breaking into computers, the term clouds the legal and ethical complexities of the laws that govern the use of computers. Some hacking is legal and valuable, some is illegal and destructive. For this reason, this chapter uses the terms computer trespass and trespasser or unauthorized access and attacker to demarcate the difference between legal and illegal hacking.

All statutes that prohibit computer trespass have two essential parts, both of which must be true for the user to have acted illegally. First, the user must access or use the computer. Second, the access or use must be without permission. The federal statute has an additional element of damage. Damage includes nonmonetary harm such as altering medical records or interfering with the operation of a computer system used for the administration of justice. Damage also includes causing loss aggregating at least $5,000 during any one-year period.[1] In practice, plaintiffs do not have much trouble proving damage because most investigations of a computer intrusion will cost more than $5,000 in labor and time.[2]

Some state statutes define criminal behavior, which means that the attacker can be charged with an offense by the government and, if found guilty, incarcerated. Some state statutes and the federal law define both a crime and a civil cause of action, for which the owner of the computer system could sue the attacker for money.

Pen testers and security researchers discover ways to gain access to computers without authorization. Learning how to get access isn’t illegal, but using that information might be. Whether a particular technique is illegal depends on the meaning of access and authorization. For example, let’s pose two not-so-hypothetical instances:

  1. A maker of electronic voting machines has left source code for the machines on an anonymous FTP server. I believe the company may have done so inadvertently, but I want to analyze the source code for security flaws. May I download it?

  2. I am the system administrator of a network under attack from zombie machines infected by the Code Red worm. I want to use a tool that will stop the zombies by installing code on them by exploiting the same vulnerability used by Code Red to infect. May I use this tool?

What Does It Mean to Access or Use a Computer?

The concept of unauthorized access appears to be deceptively simple. In the real world, shared social values and understandings of property make it relatively clear when someone is trespassing upon the land or property of another. But even here, the trespass rule isn’t a bright line. You can go on someone’s property to ring the doorbell. It may be acceptable to cut through private property to get to the beach. If a store is open, you can enter even if you don’t see a salesclerk inside. When we were kids, we played in all the neighbors’ yards, even if they didn’t have children themselves. These social conventions have evolved over time, and people tend to understand them, though there are still areas of disagreement.

Computers are much newer than land, and we have less history and fewer shared understandings about our rights and responsibilities with regard to networked machines. What does it mean to access or use a computer? Is port scanning access or use? What about sending email, visiting a web page, or having someone visit my web page? Metaphorically, you send email to another person’s machine, but we would not say that setting up a web page gains access to visitors’ machines. Technically, in each case, two networked machines exchange electrons. Is either, or are both, accessing computers?

The law has taken an expansive view of access, one based on the physical exchange of electrons and the uses of computing cycles. Essentially, every use of a networked computer is access. Cases say accessing computers includes:

  • Port scanning

  • Reading web pages

  • Using spiders or searchbots

  • Sending email

  • Automated searching of web-published databases

Because basically every communication with a networked computer is access, the dividing line between legal and illegal behavior is whether the user has permission or authorization.
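To see how low that bar is, here is a minimal sketch, in Python, of the most innocuous-looking item on that list: a port scan. Even this probe asks the remote machine to accept or reject a TCP handshake on your behalf, which is the physical exchange the cases treat as access. The hostname and ports are placeholders of my own; run anything like this only against machines you own or have written permission to test.

    # A bare-bones TCP port probe: the kind of "just looking" that still
    # counts as access under the cases discussed above.
    import socket

    TARGET = "scanme.example.org"   # hypothetical host -- substitute your own machine
    PORTS = [22, 80, 443]

    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2.0)
            # connect_ex() attempts a full TCP handshake; the remote host
            # spends cycles answering, which is why courts treat even a
            # probe like this as use of the machine.
            result = s.connect_ex((TARGET, port))
            state = "open" if result == 0 else "closed or filtered"
            print(f"{TARGET}:{port} appears {state}")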

What Is Adequate Authorization to Access a Computer?

Some statutes use the word authorization, others use permission. The idea is that access without permission is improper and therefore should be illegal.

Obviously, we rarely get explicit permission to use a networked computer. Usually, we assume we have permission—otherwise, why would the machine be on a network? However, there are times when files are physically accessible but other circumstances suggest that the owner does not want people to look at them. There are times when we stumble upon something we think the owner would rather we didn’t have; for example, candid audio recordings of the governor talking about his ideas on immigration policy, a misplaced password file, or the source code for controversial electronic voting machines. Do we always assume that a user has permission to access unless the owner specifies otherwise? Should we assume that users do not have permission unless the owner clearly states that they do? Or is there some middle ground?

The law has tried to distinguish between situations where users can assume permission and ones where otherwise accessible files remain off limits. Files that are password-protected are off limits, even if someone with an account allows you to use their information to log on.[3] A former employee who signs a noncompete agreement cannot access the company web site to do price research for his new employer.[4] If the owner decides that a user should not be searching the site and sues, that alone is proof that the user did not have permission.[5] An employee who knows he is leaving the business cannot access customer lists for the purposes of taking that information to his new employer.[6] However, a union organizer can access membership rolls to bring that information to a rival union.[7]

Even lawyers find these rules confusing, contradictory, and unworkable. Bright-line rules are clear but are inevitably either under- or over-protective. More flexible standards get the answer right when the cases fall in a grey area, but make it difficult to predict what the legal outcome will be. Computer trespass law seems to have the worst of both worlds.

One problem is that it is hard to define when access is acceptable and when it is not. Another problem may be with the fundamental idea that computer access should be controlled by the owner’s personal preferences, particularly if the owner is not willing to invest in security measures to protect its information or system. Consider this hypothetical example:

I have a web site that talks about my illegal sales of narcotics. When you visit my site, there’s a banner that says you may only visit this site if you are not a cop. If law enforcement visits, have they violated the law because they accessed my web site without my permission?

Real-world examples abound: unsecured machines storing the code for flawed electronic voting machines; documents showing that cigarette companies were aware of and took advantage of the addictive effects of nicotine; files proving that the telephone company is giving customer calling records and copies of sent email to the government for warrantless surveillance. Owners may not want us to have this information, but does that mean the law should make it off-limits?

Common Law Computer Trespass

There are also common law rules that prohibit computer trespass. At common law, there was a tort called trespass to chattels. (A tort is a civil wrong, for which you can be sued. A chattel is an item of personal property, like a car or an ox.) The rule was that if you took someone else’s personal property, or used it in such a way that the owner’s control and enjoyment of that item was diminished, you could be sued for trespass to chattels.

The trespass to chattels tort fell out of use for several decades, until spam came along. Enterprising lawyers decided to reinvigorate the tort to attack spam, arguing that unwanted bulk email interfered with ISPs’ right to control their computer servers. These claims were basically successful, until the case of Intel v. Hamidi.[8] In that case, Mr. Hamidi wanted to send email to current Intel employees complaining about the company’s labor policies. Intel tried to block Hamidi’s emails, and when he circumvented their efforts, they sued him in California, claiming that by sending the email he was trespassing on their computer system. The California Supreme Court ultimately rejected that claim, holding that in California the tort required the plaintiff to show some harm to the chattel, and Intel failed to show that Hamidi’s emails harmed its computer system in any noticeable way. Intel showed only that his emails were distracting to employees and system administrators.

The lesson from Hamidi is that common law, like the federal statute, requires some kind of harm to the computer system or to some government interests. Remember, though, that state statutes are rarely so limited. Under most state statutes, the plaintiff need not show any damage, only unauthorized access or use. Many state statutes allow both civil and criminal claims. Even if you are certain that your use of a networked computer isn’t going to do any harm to the computer system or to data stored there, in theory, you might still cross the legal line in your state or in the state in which the target computer is located.

Case Study: Active Defense

You are a system administrator for a university. Your network is getting bombarded with traffic from zombie computers infected with a computer virus. There is software on the market that you can use to stop the attack. The software will infiltrate the zombie machines through the same vulnerability that allowed the virus to infect them. It will then install code on the zombies that will stop the attack. Is it legal to use this “active defense” tool to protect your system?

Let’s look at U.S. Federal law. Section 1030 prohibits the intentional transmission without authorization of a software program that causes damage to a computer used in interstate commerce. You would intentionally use the active defense software against the zombies. Code would then be placed on the zombie machines without the owners’ permission. Damage means any impairment to the integrity of a computer system. Integrity is implicated when the system is altered in any way, even if no data is taken. To sue, a plaintiff would need $5,000 in damage. Damage costs can include the cost of investigation and of returning the system to its condition prior to the attack.

If I owned a zombie machine affected by your active defense program, I’d have the basic elements of a legal claim. I might not sue, of course. There may not be enough money at stake, and I may not be able to prove that you, rather than the virus or some other contaminant, caused the harm. Probably no prosecutor would be interested in a case like this. But active defense arguably crosses the legal line.

There are some legal defenses you could raise. The common law recognizes necessity and self-defense as excuses for otherwise illegal behavior. Both defenses are pretty narrow. You have to show that you had no other option, and that your response was proportionate to the harm being done to you and did no more harm than necessary.

There have never been any cases analyzing the legality of active defense-type programs or of the applicability of these defenses to computer security practices. This example is not intended to scare network administrators away from using active defense. I use this to illustrate that the law of computer trespass is broad and covers a lot of behavior you might otherwise think is legitimate. Perhaps no one will ever sue over active defense, and society and the courts will come to accept it as perfectly legitimate. The point the reader should be able to identify is that it is possible to make a logical argument that active defense violates the law. This risk is one that sys admins must take into account.

Law and Ethics: Protecting Yourself from Computer Trespass Claims

Despite this gloomy view of the functionality of the computer trespass law, there are ways that you can greatly reduce the chances of getting sued or worse:

  • Get permission first.

  • Do research on your own machines.

  • Don’t cause harm to a victim.

  • Report findings directly to the system administrator or vendor.

  • Don’t ask for money for your findings.

  • Report to people likely to fix it or heed the information, not to people likely to misuse it.

Remember, the litmus test in computer trespass is that the user does not have authorization or permission. Before you pen test or do research, get permission. Get it in writing. The more detailed the permission, the less there is to fight about later on. The permission can list the tasks you’ll perform and the machines on which you’ll perform them.

If you can’t get permission to test on someone else’s machine, do the research on your own machines. Then you can give yourself permission.

For those times when you are not going to be able to get permission from the owner of the computer you must access, you will do better if you do not take any actions to harm the interests of the computer owner beyond the mere trespass. While state law may not require proof of damage, prosecutors, judges, and juries are influenced by whether they think the user was a good guy or a bad guy.

For example, in 1997, I represented a young man who was learning about computer security and wanted to test whether his ISP’s web site had a popular misconfiguration that allowed access to the encrypted password file. He typed in the URL where the password file was often improperly stored and found the file. Technically, that completed the crime. He accessed the password file and he did not have permission to do so. I doubt that any federal prosecutor would have been very interested in the case at this point.

What happened next was that my client ran a password-cracking tool against the file and distributed the cracked username and password pairs over an open IRC channel. The ISP did not like this and neither did the FBI investigators or the Department of Justice. In my opinion, my client would not have been charged if he had not distributed the cracked passwords to the public in the chat room. Doing so is not an element of the crime. However, it did make my client look like a bad guy, out to hurt the ISP.

In reality, the perceived ethics of the user (perceived by a jury or a judge) affect whether he will be charged and convicted. For example, in 2002, the U.S. Attorney in Texas charged Stefan Puffer with violating federal law after Puffer demonstrated to the Harris County District Court clerk that the court’s wireless system was readily accessible to attackers. A jury acquitted Stefan Puffer in about 15 minutes. One juror said she believed that Puffer intended to improve the court’s wireless security, not to cause damage. In another case, in 2006, the Los Angeles United States Attorney’s Office criminally charged a man who found a database programming error in a University of Southern California online application web site, and then copied seven applicants’ personal records and anonymously sent them to a reporter to prove that the problem existed. The prosecutor said during a press conference that he didn’t fault the man for accessing the database to test whether it was secure. “He went beyond that and gained additional information regarding the personal records of the applicant.” The man eventually pled guilty.

These cases illustrate that the technical definitions of access and authorization matter less than doing what seems right. In today’s computer trespass law, remember that ethics carries as much weight as written and common law: do not act to intentionally harm the interests of the computer owner, no matter how insecure the machine may be.

Reverse Engineering

The human race has the ability and perhaps even the innate urge to study its environment, take it apart, and figure out how things work. One might argue it is why we are who we are. Reverse engineering is one expression of this tinkering impulse.

However, when you consider reverse engineering in the field of computers and software, the practice can conflict with legal rules designed to protect intellectual property. While intellectual property law generally recognizes reverse engineering as legitimate, there are some important exceptions that have ramifications for security engineers and professionals. There are three intellectual property rules that may affect your ability to legally reverse engineer: copyright law, trade secret law, and the anti-circumvention provisions of the Digital Millennium Copyright Act.

Copyright Law and Reverse Engineering

A fundamental technique used by security researchers is to take a known product and work backward to “divine the process which aided in its development or manufacture.”[9] The Ninth Circuit Court of Appeals has defined reverse engineering in the context of software engineering as:

(1) reading about the program;

(2) observing the program in operation by using it on a computer;

(3) performing a static examination of the individual computer instructions contained within the program; and

(4) performing a dynamic examination of the individual computer instructions as the program is being run on a computer.

So, many methods of reverse engineering pose no legal risk of copyright infringement. However, emulation, decompilation, and disassembly require at least partial reproduction of the original code. And copyright law protects software. Copyright law grants to the copyright owner certain exclusive rights in the work, even when copies of the item are given away or sold. These rights include: the right to reproduce the work; the right to prepare derivative works; the right to distribute copies of the work; the right to perform the work publicly; and the right to display the work publicly.[10] Thus, some reverse engineering will create infringing copies of a software program.
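To make the point concrete, here is a toy illustration of the “static examination of the individual computer instructions” described above, using Python’s standard-library dis module on a small function written solely for this example (nothing here is drawn from any vendor’s code). Real-world reverse engineering of compiled binaries relies on disassemblers and decompilers, but the copyright point is the same: the disassembly listing is, in effect, the program’s instructions reproduced in another notation.

    # Toy sketch: disassembling a routine reproduces its instructions in
    # human-readable form, which is why decompilation and disassembly
    # raise reproduction (and therefore copyright) questions at all.
    import dis

    def checksum(data: bytes) -> int:
        """A stand-in for some routine whose inner workings we want to study."""
        total = 0
        for b in data:
            total = (total + b) % 256
        return total

    # Static examination: print the individual bytecode instructions
    # that make up the routine, without ever running it.
    dis.dis(checksum)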

Two defenses to copyright infringement nonetheless allow the practice of reverse engineering. First, an owner of a copy of a computer program is allowed to reproduce or adapt the program if reproduction or adaptation is necessary for the program to be used in conjunction with a machine.[11] This exception is relatively limited because it applies only to an owner seeking to adapt his own copy of the program. However, it protects some reverse engineering from infringement claims.

The second defense to copyright infringement is fair use: a legitimate owner of a software program is allowed to make fair use of the program. Fair use is defined by a four-factor test, rather than a list of acceptable practices:

  • The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes;

  • The nature of the copyrighted work;

  • The amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

  • The effect of the use upon the potential market for or value of the copyrighted work.

Reverse engineering is generally recognized as a fair use. While the expressive part of software programs is copyright-protected, function and ideas contained in programs are not. If reverse engineering is required to gain access to those unprotected elements, any intermediate copies made as part of reverse engineering are fair use. Here are some examples:

Sega Enterprises v. Accolade[12]

Reverse engineering is a fair use when “no alternative means of gaining an understanding of those ideas and functional concepts exists.”

Sony Computer Entertainment v. Connectix[13]

A Sony competitor could legally copy and reverse engineer the Sony BIOS for the PlayStation, as part of an effort to develop and sell an emulator that would run PlayStation games on a computer.

Regardless, reverse engineering will not protect you from a copyright infringement claim if you are not legitimately in possession of the software, or if you use copyrighted code in your final product. Here are some examples:

Atari Games Corp. v. Nintendo of America, Inc., 975 F.2d 832 (Fed. Cir. 1992)

The researching company lied to the Copyright Office to get a copy of the source code. The court found this copy was infringing.

Compaq Computer Corp. v. Procom Technology, Inc., 908 F. Supp. 1409 (S.D. Tex. 1995)

Copyrighted code was reproduced verbatim on competitor’s own hard drives to facilitate interoperability. The company could have made copies to understand the software and create its own interoperable program, but the verbatim copies were infringing.

Cable/Home Communication Corp. v. Network Productions, Inc., 902 F.2d 829 (11th Cir. 1990)

A creator of chips designed to enable display of satellite television services without a subscription did not qualify for fair use, in part because the chips contained 86 percent of the copyrighted code. Probably another consideration was that the court did not approve of the product.

What to do to protect yourself with fair use

Whether reverse engineering is a fair use depends on the facts of the case. Therefore, to ensure that your reverse engineering is protected by fair use, make sure that the program you are working on is legitimately obtained, make intermediate copies as needed in order to understand the program, and do not carry copyrighted code over into your final product.

  • Copies made during reverse engineering should be necessary for figuring out how a program works, and for accessing ideas, facts, and functional concepts contained in the software.

  • Copies should be intermediate. Do not use copyrighted code in the final product.

  • Do not steal the copy of the software that you are reverse engineering.

Reverse Engineering, Contracts, and Trade Secret Law

Despite the legal protections for reverse engineering as a fair use, two newer developments threaten to limit the protection the rule provides: trade secret and contract law, and the anti-circumvention provisions of the Digital Millennium Copyright Act (DMCA).

As we saw in Michael Lynn’s case, companies sometimes make trade secret claims against security researchers, despite the fact that reverse engineering is specifically protected in both copyright and trade secret law.

One way to understand the relationship between trade secret law and reverse engineering is to view trade secret protection as a prohibition against theft or misuse of certain kinds of information, rather than a rule that says certain information is private property for all purposes. Information may be a trade secret one day, but if the public legitimately learns the information, it ceases to be protected as such. This explains why reverse engineering generally doesn’t violate trade secret law. It is a fair and honest means of learning information.

The question becomes more complicated when a EULA or nondisclosure agreement (NDA) prohibits reverse engineering. If a researcher reverse engineers in violation of a legal instrument, is the technique still a fair and honest practice allowed in trade secret law?

Can a EULA or NDA:

  • Prevent the researcher from raising a fair use defense to a claim of copyright infringement?

  • Prevent the researcher from claiming fair and legitimate discovery defense in response to a trade secret misappropriation claim?

  • Subject the researcher to a breach of contract claim if he reverse engineers in contravention to the terms of that document?

The answer to these questions depends on whether the terms of the EULAs or NDAs are enforceable. Even if enforceable, the question remains whether a person who has violated those terms merely breaches the EULA or NDA contract, or actually infringes copyright or misappropriates trade secrets, both more serious claims. Full discussion of this issue is beyond the scope of this chapter. However, I do want to explain some basic contract principles so readers can see the interrelationship with trade secret law.

A EULA purports to be a contract between the vendor and the purchaser. Contract law is based on a mythological meeting of two entities with equal bargaining power that come together and strike a deal in which each gives something to get something. A EULA does not look much like the arm’s length negotiation I’ve just described. Instead, the vendor issues small print terms and conditions that the purchaser sees only when he opens the box, or upon install. The purchaser can then return the product or “accept” the terms. People who’ve never seen the terms or agreed to them then use the product.

Additionally, companies that want to protect their trade secrets often enter into nondisclosure agreements (NDAs) that regulate how signers will treat source code. This is the only way that a team of people can work on a project and the company can still keep information confidential.

The important thing to note is that researchers may be subject to contractual provisions contained in shrink-wrap, click-wrap, and browse-wrap licenses, and that violation of those provisions in the service of security work could undermine the applicability of legal defenses you would otherwise be able to use.

Perhaps there are some contract terms the law will enforce, and some it will not. One factor may be whether the contracts were truly negotiated or just offered to the public on a take it or leave it basis. A few cases have ruled that the terms in software mass market licenses are enforceable if the user has an opportunity to view them and accept or return the product at some point prior to use. Thus, even if intellectual property law says you can do something, a court may punish you if a contract says you cannot.

What to do to protect yourself

As you can see, it’s pretty important to legally possess a copy of the software you are working on and to comply with any promises that you’ve made in conjunction with obtaining the right to use that software (in a click-wrap, shrink-wrap, browse-wrap, or NDA contract, for example). Failure to do so can result in legal liability, either for breaking the promise or for otherwise legal activities that are no longer protected by IP law.

In my opinion, companies should not use EULAs to terminate public right of access to ideas and functionality of code. We should not depend on the intellectual property rights holder to make socially beneficial decisions about reverse engineering. Once software is out on the market, the vendor should not be able to bind the public at large to a license term that deprives society of the benefits of reverse engineering.

Enforcing terms limiting reverse engineering or controlling dissemination of information obtained by reverse engineering makes sense when the only way the researcher got access to the original code was under an individually negotiated NDA. But even there, restrictions that prevent people from learning about flaws in electronic voting machines or the routers that run the Internet may need to yield to the greater good of public access.

Breaching a contract does not customarily carry the negative connotation that committing a tort or a crime does. The purpose of contract law is to smooth out commercial interactions, and walking away from a contract when there is a better deal is part of doing business. Traditionally, breaches could be fixed with money damages sufficient to give the contracting party the benefit of the bargain, and punitive damages were not granted. So, it’s a bit odd to let a breach of contract translate into trade secret and copyright damages. It is important for you to know that the law will develop further in this area over the next few years. As always, if you recognize a potential grey area, get real legal advice from an attorney.

Reverse Engineering and Anti-Circumvention Rules

Section 1201, the anti-circumvention provisions of the DMCA, prohibits circumvention of technological protection measures that effectively control access to copyrighted works and prohibits the distribution of tools that are primarily designed, valuable, or marketed for such circumvention.[14] What this means is that you generally are not allowed to break software locks that control how you use copyrighted materials. There are other parts to the DMCA, including the safe harbor/notice and takedown provisions for copyright-infringing materials, so to distinguish the anti-circumvention provisions from these other sections, I refer to them as “Section 1201,” rather than as the DMCA.

Congress’ purpose in passing Section 1201 was to prohibit breaking copyright owners’ digital rights management schemes, so that companies would be more comfortable releasing works in digital format. However, the statute prohibits far more than breaking digital rights management; it bars circumventing both access controls and copy controls. As we saw previously in the computer trespass context, access is a broad concept. Any use is deemed access. Thus, Section 1201 prohibits circumvention of technology that controls how customers use digital music, movies, and games.

Some commentators have called Section 1201 para-copyright because it in effect gives copyright owners the ability to control behaviors that the copyright law does not. The copyright law does not assure to the owner the right to control access, but Section 1201 in effect gives owners that right, if they can enshrine their access preferences in a technological protection measure or with digital rights management (DRM) technology.

Because of the broad nature of access and because software is a copyright-protected work, there have been many Section 1201 claims challenging security research or reporting.

  • In September 2000, Princeton computer science professor Edward Felten and a team of researchers succeeded in removing digital watermarks on music. When the team tried to present their results at an academic conference, the industry group that promoted the watermarking technology threatened the researchers with a DMCA suit.

  • In October 2003, SunnComm threatened a Princeton graduate student with a DMCA lawsuit after he published a report revealing that merely holding down the Shift key on a Windows PC defeats SunnComm’s CD copy protection technology.

  • In 2002, Hewlett-Packard threatened SNOsoft, a research collective, when they published a security flaw in HP’s Tru64 Unix operating system.

  • In April 2003, educational software company Blackboard, Inc. used a DMCA threat to stop the presentation of research on security flaws in the Blackboard ID card system at the Interz0ne II conference in Atlanta.

  • In 2003, U.S. publisher John Wiley & Sons dropped plans to publish Andrew “bunnie” Huang’s book on Xbox modding, based on vulnerabilities Huang discovered as part of his doctoral research at M.I.T. Huang eventually self-published the book in mid-2003, and it was subsequently picked up and published by No Starch Press.

Despite the widespread use of the statute in cease-and-desist letters, there have not been many actual court decisions applying it to security research. In advising researchers in this area then, there are two essential issues to bear in mind: what the statute says and how it has been used.

Theoretically, Section 1201 could be used in many computer trespass situations, effectively supplanting Section 1030 (the federal law barring intentional transmission without authorization of a software program that causes damage to a computer used in interstate commerce). Any unauthorized access that involves circumvention of a security protocol, and thus allows use of the copyrighted software on a computer, is arguably a Section 1201 violation. While getting authorization avoids a Section 1030 claim, getting permission is practically much more difficult in a Section 1201 context. Authorization is relatively easy to get when you are penetration testing or doing research on a particular computer system. But when your research is on DRM or other encryption schemes, authorization will not be forthcoming. Who at Sony could you call for authorization to reverse engineer the spyware rootkits the company distributed on certain music CDs in 2005? Applying Section 1201 in a trespass context is highly problematic, for this and other reasons.

Courts have found the following practices and technologies to be illegal under the anti-circumvention provisions:

Mod chips for PlayStation and Xbox

Chips that allow the user to run any games or code on the machines without checking for an authentication handshake

DeCSS

A software program that decrypts DVDs

Adobe eBook Processor

A software program that decrypts Adobe eBooks

Companies that produce interoperable aftermarket products such as printer cartridges and garage door openers (Lexmark v. Static Control Components[15], Chamberlain v. Skylink[16]) have also faced DMCA suits. Owners use encryption to check that customers are using approved aftermarket products, while competitors circumvent this encryption so that customers can use the products they like, and that circumvention allows customers to operate code inside the printer or garage door opener. Thus, the lawsuits claim that the aftermarket competitors are circumventing a technological protection measure (encryption) that controls access to (use of) a copyrighted work (the code in the printer or garage door opener). In these cases, the competitors have prevailed on the grounds that customers have the right to access the code in the machines they’ve purchased. As more cases are brought, we will see what effect EULAs denying the right to access will have in this area as well as in trade secret law.

In practice, the few DMCA cases on the books suggest that the statute is more likely to be enforced when your research focuses on DRM or other technological protection measures that control access to video games, music, and movies. Researchers in these fields of DRM and applied encryption must be particularly careful because the few research exceptions in Section 1201 that exist are very narrow: reverse engineering, security research, and encryption research.

Congress recognized that the anti-circumvention provisions could prohibit reverse engineering, so it put an exception to the rule in the statute for some kinds of reverse engineering. If you have lawfully obtained the right to use a computer program, you may circumvent and disclose information obtained through circumvention for the sole purpose of creating an interoperable, noninfringing computer program, provided your work falls within these guidelines:

  • Sole purpose is interoperability

  • Necessary

  • Independently created computer program

  • Not previously readily available to the person engaging in the circumvention

  • Such acts of identification and analysis are not an infringement

This exception has been read very narrowly. For example, the District Court in the DeCSS case (Universal City Studios v. Reimerdes) held that DeCSS was not protected under the reverse engineering exception because DeCSS runs under both Linux and Windows, and thus could not have been created for the sole purpose of achieving interoperability between Linux and DVDs.[17]

The encryption research exception applies only when:

  • Circumvention is of a technological protection measure that controls access to a copy, phonorecord, performance, or display of a published work

  • Necessary

  • A researcher sought advance permission

  • Research is necessary to advance the state of knowledge in the field

With a few additional factors, including whether:

  • Publishing results promotes infringement or advances the state of knowledge or development of encryption technology

  • The person is a professional cryptographer

  • The person provides the copyright owner with notice and the research

Finally, the security research exception in Section 1201 says that accessing a computer, computer system, or computer network solely for the purpose of good-faith testing or correcting a security flaw or vulnerability, with authorization, is not an infringement or other violation of law. The key factors include whether:

  • The information is used solely to promote the security of the owner of the tested computer system, or the information is shared directly with the developer of the system.

  • The information is distributed in a way that might enable copyright infringement or other legal violations.

The statute also says that security tools may be created and disseminated for the sole purpose of performing the described acts of security testing, unless the tool:

  • Is primarily designed for circumventing

  • Has only limited commercially significant purpose other than to circumvent

Or:

  • Is marketed for circumvention

What to do to protect yourself when working in DMCA

The various offenses, defenses, and factors contributing to defense are pretty complicated. But there are a few points that I can distill from this statutory scheme with which you can try to comply to make it less likely you’ll be successfully sued for violating Section 1201.

  • Do not market for circumventing purposes.

  • Do not design solely for circumvention.

  • Seek advance permission if possible, even if you know they will deny you.

  • Publish in a manner that advances the state of knowledge and does not enable infringement.

  • Be careful when creating products that allow customers to break the law.

Vulnerability Reporting

One of the more vigorous public policy debates in the security field centers on publication of information about security vulnerabilities. Some argue that vulnerability publication should be restricted in order to limit the number of people with the knowledge and tools needed to attack computer systems. Restriction proponents are particularly concerned with information sufficient to enable others to breach security, especially including exploit or proof-of-concept code.

The benefits of publication restrictions theoretically include denying script kiddies attack tools, reducing the window of vulnerability before a patch is available, and managing public overreaction to a perception of widespread critical insecurity.

Opponents of publication restrictions argue that the public has a right to be aware of security risks, and that publication enables system administrator remediation while motivating vendors to patch. They also question whether restricting white hat researchers actually deprives black hats of tools needed to attack, under the theory that attackers are actively developing vulnerability information on par with legitimate researchers.

Today many, if not most, security researchers have voluntarily adopted a delayed publication policy. While these policies may differ in detail, they come under the rubric of responsible disclosure. The term has come to mean that there is disclosure, but no distribution of proof-of-concept code until the vendor issues a patch.[18] Once the patch is issued, it can itself be reverse engineered to reveal the security problem, so there is little point in restricting publication after that time. In return, responsible vendors will work quickly to fix the problem and credit the researcher with the find.

Various businesses that buy and sell vulnerabilities are threatening this uneasy balance, as are researchers and vendors who refuse to comply. For example, in January 2007, two researchers published a new flaw in Apple’s operating system every day of the month, without giving the company advance notice of those flaws.

Can we regulate security information? The dissemination of pure information is protected in the U.S. by the First Amendment. Many cases have recognized that source code, and even object code, are speech-protected by the First Amendment, and as a general principle, courts have been loath to impose civil or criminal liability for truthful speech even if it instructs on how to commit a crime. (The infrequent tendency of speech to encourage unlawful acts does not constitute justification for banning it.)

On the other hand, information about computer security is different from information in other fields of human endeavor because of its reliance on code to express ideas.[19] Code has a dual nature. It is both expressive and functional. Legislatures have tried to regulate the functionality of code similar to tools that can be used to commit criminal acts.[20] But the law cannot regulate code without impacting expression because the two are intertwined.

While current case law says that laws that regulate the functionality of code are acceptable under the First Amendment if they are content-neutral, lawmakers have advocated or even passed some laws that regulate publication. For example, the Council of Europe’s new Cybercrime Treaty requires signatories to criminalize the production, sale, procurement for use, import, and distribution of a device or program designed or adapted primarily for the purpose of committing unauthorized access or data intercept. Signatories can exempt tools possessed for the authorized testing or protection of a computer system. The United States is a signatory.

As previously discussed, the U.S. government and various American companies have used Section 1201 (which regulates the distribution of software primarily designed to circumvent technological protection measures that control access to a work protected under copyright laws) to squelch publication of information about security vulnerabilities. But where there is no particular statute, then security tools, including exploit code, are probably legal to possess and to distribute.

Nevertheless, companies and the government have tried to target people for the dissemination of information using the negligence tort, conspiracy law, or aiding and abetting.

To prove negligence, the plaintiff has to establish:

  • Duty of care

  • Breach of that duty

  • Causation

  • Harm

Duty of care means that a court says that the general public has a responsibility not to publish exploit code simply because it is harmful, or that the particular defendants have a responsibility not to publish exploit code because of something specific about their relationship with the company or its customers. Yet the First Amendment protects the publication of truthful information, even in code format. Code is a bit different because code works; it doesn’t just communicate information. No case has ever held that someone has a legal duty to refrain from publishing information to the general public if the publisher has no illegal intent. I think that would be hard to get a court to establish, given the general practice of the community and the prevailing free speech law. I can imagine, however, a situation in which a court would impose a duty of care on a particular researcher with a prior relationship with a vendor. This hasn’t happened yet.

With regard to conspiracy, the charge requires proof of an agreement. If you publish code as part of an agreement to illegally access computers, that is a crime. The government recently proved conspiracy against animal rights activists by using evidence of web site language supporting illegal acts in protest of inhumane treatment (Stop Huntingdon Animal Cruelty). The convictions have been decried as a violation of the First Amendment, but there were illegal activities, and while the web site operators were not directly tied to those activities, the web site discussed, lauded, and claimed joint responsibility for them (by using the word “we” with regard to the illegal acts).

Aiding and abetting requires the government to show an intent to further someone else’s illegal activity. Intent, as always, is inferred from circumstances.

Rarely does the government infer illegal intent from mere publication to the general public, but it has happened. For example, some courts have inferred a speaker’s criminal intent from publication to a general audience, as opposed to a coconspirator or known criminal, if the publisher merely knows that the information will be used as part of a lawless act (United States v. Buttorff, 572 F.2d 619 [8th Cir.], cert. denied, 437 U.S. 906 [1978] [information aiding tax protestors]; or, United States v. Barnett, 667 F.2d 835 [9th Cir. 1982] [instructions for making PCP]). Both Buttorff and Barnett suggest that the usefulness of the defendant’s information, even if distributed to people with whom the defendant had no prior relationship or agreement, is a potential basis for aiding and abetting liability, despite free speech considerations.

In contrast, in Herceg v. Hustler Magazine, 814 F.2d 1017 (5th Cir. 1987), a magazine was not held liable for publishing an article describing autoerotic asphyxiation after a reader followed the instructions and suffocated. The article included details about how the act is performed, the kind of physical pleasure those who engage in it seek to achieve, and 10 different warnings that the practice is dangerous. The court held that the article neither incited nor encouraged imminent illegal action, so it was protected by the First Amendment.

Legitimate researchers are not comforted by this lack of legal clarity. Security researchers frequently share vulnerability information on web pages or on security mailing lists. These communities are open to the public and include both white-hat and black-hat hackers. The publishers know that some of the recipients may use the information for crimes. Nonetheless, the web sites properly advise that the information is disseminated for informational purposes and to promote security and knowledge in the field, rather than as a repository of tools for attackers.

A serious problem is that prosecutors and courts, in deciding whether a researcher published with criminal intent, might weigh the perceived legitimacy of the publisher’s “hacker” audience or the respectability of the publisher himself.

In one example, in 2001 a Los Angeles-based Internet messaging company convinced the U.S. Department of Justice to prosecute a former employee who informed the company’s customers of a security flaw in its webmail service. The company claimed that the former employee was responsible for its lost business. As a result, security researcher Bret McDanel was convicted of violating 18 U.S.C. § 1030(a)(5)(A), which prohibits the transmission of code, programs, or information with the intent to cause damage to a protected computer, for sending email to those customers informing them that the service was insecure. The government’s argument at trial was that McDanel impaired the integrity of his former employer’s messaging system by informing customers about the security flaw. I represented Mr. McDanel on appeal.

On appeal, the government disavowed this view and agreed with the defendant that a conviction could be based only on evidence that the “defendant intended his messages to aid others in accessing or changing the system or data.”[21] McDanel’s conviction was overturned on appeal, but not before he served 16 months in prison. Nothing in the text of Section 1030 says that it requires proof of such intent, but because McDanel’s actions were speech, the government had to read that requirement into the statute to maintain its constitutionality.

In late 2006, Chris Soghoian published an airline “boarding pass generator” on his web site. The generator took a Northwest Airlines boarding pass, which the airline distributes in a modifiable format, and allowed users to type their own name on the document. Though the Transportation Security Administration (TSA) had long been aware of how easy it is to forge boarding passes, it had done nothing, and the problem was not widely known. After Soghoian’s publication there was something of a public outcry, and Congress called for improved security. The Department of Homeland Security paid Soghoian a visit, investigating whether he was aiding and abetting others in fraudulently entering the secured area of an airport. Because Soghoian had never used a fake boarding pass, nor provided one to anyone, and because the language on his web site made clear that his purpose was to critique the security of the boarding pass checkpoint, the Department of Homeland Security recognized that the publication was not criminal. Nonetheless, it sent a cease-and-desist letter to his ISP, which promptly removed the page.

The blunt lesson from these cases is that it’s risky to be a smart ass. You have a right to embarrass the TSA or to show how a company is hurting its customers, but being a gadfly garners attention, and not all attention is positive. The powers that be do not like being messed with, and if the laws are unclear or confusing, they’ll have even more to work with if they want to teach you a lesson. This isn’t to say there is no place for being clever, contrary, or even downright ornery. Some of the most important discoveries in network security and other fields have been made by people whose motivation was to outsmart and humiliate others. If this is your approach, be aware you are inviting more risk than someone who works within the established parameters. You may also get more done. Talk to a lawyer. A good one will point out the ways in which what you are doing is risky. A great one will help you weigh various courses of action, so you can decide for yourself.

What to do to protect yourself when reporting vulnerabilities

Be aware that your state may have statutes governing publication that are beyond the scope of this chapter, that have been enacted since this book was printed, or that apply to your particular circumstances. In general:

  • Publish only what you have reason to believe is true.

  • Publish to the vendor or system administrator first, if possible.

  • Don’t ask for money in exchange for keeping the information quiet. I’ve had clients accused of extortion after telling a company they would reveal the vulnerability unless it paid a finder’s fee or entered into a contract to fix the problem.

  • Do not publish to people you know intend to break the law. Publish to a general audience, even though some people who receive the information might intend to break the law.

If you are thinking about publishing in a manner that is not commonly done today, consult a lawyer.

What to Do from Now On

I cannot cover this topic completely without devoting an entire book to it, and perhaps not even then. Security practices do not fit neatly into white-hat or black-hat categories. There are legal and ethical gray areas where most of you live and work. This book intends to give you technical skills in using an assortment of security tools, but it’s how you use those tools that creates the legal and ethical challenges with which this chapter, the legal system, and society grapple.

Any bozo can file a lawsuit, but you will usually receive some notification first, in the form of a demand or cease-and-desist letter. If you receive one of these, get advice from a lawyer. Perhaps the suit can be prevented or settled ahead of time.

Criminal charges often come without any advance notice to you. The FBI may show up at your door asking questions; they may have a warrant to seize your computers; they may ask permission to take your machines. You may never hear anything further from them, or you may get arrested months later. Local law enforcement investigates differently. If law enforcement comes to question you, ask for a lawyer immediately. You may have done nothing wrong, and you may want to cooperate, but that is something that a skilled attorney must help you with. Sometimes the police tell you that getting a lawyer is just making matters worse for you. Actually, it makes matters worse for them, because there’s someone looking out for your interests and making sure that they keep their promises to you.

In less extreme situations, consider following the basic “What to do to protect yourself” bullet points throughout this chapter. They are certainly obvious, but you’d be surprised how seldom they are followed.

Ask for permission. Do not take things that are not meant for you. Do not break things. Publish your findings in open forums, using not-for-profit language and with good intent. Do not fake passwords. When you tinker with programs, make sure they are yours; do your research on your own time, on your own computers, and without intent to gain financially or to destroy something someone else has built.

Finally, there may be times when you will not be able to follow these edicts. But as my best friend wrote when she gave me an etiquette book for my wedding, it’s best to know the rules before you break them. The law and ethics of the network security field are in their infancy. If I haven’t said it enough times already, here it is once more: if you are operating in a gray area and something feels strange, get legal advice from a practicing lawyer in the field.

—Jennifer Stisa Granick



[1] See 18 U.S.C. 1030 for full text of the federal statute.

[2] For more on calculating loss in computer crime cases, see “Faking It: Calculating Loss in Computer Crime Cases,” published in I/S: A Journal of Law and Policy for the Information Society, Cybersecurity, Volume 2, Issue 2 (2006), available at http://www.is-journal.org/v02i02/2isjlp207-granick.pdf.

[3] Konop v. Hawaiian Airlines, 302 F.3d 868 (9th Cir. 2002).

[4] EF Cultural Travel B.V. v. Zefer Corp., 318 F.3d 58 (1st Cir. 2003).

[5] Register.com, Inc. v. Verio, Inc., 356 F.3d 393 (2d Cir. 2004).

[6] Shurgard Storage Centers, Inc. v. Safeguard Self Storage, Inc., 119 F.Supp.2d 1121 (W.D. Wash. 2000).

[7] Int’l Assoc. of Machinists and Aerospace Workers v. Werner-Matsuda, 390 F.Supp.2d 479 (D. Md. 2005).

[8] Intel v. Hamidi, 30 Cal.4th 1342 (2003).

[9] Kewanee Oil Co. v. Bicron Corp. (1974) 416 U.S. 470, 476.

[10] 17 U.S.C. 106.

[11] 17 U.S.C. 117; DSC Communications v. Pulse Communications, 170 F.3d 1354, 1361 (Fed Cir. 1999).

[12] 977 F.2d 1510 (9th Cir. 1992).

[13] 203 F.3d 596 (9th Cir. 2000).

[14] 17 U.S.C. 1201 (1998).

[15] 387 F.3d 522 (6th Cir. 2004).

[16] 381 F.3d 1178, 1191 (Fed.Cir. 2004).

[17] 111 F.Supp.2d 294, 320 (SDNY 2000), upheld on appeal, Universal City Studios v. Corley, 272 F.3d 429 (2d Cir. 2001).

[18] Paul Roberts, Expert Weighs Code Release In Wake Of Slammer Worm, IDG News Service, Jan. 30, 2003, available at http://www.computerworld.com/securitytopics/security/story/0,10801,78020,00.html; Kevin Poulsen, Exploit Code on Trial, SecurityFocus, Nov. 23, 2003, at http://www.securityfocus.com/news/7511.

[19] See 49 U.C.L.A. L.Rev. 871, 887–903.

[20] See, e.g., 18 U.S.C. 2512(1)(b) (illegal to possess eavesdropping devices); Cal. Penal Code § 466 (burglary tools).

[21] Government’s Motion for Reversal of Conviction, United States v. McDanel, No. 03-50135 (9th Cir. 2003), available at http://cyberlaw.stanford.edu/about/cases/001625.shtml.
